Anyone with a basic understanding of search engine optimization knows that metadata such as meta descriptions, image alt text and title tags is critical to SEO success. But there’s one element many people forget, and it can cause an otherwise solid SEO campaign to fail. Not properly implementing a Robots.txt file can make the difference between seeing your search engine rankings soar or sink.


Defining a Robots.txt File

Simply put, a Robots.txt file lets you tell search engines which directories on your site you don’t want them to crawl and index. The reasons for this are varied. You might have proprietary information posted on certain sections of your website that you want to keep out of search results, or back-end areas of your eCommerce site, such as cart and checkout pages, that offer searchers no value. A Robots.txt file tells Google, Bing and other legitimate search engines not to index these pages. Keep in mind, though, that it is a request, not a lock: a Robots.txt file does nothing to stop bad actors from visiting the URLs it lists, so truly sensitive data such as customer banking information must be protected with real access controls, not just hidden from crawlers.
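To make this concrete, here is a minimal sketch of a Robots.txt file. It sits at the root of your domain (at /robots.txt), and each record names a crawler, or * for all crawlers, followed by the paths that crawler should stay out of. The directory name below is just a placeholder:

    # Applies to all crawlers
    User-agent: *
    # Ask crawlers to stay out of this directory
    Disallow: /private/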


Robots.txt Keys for Implementation

Disallow access to sensitive directories. These can potentially include directories such as /cgi-bin/, /wp-admin/, /cart/ and /scripts/, as shown below.
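A sketch of what those rules might look like; adjust the paths to match your own site’s structure:

    User-agent: *
    Disallow: /cgi-bin/
    Disallow: /wp-admin/
    Disallow: /cart/
    Disallow: /scripts/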

Remove all barriers to main content. Robots.txt is only part of the picture here: make sure no stray Disallow rule covers the pages you want ranked, and check that “noindex” and “nofollow” meta tags aren’t blocking search engines from indexing your main content or following its links.
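One barrier worth checking for explicitly: a single slash can block your entire site. The two records below differ by one character, so it pays to double-check which one you actually have:

    # Blocks the whole site from all crawlers; almost never what you want
    User-agent: *
    Disallow: /

    # An empty Disallow blocks nothing and leaves everything crawlable
    User-agent: *
    Disallow: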

Don’t let search engines index “duplicate” pages on your website. These can include printer-friendly versions of pages designed for regular viewing, or content designed specifically for mobile sites. In these cases it’s better to have search engines index only the main content page, as in the example below.
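For instance, if printer-friendly and mobile copies of your pages lived under hypothetical /print/ and /mobile/ directories, you could keep crawlers focused on the main versions like this:

    User-agent: *
    # Hypothetical paths holding duplicate versions of the main content
    Disallow: /print/
    Disallow: /mobile/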


Things to Avoid

Cluttering your Robots.txt file with comments. Comment lines (starting with #) are valid and crawlers ignore them, but excessive commentary makes the file harder to read and maintain.

Listing every private file individually in your Robots.txt. The file is publicly readable, so this actually makes it easier to find the files you want to keep hidden. Disallow whole directories instead, as in the example after this list.

Relying on an Allow directive. Allow was not part of the original Robots.txt standard, and while major search engines such as Google and Bing now honor it, not every crawler does, so build your rules around Disallow wherever possible.
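To illustrate the point about listing files, compare the two approaches below (all file and directory names are hypothetical). The first hands anyone who reads your Robots.txt a map of exactly which files you consider private; the second hides them behind a single directory rule:

    # Risky: publicly names each private file
    User-agent: *
    Disallow: /reports/q1-revenue.pdf
    Disallow: /reports/client-list.pdf

    # Better: one rule covering the whole directory
    User-agent: *
    Disallow: /reports/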