Helpful Robots.txt Tip
When creating a robots.txt file, if you specify rules for all bots (using the * wildcard) and then later specify rules for a specific bot (like Googlebot), search engines tend to ignore the broad rules and follow only the ones you defined specifically for them. From Matt Cutts:
If there's a weak specification and a specific specification for Googlebot, we'll go with the one for Googlebot. If you include specific directions for Googlebot and also want Googlebot to obey the "generic" directives, you'd need to include allows/disallows from the generic section in the Googlebot section.
I believe most, if not all, search engines interpret robots.txt this way: a more specific set of directives takes precedence over a broader one.
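As a quick sketch of how that plays out (the paths here are made up for illustration):

# Generic rules for every crawler
User-agent: *
Disallow: /private/

# Googlebot reads only this section, so the generic
# Disallow has to be repeated here alongside anything extra
User-agent: Googlebot
Disallow: /private/
Disallow: /beta/

Without the repeated Disallow: /private/ line, Googlebot would be free to crawl /private/, since it honors only its own section.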
Comments
Thanks for the tip, Aaron. It reinforces what I have been learning this week about robots.txt while working on preventing duplicate content on a WordPress blog.
One interesting thing I learned about giving specific commands to certain bots applies to AdSense publishers. If you deny all bots access to, for instance, the archives section, then you should write a specific directive letting the AdSense bot in. That way you don't end up with untargeted ads on those pages.
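Something along these lines, assuming the archives live under a hypothetical /archives/ path (Mediapartners-Google is the user-agent the AdSense crawler reports):

# Keep regular crawlers out of the archives
User-agent: *
Disallow: /archives/

# Give the AdSense crawler its own section with nothing
# disallowed, so it can still read archive pages for ad targeting
User-agent: Mediapartners-Google
Disallow:

An empty Disallow line means nothing is blocked for that bot.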
Great tip, David. I didn't think about AdSense as its own bot.
Awesome tip Aaron. I was formatting my robots.txt all wrong.
Robots.txt validators that I've used indicate that the rules for specific robots should come first and the wildcard rules should go last. I'm not sure if it's in the specification, but I try to do it that way. Some robots may just look for the first rule that matches, so I don't want them seeing the wildcard first.
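So with the earlier hypothetical paths, the ordering would look roughly like this:

# Specific bots first
User-agent: Googlebot
Disallow: /private/
Disallow: /beta/

User-agent: Mediapartners-Google
Disallow:

# Wildcard last
User-agent: *
Disallow: /private/
Disallow: /archives/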
Nice work, Aaron. This will be a great help to people who aren't used to working with robots.txt.