More Backlinks = Worse?
In the last post I pointed out one example of how more links could actually be a bad thing.
While I have been touring Europe with Werty and Radiohead, Greg Boser started blogging again. He posted about how a go-kart site started ranking for Amish furniture in Google.
Greg stated:
I don’t think most webmasters truly understand the impact (both negative and positive) pre-existing links can have on a project.
He then stated:
Regardless of who is responsible, the end result is the same. The gokart site gets hosed. Google has determined both domains point to a single site, and that has caused the anchor text of the two separate domains to be combined. Now that really wouldn’t be so bad if you still were able to rank for the phrase combinations from each individual domain. I know if I sold gokarts and mini bikes, I wouldn’t mind the occasional email asking why I show up for amish furniture as long as I ranked well for my core phrases.
But that’s not typically what happens. When you inherit a bunch of off-topic anchor text, more often than not you just end up ranking for a bunch of stupid phrases that no one actually searches for.
Greg also did a follow-up titled My Naïveté, which solidifies his position, and then posted about how to protect your domain from competitive sabotage.
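One common defensive measure along these lines (a generic sketch, not necessarily Greg's exact recipe) is a canonical-host redirect in .htaccess: if someone points a stray domain at your server's IP, any request arriving under a hostname other than your own gets 301-redirected to your real domain instead of being served as a duplicate site. Here `example.com` is a placeholder for your actual domain:

```apache
# Hypothetical canonical-host rule; swap example.com for your own domain.
RewriteEngine On
# Match any Host header that is not www.example.com or example.com...
RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
# ...and permanently redirect it to the canonical hostname.
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

This requires mod_rewrite to be enabled on your Apache host, and it also folds the non-www hostname into the www version as a side effect.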
Comments
Thanks Aaron, as usual you give us good stuff to chew on. I'll add it to my .htaccess template. By the way, how is Europe?
Hi Aaron:
I'm using BLA to track the backlinks for an industry leader. They have over 6,000 links. Is there any way to systematically use BLA to go through that many links?
Thanks,
Neil
Hi,
I didn't see any contact information listed, so I'm posting here. A bunch of us have tried several times to use the keyword suggestion tool and it doesn't work. It ends up locking up our computers. It would be great if someone could look into this. Thank you!
Yes Aaron, I remember you mentioning in your ebook that for one client you had done SEO services for, you decided to stop building links, and it actually helped. So therefore the key is knowing when to stop. Exactly.
I still don't understand the implications of this discussion. Matt Cutts states that this is caused by DNS issues at the server that return the wrong site for a request. That sounds reasonable, except that in all my years of surfing I've never seen the wrong site returned by a host.
Requesting a page doesn't return you an IP result, and the bots don't surf by IP. How could two sites get mixed up, other than by host error?
I must have missed an article somewhere that explains this... :(