Poke The Box Review

I received this book in the mail.

It's nice to be sent books. And it's by Seth!

The book is called Poke The Box. It's about making a start. Seth encourages us to just jump in and do things. It doesn't matter if they go wrong, the important thing is to make the start. To break out of conservative patterns. It's a scatter-shot rant about the death of the industrial revolution, with Godin inciting us, over and over again, to take action.

Gotta say, I was a little disappointed by the book. It skated over the surface, didn't really hang together, and recycled some pretty tired themes. This review amused me.

Or maybe this book is the start of something else Seth has in mind. I don't know. Having said that, I think the central point of the book is valuable, and that is to.....

Start Something

Do you ever regret not buying a particular domain name? Or a particular site? Do you regret not having started a site in that niche that is now taking off? Do you ever feel you've missed the boat on affiliate marketing? Do you regret not going harder at SEO in the days when it was just that much easier?

I think a lot of us can relate. There are always regrets and missed opportunities.

We *could* have done some of these things. But, for whatever reason, we didn't. And we probably still find reasons not to make a start on things today. Chances are, we're going to regret not having started them when we look back five years from now, too.

Take Seth's advice, and just make the start on that thing you are thinking of doing.

Fail At Something

Often we don't start something because we're scared of failing. However, as we know, failure is a part of life. The old cliché that the only way to never fail is to never try anything rings true.

In SEO, one thing that might be good to start, if you're not doing so already, is some simple testing. Buy a few cheap domain names, add a little content, and try to get the sites ranking for some obscure keyword term. As you don't really care about the keyword term, you can remain focused on pure SEO. If it fails to work, it doesn't matter; in fact, that tells you something about whatever technique you were using. Throw a few links at it. What happens? Does this fail to produce rankings? At least you know who not to get links from in the future!
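If you want to get anything out of those tests, keep notes. Here is a minimal, purely illustrative Python sketch of a test log; the file name, columns, and example values are my own assumptions, and the ranking positions are whatever you observe by hand or with whichever rank-checking tool you already use:

```python
# Minimal sketch of a test-domain ranking log (hypothetical file name and columns).
# Record each observation by hand; the point is to tie actions to ranking changes.
import csv
from datetime import date

LOG_FILE = "seo_tests.csv"  # hypothetical log file

def log_observation(domain, keyword, position, action=""):
    """Append one dated observation: where the test domain ranks and what was changed."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), domain, keyword, position, action])

def ranking_history(domain, keyword):
    """Return the dated positions recorded for one test domain/keyword pair."""
    rows = []
    try:
        with open(LOG_FILE, newline="") as f:
            for day, dom, kw, pos, action in csv.reader(f):
                if dom == domain and kw == keyword:
                    rows.append((day, int(pos), action))
    except FileNotFoundError:
        pass
    return rows

# Example: note today's ranking after throwing a few links at the test site.
log_observation("example-test-site.com", "obscure keyword phrase", 37, "added 5 blog comment links")
print(ranking_history("example-test-site.com", "obscure keyword phrase"))
```

Nothing fancy, but a dated record of what you changed and what happened afterwards is the difference between testing and guessing.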

This is something I've let slip lately, so I'm going to make a new start on it, too.

Do Something Worth Doing

Seth mentions Tom Peters, who wrote "In Search Of Excellence". Seth sees that Peters is frustrated because people hear his message without embracing the thinking behind it. Being excellent isn't about working extra hard at doing what you're told, it's about making the leap and doing work you decide is worth doing.

Sometimes, the thing that enables us to keep going with a site is simply that we believe in it. Nobody else might be paying attention. The rankings are mediocre. No one is linking to it. But if we feel what we're doing is worthwhile, we're more likely to work through the rough patches when there is no other reward on offer. If we don't really believe in a project, it's hard to find the will to work through the inevitable challenges.

Summary

Well, I guess I should just say "Go!" :)

Why not, today, start something new?

Download IE9

If Microsoft used their primary product to bundle other free products they were giving away in order to gain market leverage, Google would hoot and holler. Google demanded that Chrome be shown as an option in Europe when Microsoft was required to market their competitors via BrowserChoice.eu.

Yet if you visit YouTube with an old browser you see a browser promotion that Google claims isn't an advertisement, and somehow Internet Explorer didn't make the short list.

IE9 launched as a solid product with great reviews and enhanced privacy features.

A new version of Microsoft Corp.'s Internet Explorer to be released Tuesday will be the first major Web browser to include a do-not-track tool that helps people keep their online habits from being monitored.

Microsoft's decision to include the tool in Internet Explorer 9 means Google Inc. and Apple Inc. are the only big providers of browsers that haven't yet declared their support for a do-not-track system in their products.

I have long been a fan of using multiple web browsers for different tasks. Perhaps the single best reason to use IE9 is that a large segment of your customer base will be using it. Check out how search is integrated into the browser and use it as a keyword research tool.

The second best reason to use it is that sending some usage data to Microsoft will allow them to improve their search relevancy to better compete with Google. As a publisher I don't care who wins in search, so much as I want the marketshare to be split more evenly, such that if Panda II comes through there is less risk to webmasters. Stable ecosystems allow aggressive investment in growth, whereas unstable ones retard it.

Speaking of Google, Michael Gray recently wrote: "They are the virtual drug dealers of the 21st century, selling ads wrapped around other people’s content, creating information polluted ghettos, and they will become the advertising equivalent of a drug lord poised to rule the web."

The problem with Google's ecosystem was not only that it was running fast and loose (hence the need for the content farm update: a problem Google created, and a solution which caused major collateral damage and unintended consequences while missing the folks who were public enemy #1).

Beyond that, Google recently announced the ability for you to report counterfeit products advertised in AdWords. Their profit margins are pretty fat. Why did the problem go ignored so long? Why does the solution require you to work for Google for free?

In the following video, Matt winces, as though he might have an issue with what he is saying. "We take our advertising business very seriously as well. Both our commitment to delivering the best possible audience for advertisers, and to only show ads that you really want to see." - Matt Cutts

How does this relate to Internet Explorer 9? Well let's look at what sort of ads Google is running:

I am not sure if that is legal. But even if it is, it is lowbrow & sleazier than the image Google tries to portray for their brand.

If Microsoft did the same thing you know Google would cry. Ultimately I think Google's downfall will be giving Microsoft carte blanche to duplicate their efforts. Microsoft has deep pockets, fat margins, and is rapidly buying search marketshare. If Microsoft can use their browser as a storefront (like Google does), Internet Explorer has much greater marketshare than Chrome does.

Cory Doctorow's excellent essay "Beware the spyware model of technology – its flaws are built in" is a great read & warns where the above approach leads.

Is the Huffington Post Google's Favorite Content Farm?

I was looking for information about the nuclear reactor issue in Japan and am glad it did not turn out as bad as it first looked!

But in that process of searching for information I kept stumbling into garbage hollow websites. I was cautious not to click on the malware results, but of the mainstream sites covering the issue, one of the most flagrant efforts was from the Huffington Post.

AOL recently announced that they were firing 15% to 20% of their staff. No need for original stories or even staff writers when you can literally grab a third party tweet, wrap it in your site design, and rank it in Google. In line with that spirit, I took a screenshot. Rather than calling it the Huffington Post I decided a more fitting title would be the plundering host. :D


We were told that the content farm update was to get rid of low quality web pages & yet that information-less page was ranking at the top of their search results, when it was nothing but a 3rd party tweet wrapped in brand and ads.

How does Huffington Post get away with that?

You can imagine in a hyperspace a bunch of points, some points are red, some points are green, and in others there’s some mixture. Your job is to find a plane which says that most things on this side of the plane are red, and most of the things on that side of the plane are the opposite of red. - Google's Amit Singhal

If you make it past Google's arbitrary line in the sand there is no limit to how much spamming and jamming you can do.

we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side. - Matt Cutts
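As an aside, the "plane in hyperspace" Singhal describes is just a linear classifier. A toy sketch, with invented features and numbers purely for illustration (this is not Google's actual algorithm or data), might look like this:

```python
# Toy illustration of the "plane in hyperspace" idea from the quotes above:
# a simple perceptron that learns a separating hyperplane between two groups
# of made-up feature vectors ("high quality" vs "low quality" sites).
import numpy as np

# Hypothetical features: [original words per page, ads above the fold, unique linking domains (log)]
high_quality = np.array([[900, 1, 4.0], [1200, 0, 5.1], [700, 2, 3.8]])
low_quality  = np.array([[150, 6, 1.2], [200, 8, 0.9], [300, 5, 1.5]])

X = np.vstack([high_quality, low_quality])
y = np.array([1, 1, 1, -1, -1, -1])          # +1 = "good" side of the plane, -1 = "bad" side

# Normalize so one feature doesn't dominate, then run a basic perceptron.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:            # misclassified: nudge the plane
            w += yi * xi
            b += yi

print("hyperplane normal:", w, "offset:", b)
print("predictions:", np.sign(X @ w + b))     # which side of the plane each site falls on
```

The real classifier obviously uses far more signals, but the mechanic is the same: score a site and see which side of the plane it lands on. Which is exactly why anything that makes it just over the line gets the full benefit of the doubt.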

(G)arbitrage never really goes away, it just becomes more corporate.

The problem with Google arbitrarily picking winners and losers is the winners will mass produce doorway pages. With much of the competition (including many of the original content creators) removed from the search results, this sort of activity is simply printing money.

As bad as that sounds, it is actually even worse than that. Today Google Alerts showed our brand being mentioned on a group-piracy website built around a subscription model of selling 3rd party content without permission! As annoying as that feels, of course there are going to be some dirtbags along the way that you have to deal with from time to time. But now that the content farm update has gone through, some of the original content producers are no longer ranking for their own titles, whereas piracy sites that stole their content are now the canonical top ranked sources!

Google never used to put piracy sites on the first page of results for my books, this is a new feature on their part, and I think it goes a long way to show that their problem is cultural rather than technical. Google seems to have reached the conclusion that since many of their users are looking for pirated eBooks, quality search results means providing them with the best directory of copyright infringements available. And since Google streamlined their DMCA process with online forms, I couldn’t discover a method of telling them to remove a result like this from their search results, though I tried anyway.
... I feel like the guy who was walking across the street when Google dropped a 1000 pound bomb to take out a cockroach - Morris Rosenthal

Way to go Google! +1 +1

Too clever by half.

Google's Matt Cutts Talks Down Keyword Domain Names

I have long documented Google's preference toward brands, while Google has always stated that they don't really think of brand.

While not thinking of brands, someone on the Google UI team later added navigational aids to the search results promoting popular brands, highlighting the list of brands with the label "brands" before the list of links.

Take a look at what Matt Cutts shares in the following video, where he tries to compare brand domain names vs keyword domain names. He highlights brand over and over again, and then when he talks about exact match domains getting a bonus or benefit, he highlights that Google may well dial that down soon.

Now if you are still on the fence, let me just give you a bit of color, in that we have looked at the rankings and the weights that we give to keyword domains, & some people have complained that we are giving a little too much weight for keywords in domains. So we have been thinking about adjusting that mix a bit and sort of turning the knob down within the algorithm, so that given 2 different domains it wouldn't necessarily help you as much to have a domain name with a bunch of keywords in it. - Matt Cutts

For years the Google algorithm moved in one direction, and that was placing increased emphasis on brand and domain authority. That created the content farm problem, but with the content farm update they figured out how to dial down a lot of junk hollow authority sites. They were able to replace "on-topic-ness" with "good-ness," according to the search quality engineer who goes by the nickname moultano. As part of that content farm update, they dialed up brands to the point where now doorway pages are ranking well (so long as they are hosted on brand websites).

Google keeps creating more signals from social media and how people interact with the search results. A lot of those types of signals are going to end up favoring established brands which have large labor forces & offline marketing + distribution channels. Google owns about 97% of the mobile search market, so more and more of that signal will eventually end up bleeding into the online world.

In addition to learning from the firehose of mobile search data, Google is also talking about selling hotel ads on a price-per-booking basis. Google can get a taste of any transaction simply by offering free traffic in exchange for giving them the data needed to make a marketplace & then requiring access to the best deals & discounts:

It is believed that Google requires participating hotels to provide Google Maps with the lowest publicly available rates, for stays of one to seven nights, double occupancy, with arrival days up to 90 days ahead.

In a world where Google has business volume data, clientele demographics, pricing data, and customer satisfaction data for most offline businesses, they don't really need to place too much weight on links or domain names. Businesses can be seen as being great simply by being great.*

(*and encouraging people to stuff the ballot box for them with discounts :D)

Classical SEO signals (on-page optimization, link anchor text, domain names, etc.) have value up until a point, but if Google is going to keep mixing in more and more signals from other data sources then the value of any single signal drops. I haven't bought any great domain names in a while, and with Google's continued brand push and Google coming over the top with more ad units (in markets like credit cards and mortgages) I am seeing more and more reason to think harder about brand. It seems that is where Google is headed. The link graph is rotted out by nepotism & paid links. Domain names are seen as a tool for speculation & a shortcut. It is not surprising Google is looking for more signals.

How have you adjusted your strategies of late? What happens to the value of domain names if the EMD bonus goes away & Google keeps adding other data sources?

A Thought Experiment on Google Whitelisting Websites

Google has long maintained that "the algorithm" is what controls rankings, except for sites which are manually demoted for spamming, getting hacked, delivering spyware, and so on.

At the SMX conference it was revealed that Google uses whitelisting:

Google and Bing admitted publicly to having ‘exception lists’ for sites that were hit by algorithms that should not have been hit. Matt Cutts explained that there is no global whitelist but for some algorithms that have a negative impact on a site in Google’s search results, Google may make an exception for individual sites.

The idea that "sites rank where they deserve, with the exception of spammers" has long been pushed to help indemnify Google from potential anti-competitive behavior. Google's marketing has further leveraged the phrase "unique democratic nature of the web" to highlight how PageRank originally worked.

But why don't we conduct a thought experiment for the purpose of thinking through the differences between how Google behaves and how Google doesn't want to be perceived as behaving?

Let's cover the negative view first. The negative view is that either Google has a competing product, or a Google engineer dislikes you and goes out of his way to torch your stuff simply because he is holding onto a grudge. Given Google's current monopoly-level marketshare in most countries, such behavior would be seen as unacceptable if Google were just picking winners and losers based on their business interests.

The positive view is that "the algorithm handles almost everything, except some edge cases of spam." Let's break down that positive view a bit.

  • Off the start, consider that Google engineers write the algorithms with set goals and objectives in mind.
    • Google only launched universal search after Google bought Youtube. Coincidence? Not likely. If Google had rolled out universal search before buying Youtube then they likely would have increased the price of Youtube by 30% to 50%.
    • Likewise, Google trains some of their algorithms with human raters. Google seeds certain questions & desired goals in the minds of raters & then uses their input to help craft an algorithm that matches their goals. (This is like me telling you I can't say the number 3, but I can ask you to add 1 and 2 then repeat whatever you say :D)
  • At some point Google rolls out a brand-filter (or other arbitrary algorithm) which allows certain favored sites to rank based on criteria that other sites simply cannot match. It allows some sites to rank with junk doorway pages while demoting other websites.
  • To try to compete with that, some sites are forced to either live in obscurity & consistently shed marketshare in their market, or be aggressive and operate outside the guidelines (at least in spirit, if not on a technical basis).
  • If the site operates outside the guidelines there is potential that they can go unpenalized, get a short-term slap on the wrist, or get a long-term hand-issued penalty that can literally last for up to 3 years!
  • Now here is where it gets interesting...
    • Google can roll out an automated algorithm that is overly punitive and has a significant number of false positives.
    • Then Google can follow up by allowing nepotistic businesses & those that fit certain criteria to quickly rank again via whitelisting.
    • Sites doing the exact same things as the whitelisted sites might be crushed for it & get a cold shoulder upon review.

You can see that even though it is claimed "TheAlgorithm" handles almost everything, they can easily inject their personal biases to decide who ranks and who does not. "TheAlgorithm" is first and foremost a legal shield. Beyond that it is a marketing tool. Relevancy is likely third in line in terms of importance (how else could one explain the content farm issue getting so out of hand for so many years before Google did something about it?).

Doorway Pages Ranking in Google in 2011?

When Google did the Panda update they highlighted that not only did some "low quality" sites get hammered, but that some "high quality" sites got a boost. Matt Cutts said: "we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side."

Here is the problem with that sort of classification system: doorway pages.

The following Ikea page was ranking on page 1 of the search results for a fairly competitive keyword.

Once you strip away the site's navigation there are literally only 20 words on that page. And the main body area "content" for that page is a link to a bizarre, confusing, and poorly functioning Flash tour which takes a while to load.

If you were trying to design the worst possible user experience & wanted to push a "minimum viable product" page into the search results, you really couldn't possibly do much worse than that Ikea page (at least not without delivering malware and such).

I am not accusing Ikea of doing anything spammy. They just have terrible usability on that page. Their backlinks to that page are few in number & look just about as organic as they could possibly come. But not that long ago companies like JC Penney and Overstock were demoted by Google for building targeted deep links (links they needed in order to rank, but which were allegedly harming search relevancy & Google's user experience). Less than a month later Google arbitrarily changed their algorithm to where other branded sites simply didn't need many (or in some cases any) deep links to get in the game, even if their pages were pure crap.

We are told the recent "content farm" update was to demote low quality content. If that is the case, then how does a skeleton of a page like that rank so high? How did that Ikea page go from ranking on the third page of Google's results to the first one? I think Google's classifier is flashing a new set of exploits for those who know what to look for.

A basic tip? If you see Google ranking an information-less page like that on a site you own, that might be a green light to see how far you can run with it. Give GoogleBot the "quality content" it seeks. Opportunities abound!

Quick & Dirty Competitive Research for Keywords

There are so many competitive research tools on the market. We reviewed some of the larger ones here but there are quite a few more on the market today.

The truth is that you can really get a lot of good, usable data to give you an idea of what the competition is likely to be by using free tools or the free version of paid tools.

Some of the competitive research tools out there (the paid ones) really are useful if you are going to scale way up with some of your SEO or PPC plans, but many of the paid versions are overkill for a lot of webmasters.

Choosing Your Tools

Most tools come with the promise of “UNCOVERING YOUR COMPETITORS’ BEST _____”.

That blank can be links, keywords, traffic sources, and so on. As we know, most competitive research tools are rough estimates at best and almost useless estimates at worst. Unless you get your hands on your competition’s analytics reports, you are still kind of best-guessing. In this example we are looking for the competitiveness of a core keyword.

Best-guessing really isn’t a bad thing so long as you realize that what you are doing is really triangulating data points and looking for patterns across different tools. Keep in mind many tools use Google’s data, so you’ll want to try to reach beyond Google’s data points a bit and hit up other data sources as well.

The lure of competitive research is to get it done quickly and accurately. However, gauging the competition of a keyword or market can’t really be done with a push of the button as there are factors that come into play which a push-button tool cannot account for, such as:

  • how hard is the market to link build for?
  • is the vertical dominated by brands and thick EMDs?
  • what is your available capital?
  • are the ranking sites knowledgeable about SEO or are they mostly ranking on brand authority/domain authority? (how tight is their site structure, how targeted is their content, etc)
  • is Google giving the competing sites a brand boost?
  • is Google integrating products, images, videos, local results, etc?

Other questions might be stuff like "how is Google Instant skewing this keyword marketplace?" or "is Google firing a vertical search engine for these results (like local)?" or "is Google placing 3 AdWords ads at the top of the search results?" or "is Google making inroads into the market?" like they are with mortgage rates.

People don't search in an abstract mathematical world, but by using their fingers and eyes. Looking at the search results matters. Quite a few variables come into play which require some human intuition and common sense. A research tool is only as good as the person using it; you have to know what you are looking at & what to be aware of.
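That said, you can still make your triangulation a little more systematic. Below is a rough, illustrative sketch of combining a few hand-gathered observations into a single difficulty score; the metric names, thresholds, and weights are my own arbitrary assumptions, not the output of any particular tool:

```python
# Rough, illustrative "triangulation" of keyword difficulty from hand-gathered metrics.
# The metric names, thresholds, and weights are arbitrary assumptions; the point is
# combining several weak signals into one rough, comparable read.
def keyword_difficulty(metrics):
    """metrics: dict of observations pulled manually from free tools / the SERP itself."""
    score = 0
    score += min(metrics.get("unique_linking_domains_to_top_pages", 0) / 50.0, 5)  # link competition
    score += 3 if metrics.get("serp_dominated_by_brands") else 0
    score += 2 if metrics.get("exact_match_domains_in_top_10", 0) >= 3 else 0
    score += 2 if metrics.get("universal_search_blocks", 0) >= 2 else 0            # images/news/shopping pushing listings down
    score += 1 if metrics.get("adwords_ads_above_organic", 0) >= 3 else 0
    return score  # higher = tougher; interpret relative to other keywords scored the same way

example = {
    "unique_linking_domains_to_top_pages": 300,
    "serp_dominated_by_brands": True,
    "exact_match_domains_in_top_10": 1,
    "universal_search_blocks": 3,
    "adwords_ads_above_organic": 3,
}
print(keyword_difficulty(example))  # 5 + 3 + 0 + 2 + 1 = 11 on this made-up scale
```

The absolute number means nothing; scoring several candidate keywords the same way and comparing them is where it earns its keep.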

Getting the Job Done

In this example I decided to use the following tools: SEO for Firefox and the free version of Open Site Explorer.

Yep, just 2 free tools.... :)

So we are stipulating that you’ve already selected a keyword. In this case I picked a generic keyword for the purposes of going through how to use the tools. Plug your keyword into Google, flip on SEO for Firefox and off you go!

This is actually a good example of where a push button tool might bite the dust. You’ve got Related Search breadcrumbs at the top, Images in the #1 spot, Shopping in the #3 spot, and News (not pictured) in the #5 spot.

So wherever you thought you might rank, just move yourself down 1-3 spots depending on where you would be in the SERPs. This can have a large effect on potential traffic and revenue so you’ll want to evaluate the SERP prior to jumping in.

You might decide that you need to shoot for 1 or 2 rather than top 3 or top 5 given all the other stuff Google is integrating into this results page. Or you might decide that the top spot is locked up and the #2 position is your only opportunity, making the risk to reward ratio much less appealing.

With SEO for Firefox you can quickly see important metrics like:

  • Yahoo! links to domain/page
  • domain age
  • Open Site Explorer and Majestic SEO link data
  • presence in strong directories
  • potential, estimated traffic value from SEMrush

Close up of SEO for Firefox data:

Basically by looking at the results page you can see what other pieces of universal search you’ll be competing with, whether the home page or a sub-page is ranking, and whether you are competing with brands and/or strong EMDs.

With SEO for Firefox you’ll see all of the above plus the domain age, domain links, page links, listings in major directories, position in other search engines, and so on. This will give you a good idea of potential competitiveness of this keyword for free and in about 5 seconds.

It is typically better & easier to measure the few smaller sites that managed to rank rather than measuring the larger authoritative domains. Why? Well, if a smaller site with a modest link profile is ranking, that tells you the keyword is within reach, whereas a big authority domain can rank almost by default, so it tells you much less about how hard the keyword actually is.

Checking Links

So now that you know how many links are pointing to that domain/page you’ll want to check how many unique domains are pointing in and what the anchor text looks like, in addition to what the quality of those links might be.

Due to its ease of use (in addition to the data being good) I like to use Open Site Explorer from SEOmoz in these cases of quick research. I will use their free service for this example, which requires no log in, and they are even more generous with data when you register for a free account.

The first thing I do is head over to the anchor text distribution of the site or page to see if the site/page is attracting links specific to the keyword I am researching:

What’s great here is you can see the top 5 instances of anchor text usage, how many total links are using that term, and how many unique domains are supplying those total links.

You can also see data relative to the potential quality of the entire link profile in addition to the ratio of total/unique domains linking in.
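If you export that anchor text table, the same numbers are easy to summarize offline. Here is a small sketch, assuming a hypothetical CSV export with anchor_text, total_links, and unique_linking_domains columns (your tool's actual export format will differ):

```python
# Illustrative sketch: summarise an exported anchor text table (hypothetical CSV with
# columns: anchor_text, total_links, unique_linking_domains). Open Site Explorer and
# similar tools let you export this kind of data; the exact format will differ.
import csv
from collections import defaultdict

def anchor_summary(path, keyword):
    totals = defaultdict(int)
    uniques = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            anchor = row["anchor_text"].lower()
            totals[anchor] += int(row["total_links"])
            uniques[anchor] += int(row["unique_linking_domains"])
    for anchor in sorted(totals, key=totals.get, reverse=True)[:5]:
        ratio = totals[anchor] / max(uniques[anchor], 1)   # high ratio often hints at sitewide links
        flag = " <-- matches target keyword" if keyword.lower() in anchor else ""
        print(f"{anchor}: {totals[anchor]} links from {uniques[anchor]} domains "
              f"(ratio {ratio:.1f}){flag}")

# anchor_summary("competitor_anchors.csv", "blue widgets")  # file name and keyword are placeholders
```

A high total-to-unique ratio on a given anchor is often a hint of sitewide or widget links rather than lots of independent votes.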

You probably won’t want or need to do this for every single keyword you decide to pursue. However, when looking at a new market, a potential core keyword, or if you are considering buying an exact match domain for a specific keyword you can accomplish a really good amount of competitive research on that keyword by using a couple free tools.

Types of Competitive Research

Competitive research is a broad term and can go in a bunch of different directions. As an example, when first entering a market you would likely start with some keyword research and move into analyzing the competition of those keywords before you decide to enter or fully enter the market.

As you move into bigger markets and start to do more enterprise-level competitive research specific to a domain, link profiles, or a broader market you might move into some paid tools.

Analysis paralysis is a major issue in SEO. Many times you might find that those enterprise-level tools really are overkill for what you might be trying to do initially. Gauging the competitiveness of a huge keyword or a lower volume keyword really doesn’t change based on the money you throw at a tool. The data is the data, especially when you narrow down the research to a keyword, keywords, or domains.

Get the Data, Make a Decision

So with the tools we used here you are getting many of the key data points you need to decide whether pursuing the keyword or keywords you have chosen is right for you.

Some things the tools cannot tell you are questions we talked about before:

  • how much capital can you allocate to the project?
  • how hard are you willing to work?
  • do you have a network of contacts you can lean on for advice and assistance?
  • do you have enough patience to see the project through, especially if ranking will take a bit... can you wait on the revenue?
  • is creativity lacking in the market and can you fill that void or at least be better than what’s out there?

Only you can answer those questions :)

The Problem With Following Prescription

You can't learn great SEO from an e-book. Or by buying software tools.

Great SEO is built on an understanding.

Reducing SEO To Prescription

One of the problems with reductive, prescribed SEO approaches (i.e. step one: research keywords, step two: put keyword in title, etc.) can be seen in the recent "Content Farm" update.

When Google decide sites are affecting their search quality, they look for a definable, repeated footprint made by the sites they deem to be undesirable. They then design algorithms that flag and punish the sites that use such a footprint.

This is why a lot of legitimate sites get taken out in updates. A collection of sites may not look, to a human, like problem sites, but the algo sees them as being the same thing, because their technical footprint is the same. For instance, a website with a high number of 250-word pages is an example of a footprint. Not necessarily an undesirable one, but a footprint nevertheless. Similar footprints exist amongst ecommerce sites heavy in sitewide templating but light on content unique to the page.
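To make the footprint idea concrete, here is an illustrative sketch that flags a set of pages when a large share of them are thin; the 250-word and 60% thresholds are made-up numbers and the HTML stripping is deliberately crude, since the point is the concept rather than any real detection method Google uses:

```python
# Illustrative sketch of the "footprint" idea: flag a set of pages where a large share
# are thin (roughly 250 words or fewer). Word count on stripped HTML is a crude
# stand-in for whatever signals a real engine might use.
import re

def word_count(html):
    text = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)  # drop scripts/styles
    text = re.sub(r"<[^>]+>", " ", text)                       # drop remaining tags
    return len(text.split())

def thin_content_footprint(pages_html, threshold_words=250, threshold_share=0.6):
    """pages_html: list of HTML strings for a site's pages. Returns (share_thin, flagged)."""
    counts = [word_count(p) for p in pages_html]
    thin = sum(1 for c in counts if c <= threshold_words)
    share = thin / len(counts) if counts else 0.0
    return share, share >= threshold_share

# Example with two dummy pages:
pages = ["<html><body><p>" + ("word " * 120) + "</p></body></html>",
         "<html><body><p>" + ("word " * 900) + "</p></body></html>"]
print(thin_content_footprint(pages))  # (0.5, False) with these made-up thresholds
```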

Copying successful sites is a great way to learn, but can also be a trap. If you share a similar footprint, having followed the same SEO prescription, you may go down with them if Google decides their approach is no longer flavor of the month.

The Myth Of White Hat

A lot of sites that get taken out are white hat, i.e. sites that follow Google's webmaster guidelines.

It's a reasonably safe approach, but if you understand SEO, you'll soon realize that following a white hat prescription offers no guarantees of ranking, nor does it offer any guarantees you won't be taken out.

The primary reason there aren't any guarantees comes down to numbers. Google knows that when it makes a change, many sites will lose. They also know that many sites will win i.e. replace the sites that lost. If your site drops out, Google aren't bothered. There will be plenty of other sites to take your place. Google are only concerned that their users perceive the search results to be of sufficient quality.

The exception is if your site really is a one-of-a-kind. The kind of site that would embarrass Google if users couldn't find it. BMW, for example, in response to the query "BMW".

It's not fair, but we understand that's just how life is.

An Understanding

For those readers new to SEO, in order to really grasp SEO, you need to see things from the search engines' point of view.

Firstly, understand the search engine's business case. The search engine can only make money if advertisers pay for search traffic. If it were too easy for those sites which are likely to use PPC to rank highly in the natural results, then the search engine's business model would be undermined. Therefore, it is in the search engine's interest to "encourage" purely commercial entities to use PPC, not SEO. One way they do this is to make the natural results volatile and unpredictable. There are exceptions, covered in my second point.

Secondly, search engines must provide sufficient information quality to their users. This is an SEO opportunity, because without webmasters producing free-to-crawl, quality content, there can be no search engine business model. The search engines must nurture this ecosystem.

If you provide genuine utility to end users, the search engines have a vested interest in your survival, perhaps not as an individual, but certainly as a group i.e. "quality web publishers". Traffic is the lifeblood of the web, and if quality web publishers aren't fed traffic, they die. The problem, for webmasters, is that the search engines don't care about any one "quality publisher", as there are plenty of quality publishers. The exception is if you're the type of quality publisher who has a well recognized brand, and would therefore give the impression to users that Google was useless if you didn't appear.

Thirdly, for all their cryptic black box genius, search engines aren't all that sophisticated. Yes, the people who run them are brilliant. The problems they solve are very difficult. They have built what, only decades ago, would have been considered magic. But, at the end of the day, it's just a bit of maths trying to figure out a set of signals. If you can work out what that set of signals are, the maths will - unblinkingly - reward you. It is often said that in the search engine wars, the black hats will be the last SEOs standing.

Fourthly, the search engines don't really like you. They identified you as a business risk in their statement to investors. You can, potentially, make them look bad. You can undermine their business case. You may compete with their own channels for traffic. They tolerate you because they need publishers making their stuff easy to crawl, and not locking their content away behind paywalls. Just don't expect a Christmas card.

SEO Strategy Built On Understanding

Develop strategies based on how a search engine sees the world.

For example, if you're a known brand, your approach will be different from that of a little-known, generic publisher. There isn't really much risk you won't appear, as you could embarrass Google if users can't find you. This is the reason BMW were reinstated so quickly after falling foul of Google's guidelines, but the same doesn't necessarily apply to lesser known publishers.

If you like puzzles, then testing the algorithms can give you an unfair advantage. It's a lot harder than it used to be, but where there is difficulty, there is a barrier to entry to those who come later. Avoid listening to SEO echo chambers where advice may be well-meaning, but isn't based on rigorous testing.

If you're a publisher, not much into SEO wizardry, and you create content that is very similar to content created by others, you should focus on differentiation. If there are 100's of publishers just like you, then Google doesn't care if you disappear. Google do need to find a way to reward quality, especially in niches that aren't well covered. Be better than the rest, but if you're not, slice your niche finer and finer, until you're the top dog in your niche. You should focus on building brand, so you can own a search stream. For example, this site owns the search stream "SEO Book", a stream Aaron created and built up.

Remember, search engines don't care about you, unless there's something in it for them.

Google Update Panda

Google tries to wrestle back index update naming from the pundits, naming the update "Panda". Named after one of their engineers, apparently.

The official Google line - and I'm paraphrasing here - is this:

Trust us. We're putting the bad guys on one side, and the good guys on the other

I like how Wired didn't let them off the hook.

Wired persisted:

Wired.com: Some people say you should be transparent, to prove that you aren’t making those algorithms to help your advertisers, something I know that you will deny.

Singhal: I can say categorically that money does not impact our decisions.

Wired.com: But people want the proof.

This answer, from Matt Cutts, was interesting:

Cutts: If someone has a specific question about, for example, why a site dropped, I think it’s fair and justifiable and defensible to tell them why that site dropped. But for example, our most recent algorithm does contain signals that can be gamed. If that one were 100 percent transparent, the bad guys would know how to optimize their way back into the rankings

Why Not Just Tell Us What You Want, Already!

Blekko makes a big deal about being transparent and open, but Google have always been secretive. After all, if Google want us to produce quality documents their users like and trust, then why not just tell us exactly what a quality document their users like and trust looks like?

Trouble is, Google's algorithms clearly aren't that bulletproof, as Google admit they can still be gamed, hence the secrecy. Matt says he would like to think there would be a time they could open source the algorithms, but it's clear that time isn't now.

Do We Know Anything New?

So, what are we to conclude?

  • Google can be gamed. We kinda knew that....
  • Google still aren't telling us much. No change there....

Then again, there's this:

Google have filed a patent that sounds very similar to what Demand Media does, i.e. it looks for SERP areas that are under-served by content and prompts writers to write for them.

The patent basically covers a system for identifying search queries which have low quality content and then asking either publishers or the people searching for that topic to create some better content themselves. The system takes into account the volume of searches when looking at the quality of the content, so for bigger keywords the content would need to be better in order for Google to not need to suggest somebody else writes something.

If Google do implement technology based on this patent, then it would appear they aren't down on the "Content Farm" model. They may even integrate it themselves.

Until then....

How To Avoid Getting Labelled A Content Farmer

The question remains: how do you prevent being labelled as a low-quality publisher, especially when sites like eHow remain untouched, yet Cult Of Mac gets taken out? Note: Cult Of Mac appears to have been reinstated, but one wonders if that was the result of the media attention, or an algo tweak.

Google want content their users find useful. As always, they're cagey about what "useful" means, so those who want to publish content, and want to rank well, but do not want to be confused with a content farm, are left to guess. And do a little reverse-engineering.

Here's a stab, based on our investigations, the conference scene, Google's rhetoric, and pure conjecture thus far:

  • A useful document will pass a human inspection
  • A useful document is not ad heavy
  • A useful document is well linked externally
  • A useful document is not a copy of another document
  • A useful document is typically created by a brand or an entity which has a distribution channel outside of the search channel
  • A useful document does not have a 100% bounce rate followed by a click on a different search result for that same search query ;)

Kinda obvious. Are we off-base here? Something else? What is the difference, as far as the algo is concerned, between eHow and Suite 101? Usage patterns?

Still doesn't explain YouTube, though, which brings us back to:

Wired.com: But people want the proof

YouTube, the domain, is incredibly useful, but some pages - not so much. Did YouTube get hammered by update Panda, too?

Many would say that's unlikely.

I guess "who you know" helps.

In the Panda update some websites got owned. Others are owned and operated by Google. :D

Tracking Offline Conversions for Local SEO
