The Overflow Effect and Butterfly SEO

At a certain point in some SERPs, especially highly competitive ones, it becomes really hard to pick a winner. The differences between the top 10, 20 and 30 sites are so minuscule that a search engine could probably apply a random order generator and the searching public would not notice the difference. They are all great sites. Or they are all equally bad, take your pick. The point is, what do you do then?

There are several options. First, the search engine can stop worrying about it: no matter what order it shows, as long as the top 50 are all good results, it doesn't have to spend resources sorting them much further.

After all, at a certain point, who cares (aside from the owner of the website in question)?

If I get a great list of sites in the top 10, do I really care as a searcher whether the ones in the top 20 are good too? And do I really care what order they are listed in at that point? An SEO might care, a website owner would care, but the users would not.

We see this effect with DMOZ, where the editors care a lot less about adding a website to a sub-directory that already has 100 other good sites on the exact same subject than they do about adding sites to areas with only a couple of good results. In some over-stuffed categories, it would take a miracle (or a completely off-the-wall site) to get listed, since the number of good sites is already so large that the editor's time spent checking and listing the site (no matter how small that may be) outweighs whatever positive comes from adding site #389 to a category. The visitors don't usually go past the first 20 or so, anyway.

This effect completely changed the positioning and importance of directories once they reached the "overflow" threshold. Yahoo first, then DMOZ.

Now, with so many websites on the internet, and so many being launched every day, it appears that we are starting to see this "overflow" effect within search engines for some SERPs. I have no doubt that it will continue.

Once you reach this point, as a software engineer you have to make a decision: "Do I continue to refine this result, spending more and more resources on detecting smaller and smaller differences that the end user doesn't even care about, or do I spend those resources in areas that the user does care about?"

On the other hand, I know a lot of engineers. They usually don't have an "it's good enough" mindset, but rather an "it can be better" mindset, particularly if they are young and ambitious, like Google's engineers. Especially if they have lots of powerful tools to work with.

So what would happen if a search engine decided that there really was a difference in there someplace, and that difference mattered? You would end up seeing either 1) smaller and smaller differences having a larger and larger influence on the SERPs, or 2) a movement towards completely different or new measurement tools that offered the ability to measure things that could not be measured before.

The reason I wonder about these things is that this creates 2 possible scenarios in a highly competitive SERP (which I tend to be in, lately):

1) "Once the top 30+ are all passing the quality checks, we don't really care what order they are in". The result in this case is likely to be sorted out via the proverbial "butterfly effect", named after the idea that a single flap of a butterfly's wings could, in a complicated self-referencing system like a weather system (or search engine), cause a hurricane to occur in another part of the world.

This could result in what I'll call "Butterfly SEO", where, once you get to a certain level of optimization, the things that affect your rankings are things that are less and less obvious, and more and more technical. Technicians (and spammers) love this. I know for a fact that in certain SERPs you can see this effect, where something that traditionally isn't a problem suddenly makes or breaks your rankings.
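To make the butterfly scenario concrete, here is a minimal sketch in Python. The site names and scores are entirely invented for illustration, not real ranking data; the point is only that when scores are near-tied, a nudge far too small for any searcher to perceive can reorder the whole list:

```python
# Invented, near-identical quality scores for the top results.
scores = {
    "site_a": 0.91234,
    "site_b": 0.91233,
    "site_c": 0.91232,
    "site_d": 0.91231,
}

def rank(scores):
    """Sort sites by score, highest first."""
    return [site for site, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

print(rank(scores))  # site_a comes first

# A tiny, obscure signal nudges one score by 0.00005 -- far below
# anything a user would notice as a quality difference...
scores["site_d"] += 0.00005

print(rank(scores))  # ...yet site_d jumps from last place to first
```

That last-to-first jump from a fifth-decimal-place change is exactly the kind of outcome that makes the influential factors "less and less obvious, and more and more technical".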

2) "Since we are having a hard time figuring out which sites are better than others at a certain point, we need to start measuring criteria other than the traditional ones". This is interesting, because instead of needing to get pickier and pickier about links, content, etc., the search engine begins to look at areas that are not normally looked at, and thus more likely to show meaningful and measurable differences in the sites listed. This implies that a more holistic, less regimented approach to SEO would work better than just pushing the same traditional buttons harder and harder, over and over again. Or at least a change in SEO tactics that also addresses the new criteria. I see indications of this type of thinking in the aging delay and other similar issues.
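One way to picture scenario 2 is a two-stage sort: collapse near-tied relevance scores into one bucket, then let a new, previously unmeasured signal order the sites within that bucket. The sketch below uses domain age in days as an invented stand-in for that new signal (echoing the aging delay); the threshold, scores, and site names are all assumptions made for the example:

```python
TIE_THRESHOLD = 0.001  # scores closer than this are treated as equal

sites = [
    {"name": "site_a", "relevance": 0.912, "age_days": 120},
    {"name": "site_b", "relevance": 0.913, "age_days": 2400},
    {"name": "site_c", "relevance": 0.912, "age_days": 900},
]

def sort_key(site):
    # Bucket the relevance score so near-ties collapse into one bucket,
    # then let the secondary signal order sites within the bucket.
    bucket = round(site["relevance"] / TIE_THRESHOLD)
    return (-bucket, -site["age_days"])

ranked = [s["name"] for s in sorted(sites, key=sort_key)]
print(ranked)  # site_a and site_c tie on relevance; age breaks the tie
```

Under this design the traditional score still dominates, but among the sites it can no longer separate, only the new criterion matters, which is why pushing the same traditional buttons harder stops helping.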

These are 2 totally different approaches to SEO for highly competitive results. I have some ideas on the likely actual plan, but I'd be interested in hearing what other people think, first.

My opinion, as usual.


1 comment:

Mike said...

Dude, you seriously need an RSS feed. That is like five posts that have been superb that I have missed!

This one, in fact, was fantastic. It crystallised my thoughts on the issue, and is something I would like to expand upon.

Nice, thought provoking work :)