Thursday, May 12, 2005

Cracking the Google code...

Under the Googlescope is the intriguing title of the latest featured article of SEO-News (formerly AllBusinessNews) by Lawrence Deon.

It focuses on some serious, albeit belated, crystal ball gazing about United States Patent Application 20050071741, published on March 31, 2005. Many have commented on this Google patent, which is generally regarded as the smoking gun pointing to the Sandbox effect.

Lawrence’s article is a lengthy one that tries to cover all bases but, as a result, ends up (perhaps unintentionally) scaremongering.

In Lawrence’s approach to SEO, whatever you do is basically wrong, because Google could (emphasis on the conditional here) interpret it as sp@m.

As regards link building, the article is probably right. Fast acquisition of inbound links may well trigger the Sandbox filter, as the sketch below illustrates. But the Sandbox period isn’t forever: it’s probably a probation period that recently launched sites will have to endure.
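
Purely to make "fast acquisition of inbound links" concrete, here is a minimal sketch of what a crude link-velocity check might look like. Everything in it is invented for illustration: the function name, the weekly counts and the threshold come from me, not from the patent or from Google.

```python
# Hypothetical sketch only: a crude "link velocity" check of the kind the
# Sandbox speculation describes. Names and thresholds are invented for
# illustration; nothing here comes from the patent text or from Google.

def looks_like_unnatural_link_growth(inbound_links_per_week, max_ratio=3.0):
    """Flag a site whose weekly count of new inbound links suddenly jumps.

    inbound_links_per_week: list of new inbound links counted each week.
    max_ratio: how far above its own running average a week may go
               before being flagged (arbitrary illustrative threshold).
    """
    for week in range(1, len(inbound_links_per_week)):
        history = inbound_links_per_week[:week]
        average = sum(history) / len(history)
        if average > 0 and inbound_links_per_week[week] > max_ratio * average:
            return True  # sudden spike relative to the site's own history
    return False

# A new niche site that picks up a batch of links all at once in week four:
print(looks_like_unnatural_link_growth([2, 3, 2, 40]))    # True
print(looks_like_unnatural_link_growth([2, 3, 4, 5, 6]))  # False
```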

Rather more worrying are Lawrence’s comments on changing content. I quote from the text:


Unfortunately, this means that Google's sandbox phenomenon and/or the aging delay may apply to your web site if you change too many of your web pages at once.

This conclusion is largely based on a section of the patent itself, so it can't just be dismissed out of hand:


A significant change over time in the set of topics associated with a document may indicate that the document has changed owners and previous document indicators, such as score, anchor text, etc., are no longer reliable.


Similarly, a spike in the number of topics could indicate sp@m. For example, if a particular document is associated with a set of one or more topics over what may be considered a 'stable' period of time and then a (sudden) spike occurs in the number of topics associated with the document, this may be an indication that the document has been taken over as a 'doorway' document.


Another indication may include the sudden disappearance of the original topics associated with the document. If one or more of these situations are detected, then [Google] may reduce the relative score of such documents and/or the links, anchor text, or other data associated with the document.


This isn’t a sandbox anymore; words like conundrum or quagmire seem more appropriate to me.
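
To make the quoted patent language a little more concrete, here is a rough sketch of the kind of topic-comparison heuristic it seems to describe: compare the topics associated with a document over time, and treat a sudden spike, or the disappearance of the original topics, as suspicious. This is my illustration, not Google's code, and every name and threshold in it is invented.

```python
# Illustrative sketch only: NOT Google's implementation. It mimics the two
# signals the patent paragraphs mention -- a spike in the number of topics,
# and the disappearance of the original topics -- and returns a score
# multiplier. All thresholds are arbitrary, invented for the example.

def topic_change_multiplier(old_topics, new_topics, spike_factor=2.0):
    """Return a multiplier applied to the document's score (1.0 = no penalty)."""
    old_topics, new_topics = set(old_topics), set(new_topics)
    if not old_topics:
        return 1.0  # no stable history to compare against

    if not (old_topics & new_topics):
        return 0.1  # original topics have vanished: possible change of owner
    if len(new_topics) >= spike_factor * len(old_topics):
        return 0.5  # sudden spike in topics: possible "doorway" document
    return 1.0

# The scenario below -- a thin niche site that adds a batch of new, on-topic
# articles -- trips the "spike" branch even though the content is legitimate:
old = {"blue widgets", "new zealand"}
new = {"blue widgets", "new zealand", "widget care", "widget history",
       "widget suppliers", "widget reviews"}
print(topic_change_multiplier(old, new))  # 0.5
```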

Scenario:



So you’ve got a small, well-designed niche site with some inbound links and not a lot of content. You’re getting nowhere fast. You set about writing a number of high-quality articles with user-friendly information. Or, even better, you get a professional copywriter (not necessarily an SEO) to write the material to your spec. You add the new content to your site, for all and sundry to read and enjoy your mastery of the subject of "blue widgets in New Zealand". Your already none-too-bright SERP positions now really plummet!

Who would be helped by such a scenario? Certainly not the site owner, but Google doesn’t care about them, so that’s not the point. Does it enhance the searcher’s experience? If the new content is truly worthwhile, rather the opposite: this is content that would benefit those interested in "blue widgets in New Zealand", but not if it’s ranked on page zilch of the SERPs.

Does it help Google? If it’s as content-hungry as it claims to be, then certainly not.

The truth of the matter is that the patent is fantastically vague. If Google wants to be the sharpest tool in the box, it will have to come up with something less blunt than simple content spikes to detect cloakers (reading between the lines, that’s what this is all about). Many sites add legitimate new content on a very regular basis, even out of necessity (news sites, newspapers, blogs and so on, all of which live by the recency of their content). For these sites there are no "content spikes"; there is only... content.

Lawrence’s advice is basically: do nothing, but if you’re going to do anything at all, do it really slowly!


The logical consequence is that I shouldn’t have written this post at all [something Lawrence Deon would probably agree with]. Yet almost all posts on this very young blog have been indexed by Google and are already attracting some traffic.

Google would do better to concentrate on its semantic understanding of Web pages: despite all the filtering hype, Google's search results are often incredibly irrelevant. My logs for this blog showed that this post was recently found in Google under "chuckle brothers sayings", a term that only partly appears, once, near the end of the page! Look for yourself: the post is about as relevant to that term as it is to, say, "cows eat saffron". A human editor wouldn’t in a million years have concluded that the document had anything to do with, or said anything about, "chuckle brothers sayings".

Final word. I’m no great lover of conspiracy theories, but it’s hard to escape the feeling that Google et al. are complicating matters around the main search results so hopelessly that search marketers are being driven in droves to paid inclusion programs… It’s certainly a fact that, when it comes to developing new SEO technology, more effort is going into engineering successful PPC campaigns, creating highly converting landing pages and the like. Perhaps natural search results will become a thing of the past, sooner rather than later.

