Exploiting Google's Proxy Weakness

I have a client. He does negotiation training and has clients like IBM. His website is http://www.negotiationdynamics.com/. He does the actual SEO on his site and phones me for help when needed, which is a business model I rather like.

Today, I got a phone call from him, and he told me he'd disappeared off of Google. Worse, his site seemed to have been hijacked. When you type in the search "negotiation training", the result that used to be his now looks like this:
You will see that instead of the URL being negotiationdynamics.com, it's now a search result from a known open proxy called firewalldown.net. Naturally, efforts to contact these guys and tell them to put in a damn robots.txt file that would exclude the spidering of results have not been successful.
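For what it's worth, the fix I'm asking firewalldown.net for is about two lines long. A robots.txt like this one, placed at the root of the proxy's domain, tells well-behaved crawlers (Google included) to stay out entirely, which is the sane default for an open proxy that has no content of its own:

```
# Hypothetical robots.txt for an open proxy site.
# Blocks all compliant crawlers from every path, so proxied
# copies of other people's sites never get indexed.
User-agent: *
Disallow: /
```

If the proxy wanted its own homepage indexed, it could instead Disallow only the path that serves proxied results, but blanket exclusion is the simpler and safer choice here.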

But wait! There's more! If you sign up now for the Google Proxy Hack we'll throw in the following for FREE:

First, Google will cache (and rank) your website (or that of a competitor you don't like):

BUT, when hapless searchers attempt to connect to the site, they are redirected to a domain reseller! Cool! Google R0X! (rolls eyes):

It's not that I blame Google for being gamed - that can happen to any company approaching the size and influence of a public resource. But I *do* blame Google 100% for indexing obvious and PUBLIC proxy results. It's sloppy programming and poor usability.

Proxies are all over the internet. Many are used for perfectly benign purposes. It's simple to identify them. The answer is not to pretend they don't exist, or to attempt to ban all of them. It's certainly not to try to find and kill proxy results one at a time by hand as they are reported, which apparently is Google's current method. The answer is much simpler.

Here is a thought: don't index URLs that have other URLs in them as a variable. Or is that too complicated? This could be done in like 20 minutes. But it hasn't been. For shame.
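To show it really is a 20-minute job, here's a sketch of the check in Python. The proxy URL pattern (`browse.php?u=...`) is a made-up example of how open proxies typically embed the target site as a query-string variable; the point is just that spotting a URL inside another URL's query string takes a few lines with the standard library:

```python
from urllib.parse import urlparse, parse_qsl

def has_embedded_url(url: str) -> bool:
    """Return True if any query-string value in `url` looks like another URL.

    A crawler could skip (or flag for review) any candidate URL where this
    returns True -- the classic fingerprint of an open-proxy result page.
    """
    query = urlparse(url).query
    return any(
        value.lower().startswith(("http://", "https://", "www."))
        for _key, value in parse_qsl(query)
    )

# An open proxy carries the real site as a variable in its query string:
print(has_embedded_url(
    "http://firewalldown.net/browse.php?u=http://www.negotiationdynamics.com/"
))  # True

# A normal page URL passes the check:
print(has_embedded_url("http://www.negotiationdynamics.com/training"))  # False
```

Obviously a production crawler would want a more careful heuristic (URL-encoded values, base64-obfuscated targets, whitelisted redirectors), but even this naive version would have kept the proxy copy of my client's site out of the index.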
