I was questioned today by a developer who was watching a particular IP address scan his site. The IP was 188.8.131.52, registered to Microsoft Corp. at One Microsoft Way, Redmond, Washington 98052. The visitor was not sending the header information normally associated with a crawler, such as an HTTP user-agent string with a robot name and identifying info, or even a browser name.
Is Microsoft “using” Google’s search results to populate their index? Discuss Microsoft’s behavior at WebProWorld.
Its behavior made it look like a crawler, especially since it was spidering URLs that no longer existed (search engine spiders crawl site segments at regular intervals and often come back when an initial crawl left URLs uncrawled), and it was doing so at a rate of one page every 3 to 5 seconds. The visitor started at 7:37 am and was still on the site at 12:00 pm.
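For a rough sense of scale, here's a back-of-the-envelope estimate of how many requests a crawl at that pace would make over that window. The times come from the log observation above; the totals are only an approximation:

```python
# Estimate requests made between 7:37 am and 12:00 pm at a rate of
# one page every 3 to 5 seconds (times taken from the log above).
start_s = 7 * 3600 + 37 * 60   # 7:37 am, in seconds since midnight
end_s = 12 * 3600              # 12:00 pm
duration_s = end_s - start_s   # 15780 seconds

low_estimate = duration_s // 5   # slowest pace: one page per 5 seconds
high_estimate = duration_s // 3  # fastest pace: one page per 3 seconds
print(duration_s, "seconds:", low_estimate, "to", high_estimate, "requests")
```

That works out to somewhere in the neighborhood of 3,100 to 5,200 page requests in a single morning.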
Correction: the data was there after all. Here's the crawler info… msnbot/0.3 (+http://search.msn.com/msnbot.htm)
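If you want to spot this crawler in your own logs, a minimal sketch like the following pulls out the user-agent field and checks it for msnbot. This assumes the common Apache "combined" log format, and the sample line is fabricated for illustration:

```python
import re

# A fabricated Apache "combined" format line: the user-agent string
# is the last quoted field on the line.
LOG_LINE = (
    '188.8.131.52 - - [15/Jun/2004:07:37:02 -0500] '
    '"GET /old-page.html HTTP/1.1" 404 512 "-" '
    '"msnbot/0.3 (+http://search.msn.com/msnbot.htm)"'
)

def user_agent(line: str) -> str:
    """Return the last quoted field of a combined-format log line."""
    fields = re.findall(r'"([^"]*)"', line)
    return fields[-1] if fields else ""

ua = user_agent(LOG_LINE)
print(ua)                      # msnbot/0.3 (+http://search.msn.com/msnbot.htm)
print("msnbot" in ua.lower())  # True
```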
Here’s the kicker
So now you’re saying, so what, big deal. But this really is a big deal, not only because the URLs this visitor was requesting no longer exist, but because the only place these URLs can be found is in Google’s search results using site:www.sitename.com. A similar query on MSN Search doesn’t show the URLs at all, even on the beta version of their new search engine. But then, within just hours of the visitor’s exit from the site, the same search on Microsoft’s new search engine shows all of the URLs in question fully indexed in its results.
My Theory On This Mysterious Microsoft Crawler
The old MSN required a fee to be crawled by its spider. But a few months back MSN dropped the fee and said they were going to begin crawling the entire web, without charge. That’s no easy task, however. So I believe MSN is using the results from Google, and possibly even Yahoo, to get all of the pages they’ve indexed on sites that have a relatively low page count in the current MSN search engine.
First off, that’s the fastest way to get the relevant pages from a web site. Sure, they could just go to the site directly and start crawling, but in doing so they’re going to get tons of duplicate URLs, and URLs that seem different but point to the same content. Crawling Google’s results will reduce the bandwidth cost to some extent, but it will not completely take care of the duplicate content issue their spider will encounter.
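The duplicate-URL problem is easy to illustrate. Here's a minimal normalization sketch; the URLs and rules are made up for illustration, real crawlers use far more elaborate canonicalization, and lowercasing the path is only safe against case-insensitive servers:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical example: four different-looking URLs that a naive crawl
# would fetch separately even though they serve the same content.
urls = [
    "http://www.example.com/page.html",
    "http://www.example.com/page.html?sessionid=123",
    "http://www.example.com/page.html#section2",
    "http://www.example.com/PAGE.HTML",
]

def canonical(url: str) -> str:
    """Normalize a URL: lowercase host and path, drop query and fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path.lower(), "", ""))

unique = {canonical(u) for u in urls}
print(len(urls), "raw URLs ->", len(unique), "canonical URL")  # 4 raw URLs -> 1 canonical URL
```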
Secondly, crawling Google’s results can act as a qualitative measure for their new search engine. By establishing a baseline number of pages per site when the new Microsoft Search is launched, and running a comparison at regular intervals for the next six months, they’ll be able to determine internally whether their engine is finding and indexing the same links, and as many links, as Google. Call it competitive analysis or whatever you want.
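Such a comparison could be as simple as tracking pages-indexed-per-site against the Google baseline over time. A sketch with made-up site names and page counts:

```python
# Hypothetical snapshots: pages found per site (e.g. via site: queries),
# comparing a Google baseline against the new engine at intervals.
baseline_google = {"www.example.com": 1240, "www.othersite.com": 310}
msn_snapshots = {
    "month1": {"www.example.com": 400,  "www.othersite.com": 90},
    "month6": {"www.example.com": 1178, "www.othersite.com": 305},
}

def coverage(snapshot, baseline):
    """Fraction of the baseline page count the engine has indexed, per site."""
    return {site: snapshot.get(site, 0) / pages
            for site, pages in baseline.items()}

for month, snap in msn_snapshots.items():
    print(month, coverage(snap, baseline_google))
```

A coverage ratio climbing toward 1.0 over the six months would tell them their crawler is catching up to Google's index for those sites.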
So Microsoft’s Screen Scraping?
Obviously my conclusion should be taken with a grain of salt, but it’s a definite possibility. Microsoft very well could be screen scraping Google (or maybe even using their API, LOL) and crawling the URLs it finds. It makes sense as a business case, but I wonder if there are any legal issues there. I doubt it. It’s like putting garbage out to the curb: once it’s out there, it’s fair game. But I bet Google’s lawyers would have more to say than that on the case.
Has anyone out there seen similar behavior on their own sites? Please comment with your qualitative or quantitative data if so.
Jason’s article first appeared on his blog.
This article is from the WebProNews newsletter.