Google’s Googlebot Causes United Airlines Stock to Plummet Says Tribune Company

Apparently Google’s automated search agent Googlebot (its web crawler) caused the stock of United Airlines to plummet on September 7, 2008.

The Tribune Company, in a press release issued late today, said that “the confusion surrounding a 2002 Chicago Tribune article on the Internet this past weekend started with the inability of Google’s automated search agent “Googlebot” to differentiate between breaking news and frequently viewed stories on the websites of its newspapers.”

Apparently the Tribune Company identified problems with Googlebot several months ago and asked Google to stop using Googlebot to crawl its newspaper websites, including The Sun Sentinel (Ft. Lauderdale), for inclusion in Google News. But, the company says, even though it requested that Googlebot stop crawling, Googlebot continued to crawl The Sun Sentinel’s website. Furthermore, the Tribune Company believes “that Googlebot continues to misclassify stories.”

Let’s get one thing straight here. I am pretty familiar with Google, Google Webmaster Tools, and the technology behind websites; I’ve been doing search engine optimization since 1996. Essentially, if the Tribune Company were to verify their site in Google Webmaster Tools (which it appears they might have done already), they should be able to control how Google’s bots crawl their site. Furthermore, it is my opinion that a better robots.txt file on the website would give them even more control over Googlebot’s crawling. Also, on the back end of a website it is possible to identify Googlebot and literally stop it from crawling the site.
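On that last point, blocking a crawler at the back end usually comes down to checking the User-Agent header of each incoming request. Here is a minimal, hypothetical sketch in Python; the handler and names are illustrative, not the Tribune’s actual setup:

```python
def is_googlebot(user_agent: str) -> bool:
    """Return True if the request claims to come from Googlebot."""
    return "googlebot" in (user_agent or "").lower()

def handle_request(headers: dict) -> int:
    """Return an HTTP status code: 403 for Googlebot, 200 otherwise."""
    if is_googlebot(headers.get("User-Agent", "")):
        return 403  # refuse to serve the crawler
    return 200
```

Note that the User-Agent string can be spoofed, so production setups typically also verify the visitor’s IP (for example, via reverse DNS) before trusting that a request really came from Googlebot.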

The Tribune Company has released a summary of the sequence of events that apparently was started by Googlebot’s crawling The Sun Sentinel’s website in the late-evening and early-morning hours of September 6 and September 7. The summary is as follows:

The article, headlined “United Airlines Files for Bankruptcy,” was originally published in the Chicago Tribune in 2002, and appeared on the newspaper’s website. It then became part of the online database of Tribune’s newspapers. Our records indicate that the Googlebot crawled this story as recently as September 2 and September 3 and apparently treated it as old news.

On September 7, 2008 at 1:00:34 ET (Sept. 6, 2008, 10:00:34 PT), our records indicate that the article received a single visit. Given the fact that it was the middle of the night, traffic to the business section of the Sun Sentinel site was very low at the time. We believe that this single visit resulted in a link to the old article being created on a dynamic portion of the Sun Sentinel’s business section under a tab called “Popular Stories Business: Most Viewed.”

Again, no new story was published and the old story was not re-published; a link to the old story was merely created. The URL for the old story did not change when the link appeared.

On September 7, at 1:36:03 ET (Sept. 6, 10:36:03) a user of the Sun Sentinel’s website, viewing a story about airline policies regarding cancelled flights, clicked on the link to the old story under the “Popular Stories Business: Most Viewed” tab. Fifty-two seconds later, at 1:36:57 ET (10:36:57 PT), Googlebot visited the Sun Sentinel’s website again and crawled the story.

This time, despite the fact that the URL to the old story hadn’t changed, despite the fact that Googlebot had seen this story previously, it was apparently treated as though it was breaking news. Shortly thereafter, Google provided a link to the old story on Google News and dated it September 6, 2008. Google’s dating the story on Google News made it appear current to Google News users.

The first referral to the story from the link provided by Google News came just three minutes later, at 1:39:59 ET (10:39:59 PT).

Traffic to the old story increased during the course of the day, Sunday, September 7, with the bulk of it being referrals from Google. On Monday, September 8, traffic increased even more after a summary of the Google News story was made available to subscribers of Bloomberg News.

So, from what the Tribune Company is saying, although Googlebot had crawled this very story just days earlier and treated it as old news, Google News then treated it as if it were breaking news. Not only that, but a summary of the Google News story was subsequently made available to Bloomberg News subscribers.

Let me just ask this basic question: Do you believe everything that you hear and read on the Internet? Can we assume that everything in the news is true and correct?

Update 9/11/2008: BlogStorm has written a great post about what happened, and points out all of the duplicate content that might have caused the issue in the first place.

Also, you might want to take a look at Google’s explanation of what happened and why Google News was led to believe that this was a new story and not an old one.

Subscribe to my weekly update newsletter.



  1. Sleep Aid says

    I think this was just a very clever way for the Tribune to get some free publicity by attacking Google and making a lot of the news agencies pick up on the story. You know what they say, any publicity is good publicity.

  2. Singapore SEO says

    As mentioned above, Google has given webmasters the option to prevent crawling of certain areas within a site by adding directives to its robots.txt file. It is in the Tribune’s best interest to implement a robots.txt file that prevents the crawling of certain information within its site. They can even state in their meta tags when a page expires and instruct bots not to index certain pages once expired. These are basic on-page optimizations that should be done by the webmaster. – Rif Chia
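The two mechanisms this comment describes are robots.txt `Disallow` rules and Google’s documented `unavailable_after` meta directive. A hypothetical example of the robots.txt side (the path is illustrative, not the Sun Sentinel’s actual URL structure):

```
# robots.txt -- keep Googlebot out of an archive section:
User-agent: Googlebot
Disallow: /news/archive/
```

For page-level expiration, Google supports a robots meta tag of the form `<meta name="googlebot" content="unavailable_after: 7-Sep-2008 00:00:00 EST">` (date shown is illustrative), which asks Google to stop showing the page in results after that date.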

  3. Dinghus says

    They actually have a very valid point; however, Googlebot needs to be able to tell how old an article is before doing anything with it. This is a flaw in Googlebot. However, the newspaper’s own system for deciding what is “hot” obviously failed as well. One person clicked the link to the article, and that made it “hot” because it was a slow time of day? That is a problem. Obviously.

    If the stock really did drop due to this story going out, then it shows the power Google and other news aggregators have over us. A false story will spread just as fast as a true story and can have a heavy impact on the market.
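The “one click makes it hot” failure this comment points out is easy to see in a sketch of a naive most-viewed list. This is hypothetical code, not the Sun Sentinel’s actual system: with no minimum-view threshold, a single overnight click is enough to promote a six-year-old story.

```python
from collections import Counter

def most_viewed(view_log, top_n=5, min_views=1):
    """Naive 'Most Viewed' list: rank stories by view count in the
    recent window. With min_views=1 there is no traffic floor, so any
    single click during a quiet period puts a story on the list."""
    counts = Counter(view_log)
    return [story for story, n in counts.most_common(top_n) if n >= min_views]

# Quiet overnight window: exactly one click, on the 2002 story.
overnight = ["ual-bankruptcy-2002"]
print(most_viewed(overnight))                # the old story tops the list
print(most_viewed(overnight, min_views=25))  # a traffic floor keeps it off
```

A simple safeguard such as a minimum-view threshold, or weighting counts by the expected traffic for that time of day, would have kept a single 1 a.m. click from surfacing the link at all.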

  4. Erase Bad Credit says

    To be totally honest, I don’t think that Google is as infallible as everyone thinks. Maybe there’s something fishy going on with the Tribune and they weren’t using their robots.txt or Webmaster Tools correctly, but from the article, they seemed to be monitoring the situation closely enough to know what was going on…

    Look, we all know that Google says one thing and does another regarding their policies. Just saying…

  5. Michael D says

    When I saw this I wondered how many other news articles regularly get indexed that are not new content. I don’t remember any specific items, but I’m fairly sure I’ve come across some older (more than 30 days old) posts when sifting through Google News. Time to start taking screenshots if it happens again.

  6. Glendale Attorney says

    The same problem occurs when a company has multiple local listings. Somehow they all end up mixed in with info that doesn’t necessarily match the local results. The Tribune should have safeguards in place so that these things won’t repeat themselves. It’s interesting, though, how Googlebot would crawl an old story and then have this mistake compounded by putting a current date on the story.

  7. Gustav says

    It is kind of stupid to ask Google to stop crawling when Google itself has given all of us enough tools to stop the crawling.

    Anyway, it was interesting that, as a stock market trader, I got a nice edge out of this glitch: I had shorted the stock a day before, and even though I didn’t ride it to the end, I made double what I expected thanks to this.


  8. Chicago Event Production says

    Maybe the publishers of the information should verify it before they publish it! When did news reporting turn into picking up stories off of the Internet anyway?

  9. Game Economy says

    My first reaction to this was, of course, a simple robots.txt file, but clearly something went wrong here. What’s even more frightening is that a major media company such as the Tribune wasn’t able to prevent this from happening in the first place.