Distinguishing Real Web Crawlers from Fakes: Googlebot Example

Nilani Algiriyage1, Gihan Dias2, Sanath Jayasena2

  • 1University of Kelaniya, Sri Lanka
  • 2University of Moratuwa, Sri Lanka

Details

09:00 - 09:15 | Thu 31 May | Seminar Room | T.1.3-1

Session: Big Data, Machine Learning, and Cloud Computing

Abstract

Web crawlers are programs or automated scripts that scan web pages methodically in order to build indexes. Search engines such as Google and Bing use crawlers to provide web users with relevant information. Today, many crawlers impersonate well-known web crawlers; for example, Google's Googlebot crawler has been observed to be impersonated to a high degree. This raises ethical and security concerns, as such fake crawlers can be used for malicious purposes. In this paper, we present an effective methodology for detecting fake Googlebot crawlers by analyzing web access logs. We propose using Markov chain models to learn profiles of real and fake Googlebots based on their patterns of web resource access sequences. We calculate log-odds ratios for a given set of crawler sessions, and our results show that the higher the log-odds score, the higher the probability that a given sequence comes from the real Googlebot. Experimental results show that, at a threshold log-odds score, we can distinguish the real Googlebot from fakes.
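The approach described in the abstract can be sketched in a few lines: fit one first-order Markov chain to sessions labelled real and another to sessions labelled fake, then score a new session by the log-odds of its transition sequence under the two models. The sketch below is illustrative only; the state labels (e.g. "robots", "html", "img") and the add-one smoothing are assumptions, not the paper's actual feature set or estimator.

```python
import math
from collections import defaultdict

def train_markov(sequences, states, alpha=1.0):
    # Count first-order transitions and apply add-alpha (Laplace) smoothing
    # so unseen transitions still get non-zero probability.
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    probs = {}
    for a in states:
        total = sum(counts[a].values()) + alpha * len(states)
        probs[a] = {b: (counts[a][b] + alpha) / total for b in states}
    return probs

def log_odds(seq, real_model, fake_model):
    # Log-odds of the session's transitions under the "real" versus "fake"
    # chain: positive scores favour the real-Googlebot profile.
    return sum(math.log(real_model[a][b]) - math.log(fake_model[a][b])
               for a, b in zip(seq, seq[1:]))

# Toy sessions over hypothetical resource-type states.
real_sessions = [["robots", "html", "img", "html", "img"],
                 ["robots", "html", "html", "img"]]
fake_sessions = [["html", "admin", "admin", "login"],
                 ["admin", "login", "admin", "admin"]]
states = {s for seq in real_sessions + fake_sessions for s in seq}
real_model = train_markov(real_sessions, states)
fake_model = train_markov(fake_sessions, states)

print(log_odds(["robots", "html", "img"], real_model, fake_model) > 0)   # True
print(log_odds(["admin", "login", "admin"], real_model, fake_model) < 0)  # True
```

A session that follows real-Googlebot-like transitions scores above zero, while one following the fake profile scores below it; picking a threshold on this score is then the classification rule the abstract refers to.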