I had a call from a client yesterday who was concerned that their web stats had shown a sharp increase over the weekend. An increase shouldn’t normally cause concern, but the client is (quite rightly) very skeptical about such jumps. I said I’d investigate the source of the extra traffic and hopefully put his mind at rest.
A quick look at the Awstats reports is normally enough to highlight an issue if one exists; it’s usually a new spider or some kind of strange spidering activity. In this case it wasn’t. I couldn’t see anything that looked out of the ordinary: this was an increase in visits with a reasonably sensible increase in page views and so on. What next, I thought. “Is the traffic from Google?” The site in question normally receives about 75% of its traffic from Google, but a quick tot up of the figures showed this was looking more like 30% for the month. OK, so it’s an increase in direct traffic, a massive increase in fact. Time to delve into the log files by hand!
I copied the previous day’s log file across to my laptop and dropped it into TextPad (I’m always amazed how well it copes with large text files, nice one, TextPad!). I started to look through the file and it seemed reasonably normal, then I spotted a block of requests, only half a dozen or so, for the same file one after another. What made this particularly odd was that the file being requested was a tracking page used within the site to record data back to the SQL server. I continued to sift through the file and noticed the same block several more times, each time from a different IP address; completely different, not even the same range. Could this be a DDoS? Possibly, although we’ve never seen one before. I looked for some commonality between the blocks and noticed they all had no referrer information and all seemed to use the same (slightly strange looking) user agent (UA). The user agent in question was:
Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)
I googled this and spotted a Webmaster World thread entitled “AVG Toolbar Glitch May Be Causing Visitor Loss”. Sounds interesting, I thought. To be honest, it was the only link that wasn’t to somebody’s stats page! At least I’m not alone on this one, I thought.
The forum discussion on Webmaster World described exactly what I was seeing, and plenty of other webmasters were seeing it too. Unfortunately, this isn’t down to a rogue spider, a hack attempt or a DDoS; no, it’s the latest version of AVG anti-virus.
Grisoft (the people behind AVG) purchased LinkScanner back in December 2007, one of its features being:
LinkScanner automatically analyzes results returned by Google and other search engines and places a check mark next to sites believed to be safe.
In fact, LinkScanner analyses results from search engines (not just Google) and is browser independent. This may sound like a good idea from a security point of view; however, from a webmaster or website owner’s point of view, it’s not good at all.
If your site ranks well in the search engines, as everyone strives to do, your website is, or is going to be, hugely affected by this. Essentially it means that every time your site appears in a user’s results, regardless of whether they click on it, your web server log files, and therefore your statistics, will show that person as a real visitor coming to your site. Now, because the IP address is the user’s own IP address, we can’t filter on that. At first glance it would appear we can filter on this user agent; unfortunately, I spotted another one:
Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)
This one, however, is even worse. This time it’s a legitimate user agent, which means you can’t filter it out or rewrite it to another page on your site without the risk of blocking or harming real visitors. The first user agent is different: thanks to the lack of a space (or plus) between the last semicolon and the 1813, it doesn’t follow the standard pattern used by Microsoft.
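For anyone who wants to pick the rogue agent out programmatically, that missing separator is easy enough to test for. Here’s a minimal sketch in Python (purely illustrative, and it assumes the two agents above are representative of what you’ll actually see in your logs):

import re

# Assumption: genuine MSIE agents end ";+token)" (a plus/space after the
# last semicolon, e.g. ";+SV1)"), while the LinkScanner variant ends ";1813)"
# with nothing in between.
malformed = re.compile(r";\d+\)$")

print(bool(malformed.search("Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)")))  # True
print(bool(malformed.search("Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)")))  # False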
So, we get to the crux of the problem: AVG has destroyed web analytics for people who use a log file analysis tool. Not only that, they’re also wasting our bandwidth and the disk space on our servers!
Can we filter it out of our logs? Perhaps. The requests do seem to follow a pattern:
- A request for the result in the SERP (often missing the trailing slash)
- One or more requests for associated JavaScript files
- A subsequent request for the root of the site
- One or more further requests for associated JavaScript files
That’s the pattern; it also serves as a prefetching routine, which may speed up your eventual click on a result. If you ever click, that is.
I’m no Perl expert (.NET is my bag), but I’m pretty sure a Perl guru could knock up a quick log processing script that parses your logs (IIS and Apache versions would differ, I guess) and removes this spam. It is spam at the end of the day: we didn’t ask for it, and it’s wasting our resources to deal with it.
Any takers?
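In the meantime, here’s a rough sketch of the idea in Python rather than Perl (it’s not something I run in production, just an illustration of the approach). It assumes the rogue traffic is identifiable by the malformed “;1813)” user agent above and simply copies a log file, dropping any matching lines; the file handling and the pattern itself would need adjusting for your own server and log format:

import re
import sys

# Assumption: LinkScanner hits carry the malformed MSIE user agent ending
# ";1813)" (no space or plus after the last semicolon). The pattern below
# covers both the "+"-encoded (IIS-style) and plain-space (Apache-style) forms.
LINKSCANNER_UA = re.compile(r"compatible;[ +]MSIE[^)]*;\d+\)")

def clean_log(in_path, out_path):
    # Copy the log line by line, skipping anything that looks like LinkScanner.
    kept = dropped = 0
    with open(in_path, errors="replace") as src, open(out_path, "w") as dst:
        for line in src:
            if LINKSCANNER_UA.search(line):
                dropped += 1
            else:
                dst.write(line)
                kept += 1
    print("kept %d lines, dropped %d suspected LinkScanner hits" % (kept, dropped))

if __name__ == "__main__":
    # Usage: python clean_log.py access.log access.clean.log
    clean_log(sys.argv[1], sys.argv[2])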
I’ve now disabled the LinkScanner component on my machine at home and am encouraging friends to do the same. To be honest, I’m considering ditching AVG completely and using something else. I used to recommend AVG to everyone; I can’t do that anymore.
UPDATE: I have a possible LogParser solution; let me know if it helps.
Note: If you’re not seeing the block of requests for a single file in your logs but think you’re seeing this problem, I’ll explain why we were/are seeing that. Essentially, we include a link to an ASP page as the source of a JavaScript include; it sounds a bit dodgy but it does the job. I think LinkScanner is expecting a header or similar from this request, which it doesn’t receive as the page isn’t really returning the file LinkScanner thinks it is. I suspect it therefore requests the page again and again until it gives up. I intend to get rid of this tracker ASAP and implement it in a more elegant way!