Using LogParser With Awstats To Filter AVG Spam

Following on from my post LogParser to the rescue, I’ve now worked out how to integrate logparser into the Awstats update process with very minimal effort.

Note: Awstats is a cross-platform web analysis tool, but unfortunately LogParser isn't, so this approach is Windows only.

To make life easier, I dropped the logparser files (the exe and the dll, although I'm not sure you need the dll) directly into the cgi-bin directory where Awstats lives on the server. I understand doing this may have security implications, so do it at your own risk.

Open up the config file for your Awstats report (awstats.<config>.conf) and find the LogFile directive

LogFile="E:/logs/W3SVC2074709632/ex%YY-1%MM-1%DD-1.log"

It’ll be something like the above, assuming you use daily logs on IIS. We need to change it to

LogFile="logparser -i:iisw3c -o:w3c -rtp:-1 -stats:off file:rem-avg-spam.sql?logfile=E:/logs/W3SVC2074709632/ex%YY-1%MM-1%DD-1.log |"

This tells Awstats to execute logparser, setting the necessary options and passing in the path to the log as before; Awstats then grabs the output from the pipe and processes it.
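If you want to sanity-check the command before relying on it, you can run much the same thing by hand from the cgi-bin directory, substituting a real date for the Awstats date tags (the log name below is just an example) and paging the output

logparser -i:iisw3c -o:w3c -rtp:-1 -stats:off "file:rem-avg-spam.sql?logfile=E:/logs/W3SVC2074709632/ex080601.log" | more

You should see W3C-format log lines with the AVG requests missing; if you get an error instead, fix that before blaming Awstats.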

That’s it!

The contents of my rem-avg-spam.sql file are just

select *
from %logfile%
where not (cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)'
or cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)'
and cs(Cookie) is null
and cs(Referer) is null)
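If the mix of AND and OR in the WHERE clause looks ambiguous, logparser appears to follow the usual SQL rule that AND binds tighter than OR, so the query above should behave like the explicitly bracketed version below: the malformed 1813 user agent is dropped outright, while the legitimate-looking SV1 one is only dropped when both the cookie and the referrer are empty

select *
from %logfile%
where not (cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)'
or (cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)'
and cs(Cookie) is null
and cs(Referer) is null))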

I'm now using this for some fairly large logs (100 MB+) and it works fine.

I hope this helps.


LogParser To The Rescue

Warning: This didn't seem to work with very large IIS logfiles; I tried it with a 750 MB file, which didn't error, but the output appeared unreadable and gave a disk full error. My mistake, it does work: it's TextPad that can't handle a file that size.

Microsoft LogParser may be the answer to our AVG logfile spam woes. I've been fiddling with it and have come up with a quite simple way of pre-processing the logs with logparser to remove the offending spam. You can put the query inline within the logparser command line, but it's easier to stick it in a file once it gets a bit longer. So, I have this in my file

select * into c:\logs\ex%log%out.log
from c:\logs\ex%log%.log
where not (cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)'
or cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)'
and cs(Cookie) is null
and cs(Referer) is null)
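The inline form I mentioned would look roughly like this (date hard-coded rather than parameterised), which is exactly why a separate file is easier

LogParser -i:iisw3c -o:w3c "select * into c:\logs\ex080601out.log from c:\logs\ex080601.log where not (cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)' or cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)' and cs(Cookie) is null and cs(Referer) is null)"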

You run the file-based version with

C:\Program Files\Log Parser 2.2>LogParser -i:iisw3c -o:w3c file:c:\logs\avgspam.sql?log=080601

Which yields something like

Statistics:
-----------
Elements processed: 209607
Elements output: 151434
Execution time: 8.47 seconds

This shows that logparser has removed around 58,000 rows (209,607 processed, 151,434 output) from our log and created a new log that we can feed to our stats program.
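If you want to double-check what's been stripped, a count of the requests matching the filter's complement should match the difference between elements processed and elements output; something along these lines (same example log as above)

LogParser -i:iisw3c "select count(*) from c:\logs\ex080601.log where cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)' or cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)' and cs(Cookie) is null and cs(Referer) is null"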

I've tested this with a couple of logs from different clients now and it seems to work.

Let me know if you have improvements or have come up with a different workaround for this.

AVG had better not start using more User-Agent strings, though, as this could get very messy.
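If they do, the query itself is at least easy to extend, as each new string is just another clause in the WHERE; the extra user agent below is a made-up placeholder, not a real AVG string

select * into c:\logs\ex%log%out.log
from c:\logs\ex%log%.log
where not (cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)'
or cs(User-Agent)='<another-avg-user-agent>'
or cs(User-Agent)='Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)'
and cs(Cookie) is null
and cs(Referer) is null)

The messy part is knowing which strings AVG is actually sending, and whether they overlap with real browsers.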

If you’re new to logparser (as I was), this guide has some good (IIS centric) examples – Analysing IIS logs with LogParser

Update: I’ve now worked out how to use logparser with Awstats to filter out AVG spam. If you’re using Awstats this is a very quick fix.


AVG Destroys Web Analytics

I had a call from a client yesterday who was concerned that their web stats had shown a sharp increase over the weekend. An increase shouldn't normally cause concern, but the client is (quite rightly) very skeptical about such increases. I said I'd investigate the source of the extra traffic and hopefully put his mind at rest.

A quick look at the Awstats reports is normally enough to highlight an issue if one exists; it's usually a new spider or some kind of strange spidering activity. In this case it wasn't: I couldn't see anything that looked out of the ordinary. This was an increase in visits with a reasonably sensible increase in page views and so on. What next, I thought. "Is the traffic from Google?" The site in question normally receives about 75% of its traffic from Google, but a quick tot up of the figures showed it looking more in the region of 30% for this month. OK, so it's an increase in direct traffic, a massive increase in fact. Time to delve into the log files by hand!

I copied the previous day's log file across to my laptop and dropped it into TextPad (I'm always amazed how well it copes with large text files, nice one TextPad!). I started to look through the file and it seemed reasonably normal; then I spotted a block of requests, only half a dozen or so, for the same file one after another. The thing that made this particularly odd was that the file being requested was a tracking page used within the site to record data back to the SQL server. I continued to sift through the file and noticed the same block several more times, each time from a different IP address, completely different, not even the same range. Could this be a DDoS? Possibly, I thought, although we've never seen one before. I tried to find some commonality between the blocks and noticed they all had no referrer information and all seemed to use the same (slightly strange looking) user agent (UA). The user agent in question was

Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;1813)

I googled this and spotted a Webmaster World entry entitled AVG Toolbar Glitch May Be Causing Visitor Loss, which sounded interesting. To be honest, it was the only link that wasn't to somebody's stats page! At least I'm not alone on this one, I thought.

The forum discussion on Webmaster World described exactly what I was seeing, and plenty of other webmasters were seeing it too. Unfortunately, this isn't down to a rogue spider, a hack attempt or a DDoS; no, it's the latest version of AVG anti-virus.

Grisoft (the people behind AVG) purchased LinkScanner back in December 2007, one of its features being

LinkScanner automatically analyzes results returned by Google and other search engines and places a check mark next to sites believed to be safe.

In fact, LinkScanner analyses results from search engines (not just Google) and is browser independent. This may sound like a good idea from a security point of view, however, from a webmaster/website owner point of view, this is not good at all.

If your site ranks well in the search engines, as everyone strives to do, your website is, or is going to be, hugely affected by this. Essentially this means that every time your site appears in a user's results, regardless of whether they click on it, your website logfiles, and therefore your statistics, will show that person as a real visitor coming to your site. Now, because the IP address is the user's own IP address, we can't filter on that. At first look it would appear we can filter on this user agent; unfortunately, I spotted another one

Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)

This one, however, is even worse. This time it's a legitimate user agent, which means you can't filter it out or rewrite it to another page on your site without the risk of blocking or harming real visitors. The first user agent is different: due to the lack of a space (or plus) between the last semi-colon and the 1813, it doesn't follow the standard pattern used by Microsoft.

So we get to the crux of the problem: AVG has destroyed web analytics for people who use a logfile analysis tool. Not only have they done this, they are also wasting bandwidth and disk space on our servers!

Can we filter it out of our logs? Perhaps. They do seem to follow a pattern.

  • A request for the result in the SERP (often missing the trailing slash)
  • One or more requests for associated JavaScript files
  • A subsequent request for the root of the site
  • One or more further requests for associated JavaScript files

That's the pattern. It also serves as a prefetching routine, which may speed up your eventual click on a result, if you click at all.

I'm no Perl expert (.net is my bag), but I'm pretty sure a Perl guru could knock up a quick log-processing script that parses your logs (IIS and Apache versions would differ, I guess) and removes this spam. It is spam, at the end of the day; we didn't ask for it, and it's wasting our resources to deal with it.

Any takers?

I've now disabled the LinkScanner component on my machine at home and am encouraging friends to do the same. To be honest, I'm considering ditching it completely and using something else. I used to recommend AVG to everyone; I can't do that anymore.

UPDATE: I have a possible LogParser solution, let me know if it helps.

Note: If you're not seeing the block of requests for a single file in your logs but think you're seeing this problem, I'll explain why we were/are seeing that. Essentially, we include a link to an ASP page as the source of a JavaScript include; it sounds a bit dodgy, but it does the job. I think LinkScanner is expecting a header or similar from this request, which it doesn't receive, as the page isn't really returning the file LinkScanner thinks it is. I suspect it therefore requests the page again and again until it gives up. I intend to get rid of this tracker ASAP and implement it in a more elegant way!
