Webmaster recipes

From ShawnReevesWiki
Revision as of 19:13, 2 April 2011 by Shawn (talk | contribs)

Multiple whois lookups

I recently launched a new project at EnergyTeachers.org, the Green Dollhouse Challenge, and I wanted to see who responded to a group email I posted about it. I downloaded the logs and retrieved the list of IP addresses that accessed the page with the following command. It searches the server's log for the word dollhouse, cuts out just the first field of each matching line (the IP address), sorts the result (which uniq requires), and then lists each unique address with a count of how many visits came from it:

grep dollhouse access_log.2011-03-29 | cut -d " " -f1 | sort | uniq -c
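For instance, feeding a synthetic two-hit log (a made-up address and page, just for illustration) straight into the same pipeline:

```shell
# Hypothetical demo: two hits from the same address collapse to one counted line
printf '%s\n' \
  '1.2.3.4 - - [29/Mar/2011:10:00:00 -0400] "GET /dollhouse.php HTTP/1.1" 200 1' \
  '1.2.3.4 - - [29/Mar/2011:11:00:00 -0400] "GET /dollhouse.php HTTP/1.1" 200 1' |
grep dollhouse | cut -d " " -f1 | sort | uniq -c
# prints something like:   2 1.2.3.4  (the count's padding varies by system)
```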

I was copying each resulting IP and pasting it after typing whois in another shell, looking for clues to whether the visitor was a search spider or a real person. I learned (from http://www.tek-tips.com/viewthread.cfm?qid=1566237&page=7 ) that I could instead use awk to prepend "whois " to each address from the command above (dropping the count) and pass the result to a shell within this shell, which runs each line as a command:

grep dollhouse access_log.2011-03-29 | cut -d " " -f1 |
sort | uniq | awk '{print "whois " $1}' | sh

awk takes each line, prepends "whois ", and then sends it to the shell "sh" to process.
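To see exactly what awk hands to sh, run just the awk stage on a couple of made-up addresses:

```shell
# Each input line becomes a complete "whois" command line for sh to run
printf '1.2.3.4\n5.6.7.8\n' | awk '{print "whois " $1}'
# prints:
# whois 1.2.3.4
# whois 5.6.7.8
```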

Search queries

Open a terminal and go to a directory full of Apache log files. Enter the following command (all on one line):

egrep -r -h -o ".*q=[^&]*" ./* |
awk '{print $1"\t"substr($11,match($11,"q=")+2)}' |
php -R 'echo substr(urldecode($argn),stripos($argn,"&"))."\n";' > ../SearchQueriesIP.txt

Egrep goes through all the files in the folder (-r and ./*) and finds strings running through q= up to the next ampersand, which is usually how a search engine's referrer reports the query someone entered before clicking a result to reach our site. The -o flag outputs only the matching part of each line, and -h omits the filename from each match.

Next, awk picks out the IP address of the visitor ($1), a tab (\t), and then the part of the query string ($11) that follows q=. PHP then takes each line ($argn) and decodes the text, changing plus signs to spaces and so on. It also removes any unexplained extra bits following ampersands; this will become unnecessary once I figure out how some ampersands are slipping through.

Finally, the results are saved to a file using the redirect symbol (>), in the next directory up (../) so egrep doesn't search its own output.

Issues with this analysis

q= might be in the request
If the request string includes the string q=, then this would return that request instead of the referrer's query. A solution may be to use awk instead of grep, only checking the 11th field.
Analysis of requests
This doesn't output or process the request field. Easy enough to fix: we could just add field $7 (the request path) to the print command in awk, or some significant substring of it.
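The awk-based fix suggested above might look like this sketch, which matches q= only when it appears in the referrer field ($11), so a q= in the request path can't produce a false hit (field numbers assume Apache's combined log format):

```shell
# Match q= in the referrer ($11) only, then print the IP and everything after q=
awk '$11 ~ /[?&]q=/ {print $1"\t"substr($11,match($11,"q=")+2)}' access_log*
```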

It's a little sad to see so many people type energyteachers.org into Google instead of directly into the browser's address bar. I guess Google has no problem being seen as the gateway to the internet, even at the cost of wasted bandwidth.

Better performance with awk

Here's a more awkward process, but it only has one pipe.

awk '{if ($11 !~ /q=/) next;
split($11,queries,"=");
for (var in queries) if (match(queries[var],/q$/)) searched=queries[var+1];
amp=index(searched,"&"); if (amp) searched=substr(searched,1,amp-1); print $1"\t"$7"\t"searched}' access_log* |
php -R 'echo urldecode($argn)."\n";'

The first line skips input lines that don't have "q=".

The second line splits the referrer field ($11) at equal signs, essentially separating the parameters, into an array "queries". The third line looks for the item in queries that ends with q and sets our target to the next item in the array, since it must follow "q=".

The fourth line prints the IP address of the requester, the page requested, and the search query, trimmed at any ampersand. The fifth line takes each resulting line ($argn) and decodes the text, changing plus signs to spaces and so on.
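Here's the same split/scan logic run on one made-up log line, so you can watch it isolate the query value:

```shell
# Split the referrer at "=", find the piece ending in q, take the next piece,
# and trim anything from the first ampersand onward
echo '1.2.3.4 - - [29/Mar/2011:10:00:00 -0400] "GET /p.php HTTP/1.1" 200 1 "http://g.com/search?q=solar+oven&hl=en" "UA"' |
awk '{split($11,queries,"=");
for (var in queries) if (match(queries[var],/q$/)) searched=queries[var+1];
i=index(searched,"&"); if (i) searched=substr(searched,1,i-1);
print searched}'
# prints: solar+oven
```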

Popularity of a page over time

awk '$1 !~ /MY_WORK_IP/ && $7 ~ /dollhouse/ {
print substr($4,5,3)" "substr($4,2,2)}' access* |
uniq -c

This script skips all requests from my own IP (so I don't count my own hits), keeps requests for a certain page or subset of pages with a certain text in their name, and prints the month and day of each hit; uniq -c then counts how many hits occurred on each date.
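A synthetic check with two hits on the same day (made-up address and page): because log files are already in time order, uniq -c can count each day's run of lines without a separate sort.

```shell
# Both hits fall on Mar 29, so the pipeline reports a count of 2 for that day
printf '%s\n' \
  '5.6.7.8 - - [29/Mar/2011:10:00:00 -0400] "GET /dollhouse.php HTTP/1.1" 200 1' \
  '5.6.7.8 - - [29/Mar/2011:11:00:00 -0400] "GET /dollhouse.php HTTP/1.1" 200 1' |
awk '$1 !~ /MY_WORK_IP/ && $7 ~ /dollhouse/ {print substr($4,5,3)" "substr($4,2,2)}' |
uniq -c
# prints something like:   2 Mar 29
```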