Webmaster recipes
Preventing hot-linking of images
Hot-linking is the use of links to content at one site, usually images, by pages on another site. At first, page designers may have linked to images on other sites because they couldn't create useful images themselves. These days, robots generate pages of random content, pulling both text and images from around the world, in the hope of fooling search engines into thinking the site contains useful information on specific topics.
There are hundreds of pages explaining how to turn away hot-linkers with Apache's Rewrite module, but Apache suggests using its simpler built-in directives when possible, and this is one of those cases. So I found a different way, thanks to the DisableImageHotLinking page on the apache.org wiki (http://wiki.apache.org/httpd/DisableImageHotLinking).
SetEnvIfNoCase Referer "^https?://([^/]*)?shawnreeves\.net/" local_ref=1
SetEnvIf Referer ^$ local_ref=1
<FilesMatch "\.(jpe?g|gif|png)$">
Order Allow,Deny
Allow from env=local_ref
</FilesMatch>
- The first line uses a regular expression to match referrers that begin with http (with an optional s), followed by ://, then any number of non-slash characters, then the allowed domain (note the backslash escaping the period, which is otherwise a wildcard). If the referrer matches, the environment variable local_ref is set to 1.
- The second line also sets local_ref to 1 when there is no referrer at all, such as when someone browses to an image from a bookmark, or uses curl or an assistive tool.
- The third through sixth lines apply only to requests for files with image-type extensions.
- The fifth line allows requests from anyone with the proper referrer; everything else is denied by the Order directive on the fourth line.
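Note that this is the Apache 2.2 access-control syntax. On Apache 2.4, Order/Allow/Deny were replaced by mod_authz_core's Require directives, so a rough, untested sketch of the same environment-variable trick would look like this:
SetEnvIfNoCase Referer "^https?://([^/]*)?shawnreeves\.net/" local_ref=1
SetEnvIf Referer ^$ local_ref=1
<FilesMatch "\.(jpe?g|gif|png)$">
Require env local_ref
</FilesMatch>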
Multiple whois lookups
I recently launched a new project at EnergyTeachers.org, the Green Dollhouse Challenge (http://energyteachers.org/greendollhousechallenge.php), and I wanted to see who responded to a group email I posted about it. I downloaded the logs and retrieved the list of IP addresses that accessed the page with the command below. It searches for the word dollhouse in the server's log, takes just the first field (the IP address), sorts the result (which uniq requires), and then lists only the unique addresses with a count of how many visits came from each:
grep dollhouse access_log.2011-03-29 | cut -d " " -f1 | sort | uniq -c
At first I was copying each resulting IP address and pasting it after typing whois in another shell, looking for clues as to whether the visitor was a search spider or a real person. I learned (from http://www.tek-tips.com/viewthread.cfm?qid=1566237&page=7 ) that I could instead use an inline text editor to prepend "whois " to each address from the above command (dropping the count), then pass the result to a sub-shell so each line runs as a command:
grep dollhouse access_log.2011-03-29 | cut -d " " -f1 | sort | uniq | awk '{print "whois " $1}' | sh
awk takes each line, prepends "whois ", and then sends it to the shell "sh" to process.
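xargs offers another way to run whois once per address; a minimal sketch, assuming the same log file as above:
grep dollhouse access_log.2011-03-29 | cut -d " " -f1 | sort -u | xargs -n1 whois
Here sort -u replaces sort | uniq, and xargs -n1 calls whois with one IP address at a time.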
Search queries
Open a terminal and go to a directory full of Apache log files. Enter the following command (all on one line):
egrep -r -h -o ".*q=[^&]*" ./* |awk '{print $1"\t"substr($11,match($11,"q=")+2)}' |php -R 'echo substr(urldecode($argn),stripos($argn,"&"))."\n";' > ../SearchQueriesIP.txt
Egrep will go through all the files in the folder (-r and ./*); find strings that have q= up to the next ampersand, which is usually how a search engine reports the query string that someone entered before they clicked on a result to get to our site; only output the matching part (-o); and skip listing the filename from the match (-h).
Next, awk picks the IP address of the visitor ($1), a tab (\t), and then the query string ($11), leaving out the first two characters (q=). PHP then takes each line ($argn) and decodes the text, changing plus signs to spaces and so on. It also removes any unexplained extra bits following ampersands; this will become unnecessary when I figure out how some ampersands are slipping through.
Finally, the results are saved to a file using the redirect symbol (>), in the next directory up (../) so egrep doesn't search its own output.
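Once that file exists, a quick tally of the most common queries is one more pipeline away; a sketch assuming the tab-separated file written above:
cut -f2 ../SearchQueriesIP.txt | sort | uniq -c | sort -rn | head -20
cut -f2 keeps only the query column (tab is cut's default delimiter), and sort | uniq -c counts how many times each distinct query appears, listed most frequent first.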
Issues with this analysis
- q= might be in the request: If the request string itself includes q=, this command would return that request instead of the referrer's query. A solution may be to use awk instead of grep, checking only the 11th field; see the sketch after this list.
- Analysis of requests: This doesn't output or process the request field. That's easy enough to fix; we could just add field $7 to the print command in awk, or some significant substring of $9.
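As a sketch of that first fix, assuming the same combined-log layout used above ($7 is the request, $11 the referrer), awk can test only the referrer field, so a q= in the requested path no longer produces a false match:
awk '$11 ~ /[?&]q=/ {print $1 "\t" $7 "\t" $11}' access_log*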
It's a little sad to see so many people type energyteachers.org into Google instead of directly into the browser's address bar. I guess Google has no problem being seen as the gateway to the internet, even with the wasted bandwidth.
Better performing with awk
Here's a more awkward process, but it only has one pipe.
awk '{if ($11 !~ /q=/) next;
split($11,queries,"=");
for (var in queries) if (match(queries[var],/q$/)) searched=queries[var+1];
print $1"\t"$7"\t"substr(searched,1,match(searched,"&")-1)}' access_log* |
php -R 'echo urldecode($argn)."\n";'
The first line skips input lines whose referrer field ($11) doesn't contain "q=".
The second line splits the referrer field at equal signs into an array "queries", essentially separating it into fields. The third line looks for the item in queries that ends with q and sets our target to the next item in the array, since that is what follows "q=".
The fourth line prints the IP address of the requester, the page requested, and the search query, trimmed at the first ampersand. The fifth line takes each resulting line ($argn) and decodes the text, changing plus signs to spaces and so on.
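To see what the script does with a single request, you can feed it one made-up log line (the address, page, and query here are purely illustrative):
echo '1.2.3.4 - - [29/Mar/2011:06:25:14 -0400] "GET /page.php HTTP/1.1" 200 512 "http://www.google.com/search?q=solar+oven&hl=en" "Mozilla"' |
awk '{if ($11 !~ /q=/) next;
split($11,queries,"=");
for (var in queries) if (match(queries[var],/q$/)) searched=queries[var+1];
print $1"\t"$7"\t"substr(searched,1,match(searched,"&")-1)}' |
php -R 'echo urldecode($argn)."\n";'
This should print something like 1.2.3.4, /page.php, and solar oven, separated by tabs.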
Popularity of a page over time
awk '$1 !~"MY_WORK_IP" && $7 ~ /PAGE_NAME/ \
{print substr($4,5,3)" "substr($4,2,2)}' access* |uniq -c
This script skips all requests from my own IP address (so I don't count my own hits), keeps only requests for a certain PAGE_NAME (or a subset of pages with certain text in their names), and prints the month and day of each hit; uniq -c then counts how many hits there were on each date. Note that the backslash at the end of the first line is not part of the awk script but just a way to split this shell command across two lines, i.e., if you put it all on one line, remove the backslash.
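To spot the busiest days rather than read the counts in date order, one could tack a numeric sort onto the same pipeline:
awk '$1 !~"MY_WORK_IP" && $7 ~ /PAGE_NAME/ \
{print substr($4,5,3)" "substr($4,2,2)}' access* |uniq -c |sort -rn |head
sort -rn orders the lines by the leading count, largest first, and head keeps the top ten.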