Recent Posts

November 2009

(Nerd) Mechanize & Javascript

This is from the mechanize site; I wish I had read it before I started.

Since Javascript is completely visible to the client, it cannot be used to prevent a scraper from following links. But it can make life difficult, and until someone writes a Javascript interpreter for Perl or a Mechanize clone to control Firefox, there will be no general solution. But if you want to scrape specific pages, then a solution is always possible.

One typical use of Javascript is to perform argument checking before posting to the server. The URL you want is probably just buried in the Javascript function. Do a regular expression match on $mech->content() to find the link that you want and $mech->get it directly (this assumes that you know what you are looking for in advance).
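The advice above is for Perl's WWW::Mechanize; a rough Python analogue of the regex-match-then-get step might look like this (the page content and the target pattern are invented for illustration):

```python
import re

# Pretend this came from $mech->content() / response.read() -- a page where
# the real target URL is buried inside a Javascript click handler.
content = '''<a href="#" onclick="go('/report.cgi?id=42&fmt=csv')">Report</a>'''

# Match the URL we already know we are looking for.
match = re.search(r"go\('([^']+)'\)", content)
if match:
    url = match.group(1)
    # With WWW::Mechanize you would now $mech->get($url) directly.
    print(url)
```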

In more difficult cases, the Javascript is used for URL mangling to satisfy the needs of some middleware. In this case you need to figure out what the Javascript is doing (why are these URLs always really long?). There is probably some function with one or more arguments which calculates the new URL. Step one: using your favorite browser, get the before and after URLs and save them to files. Edit each file, converting the argument separators (‘?’, ‘&’ or ‘;’) into newlines. Now it is easy to use diff or comm to find out what the Javascript did to the URL. Step two: find the function call which created the URL; you will need to parse and interpret its argument list. The Javascript Debugger extension for Firefox may help with the analysis. At this point, it is fairly trivial to write your own function which emulates the Javascript for the pages you want to process.
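The save-to-files-and-diff step can be sketched in a few lines of Python (the URLs and parameter names here are invented); splitting on the argument separators and comparing the pieces shows exactly what the Javascript added:

```python
import re

before = "http://example.com/page.cgi?user=bob&view=summary"
after = "http://example.com/page.cgi?user=bob&view=summary&token=a1b2c3&ts=1257894000"

def args(url):
    # One argument per line, like editing the saved file by hand.
    return re.split(r"[?&;]", url)

# The same job diff or comm would do on the two edited files.
added = [a for a in args(after) if a not in args(before)]
print(added)  # the parameters the Javascript function computed
```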

Here’s another approach that answers the question, “It works in Firefox, but why not Mech?” Everything the web server knows about the client is present in the HTTP request. If two requests are identical, the results should be identical. So the real question is “What is different between the mech request and the Firefox request?”

The Firefox extension “Tamper Data” is an effective tool for examining the headers of the requests to the server. Compare that with what LWP is sending. Once the two are identical, the action of the server should be the same as well.
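Once you have both sets of headers (from Tamper Data on the Firefox side, and from a dump of whatever your script is sending on the other), the comparison itself is trivial. The header values below are invented examples:

```python
firefox = {
    "User-Agent": "Mozilla/5.0 (X11; Linux) Firefox/3.5",
    "Accept": "text/html,application/xhtml+xml",
    "Cookie": "sessionid=abc123",
    "Referer": "http://example.com/login",
}
script = {
    "User-Agent": "WWW-Mechanize/1.60",
    "Accept": "text/html,application/xhtml+xml",
}

# Headers the browser sends that the script omits or sends differently.
for name in sorted(firefox):
    if script.get(name) != firefox[name]:
        print(name, "->", firefox[name])
```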

I say “should”, because this is an oversimplification – some values are naturally unique, e.g. a SessionID, but if a SessionID is present, that is probably sufficient, even though the value will be different between the LWP request and the Firefox request. The server could use the session to store information which is troublesome, but that’s not the first place to look (and highly unlikely to be relevant when you are requesting the login page of your site).

Generally the problem is a missing or incorrect POSTDATA argument, Cookie, User-Agent, Accept header, etc. If you are using mech, then redirects and cookies should not be a problem, but they are listed here for completeness. If you are missing headers, $mech->add_header can be used to add the headers that you need.
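The Perl call is $mech->add_header; the equivalent fix in Python's standard library looks like the sketch below (in Python 2.6 the module was urllib2, the modern spelling is urllib.request; the header values are examples, not the real ones any particular site needs):

```python
import urllib.request

# Build the request with the headers the comparison showed were missing.
req = urllib.request.Request("http://example.com/login")
req.add_header("User-Agent", "Mozilla/5.0 (X11; Linux) Firefox/3.5")
req.add_header("Referer", "http://example.com/")

# urllib normalizes header capitalization to "Xxxx-xxxx" internally.
print(req.get_header("User-agent"))
```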


(Nerd) Python2.6, Screen scraping, and Javascript cookies

Recently I tried *scraping some data from a website and was running into problems. I don’t have a fix at the moment, but I have made the first big breakthrough.

My first attempt at scraping the data with Python was met with immediate denial. I was able to get similar results (though not exact) by disabling cookies in my browser (Firefox 3.5) and accessing the desired site. The fact that the results were not identical confused me some. But I figured it was a subtle difference in the way Firefox handled the request versus how I was handling the request programmatically with Python, mechanize, urllib2, and cookielib.

Still, after several hours I was unable to make the desired request to the server. So I started doing some digging. It turns out that these libraries are unable to automatically handle cookies set by Javascript. So, to test this, I disabled Javascript in my browser, made the request, and got the exact same results. YES!!!

As a quick test I was able to extract the cookie’s value using the Live HTTP Headers extension in Firefox. I then took this value and manually assigned it to the header of my Python request. I then got the desired results in my Python program. I’ll post an example of my solution when I get it up and running.
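The manual-cookie workaround looks roughly like this (the cookie name and value are placeholders for whatever you copy out of the browser; under Python 2.6 the module was urllib2, the sketch uses the modern urllib.request spelling):

```python
import urllib.request

# Value copied by hand from the browser, since cookielib never sees a
# cookie that Javascript sets on the client side.
js_cookie = "jschallenge=deadbeef"

req = urllib.request.Request("http://example.com/data")
req.add_header("Cookie", js_cookie)

# urllib2.urlopen(req) / urllib.request.urlopen(req) would now send the
# cookie exactly as the browser does.
print(req.get_header("Cookie"))
```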


To recreate what is happening in your program, I would do the following in your browser:

  1. Disable Javascript
  2. Disable Cookies
  3. Inspect the request headers with a Firefox plugin

*Programmatically extracting data from a website