using urllib on a more complex site

Chris Rebert clp2 at
Mon Feb 25 01:27:54 CET 2013

On Sunday, February 24, 2013, Adam W. wrote:

> I'm trying to write a simple script to scrape
> in order to send myself an email every day of the 99c movie of the day.
> However, using a simple command like (in Python 3.0):
> urllib.request.urlopen('
> )
> I don't get all the source I need; it's just the navigation buttons.
>  Now I assume they are using some CSS/JavaScript witchcraft to load all the
> useful data later, so my question is: how do I make urllib "wait" and grab
> that data as well?

urllib isn't a web browser. It just requests the single (in this case,
HTML) file from the given URL. It does not parse the HTML (indeed, it
doesn't care what kind of file you're dealing with); therefore, it
obviously does not retrieve the other resources linked within the document
(CSS, JS, images, etc.) nor does it run any JavaScript. So, there's nothing
to "wait" for; urllib is already doing everything it was designed to do.
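To make that concrete, here is a minimal sketch of all that urllib actually does (the commented-out URL is a placeholder, not the site in question):

```python
import urllib.request

def fetch_html(url):
    """Return the raw bytes of a single HTTP(S) response -- nothing more.

    urllib does not run JavaScript, and it does not fetch any CSS, JS,
    or image files referenced inside the returned document.
    """
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# For a JS-heavy page you get only the static skeleton the server sends;
# anything filled in later by scripts simply isn't in this byte string:
# html = fetch_html("https://example.com/deal-of-the-day")  # placeholder URL
```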

Your best bet is to open the page in a web browser yourself and use the
developer tools/inspectors to watch what XHR requests the page's scripts
are making, find the one(s) that have the data you care about, and then
make those requests instead via urllib (or the `requests` 3rd-party lib, or
whatever). If the URL(s) vary, reverse-engineering the scheme used to
generate them will also be required.
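Once you've found the right request in the developer tools, replicating it usually looks something like this. The endpoint URL, the User-Agent trick, and the JSON field names below are all made-up placeholders; substitute whatever you actually see in the Network tab:

```python
import json
import urllib.request

# Hypothetical endpoint -- replace with the real XHR URL from your browser.
API_URL = "https://example.com/api/movie-of-the-day"

def fetch_deal(url=API_URL):
    # Some endpoints refuse requests that lack a browser-like User-Agent.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def extract_title(payload):
    # Assumed JSON shape: {"movie": {"title": ..., "price": ...}} --
    # adjust to match the structure the real endpoint returns.
    return payload["movie"]["title"]
```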

Alternatively, you could use something like Selenium, which lets you drive
an actual full web browser (e.g. Firefox) from Python.
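With Selenium the browser runs the page's JavaScript for you, so the DOM you read back includes the dynamically loaded content. A rough sketch (requires the third-party `selenium` package plus a driver such as geckodriver for Firefox):

```python
def fetch_rendered_html(url):
    """Load `url` in a real Firefox window and return the page source
    after the page's JavaScript has had a chance to run.
    """
    from selenium import webdriver  # imported lazily: third-party dependency
    driver = webdriver.Firefox()
    try:
        driver.get(url)
        # driver.page_source reflects the live DOM, not the raw bytes
        # the server originally sent.
        return driver.page_source
    finally:
        driver.quit()
```

This is much heavier than replicating the XHR request directly, but it needs no reverse-engineering of the site's API.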



More information about the Python-list mailing list