using urllib on a more complex site
clp2 at rebertia.com
Mon Feb 25 01:27:54 CET 2013
On Sunday, February 24, 2013, Adam W. wrote:
> I'm trying to write a simple script to scrape
> in order to send myself an email every day of the 99c movie of the day.
> However, using a simple command like (in Python 3.0):
> I don't get all the source I need, it's just the navigation buttons.
> useful data later, so my question is how do I make urllib "wait" and grab
> that data as well?
urllib isn't a web browser. It just requests the single (in this case,
HTML) file from the given URL. It does not parse the HTML (indeed, it
doesn't care what kind of file you're dealing with), so it naturally
does not fetch the other resources linked within the document either;
there is nothing for it to "wait" for, because urllib is already doing
everything it was designed to do. The missing content is almost
certainly being loaded by the page's JavaScript after the initial HTML
arrives, which urllib never executes.
Your best bet is to open the page in a web browser yourself and use the
developer tools/inspectors to watch what XHR requests the page's scripts
are making, find the one(s) that have the data you care about, and then
make those requests instead via urllib (or the `requests` 3rd-party lib, or
whatever). If the URL(s) vary, reverse-engineering the scheme used to
generate them will also be required.
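Once you've found the XHR endpoint in the dev tools, the request is usually straightforward to replay. A minimal sketch, assuming a hypothetical JSON endpoint and invented field names (the real URL and response shape will be whatever your browser's network inspector shows):

```python
import json
import urllib.request

# Hypothetical endpoint discovered via the browser's dev tools;
# substitute whatever URL the page's script actually requests.
API_URL = "https://example.com/api/movie-of-the-day"

def fetch_movie_of_the_day(url=API_URL):
    # Some endpoints refuse requests that lack a browser-like User-Agent.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# The parsing step works the same on a canned payload, so it can be
# demonstrated without touching the network; the field names here are
# invented for illustration.
sample = b'{"title": "Example Movie", "price": "0.99"}'
data = json.loads(sample.decode("utf-8"))
print(data["title"], data["price"])
```

From there, pulling out the title and price and emailing them (e.g. via `smtplib`) is routine.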
Alternatively, you could use something like Selenium, which lets you drive
an actual full web browser (e.g. Firefox) from Python, so the page's
JavaScript runs just as it would for a human visitor.