nobody at nowhere.com
Sun Jan 31 21:57:44 CET 2010
On Sat, 30 Jan 2010 11:28:47 -0800, KB wrote:
> Thanks! I don't see a documentation page, but how would one use this?
> Would you download the HTML using urllib2/mechanize, then parse for the
> .js script and use spider-monkey to "execute" the script and the output is
> passed back to python?
Something like that.
The "homepage" link:
has some examples further down. The first one starts with:
>>> import spidermonkey
>>> rt = spidermonkey.Runtime()
>>> cx = rt.new_context()
>>> cx.execute("var x = 3; x *= 4; x;")
12
It goes on to describe the .add_global(name, object) method, which exposes
a Python object under a given name so that its attributes and methods can
be accessed from JavaScript.
For scraping web pages, I suspect that you'll at least need to create an
object and expose it as "document", so that "document.location" etc. works.
How much you'll need to implement will depend upon what the web page uses,
although you can probably use .__getattr__() to serve up dummy handlers
for calls which can safely be ignored.
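A minimal sketch of that dummy-object idea, in pure Python (the attribute
names and the DummyDocument class are illustrative assumptions, not a real
DOM implementation; the spidermonkey registration call is shown only in a
comment):

```python
class DummyDocument:
    """Minimal stand-in for the browser 'document' object.

    Attributes the page actually needs are set explicitly; anything
    else falls through to __getattr__, which returns a do-nothing
    callable so scripts calling unimplemented methods don't crash.
    """

    def __init__(self, location):
        self.location = location

    def __getattr__(self, name):
        # Serve up a dummy handler for any call we can ignore.
        def ignore(*args, **kwargs):
            return None
        return ignore


doc = DummyDocument("http://example.com/page.html")

# With python-spidermonkey you would then expose it to scripts:
#   cx.add_global("document", doc)
# after which JavaScript can read document.location and call methods
# like document.write() without raising errors.
```

Note that __getattr__ is only consulted for attributes not found the
normal way, so explicitly set attributes like .location still behave
as real values.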
Further documentation is probably a case of "read the spidermonkey
documentation, then read the python-spidermonkey source if it isn't clear
how the wrapper relates to spidermonkey itself".