Why do I get nothing?

Robert Helmer robert at roberthelmer.com
Sun Jan 15 18:03:18 EST 2012


On Sat, Jan 14, 2012 at 7:54 PM, contro opinion <contropinion at gmail.com> wrote:
> here is my code :
> import urllib
> import lxml.html
> down='http://download.v.163.com/dl/open/00DL0QDR0QDS0QHH.html'
> file=urllib.urlopen(down).read()
> root=lxml.html.document_fromstring(file)
> tnodes = root.xpath("//a/@href[contains(string(),'mp4')]")
> for i,add in enumerate(tnodes):
>     print  i,add
>
> why do I get nothing?


The problem is the document: the links you are trying to match are
inside <script> tags. Here's a simplified version:

"""
<script>
  obj="<a href='blah.mp4'>";
</script>
"""

So the anchor elements are not part of the DOM as far as lxml is
concerned; lxml does not know how to parse JavaScript (and even if it
did, it would have to execute the JS, and the JS would have to modify
the DOM, before you could get at the links via XPath).
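You can see this with the simplified document above (a hypothetical
snippet, just to illustrate): the <a> lives inside a string in the
<script>, so the anchor XPath matches nothing while a script XPath does:

```python
import lxml.html

# Simplified stand-in for the real page: the anchor exists only as
# text inside a <script> element, not as a parsed DOM node.
doc = """
<html><body>
<script>
  obj="<a href='blah.mp4'>";
</script>
</body></html>
"""

root = lxml.html.document_fromstring(doc)

# No <a> elements in the tree, so this is an empty list.
print(root.xpath("//a/@href"))

# The <script> text, however, is reachable.
print(root.xpath("//script[contains(.,'mp4')]/text()"))
```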

You could have lxml return just the script nodes that contain the text
you care about:
tnodes = root.xpath("//script[contains(.,'mp4')]")

Then you will need a different tool for the rest. A regex is not
perfect, but it should be good enough here. It's probably not worth
the effort to use a real JavaScript parser if you're just trying to
scrape the mp4 links out of this page, but that's an option.
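A rough sketch of the regex approach, using a made-up document (the
real page's markup will differ, so the pattern may need adjusting):

```python
import re
import lxml.html

# Hypothetical example document standing in for the real page.
doc = """
<html><body>
<script>
  obj="<a href='http://example.com/video1.mp4'>";
  obj2="<a href='http://example.com/video2.mp4'>";
</script>
</body></html>
"""

root = lxml.html.document_fromstring(doc)

links = []
for script in root.xpath("//script[contains(.,'mp4')]"):
    # A simple pattern is good enough here: any quoted string
    # that ends in .mp4 inside the script's text.
    links.extend(re.findall(r"['\"]([^'\"]+\.mp4)['\"]",
                            script.text_content()))

for i, url in enumerate(links):
    print(i, url)
```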
