parsing complex web pages

John Hunter jdhunter at
Wed Jun 18 23:16:03 CEST 2003

I sometimes parse very complex web pages to get all the quantitative
information out of them.  I find that parsing the text is easier than
parsing the HTML, because the HTML carries so much extra syntax that it
requires complex regexes or parsers, whereas with the text version
simple string splits and finds often suffice.
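A minimal sketch of that split-and-find approach, using a made-up text
fragment in the kind of tabular layout lynx -dump produces (the table
and symbols below are invented for illustration):

```python
# Made-up text in the column layout a text dump tends to preserve.
text = """
   Symbol     Price     Change
   ABC        12.50     +0.25
   XYZ        99.10     -1.40
"""

quotes = {}
for line in text.splitlines():
    fields = line.split()
    # A data row has three whitespace-separated fields:
    # symbol, price, change; skip the header row.
    if len(fields) == 3 and fields[0] != 'Symbol':
        quotes[fields[0]] = float(fields[1])
# quotes is now {'ABC': 12.5, 'XYZ': 99.1}
```

No regex needed; whitespace splitting does all the work once the markup
is gone.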

The current strategy I use is to call lynx via popen to convert the
page to text.  If you do:

  lynx -dump

you'll see that the text comes back for the most part in a nicely
formatted, readily parsable layout; e.g., the layout of many of the
tables is preserved.
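A hedged sketch of that popen strategy (the function names are mine,
and it assumes lynx is installed and on the PATH):

```python
import os

def lynx_cmd(url):
    # -dump renders the page to stdout as plain text.
    return 'lynx -dump ' + url

def page2txt(url):
    # Shell out to lynx and capture the rendered-text output.
    return os.popen(lynx_cmd(url)).read()
```

The returned string can then be carved up with splits and finds as
above.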

If you do one of the standard html2txt conversions using HTMLParser
and a DumbWriter, the text layout is not as well preserved (albeit
still parsable):

    from urllib import urlopen
    import htmllib, formatter

    class Catcher:
        """A file-like object that collects everything written to it."""
        def __init__(self):
            self.lines = []
        def write(self, line):
            self.lines.append(line)
        def __getitem__(self, index):
            return self.lines[index]
        def read(self):
            return ' '.join(self.lines)

    def html2txt( fh ):
        oh = Catcher()
        # DumbWriter emits plain text to the file-like Catcher.
        p = htmllib.HTMLParser(
            formatter.AbstractFormatter(formatter.DumbWriter(oh)))
        p.feed(fh.read())
        p.close()
        return oh.read()

    print html2txt(urlopen(''))

I am more or less happy with the lynx solution, but would be happier
with a pure python solution.

Is there a good pure-Python HTML -> text solution that preserves table
layouts and other visual formatting?

What do people recommend for quick parsing of complicated web pages?

John Hunter
