"Maximum recursion depth exceeded"...why?
thomasmallen at gmail.com
Thu Feb 19 14:58:11 EST 2009
On Feb 18, 10:15 pm, rdmur... at bitdance.com wrote:
> Thomas Allen <thomasmal... at gmail.com> wrote:
> > On Feb 18, 4:51 am, alex23 <wuwe... at gmail.com> wrote:
> > > On Feb 18, 7:34 pm, rdmur... at bitdance.com wrote:
> > > > Yeah, but wget -r -k will do that bit of it, too.
> > > Wow, nice, I don't know why I never noticed that. Cheers!
> > Hm...doesn't do that over here. I thought it may have been because of
> > absolute links (not to site root), but it even leaves things like <a
> > href="/">. Does it work for you guys?
> It works for me. The sample pages I just tested on it don't use
> any href="/" links, but my 'href="/about.html"' got properly
> converted to 'href="../about.html"'. (On the other hand my '/contact.html'
> got converted to a full external URL...but that's apparently because the
> contact.html file doesn't actually exist :)
Thanks for the help everyone. The idea of counting the slashes was the
linchpin of this little script, and with a little trial and error, I
successfully generated a local copy of the site. I don't think my
colleague knows what went into this, but he seemed appreciative :^)