urllib.urlretrieve problem

gene.tani at gmail.com
Wed Mar 30 07:58:56 CEST 2005


Mertz's "Text Processing in Python" book has a good discussion of
trapping 403s and 404s.

http://gnosis.cx/TPiP/
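For what it's worth, in current Python 3 the function has moved to urllib.request.urlretrieve, and there it *does* raise urllib.error.HTTPError for a 404 (it goes through urlopen internally). A self-contained sketch you can run against a throwaway local server, so no assumptions about debian.org; the helper name fetch and the filename out.deb are just for illustration:

```python
# Sketch: show that Python 3's urlretrieve raises HTTPError on a 404,
# unlike the Python 2 urllib version discussed in this thread.
import http.server
import threading
import urllib.error
import urllib.request

def fetch(url, dest):
    """Download url to dest; trap HTTP errors instead of saving the error page."""
    try:
        urllib.request.urlretrieve(url, dest)
        return True
    except urllib.error.HTTPError as exc:
        print("HTTP error %d for %s" % (exc.code, url))
        return False

if __name__ == "__main__":
    # Serve the current directory on an ephemeral port; any missing
    # path yields a genuine 404 response.
    server = http.server.HTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/no-such-file.deb" % server.server_port
    ok = fetch(url, "out.deb")   # the 404 is raised and trapped, so ok is False
    server.shutdown()
```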

Larry Bates wrote:
> I noticed you hadn't gotten a reply.  When I execute this it puts
> the following in the retrieved file:
>
> <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <HTML><HEAD>
> <TITLE>404 Not Found</TITLE>
> </HEAD><BODY>
> <H1>Not Found</H1>
> The requested URL /pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb
> was not found on this server.<P>
> </BODY></HTML>
>
> You will probably need to use something else to first determine if
> the URL actually exists.
>
> Larry Bates
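
[Larry's suggestion of checking first whether the URL exists can be sketched with a HEAD request using the stdlib http.client module (Python 3 spelling; the helper name url_exists is mine, not from the thread):]

```python
# Sketch: ask the server about the URL with a HEAD request before
# downloading anything; only a 2xx status counts as "exists".
import http.client
import urllib.parse

def url_exists(url):
    """Return True if a HEAD request for url gets a 2xx response."""
    parts = urllib.parse.urlsplit(url)
    conn = http.client.HTTPConnection(parts.hostname, parts.port or 80)
    try:
        conn.request("HEAD", parts.path or "/")
        return 200 <= conn.getresponse().status < 300
    finally:
        conn.close()
```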
>
>
> Ritesh Raj Sarraf wrote:
> > Hello Everybody,
> >
> >  I've got a small problem with urlretrieve.
> > Even passing a bad url to urlretrieve doesn't raise an exception.
> > Or does it?
> >
> > If yes, what exception is it? And how do I use it in my program?
> > I've searched a lot but haven't found anything helpful.
> >
> > Example:
> >
> > import urllib
> >
> > try:
> >     urllib.urlretrieve("http://security.debian.org/pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb")
> > except IOError, X:
> >     DoSomething(X)
> > except OSError, X:
> >     DoSomething(X)
> >
> > urllib.urlretrieve doesn't raise an exception even though there is
> > no package named libparl5.6
> > 
> > Please Help!
> > 
> > rrs




More information about the Python-list mailing list