Problem with urllib.urlretrieve

fishboy fishboy at spamspamspam.com
Sat Jun 12 06:49:29 CEST 2004


On 11 Jun 2004 16:01:01 -0700, ralobao at gmail.com (ralobao) wrote:

>Hi,
>
>I am writing a program to download all images from a specified site.
>It already works with most sites, but in some cases, such as
>www.slashdot.org, it only downloads 1 KB of the image. That 1 KB is an
>HTML page with a 503 error.
>
>What can I do to really get those images?
>
>Thanks
>
>Your help is appreciated.
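
A quick way to notice the problem described above is to check whether the downloaded bytes are actually an image before writing them to disk. This is only a sketch in modern Python (the helper name and the signature list are my own, not from the original post); a real image file starts with a binary magic number, while the 503 response is plain HTML:

```python
def looks_like_error_page(data):
    """Return True if the downloaded bytes do not start with a known
    image signature (hypothetical helper; PNG/JPEG/GIF only)."""
    image_signatures = (
        b"\x89PNG\r\n\x1a\n",     # PNG magic number
        b"\xff\xd8\xff",          # JPEG start-of-image marker
        b"GIF87a", b"GIF89a",     # GIF headers
    )
    return not data.startswith(image_signatures)
```

With a check like this, the downloader can skip saving the 1 KB error page instead of leaving a broken "image" on disk.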

I did something like this a while ago.  I used websucker.py in the
Tools/ directory, and then added some conditionals to tell it to only
create files for certain extensions.

As to why it fails in your case (/me puts on psychic hat), I'm guessing
slashdot does something to stop people from deep-linking its image
files, to stop leeches.
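
If that guess is right, sending browser-like headers often gets past such checks. A minimal sketch using Python 3's urllib.request (the original post predates it; the function name, the User-Agent string, and the optional Referer parameter are all my own assumptions, and whether this works depends entirely on how the server filters requests):

```python
import urllib.request


def make_image_request(url, referer=None):
    """Build a Request carrying browser-like headers, since hosts that
    block deep-linking may serve a 503 page to bare library requests
    (hypothetical helper, not part of any standard API)."""
    headers = {"User-Agent": "Mozilla/5.0"}  # assumed browser-like UA string
    if referer:
        # Some anti-leech filters check the Referer header as well.
        headers["Referer"] = referer
    return urllib.request.Request(url, headers=headers)


def fetch_image(url, referer=None):
    """Fetch the raw bytes of an image using the headers above."""
    with urllib.request.urlopen(make_image_request(url, referer)) as resp:
        return resp.read()
```

urlretrieve itself doesn't let you set headers, which is why the sketch builds a Request and reads the bytes by hand instead.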

><{{{*>
