High Performance solutions are needed to do things like urlretrieve

Thomas Jollans thomas at jollans.com
Wed Jul 14 14:04:04 EDT 2010


On 07/14/2010 07:49 PM, David wrote:
> 
> urlretrieve works fine.  However, when the file size gets very large,
> it takes forever and sometimes even fails.
> 
> For instance, one of the downloaded .zip files is 363,096 KB.
> 
> In particular, when trying to fetch a very large zipped folder with
> urlretrieve, it can take 20 to 30 minutes and often fails; there is
> never any problem with smaller folders.  Any solution for this?

Does this only occur with urllib, or is this the case for other
software, like a web browser, or a downloader like wget?

The more you try to copy, the longer it takes, and the longer it takes,
the more probable a problem affecting the transfer becomes. Depending on
the network speed, 20 to 30 minutes may well be a reasonable timeframe
for 360 MB (363,096 KB in 25 minutes works out to roughly 240 KB/s). And
what does "often" mean here?

In other words: are you sure urlretrieve is to blame for your problems,
and why are you sure? Have you tried urllib2 or the other urllib
functions?
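
If you want to rule urlretrieve out, a rough sketch of streaming the
download in chunks with urllib2 might look like the following (the URL,
output filename, chunk size and timeout are just placeholder choices):

    # Minimal sketch (Python 2): stream a large download in chunks
    # with urllib2 instead of urlretrieve.
    import urllib2

    url = "http://example.com/big-archive.zip"   # hypothetical URL
    response = urllib2.urlopen(url, timeout=60)  # fail fast on a stalled connection

    with open("big-archive.zip", "wb") as out:
        while True:
            chunk = response.read(64 * 1024)     # read 64 KB at a time
            if not chunk:                        # empty read means EOF
                break
            out.write(chunk)

Reading in chunks like this lets you add progress reporting, a timeout,
or per-chunk retries, instead of waiting on one monolithic urlretrieve
call to either finish or fail.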



