[Tutor] how to set time limit before downloading file

kulibin da-da! kylibine at gmail.com
Thu Aug 24 13:58:06 CEST 2006


I had to download some files.

 Using the code below I could download almost all of them, but I
 have a problem with 'http://18wheels.wz.cz/csszengarden/back-con.jpg'.

 This is the code I use to download
'http://18wheels.wz.cz/csszengarden/back-con.jpg':

# -*- coding: windows-1251 -*-
import os
import urllib


def download_file_using_urlopen(url, out_dir=''):
    if out_dir and not os.path.isdir(out_dir):
        os.mkdir(out_dir)

    filename = os.path.join(out_dir, os.path.basename(url))

    print "%s -->" % filename
    r = urllib.urlopen(url)
    print 'info:\t', r.info()
    f = open(filename, "wb")
    while 1:
        d = r.read(1024)
        if not d:
            break
        print '.',
        f.write(d)  # write each chunk as it arrives instead of buffering it all
    r.close()
    f.close()


urls = ['http://18wheels.wz.cz/csszengarden/back-con.jpg']
out_dir = 'out'
for url in urls:
    print url
    download_file_using_urlopen(url, out_dir)
    print '\n' * 3



And the problem is that the script just sleeps for hours, and that's
all - no data is ever read.
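[Editorial note] The hang is consistent with urllib's sockets having no timeout by default: if the remote server accepts the connection but never sends any data, `r.read(1024)` blocks indefinitely. As a hedged sketch (standard-library `socket` module, not anything from this post), `socket.setdefaulttimeout()` gives every socket created afterwards - including the one `urlopen` opens internally - a timeout, so a stalled read raises `socket.timeout` instead of blocking forever. The local "stalled server" below only stands in for the unresponsive host:

```python
import socket

# Every socket created after this call gets a 1-second default timeout,
# including the ones urllib.urlopen creates internally.
socket.setdefaulttimeout(1)

# Local stand-in for a stalled server: it accepts the connection
# but never sends a single byte.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

c = socket.socket()
c.connect(srv.getsockname())
conn, _ = srv.accept()

try:
    c.recv(1024)            # the server stays silent, so this cannot succeed
    result = "got data"
except socket.timeout:      # raised after ~1 second instead of hanging
    result = "timed out"

conn.close()
c.close()
srv.close()
print(result)
```

Calling `socket.setdefaulttimeout(...)` once before the download loop would make each stuck `r.read(1024)` fail with `socket.timeout` (an `IOError` subclass), which the loop can catch to skip to the next URL.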

 Can you show me what is wrong, and
1. how to set a time limit for downloading a whole file, after which the
script will skip it and move on to the next file;
2. how to set a time limit for every "d = r.read(1024)" call, after which
the script will skip the file and move on to the next one?
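[Editorial note] For the first question - capping the total time spent on one file rather than on each read - one Unix-only approach (a sketch, not from this post; the helper name `run_with_limit` is made up) is `signal.alarm`, which delivers SIGALRM after the given number of seconds; a handler can turn that into an exception the download loop catches before moving on:

```python
import signal

class TimeLimit(Exception):
    """Raised when a call exceeds its time budget."""

def _on_alarm(signum, frame):
    raise TimeLimit()

signal.signal(signal.SIGALRM, _on_alarm)

def run_with_limit(seconds, func, *args):
    # Schedule SIGALRM `seconds` from now; the handler turns it into
    # a TimeLimit exception that aborts func mid-call.
    signal.alarm(seconds)
    try:
        return func(*args)
    finally:
        signal.alarm(0)  # cancel the alarm once func finishes (or fails)
```

With this helper the loop could call, say, `run_with_limit(60, download_file_using_urlopen, url, out_dir)` inside a `try`/`except TimeLimit` and continue with the next URL on timeout. Note that `signal.alarm` exists only on Unix and works only in the main thread; the `socket.setdefaulttimeout` approach covers the per-read case (question 2) portably.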


 Thanks in advance.

