Automatically resume a download w/ urllib?
Chris Moffitt
chris_moffitt at yahoo.com
Sat Oct 20 22:02:52 EDT 2001
I'm trying to develop a small application (mostly for Windows, if that
matters) that can download and install binary files stored on a web server.
I've been able to get urllib to download the file just fine, but I would
also like to be able to resume a download if it is cancelled for
some reason. In other words, if I've downloaded 50k of a 200k file, I'd
like to just download the remaining 150k.
Is this possible using HTTP and Apache? If not, what about FTP? If
neither of those, does that mean I need to write my own pseudo-server of
some sort that can spool out the file appropriately?
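In case it clarifies what I have in mind, here's a rough, untested sketch of
the FTP route using ftplib's REST support (the host, file name, and anonymous
login are just placeholders from my test setup, and the FTP server would have
to honor REST):

import os
from ftplib import FTP

dlFile = "sm.tar.gz"

existSize = 0
if os.path.exists(dlFile):
    existSize = os.path.getsize(dlFile)

outputFile = open(dlFile, "ab")

ftp = FTP("192.168.1.4")     # placeholder host from my test setup
ftp.login()                  # anonymous login

# Passing a restart offset makes ftplib send "REST <offset>" before RETR,
# so the server starts the transfer at the byte we already have.
rest = None
if existSize > 0:
    rest = existSize
ftp.retrbinary("RETR sm-cvs-20010519.tar.gz", outputFile.write, 8192, rest)

ftp.quit()
outputFile.close()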
If it helps, here's my non-working code (ripped from the Python Std
Library) that contains a seek statement.
#!/usr/local/bin/python
import urllib, os

dlFile = "sm.tar.gz"
webFile = urllib.urlopen("http://192.168.1.4/sm-cvs-20010519.tar.gz")

existSize = 0
if os.path.exists(dlFile):
    outputFile = open(dlFile, "ab")
    existSize = os.path.getsize(dlFile)
else:
    outputFile = open(dlFile, "wb")

n = 0
while 1:
    if existSize > 0:
        webFile.seek(existSize + 1)    # <--- non-working piece
    data = webFile.read(8192)
    if not data:
        break
    outputFile.write(data)
    n = n + len(data)

webFile.close()
outputFile.close()
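For comparison, here's a rough, untested sketch of how I imagine the HTTP side
might work: instead of seeking on the urlopen object, send a Range header with
httplib and ask the server for only the missing bytes (same placeholder host
and path as above; the server would have to support byte ranges and answer
206 Partial Content):

import httplib, os

dlFile = "sm.tar.gz"
host = "192.168.1.4"                     # placeholder test server
path = "/sm-cvs-20010519.tar.gz"

existSize = 0
if os.path.exists(dlFile):
    existSize = os.path.getsize(dlFile)

conn = httplib.HTTPConnection(host)
headers = {}
if existSize > 0:
    # Ask for everything from byte existSize onward.
    headers["Range"] = "bytes=%d-" % existSize
conn.request("GET", path, headers=headers)
response = conn.getresponse()

if response.status == 206:               # Partial Content: range honored
    outputFile = open(dlFile, "ab")
elif response.status == 200:             # range ignored, start from scratch
    outputFile = open(dlFile, "wb")
else:
    raise IOError("unexpected HTTP status: %d" % response.status)

while 1:
    data = response.read(8192)
    if not data:
        break
    outputFile.write(data)

outputFile.close()
conn.close()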
Thanks,
Chris