Automatically resume a download w/ urllib?

Chris Moffitt chris_moffitt at
Sun Oct 21 04:02:52 CEST 2001

I'm trying to develop a small application (mostly for Windows, if that 
matters) that can download and install binary files stored on a web server.

I've been able to get urllib to download the file just fine, but what I 
would like to do is be able to resume a download if it is cancelled for 
some reason.  In other words, if I've downloaded 50k of a 200k file, I'd 
like to just download the remaining 150k.

Is this possible using HTTP and Apache?  If not, what about FTP?  If 
neither of those, does that mean I need to write my own pseudo-server of 
some sort that can spool out the file appropriately?

If it helps, here's my non-working code (ripped from the Python Std 
Library) that contains a seek statement.

import urllib, os

dlFile = "sm.tar.gz"
webFile = urllib.urlopen("")

existSize = 0
if os.path.exists(dlFile):
    outputFile = open(dlFile, "ab")
    existSize = os.path.getsize(dlFile)
else:
    outputFile = open(dlFile, "wb")

n = 0

while 1:
    if existSize > 0:
        webFile.seek(existSize + 1)   <--- Non-working piece
    data = webFile.read(8192)
    if not data:
        break
    outputFile.write(data)
    n = n + len(data)


More information about the Python-list mailing list