httplib and large file uploads

Jesse Noller jnoller at gmail.com
Mon Oct 2 19:35:28 CEST 2006


Hey All,

I'm working on a script that will generate a file of size N (where N is
1 KB to 1 GB) in small chunks, in memory (hashing the data on the fly), and pass
it to an httplib object for upload. I don't want to store the file on
disk, or hold it completely in memory, at any time. The problem arises after opening
the http connection (PUT) - and then trying to figure out how to
iterate over / hand the chunks I am generating to the httplib connection's send()
call. For example (this code does not work as is):

chunksize = 1024
size = 10024

http = httplib.HTTP(url, 80)
http.putrequest("PUT", save_url)
http.putheader("Content-Length", str(size))
http.endheaders()

for i in xrange(size / chunksize):
    chunk = ur.read(chunksize)
    http.send(chunk)

errcode, errmsg, headers = http.getreply()
http.close()
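For what it's worth, repeated send() calls after endheaders() are allowed; the snippet below is a minimal, self-contained sketch of that pattern using Python 3's http.client and http.server (the modern successors to httplib and BaseHTTPServer), with a throwaway local PUT handler standing in for the real server - none of these server-side names come from the original post. Note the while loop also covers the remainder when size is not a multiple of chunksize, which the xrange(size / chunksize) loop above silently drops:

```python
import hashlib
import http.client
import http.server
import os
import threading

# Tiny throwaway PUT endpoint that hashes whatever body it receives,
# so we can check the streamed upload arrived intact.
received = {}

class PutHandler(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length)
        received["sha1"] = hashlib.sha1(body).hexdigest()
        received["size"] = len(body)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), PutHandler)
threading.Thread(target=server.handle_request, daemon=True).start()

size, chunksize = 10024, 1024
hasher = hashlib.sha1()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.putrequest("PUT", "/upload")
conn.putheader("Content-Length", str(size))
conn.endheaders()

remaining = size
while remaining > 0:                      # also handles the final partial chunk
    chunk = os.urandom(min(chunksize, remaining))
    hasher.update(chunk)                  # hash on the fly, nothing buffered
    conn.send(chunk)                      # repeated send() calls are fine
    remaining -= len(chunk)

resp = conn.getresponse()
conn.close()
server.server_close()
```

The whole file never exists in one place: each chunk is generated, hashed, and sent, then discarded.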


In this case, "ur" is a file handle pointing to /dev/urandom. Obviously, the
problem lies in the multiple send(chunk) calls. I'm wondering if it is
possible to hand http.send() an iterator/generator which can pass chunks in
as needed.

Thanks in advance,
-jesse