gert.cuykens at gmail.com
Fri Jul 24 20:10:17 CEST 2009
On Jul 24, 7:32 pm, "Diez B. Roggisch" <de... at nospam.web.de> wrote:
> gert wrote:
> > this is a non-standard way to store multipart POST data on disk
> > def application(environ, response):
> >     with open('/usr/httpd/var/wsgiTemp','w') as f:
> >         while True:
> >             chunk = environ['wsgi.input'].read(8192).decode('latin1')
> >             if not chunk: break
> >             f.write(chunk)
> >     response('200 OK', [('Content-Type', 'text/plain')])
> >     return ['complete']
> > my question is how do I handle the file, so I can shuffle it into a db
> > using small chunks of memory?
> I don't think that's possible with the current DB-API. There is no
> stream-based BLOB interface (as e.g. JDBC offers).
> So the answer certainly depends on the RDBMS you use. For Oracle, you
> would be in luck.
> I don't know about other adapters.
sqlite :) OK, let's say for now it would be impossible on the DB level; but
before I reach the impossible, I still need to parse the file to
prepare the chunks. How do I do that? How do I get the chunks without
loading the whole file into memory?
b = environ['CONTENT_TYPE'].split('boundary=')[1]
data = search(b+r'.*?Content-Type: application/octet-stream\r\n\r
data = data.encode('latin1')
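For the "chunks without loading the whole file" part, the usual approach is to read the temp file back in fixed-size blocks with a generator, so only one block is ever in memory. A minimal sketch (the path just mirrors the WSGI example above; the handle() consumer is hypothetical):

```python
from functools import partial

CHUNK = 8192

def read_chunks(path, size=CHUNK):
    """Yield successive byte chunks so the whole file never sits in memory."""
    with open(path, "rb") as f:
        # iter(callable, sentinel) calls f.read(size) until it returns b"".
        for chunk in iter(partial(f.read, size), b""):
            yield chunk

# Usage: feed each chunk to the DB layer one piece at a time, e.g.
# for piece in read_chunks("/usr/httpd/var/wsgiTemp"):
#     handle(piece)
```

Opening in binary mode ("rb") matters here: multipart bodies are bytes, and decoding them as latin-1 only to re-encode later just adds copies.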