robin at jessikat.fsnet.co.uk
Sun Jun 30 04:58:26 EDT 2002
In article <7xit41i7i9.fsf at ruckus.brouhaha.com>, Paul Rubin <phr-
n2002b at NOSPAMnightsong.com> writes
>Robin Becker <robin at jessikat.fsnet.co.uk> writes:
>> the generators here are things associated with compression or
>> decompression methods. The complexity is such that a severe
>> reformulation will be tough. I understand the concept, but I think it's
>> easier to just try and consume the whole input at one go and pass it on.
>large enough inputs that memory consumption is a problem, you probably
>your program will probably be near unusable. What's the JS
>environment and the application? If it's client side JS in a web
>browser, you're probably using the wrong approach to the problem. If
>it's something like Netscape server side JS, use the LiveConnect stuff
>to implement the compression in C.
It turns out that it's easy to set the right headers plus compressed
content in the server and have most modern browsers accept and decode
them properly.
However, the content.gz data has to be exactly gzip-compatible. My
experiments with Python-2.1/zlib on FreeBSD give me a compressed string
that's slightly too long for some reason. As an example, gzip gives 14099
bytes while zlib.compress gives 15008 bytes.
Anyone know what I need to do to make zlib work properly in this
context? Or should I really be using the gzip module?
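The likely culprit is the container format rather than the compressor: zlib.compress emits a zlib stream (RFC 1950, 2-byte header plus Adler-32 trailer), while gzip and the browsers expect the gzip container (RFC 1952, magic bytes \x1f\x8b, CRC-32 and length trailer). A sketch of both routes, written against the modern stdlib (gzip.compress appeared later; on Python 2.1 the equivalent was writing through gzip.GzipFile into a StringIO):

```python
import gzip
import zlib

data = b"example payload " * 1000

# zlib.compress produces a zlib stream (RFC 1950): a 2-byte header and an
# Adler-32 trailer. Browsers sent Content-Encoding: gzip will not accept it.
zstream = zlib.compress(data, 9)

# The gzip module wraps the same deflate data in the gzip container
# (RFC 1952) that the gzip CLI and browsers understand.
gz = gzip.compress(data, compresslevel=9)

# Alternatively, build the gzip container by hand: negative wbits asks
# zlib for a raw deflate stream with no zlib header or trailer.
co = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS)
raw = co.compress(data) + co.flush()
manual = (
    b"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff"        # minimal gzip header
    + raw                                               # raw deflate body
    + zlib.crc32(data).to_bytes(4, "little")            # CRC-32 of the data
    + (len(data) & 0xFFFFFFFF).to_bytes(4, "little")    # ISIZE trailer
)
```

The container overhead itself is tiny (a gzip header and trailer are on the order of 18 bytes), so a 900-byte gap between the two runs probably also involves differing compression levels or how the sizes were measured, but either gzip-module route above yields output the browsers will decode.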