[Web-SIG] WSGI 2.0

Graham Dumpleton graham.dumpleton at gmail.com
Fri Apr 6 08:41:29 CEST 2007

On 06/04/07, James Y Knight <foom at fuhm.net> wrote:
> On Apr 5, 2007, at 10:52 PM, Graham Dumpleton wrote:
> > On 06/04/07, James Y Knight <foom at fuhm.net> wrote:
> >> What's the point of a switch? If the app didn't provide a content-
> >> length, and you can't otherwise determine a content-length (because
> >> for example the result is a simple list of strings), just do chunked
> >> encoding output automatically.
> >
> > To a degree that may make sense, but to my mind, the implementation of
> > a lower level layer beneath the application, which an application
> > writer may have little real control over, should not really be making
> > an arbitrary decision on behalf of the user to impose such a
> > behaviour. It should always be up to the application writer to make
> > the decision.
> > That said, it may be worthwhile to actually implement the directive to
> > honour Off (default), On or Auto. Thus On is always use chunked
> > transfer encoding and Auto would be what you describe. Which approach
> > is used though should still be the application writers choice.
> But you didn't answer the question: what is the _point_? When do you
> ever want to require a connection close rather than use chunking
> ("Off")? Why would you ever want to use chunking if you already know
> the content length ("On"). Those switches seem wholly useless.
> No addition or extension to WSGI needed...

Am I to take it then that you believe, or are proposing, that WSGI 2.0
should require that if no content length is provided in a response and
one can't be calculated, a WSGI adapter should ensure the web server
uses chunked transfer encoding? Since the original point of this
message thread was to determine what WSGI 2.0 should be, it would help
if you were clear on what you believe should be specified by WSGI 2.0
in this respect.
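To make the decision concrete, here is a minimal sketch of the choice a
WSGI adapter faces. The function name and return convention are my own
illustration, not part of WSGI 1.0 or any proposed 2.0 text: if the
application supplied Content-Length, use it; if the body is a plain
list of strings, the length is cheap to compute; otherwise the server
must fall back to something like chunked transfer encoding.

```python
def choose_framing(headers, body):
    """Return ('length', n), ('length', None) or ('chunked', None).

    `headers` is a WSGI-style list of (name, value) tuples and `body`
    is the iterable returned by the application.  ('length', None)
    means the application already supplied its own Content-Length.
    Purely illustrative; not an API from any WSGI implementation.
    """
    header_names = {name.lower() for name, value in headers}
    if 'content-length' in header_names:
        return ('length', None)
    if isinstance(body, (list, tuple)):
        # Finite, in-memory response: the length is cheap to compute.
        return ('length', sum(len(chunk) for chunk in body))
    # Arbitrary iterator/generator: length unknown without buffering.
    return ('chunked', None)
```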

Personally I don't understand why having a choice is a problem. Since
the default for Apache, and probably many other web servers, is not to
use chunked transfer encoding, it seems logical that one should still
be able to choose that chunked transfer encoding not be used. The
logical opposite of Off is for chunked transfer encoding always to be
used. As you have pointed out, there is also a logical middle ground
where the content length is actually provided. Why shouldn't a user
have a choice as to how they want it to behave? A user may have very
good reasons for wanting it to behave in a certain way, and as the
provider of a piece of middleware software I should be giving them
access to all the options. I should not be limiting choices.
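The three-way directive described above can be written out as a small
policy function. The directive names Off/On/Auto follow this thread;
the function itself is a hypothetical sketch, not code from any
existing server or middleware:

```python
def use_chunked(directive, content_length_known):
    """Decide whether a response should use chunked transfer encoding.

    Off  - never chunk; when no Content-Length is available the server
           would fall back to closing the connection to mark the end.
    On   - always chunk, even when a Content-Length could be sent.
    Auto - chunk only when the length cannot be determined.
    Illustrative only: directive names are from the discussion, the
    function is not part of any WSGI adapter's actual API.
    """
    if directive == 'Off':
        return False
    if directive == 'On':
        return True
    if directive == 'Auto':
        return not content_length_known
    raise ValueError('unknown directive: %r' % directive)
```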

Thus, from where I stand, I don't see a need to answer those
questions: I am not the end user, so I can't say why a user would want
to do it a specific way, nor rule out that any particular way may be
needed.

Anyway, the whole point of adding the option in the first place was
merely to get around the current shortcoming of WSGI 1.0: it neither
defines whether chunked transfer encoding should be used nor provides
a way for an application to control it. The option was thus a
convenience for users who felt they needed that control. I could
instead have said 'tough luck' and told users to wait until WSGI 2.0
is out and says something about it. I guess one can't make everyone
happy, but if this is going to be seen as such a big issue, I'd sooner
remove the option entirely and let users put up with the WSGI 1.0
limitation of not being able to use chunked transfer encoding at all.
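For anyone following along who hasn't looked at the wire format: the
chunked framing itself (RFC 2616, section 3.6.1) is simple, which is
part of why a server can apply it transparently. A minimal encoder,
purely for illustration:

```python
def chunk_encode(chunks):
    """Frame an iterable of byte strings as an HTTP/1.1 chunked body.

    Each chunk is preceded by its size in hex and a CRLF, and the body
    is terminated by a zero-length chunk.  Illustrative sketch only.
    """
    out = []
    for data in chunks:
        if data:  # a zero-length chunk would terminate the body early
            size = ('%x' % len(data)).encode('ascii')
            out.append(size + b'\r\n' + data + b'\r\n')
    out.append(b'0\r\n\r\n')  # final zero-length chunk ends the body
    return b''.join(out)
```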
