[Web-SIG] Developer authentication spec

Graham Dumpleton graham.dumpleton at gmail.com
Tue Jul 14 01:59:04 CEST 2009

2009/7/14 Ian Bicking <ianb at colorstudy.com>:
> I wrote up a spec a while
> ago: http://wsgi.org/wsgi/Specifications/developer_auth
> The goal is a single way to indicate to debugging and developer tools when
> the developer is logged in.  This is orthogonal to normal application
> authentication.  It would control access to things like interactive
> debuggers, but could also be used to show information about template
> rendering, profiling, etc.  My goal in making this a specification is to
> encourage wider use of the technique in debugging tools (consumers), so they
> can use a consistent means of protecting private information or tools
> intended for developers.
> Since I wrote the spec I've written up an implementation:
> https://svn.openplans.org/svn/DevAuth/trunk
> Last time I brought this up there wasn't any response, but I'm hoping
> it'll... I dunno, make more sense or seem more interesting now.

For in-browser debuggers, I think a rethink is needed as to how they
work. Currently they are only of use if the person who made the
request triggered the error and the debugger is enabled. That is
useless if you want to debug a problem that happened at an arbitrary
time through the actions of an arbitrary user and you have no clue
how to reproduce it.

What I have been wanting to do for a while, and just haven't had the
time, is to develop a debugging package which isn't inline with the
request, but is accessed through a special password-protected URL,
with access restrictions on client IP as well if you want them. The
intent is that this be much more than just a code debugger: a portal
into looking at lots of different performance characteristics of an
application.
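As a rough sketch of the gatekeeping part of that idea, here is a
hypothetical WSGI middleware which exposes the portal under a dedicated
URL prefix, guarded by a shared password and a client-IP allow list.
The class name, URL prefix, and the password header are all
illustrative, not any existing package:

```python
# Hypothetical sketch only. DebugPortal, the /__debug__ prefix, and the
# X-Debug-Password header are illustrative names, not a real API.

class DebugPortal:
    def __init__(self, app, prefix='/__debug__', password='secret',
                 allowed_ips=('127.0.0.1',)):
        self.app = app
        self.prefix = prefix
        self.password = password
        self.allowed_ips = set(allowed_ips)

    def __call__(self, environ, start_response):
        if not environ.get('PATH_INFO', '').startswith(self.prefix):
            # Normal application traffic passes straight through.
            return self.app(environ, start_response)
        # Check client IP first, then the shared password.
        if environ.get('REMOTE_ADDR') not in self.allowed_ips:
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return [b'Forbidden']
        if environ.get('HTTP_X_DEBUG_PASSWORD') != self.password:
            start_response('401 Unauthorized', [('Content-Type', 'text/plain')])
            return [b'Authentication required']
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'Debug portal index']
```

A real version would obviously want proper credential handling rather
than a plain-text header; the point is only that the portal sits beside
the application at its own URL.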

The idea for debugging of exceptions is that you would enter this
portal and define some criteria as to what you want to capture. For
example, you might already know that the problem always occurs when
accessing certain URLs. As such, you would set up a definition saying
that if an unexpected exception occurs when accessing the application
through some parent URL, the exception is caught and the state
retained, with the client just getting back the normal 500 error page
or other response as appropriate. Because we are going to park the
state of the request, we would also define a limit on how many such
events to keep, as we don't want to blow out memory usage or hold
locks on pooled resources. When an event is captured, you could also
be emailed so you know about it.
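The capture side of that could be sketched as a middleware which
watches a parent URL, parks the exception state and a snapshot of the
request environ up to a fixed limit, and hands the client a plain 500
response. The class and parameter names are illustrative assumptions:

```python
# Hypothetical sketch only. ParkingMiddleware, watch_prefix and
# max_parked are illustrative names, not an existing API.
import sys

class ParkingMiddleware:
    def __init__(self, app, watch_prefix='/orders', max_parked=10):
        self.app = app
        self.watch_prefix = watch_prefix
        self.max_parked = max_parked
        self.parked = []  # (environ snapshot, sys.exc_info() tuple)

    def __call__(self, environ, start_response):
        watched = environ.get('PATH_INFO', '').startswith(self.watch_prefix)
        try:
            return self.app(environ, start_response)
        except Exception:
            # Park the request state and traceback, but only under the
            # watched URL and only up to the configured limit.
            if watched and len(self.parked) < self.max_parked:
                self.parked.append((dict(environ), sys.exc_info()))
                # A real version might also send the notification email here.
            start_response('500 Internal Server Error',
                           [('Content-Type', 'text/plain')],
                           sys.exc_info())
            return [b'Internal Server Error']
```

Because the full exc_info tuple (including the traceback frames) is
retained, something like pdb.post_mortem() could later be attached to
the parked traceback from the portal.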

As to debugging the problem later on, or when you get the email, you
would log in to your debugging portal and view a list of the parked
failed requests. You would then select which event you want to debug
and attach to the parked state for that request and the exception
state. You would then work in a similar way to the current in-browser
debuggers. The difference, though, is that you are debugging an event
that happened some time earlier and which was triggered by someone
you may not even know. In other words, it gives you a postmortem
facility working on actual data from the time of the request.

This debugging portal could also be expanded to give access to
various other information about an application through a web page.
This is in contrast to a lot of existing tools which just dump the
information to log files and expect you to look there.

For example, you might dynamically through the portal say you want to
run profiling of requests against a specific URL. Again, to limit how
much data you capture, you might say to do it only once, a small set
number of times, or for a period of time. The profiling data for this
might be retained in memory, or written to disk, but either way, the
web interface provided by the portal would be the means of viewing it.
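A profiling probe along those lines might look like the following
sketch, which profiles requests matching a URL prefix a set number of
times and keeps the formatted stats in memory for the portal to render.
All names here are illustrative assumptions:

```python
# Hypothetical sketch only. ProfilingProbe and its parameters are
# illustrative names, not an existing API.
import cProfile
import io
import pstats

class ProfilingProbe:
    def __init__(self, app, prefix='/reports', max_runs=3):
        self.app = app
        self.prefix = prefix
        self.remaining = max_runs     # stop profiling after this many runs
        self.captured = []            # formatted pstats output per run

    def __call__(self, environ, start_response):
        if (self.remaining <= 0
                or not environ.get('PATH_INFO', '').startswith(self.prefix)):
            return self.app(environ, start_response)
        self.remaining -= 1
        profiler = cProfile.Profile()
        result = profiler.runcall(self.app, environ, start_response)
        # Keep the top entries by cumulative time, in memory, for the
        # portal's web interface to display later.
        out = io.StringIO()
        pstats.Stats(profiler, stream=out).sort_stats('cumulative').print_stats(10)
        self.captured.append(out.getvalue())
        return result
```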

Other things the debugging portal could provide are:

- Ability to capture request and response details, optionally
including request and response content, for later analysis or replay.
- Recording time taken for specific requests.
- For a multithreaded process, capture some metrics on thread pool
utilisation, that is, the average and maximum number of threads in
use over time. The intent here is to determine how many threads are
actually required to handle your request load, so that you can drop
the number of threads and reduce memory usage.
- Track requests/second.
- etc etc etc
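As a sketch of one such instrument, the thread utilisation and
requests/second ideas above could be a middleware that counts in-flight
requests (a rough proxy for thread pool usage in a threaded server)
and total requests over time. All names are illustrative:

```python
# Hypothetical sketch only. MetricsProbe is an illustrative name.
import threading
import time

class MetricsProbe:
    def __init__(self, app):
        self.app = app
        self.lock = threading.Lock()
        self.active = 0        # requests currently being handled
        self.max_active = 0    # peak concurrency seen so far
        self.count = 0         # total requests handled
        self.started = time.monotonic()

    def __call__(self, environ, start_response):
        with self.lock:
            self.active += 1
            self.max_active = max(self.max_active, self.active)
            self.count += 1
        try:
            return self.app(environ, start_response)
        finally:
            with self.lock:
                self.active -= 1

    def requests_per_second(self):
        elapsed = time.monotonic() - self.started
        return self.count / elapsed if elapsed > 0 else 0.0
```

In a threaded server, max_active approximates the number of pool
threads actually needed, which is the figure you would use to trim the
pool size.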

All of these should be able to be applied to all, or subsets of,
URLs. You should be able to specify some criteria as to how long you
run the data probe, i.e., a number of requests, a set time, or
forever. You should perhaps also be able to filter based on the IP of
the remote client, etc.
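Those criteria could be factored out into a small predicate object
that any probe consults per request, so every instrument gets the same
URL, count, duration, and client-IP filters for free. The field names
here are illustrative assumptions:

```python
# Hypothetical sketch only. ProbeCriteria and its fields are
# illustrative names, not an existing API.
import time

class ProbeCriteria:
    def __init__(self, prefixes=('/',), max_requests=None,
                 until=None, client_ips=None):
        self.prefixes = prefixes
        self.max_requests = max_requests  # None means no count limit
        self.until = until                # monotonic deadline; None = forever
        self.client_ips = client_ips      # None means any client
        self.seen = 0

    def matches(self, environ):
        """Return True (and count the request) if the probe should fire."""
        if self.until is not None and time.monotonic() > self.until:
            return False
        if self.max_requests is not None and self.seen >= self.max_requests:
            return False
        path = environ.get('PATH_INFO', '')
        if not any(path.startswith(p) for p in self.prefixes):
            return False
        if (self.client_ips is not None
                and environ.get('REMOTE_ADDR') not in self.client_ips):
            return False
        self.seen += 1
        return True
```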

The final thing is that these shouldn't be developed as lots of
separate applications. Instead, a framework should be constructed
which would allow different developers to implement them as component
instruments that could then be added as plugins into the debugging
portal and mesh nicely with the overall web page theme for the
portal.
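The plugin side of that could be as simple as a registry the portal
owns and instrument authors register against. The registry and the
example instrument below are illustrative assumptions, not an existing
interface:

```python
# Hypothetical sketch only. InstrumentRegistry and RequestTimer are
# illustrative names, not an existing API.

class InstrumentRegistry:
    def __init__(self):
        self._instruments = {}

    def register(self, name, factory):
        """Register an instrument; factory builds an instance on demand."""
        self._instruments[name] = factory

    def names(self):
        """Names the portal would list in its web interface."""
        return sorted(self._instruments)

    def create(self, name, *args, **kwargs):
        return self._instruments[name](*args, **kwargs)

registry = InstrumentRegistry()

class RequestTimer:
    """Example instrument: would record per-request durations."""
    def __init__(self):
        self.timings = []

registry.register('request-timer', RequestTimer)
```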

The intent with this debugging portal is that it could always be a
part of a production application. Most of the time it would sit off
to the side, idle, with negligible if any performance impact. When
you do have a problem, you would go set up the probe you want to run
and leave it. You then come back when you want and review the
results. For some types of probes the overhead may be small enough
that you could even leave them on all the time.

Because it would be part of the production application, the debugging
framework itself should not be too heavyweight, so it should not be
dependent on running some large high-level framework. Instead, it
should be quite small and based on WebOb or similar. Because of its
simplicity, bobo might be a good choice for URL dispatch, and with it
you get WebOb anyway. For templating where necessary, I'm not sure.
Personally, I actually think it should try to use AJAX and push the
application interfaces back into JavaScript in the browser, with
remote calls using JSON to interact with the actual engine. Pyjamas
could be a good match for developing the interface.
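The JSON-over-HTTP half of that needs nothing heavyweight at all; a
stdlib-only WSGI callable is enough to serve probe results for a
JavaScript front end to render. The payload shape below is purely
illustrative:

```python
# Hypothetical sketch only: a bare WSGI endpoint serving probe results
# as JSON. The payload fields are illustrative, not a defined schema.
import json

def portal_api(environ, start_response, metrics=None):
    # In the real portal this dict would come from the live probes.
    metrics = metrics or {'requests': 0, 'parked_exceptions': 0}
    body = json.dumps(metrics).encode('utf-8')
    start_response('200 OK',
                   [('Content-Type', 'application/json'),
                    ('Content-Length', str(len(body)))])
    return [body]
```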

What problems would there be? Well, the main one is that, because
retained state is kept for debugging sessions, the WSGI application
needs to be run in a single process. So, it may not be as useful for
multiprocess deployments. Even then, using sticky sessions/session
affinity via cookies, one might be able to at least monitor a
selected process in a multiprocess configuration. You might even
build on the deployment mechanisms to somehow dynamically skew
traffic towards that specific process when you do want to do some
data capture.

Anyway, lots of ideas but no time. I had sort of seen this as a
project I might develop myself, but due to that lack of time, maybe
the above description might inspire others to see what it is I am
thinking of and the possible utility of it. As such, maybe some of you
out there may be interested in doing this as a group project. If not,
I'll park the idea again and come back to it when I have time.

