[py-dev] py.log suggestion
pinard at iro.umontreal.ca
Sun Nov 13 05:36:18 CET 2005
Hi, Holger! Are you back, or still enjoying the travel? :-)
>> > So, in short, i propose to treat the case of the first arg being
>> > a callable, simply as:
>> >     if consumer is not None:
>> >         if args and callable(args[0]):
>> >             args = (args[0](*args[1:], **kwargs),)
>> >             kwargs = None
>> > ...
>> It surely solves the initial problem, anyway, which is to have some way
>> to conditionalise out semi-lengthy computations whose sole purpose is to
>> generate the text of log messages.
Well, in the big program we started cleaning a few months ago, and
which I will surely continue cleaning for months, the above would take
care of most situations I have seen so far. Yet, I found a case where it
would not be sufficient, because the fact that logging will occur or not
changes the algorithm long in advance. That case is sketched at the end
of this message, in case it helps figuring out what I mean :-).
However, you know, this is only a quest for elegance. The program
already accepts a flurry of options for controlling lists and traces of
all kinds, and I'm trying to get rid of many of these options by
sticking to the py.log paradigm instead. For a few unusual cases,
I could either keep some of these options, or add interceptions in
the place where py.log consumers are decided for setting a few flags to
be tested later. There is no real problem, and I'm a reasonable man!
Yet, the extra flags are conceptually unneeded, if py.log offered an API
for testing whether a particular trace is active or not. If easily doable,
the nicest approach that comes to mind is still using the logger in a
boolean context.
The idea of the first argument being callable is attractive indeed, and
I really find the approach much nicer than salting the source with a lot
of extraneous tests. Yet, there are a few cases where I think it would
be nice to test, as elegantly as possible, if a logger has a consumer.
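The "logger in boolean context" idea could look like the following sketch. Again, `Producer` is a hypothetical stand-in for a py.log keyword logger, not the real implementation.

```python
# Sketch: a logger that is truthy exactly when it has a consumer,
# so callers can cheaply test up front whether a trace is active.

class Producer:
    def __init__(self, consumer=None):
        self.consumer = consumer

    def __bool__(self):
        # True iff the trace actually goes somewhere.
        return self.consumer is not None

    def __call__(self, *args):
        if self.consumer is not None:
            self.consumer(" ".join(str(arg) for arg in args))

lines = []
debug = Producer(consumer=lines.append)
if debug:                     # cheap up-front test
    debug("tracing", "is", "active")

assert bool(Producer(consumer=None)) is False
```

The attraction is that code which must change its behaviour long before any message is emitted can branch on `if debug:` instead of consulting a separate flag.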
>> And there is this unexpected danger as well, which you underlined in one
>> of your replies, and which I did not foresee (I should probably
>> have), that mylog may receive a first argument which is unexpectedly
>Right. So i have a new suggestion.
> log.info[arg1, arg2, ...]
>would behave exactly like the log.info(arg1, arg2, ...) does now,
>i.e. a dumb "print" emulation. No special check for callables,
>formatting and such.
Not bad! I like this! :-)
>But then we can put the "call function" syntax to new usage. So with
> log.info(*args, **kwargs)
>we would now have special treatment (rough sketch):
> if consumer is not None:
>     if callable(args[0]):
>         consumer(args[0](*args[1:], **kwargs))
>     elif kwargs:
>         assert len(args) == 1, "..."
>         consumer(args[0] % kwargs)
>     else:
>         consumer(args[0] % args[1:])
>this should IMO allow a rather intuitive usage for percent-formatting
>as well as for deferring formatting to custom callables.
Agreed, and quite interesting.
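To make the proposed split concrete, here is one runnable interpretation of Holger's rough sketch. The class name `Producer` is hypothetical and this is not the real py.log code: `log[...]` stays a dumb print emulation, while `log(...)` dispatches on the first argument.

```python
# Sketch of the proposed two-syntax API:
#   log[a, b, ...]  -> plain space-joined output, no special treatment
#   log(...)        -> callable first arg, or percent-formatting

class Producer:
    def __init__(self, consumer=None):
        self.consumer = consumer

    def __getitem__(self, args):
        # Dumb "print" emulation: no check for callables or formats.
        if self.consumer is not None:
            if not isinstance(args, tuple):
                args = (args,)
            self.consumer(" ".join(str(arg) for arg in args))

    def __call__(self, *args, **kwargs):
        if self.consumer is None:
            return
        if callable(args[0]):
            # Defer formatting entirely to the callable.
            self.consumer(args[0](*args[1:], **kwargs))
        elif kwargs:
            # Keyword percent-formatting: lone format string required.
            assert len(args) == 1, "keywords require a lone format string"
            self.consumer(args[0] % kwargs)
        else:
            # Positional percent-formatting.
            self.consumer(args[0] % args[1:])

lines = []
info = Producer(consumer=lines.append)
info["plain", "print", 42]                        # dumb print emulation
info("%(who)s scored %(pts)d", who="ann", pts=3)  # keyword formatting
info("%s + %s", 1, 2)                             # positional formatting
info(lambda n: "lazy %d" % n, 7)                  # deferred callable
```

The incompatibility Holger mentions is visible here: a plain `log("a", "b")` call would now try percent-formatting `"a" % ("b",)` rather than printing both arguments.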
>This is unfortunately a somewhat incompatible change to current usage
>but well, it's not used that much yet and we probably have enough
>access to the code bases that currently use it.
It surely would not be a practical problem for me / us, in any case.
Here is, for the curious, a short description of the case where I would
need to test whether a particular trace is active or sent to /dev/null,
and where giving a callable to a py.log logger would not be a solution.
During its initialisation, the application builds an internal tree from
a parameter file using a compact format. This file, without being huge,
is not small. Later in the application, this tree (or parts thereof) is
repeatedly "evaluated", given varying evaluation contexts. In a typical
run, such evaluations occur a million times or so, and for some of these
evaluations, hundreds of nodes need to be visited. So, special
attention is given for the evaluation to be done efficiently, and the
digested tree gets "optimized" in many ways before being used.
Jacques, who writes the parameter file (the one digested into a tree),
sometimes needs to understand why or how a particular decision is
reached by the evaluator (routinely called "the engine", here). So he
asks the operator to launch the application over a fairly limited subset
of the whole input and to activate the "explanation mode" of the engine.
In this mode, whenever a tree evaluation occurs, a copy of the big tree
is recreated, yet massively pruned from all subtrees which did not
participate in that particular evaluation, in the given context. That
subtree is then printed for later study and analysis, together with
related trace information. The trace itself contains textual reference
locators, already built into the tree by the parser/scanner pair
digesting the parameter file.
The key here is that in "explanation mode", a lot of shortcuts and
optimisations are inhibited, so the trace gets more decipherable,
and the re-built trees cleaner. Performance considerations force the
application to use a wholly rewritten evaluator in "explanation mode",
so the normal evaluator is not slowed down by tests, traces, and
sub-tree rebuilding. The selection of the proper evaluator depends on
whether "explanation mode" is wanted or not. Although a lot of machinery
is involved in this, I would like to see it, at least operationally, as
nothing more than just another mere trace, among many other mere traces.
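The evaluator-selection scenario above is where a boolean test on the logger would pay off. The sketch below is purely illustrative: `Producer`, `fast_eval` and `explaining_eval` are invented names, and the "tree" is trivially a list of numbers, standing in for the real digested parameter tree.

```python
# Illustrative only: selecting between a fast evaluator and an
# "explanation mode" evaluator based on whether a trace is active.

class Producer:
    def __init__(self, consumer=None):
        self.consumer = consumer

    def __bool__(self):
        return self.consumer is not None

    def __call__(self, message):
        if self.consumer is not None:
            self.consumer(message)

def fast_eval(tree):
    # Heavily optimised path: no tracing, no pruned-tree rebuilding.
    return sum(tree)

def explaining_eval(tree, log):
    # Slow path: visits every node and records trace information.
    total = 0
    for node in tree:
        total += node
        log("visited node %r" % node)
    return total

explanations = []
explain_log = Producer(consumer=explanations.append)

tree = [1, 2, 3]
# The whole algorithm is chosen up front, long before any message
# would be emitted -- a deferred callable cannot express this.
if explain_log:
    result = explaining_eval(tree, explain_log)
else:
    result = fast_eval(tree)
```

The point is that the decision happens once, before evaluation starts, which is exactly why a callable-per-message mechanism is not sufficient for this case.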
François Pinard http://pinard.progiciels-bpi.ca