missing? dictionary methods

Antoon Pardon apardon at forel.vub.ac.be
Wed Mar 23 10:50:55 CET 2005


Op 2005-03-22, Bengt Richter schreef <bokr at oz.net>:
> On 22 Mar 2005 07:40:50 GMT, Antoon Pardon <apardon at forel.vub.ac.be> wrote:
> [...]
>>I also was under the impression that a particular part of
>>my program almost doubled in execution time once I replaced
>>the naive dictionary assignment with these self implemented
>>methods. A rather heavy burden IMO for something that would
>>require almost no extra burden when implemented as a built-in.
>>
> I think I see a conflict of concerns between language design
> and optimization. I call it "arms-length assembler programming"
> when I see language features being proposed to achieve assembler-level
> code improvements.
>
> For example, what if subclassing could be optimized to have virtually
> zero cost, with some kind of sticky-mro hint etc to the compiler/optimizer?
> How many language features would be dismissed with "just do a sticky subclass?"

I'm sorry, you have lost me here. What do you mean by "sticky-mro"?

My feeling about this is the following. A[key] = value,
A.reset(key, value) and A.make(key, value) would do almost
identical things, so identical that it would probably be easy
to unite them into something like A.assign(key, value, flag),
where flag would indicate which of the three options is wanted.
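To make the idea concrete, here is a minimal sketch of what these
hypothetical methods could look like as a pure-Python dict subclass
(the names reset/make/assign are just the ones used above, not a real
dict API). Note that each method has to do its own membership test
before assigning, which is exactly the duplicated lookup discussed
below:

```python
class StrictDict(dict):
    """Sketch of the hypothetical reset/make/assign methods."""

    def reset(self, key, value):
        # Only replace an existing key; raise if it is absent.
        # 'key in self' plus 'self[key] = value' means two lookups.
        if key not in self:
            raise KeyError(key)
        self[key] = value

    def make(self, key, value):
        # Only create a new key; raise if it already exists.
        if key in self:
            raise KeyError(key)
        self[key] = value

    def assign(self, key, value, flag=None):
        # flag selects between plain assignment, reset and make.
        if flag == 'reset':
            self.reset(key, value)
        elif flag == 'make':
            self.make(key, value)
        else:
            self[key] = value
```

A built-in version could decide all three cases during the single
hash lookup it already performs.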

Also, a lot of this code is identical to searching for a key.
Because the implementation doesn't provide some of these
possibilities, I have to duplicate some of the work.

One could argue that hashes are fast enough that this
doesn't matter, but dictionaries are the template for
all mappings in Python. What if you are using a tree
and have to traverse it twice, or what if you are
working with a slower medium, like one of the dbm
modules, where you have to go through your on-disk
structure twice?

You can see it as assembler-level code improvement, but
you can also see it as an incomplete interface to your
structure. IMO it would be like providing only '<':
if people wanted '==' they would have to implement it
as 'not (b < a or a < b)', and in this case too, that
would increase the cost compared with a directly
implemented '=='.
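The analogy in concrete form: equality derived from '<' alone costs
up to two comparison calls where a native '==' costs one.

```python
def eq_from_lt(a, b):
    # Equality synthesized from '<' only, as described above:
    # up to two '<' evaluations instead of one direct '==' check.
    return not (b < a or a < b)
```

For totally ordered values this gives the same answer as '==', just
at roughly double the comparison cost.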


>>But you are right that there doesn't seem to be much support
>>for this. So I won't press the matter.
> I think I would rather see efficient general composition mechanisms
> such as subclassing, decoration, and metaclassing etc. for program elements,
> if possible, than incremental aggregation of efficient elements into the built-in core.
>
> Also, because optimization risks using more computation to optimize than the expression
> being optimized,

I think that would hardly be the case here. The dictionary code
already has to find out whether the key is in the hash or not.
Instead of just continuing along the branch it decided on, as is
now the case, the code would test whether that branch is
appropriate for the requested action and raise an exception if not.

-- 
Antoon Pardon


