[Patches] [ python-Patches-1616125 ] Cached globals+builtins lookup optimization

SourceForge.net noreply at sourceforge.net
Sat Dec 23 23:52:35 CET 2006


Patches item #1616125, was opened at 2006-12-15 01:44
Message generated for change (Comment added) made by ag6502
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=305470&aid=1616125&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Performance
Group: Python 2.6
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Andrea Griffini (ag6502)
Assigned to: Nobody/Anonymous (nobody)
Summary: Cached globals+builtins lookup optimization

Initial Comment:
This patch implements a speed optimization by introducing a timestamp on dictionaries and by caching lookups of constant names, so that a lookup doesn't need to be repeated if the dictionary didn't change.

Currently the patch implements the cached lookup only for the LOAD_GLOBAL opcode and stores the cache as two extra members in the code object. I'm going to investigate whether LOAD_ATTR, in the special case of a module, is worth caching too.

A cache entry for LOAD_GLOBAL is defined by two timestamps (Py_ssize_t) and a *borrowed* pointer to a PyObject. The use of a borrowed pointer is safe because the value is used only if the dictionaries didn't change, so the result of the lookup must still be alive: the reference in the dictionary that returned the cached value is still in place.
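
To make the idea concrete, a cache entry and its validity test could look
roughly like this (a minimal sketch; the names, including ma_timestamp,
are mine and not necessarily those used in the patch):

    typedef struct {
        Py_ssize_t globals_ts;   /* globals timestamp when the slot was filled */
        Py_ssize_t builtins_ts;  /* builtins timestamp when the slot was filled */
        PyObject  *value;        /* borrowed reference */
    } lookup_cache_entry;

    /* The cached value may be used only if neither dict changed since
       the slot was filled: */
    if (entry->globals_ts == globals->ma_timestamp &&
        entry->builtins_ts == builtins->ma_timestamp)
        x = entry->value;        /* hit: no dictionary lookup needed */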

The assumptions are:
- a LOAD_GLOBAL opcode always looks for the same
  symbol (i.e. co_names never changes)
- the dictionaries being searched (globals and
  builtins) are mostly constant

On my system I got a speedup of 8%-10% on a real program (a TCP/IP server that handles an in-memory database of Python objects) on the checkpoint load + rollforward steps (a few million LOAD_GLOBAL operations in total); the exact percentage depends on which compiler options are used to build Python.

On another little test program that calls a function

import math

def f(x):
    return math.sin(x) + math.cos(x) + abs(x)

one million times the speedup with the default makefile
is about 7%.

To enable the patch the makefile should define the CACHED_LOOKUPS symbol.

The change to dictobject also exports the timestamp at the Python level; the other changes are invisible at the Python level. I'm not at all sure that exporting dict.timestamp() is a good idea, however...

Note that after applying the patch you may get an error in userdict if you simply run "make" (there's probably a dependency problem somewhere).
After "make clean" + "make" the test suite runs fine on my system.

I'm interested in hearing whether other platforms also see a real performance improvement.

Andrea

----------------------------------------------------------------------

>Comment By: Andrea Griffini (ag6502)
Date: 2006-12-23 23:52

Message:
Logged In: YES 
user_id=1668946
Originator: YES

I was already changing the code in the direction you're pointing out. I
attached the current version (still not complete!) of the patch; more
specifically:
- Cached lookups are now stored in an array of structures instead of in
two separate arrays (see the sketch below)
- Macros are used for handling of cache entries
- The timestamp is now unsigned and there is optional support for 64-bit
timestamps
- Support for stalling the timestamp at the 32-bit wraparound point
- Removed the LOAD_FAST_HACK (it was just an experiment with oprofile and
was left in by mistake)
- Fixed the handling of memory allocation failures in code object creation
- Removed the export of the timestamp value at the Python level
- Used the largest LOAD_ATTR/LOAD_GLOBAL entry found as the cache size
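
A rough sketch of what this might look like (all identifiers here are
illustrative, not necessarily the ones used in the attached patch):

    #ifdef CACHED_LOOKUPS_64BIT          /* hypothetical option name */
    typedef unsigned long long timestamp_t;
    #else
    typedef unsigned long timestamp_t;
    #endif

    typedef struct {                     /* one slot instead of parallel arrays */
        timestamp_t globals_ts;
        timestamp_t builtins_ts;
        PyObject   *value;               /* borrowed reference */
    } cache_entry;

    #define CACHE_ENTRY_SET(e, gts, bts, v) \
        ((e)->globals_ts = (gts), (e)->builtins_ts = (bts), (e)->value = (v))
    #define CACHE_ENTRY_HIT(e, gts, bts) \
        ((e)->globals_ts == (gts) && (e)->builtins_ts == (bts))

    /* Stall at the wrap point instead of wrapping around to 0; a stalled
       dictionary needs its dependent cache entries reset at GC time. */
    #define TIMESTAMP_BUMP(ts) do { if (~(ts) != 0) ++(ts); } while (0)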

Still missing are the sorting of co_names to pack the cache slots and
the cache reset at GC time when a dictionary timestamp is stalled.

The cached lookup can be used at two levels:
-DCACHED_LOOKUPS enables caching of LOAD_GLOBAL results
-DCACHED_MODULE_LOOKUPS (in addition to -DCACHED_LOOKUPS) also caches the
result of a lookup in a module.
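
(Sketch of the intended nesting, assuming the usual preprocessor guards;
the comments stand in for the actual ceval.c code:)

    #ifdef CACHED_LOOKUPS
        /* ... cached LOAD_GLOBAL path ... */
    #ifdef CACHED_MODULE_LOOKUPS
        /* ... cached module LOAD_ATTR path ... */
    #endif
    #endif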

Speed results are sometimes very interesting and sometimes strange; I
found measurable differences just from moving statements between
logically equivalent places, after checking what gcc was doing with
register allocation (probably those places were not truly equivalent
once you take into account aliasing that is never going to happen in
practice but that a C compiler must assume to be possible). I have no
idea whether the speedups I've measured are better or worse on other
processors/architectures.

File Added: cached_lookups_10.patch

----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-12-23 04:33

Message:
Logged In: YES 
user_id=33168
Originator: NO

I only reviewed the code in the patch.  I have not thought conceptually
about this, whether it's a good idea, how to break it, etc.  I also did
not verify that all the necessary dict methods were updated to add
TOUCHED.

Have you profiled the slowdown to dicts in normal cases?  Things like:

   d = {}
   d[0] = 0

etc?  All dict operations are going to be a tiny bit slower due to an
additional assignment.  It's probably not measurable, but you should still
verify this.
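
One way to check this at the C level would be timing a tight
PyDict_SetItem loop with and without the patch (a self-contained sketch
against the Python 2.x API; the loop count is arbitrary):

    #include <Python.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        Py_Initialize();
        PyObject *d = PyDict_New();
        PyObject *zero = PyInt_FromLong(0);
        clock_t t0 = clock();
        long i;
        for (i = 0; i < 10000000L; i++)
            PyDict_SetItem(d, zero, zero);   /* bumps the timestamp each time */
        printf("%.3f seconds\n",
               (double)(clock() - t0) / CLOCKS_PER_SEC);
        Py_DECREF(d);
        Py_DECREF(zero);
        Py_Finalize();
        return 0;
    }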

What is LOAD_FAST_HACK in ceval.c?  It doesn't seem necessary for this
patch.
Please revert any whitespace/formatting changes that aren't part of the
patch (there are a couple in dictobject.c and codeobject.c).

It seems like a little structure would be better than parallel arrays.
This would reduce some calculations and reduce the number of mallocs.
Also, there would only be a single assignment in the eval loop, and the
names would probably wind up being clearer.

Would it be better to use PyObject_Malloc rather than PyMem_Malloc?  It
should be faster for sizes <= 256.  (Despite the name, it doesn't have to
handle PyObjects.)

Why not use size_t instead of Py_ssize_t for the timestamp?  This is
conceptually unsigned, no?

Use the macro version, PyTuple_GET_SIZE(names), in codeobject.c.  It will
be a little faster.
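
That is, replacing the checked function call with the unchecked macro
(names is the co_names tuple; both calls are standard C API):

    n = PyTuple_Size(names);      /* function call, with type/error checking */
    n = PyTuple_GET_SIZE(names);  /* macro: no call overhead, no checking */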

If the cache allocations fail when a code object is created, ceval will
crash because it doesn't check for NULL.
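
A minimal sketch of the missing check in the code-object constructor
(co_cache and cache_size are invented names):

    co->co_cache = PyMem_Malloc(cache_size * sizeof(cache_entry));
    if (co->co_cache == NULL) {
        Py_DECREF(co);
        PyErr_NoMemory();
        return NULL;
    }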

As for the timestamp, there should probably be a macro to access the field
for a dict.  This macro should be used in ceval.c/codeobject.c for
getting/setting it.
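
For example (assuming the dict field is called ma_timestamp; the real
name may differ):

    #define DICT_TIMESTAMP(mp) (((PyDictObject *)(mp))->ma_timestamp)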

When returning the timestamp in dictobject.c, you shouldn't cast it to a
long.  Use the appropriate function, such as PyInt_FromSsize_t.

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2006-12-16 23:57

Message:
Logged In: YES 
user_id=21627
Originator: NO

I think restricting yourself to 32-bit counters is fine (it's also a
question of memory overhead, as you have one of these per access to a
global, and one per dictionary); you have to deal with overflow, though. I
suggest using an unsigned type, e.g. size_t.

Overflowing should be dealt with through visiting all objects, as you say.
It could either happen right when it overflows, or it could get integrated
into GC, and only reset when the GC runs. You'd then need to account for
those periods when it already has overflowed, but GC hasn't run yet; in
these periods, counting should be "on hold".

----------------------------------------------------------------------

Comment By: Andrea Griffini (ag6502)
Date: 2006-12-16 18:13

Message:
Logged In: YES 
user_id=1668946
Originator: YES

I am experimenting with long long timestamps (based on "unsigned long
long" if available, or explicitly on two "unsigned long"s otherwise). An
alternative to slowing down lookups by using 64 bits on 32-bit computers
is to use a single 32-bit counter and trigger a "cache flush" when the
timestamp rolls over, by visiting all live dictionaries and code objects.
This would IMO be a *very* infrequent operation (for example, on my
P4/2.4GHz, doing nothing but dict updates, reaching 2**32 updates would
take about 14 minutes, i.e. roughly five million updates per second).

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2006-12-16 12:01

Message:
Logged In: YES 
user_id=21627
Originator: NO

How does that deal with timestamp wraparound?

----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=305470&aid=1616125&group_id=5470

