[Python-bugs-list] [ python-Bugs-574132 ] Major GC related performance regression
noreply@sourceforge.net
Sun, 30 Jun 2002 11:01:15 -0700
Bugs item #574132, was opened at 2002-06-26 11:33
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=574132&group_id=5470
Category: Python Interpreter Core
Group: Python 2.3
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Kevin Jacobs (jacobs99)
Assigned to: Tim Peters (tim_one)
Summary: Major GC related performance regression
Initial Comment:
A change in the Python 2.3 CVS tree has resulted
in a major performance regression. It shows up
when allocating large lists of objects, but only when
the garbage collector is enabled. I've attached a simple
script that I use to measure the time it takes to
incrementally append a large number of objects into
a single Python list. For previous versions of
Python (up to and including 2.2.1), the time scaled
linearly with the number of elements added to the
list (or close enough for the list sizes considered).
However, with the Python 2.3 CVS version,
the time required to build the list scales quadratically.
Here is the core of my benchmark:
rows = []
start = time.time()
for i in indices:
    rows.append( (1,) )
See the attached file for the full source as well
as timing results for several versions of Python
with and without garbage collection enabled.
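The attached file is not reproduced in this archive. A minimal reconstruction of such a benchmark (the list sizes and output format here are my own assumptions, not Kevin's script) might look like:

```python
import gc
import time

def build(n):
    """Append n one-element tuples to a list; return elapsed seconds."""
    rows = []
    start = time.time()
    for i in range(n):
        rows.append((1,))
    return time.time() - start

if __name__ == "__main__":
    for enabled in (True, False):
        if enabled:
            gc.enable()
        else:
            gc.disable()
        for n in (10000, 50000, 100000):
            print("gc=%-5s n=%-7d %.3fs" % (enabled, n, build(n)))
    gc.enable()  # leave the collector in its default state
```

On an affected 2.3 build, the gc-enabled timings grow quadratically with n while the gc-disabled timings stay roughly linear.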
----------------------------------------------------------------------
>Comment By: Tim Peters (tim_one)
Date: 2002-06-30 14:01
Message:
Logged In: YES
user_id=31435
Fixed, in gcmodule.c revision 2.46.
I still need to backport this to 2.2.1; another bug there
prevents the test case from showing the symptom.
----------------------------------------------------------------------
Comment By: Neil Schemenauer (nascheme)
Date: 2002-06-30 02:07
Message:
Logged In: YES
user_id=35752
Looks good. Go ahead and check it in or assign it back to me and I'll do it.
----------------------------------------------------------------------
Comment By: Tim Peters (tim_one)
Date: 2002-06-28 21:50
Message:
Logged In: YES
user_id=31435
Assigned to Neil. I've attached a brief patch that stops
move_root_reachable() from pulling objects out of older
generations into the generation being collected, IOW, it
makes move_root_reachable restrict itself to moving
objects belonging to the generation being collected. This is
done without loss of speed (in fact, it's a little faster now).
This allows "the list" in Kevin's example to move to gen2 and
stay there. Without the patch, it keeps getting yanked back
into gen0, because move_root_reachable() sucks it out of
gen1 (or gen2) and into gen0 every time gen0 gets collected,
"just because" a bound method object in gen0 points to the
list.
Note that this didn't happen in 2.2.1 only because a bug in
2.2.1's PyCFunction_New caused bound method objects
not to get tracked at all.
----------------------------------------------------------------------
Comment By: Kevin Jacobs (jacobs99)
Date: 2002-06-28 10:50
Message:
Logged In: YES
user_id=459565
Here is a detailed analysis of the problem, using the following
kernel:
lst = []
for i in range(100000):
    lst.append( (1,) )
The key ingredients are:
1) A method is called on a container (rather than __setitem__ or
__setattr__).
2) A new object is allocated while the method object lives on the
Python VM stack, as shown by the disassembled bytecodes:
40 LOAD_FAST 1 (lst)
43 LOAD_ATTR 3 (append)
46 LOAD_CONST 2 (1)
49 BUILD_TUPLE 1
52 CALL_FUNCTION 1
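The disassembly above can be reproduced with the dis module (the exact opcodes and offsets vary by interpreter version; newer CPythons may emit LOAD_METHOD or CALL instead):

```python
import dis

def kernel():
    # The loop from Kevin's example.
    lst = []
    for i in range(100000):
        lst.append((1,))

# Look for the attribute load of "append" inside the loop body.
dis.dis(kernel)
```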
These ingredients combine in the following way to trigger
quadratic-time behavior in the Python garbage collector:
* First, the LOAD_ATTR on "lst" for "append" is executed, and a
PyCFunction is returned from this code in descrobject.c:method_get:
    return PyCFunction_New(descr->d_method, obj);
Thus, a _new_ PyCFunction is allocated every time the method is
requested.
* This new method object is added to generation 0 of the garbage
collector, and it holds a reference to "lst".
* The BUILD_TUPLE call may then trigger a garbage collection cycle.
* Since the "append" method is in generation 0, the reference
traversal must also follow all objects within "lst", even if "lst"
is in generation 1 or 2. This traversal requires time linear in the
number of objects in "lst", thus increasing the overall time
complexity of the code to quadratic in the number of elements in
"lst".
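The first two ingredients are visible from Python itself: every attribute lookup of append produces a fresh method object, and that object carries a back reference to the list (this snippet is my illustration, not part of the original report):

```python
import gc

lst = []
m1 = lst.append
m2 = lst.append

assert m1 is not m2        # a new method object per attribute lookup
assert m1.__self__ is lst  # ...and it points back at the container

# The collector sees these short-lived method objects too.
print(gc.is_tracked(m1))
```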
Also note that this is a much more general problem than this small
example. It can affect many types of objects in addition to methods,
including descriptors, iterator objects, and any other object that
contains a "back reference".
So, what can be done about this.... One simple solution would be to
not traverse some "back references" if we are collecting objects in
generation 0. This will avoid traversing virtually all of these
ephemeral objects that will trigger such expensive behavior. If they
live long enough to pass through to generation one or two, then
clearly they should be traversed.
So, what do all of you GC gurus think? Provided that my analysis is
sound, I can rapidly propose a patch to demonstrate this approach if
there is sufficient positive sentiment.
PS: I have not looked into why this doesn't happen in Python 2.2.x
or before. I suspect that it must be related to the recent GC
changes in methodobject.c. I'm not motivated to spend much time
looking into this, because the current GC behavior is technically
correct, though clearly sub-optimal.
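No workaround is stated in the thread, but the gc-on/gc-off timings Kevin mentions suggest the obvious mitigation for affected builds: pause the collector around the bulk appends. A sketch (the function name is mine):

```python
import gc

def build_rows(n):
    """Build the list with the cyclic collector paused.

    Safe here because the loop creates no reference cycles; the
    collector is restored even if an exception escapes.
    """
    was_enabled = gc.isenabled()
    gc.disable()
    try:
        rows = []
        for i in range(n):
            rows.append((1,))
        return rows
    finally:
        if was_enabled:
            gc.enable()

rows = build_rows(100000)
print(len(rows))
```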
----------------------------------------------------------------------
Comment By: Jeremy Hylton (jhylton)
Date: 2002-06-27 00:10
Message:
Logged In: YES
user_id=31392
The first 4 run calls in this test spent 70 percent of
their time in visit_move().
----------------------------------------------------------------------