doing tricks with refcounts

Richard B. Kreckel Richard.Kreckel at GiNaC.DE
Wed Sep 26 08:18:30 EDT 2001


Thanks very much, indeed, to Henry, Terry and Duncan for providing
valuable feedback and insights (some of them by email).

In comp.theory Duncan Booth <duncan at nospamrcp.co.uk> wrote:
: The question that the original poster has to answer is what the probability 
: is that two complex objects, created independently end up with identical 
: state. 

This question is, of course, impossible to answer without a concrete
problem and an implementation at hand.  I'll still try to do so.

What I had in mind was Computer Algebra rather than the design of
general-purpose languages.  There, independently created complex
objects that end up with identical state pop up frequently.  As a
simple example, consider this, given a symbol x:
  A = x^2-1;
  B = expand((x-1)*(x+1)); 
B is equal to A, but there is no way the system can know that without
carrying out the expansion of the polynomial (x-1)*(x+1).  And in more
complicated examples, there is no way for the user/programmer to know.
Having to scan the entire pool of allocated expressions A, B, C,...
each time a new one is constructed seems impractical, too, since it
requires us
a) to keep a pool of weak references somewhere in a global memory area
   and to add to / delete from it, and
b) to look up expressions in there, which is of logarithmic complexity
   but still very costly.
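
Just to make a) and b) concrete, here is a rough Python sketch of such
a global pool, using a weak-value dictionary so that the pool itself
never keeps expressions alive.  The names '_pool', 'intern_expr' and
'structural_key' are made up for illustration and are not GiNaC's
actual API; Python's dict gives hashed rather than logarithmic lookup,
but the per-creation cost is the point:

  import weakref

  # Hypothetical global pool of all live expressions, keyed by a
  # hashable canonical form.  Entries vanish automatically when the
  # last strong reference to an expression goes away.
  _pool = weakref.WeakValueDictionary()

  def intern_expr(expr):
      """Return the pooled object equal to expr, registering expr if new."""
      key = expr.structural_key()   # assumed: a hashable canonical form
      cached = _pool.get(key)
      if cached is not None:
          return cached             # reuse the existing, equal object
      _pool[key] = expr             # pay a lookup + insertion on *every* creation
      return expr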

I have now played for a while with this in a little project of my own,
the GiNaC system for symbolic computation in C++.  In typical
computations arising there, I have found 10-15% less memory
consumption and 5-10% speedup.  I had expected to see more, but well.
(Hmm, I begin to realize that the fact that our comparison is further
aided by hash-values keeps the scheme from really kicking in.)

There is a seeming paradox in here: It is the comparison function
(think of it as Python's '__cmp__' or Perl's spaceship operator '<=>')
that becomes much more complicated and slows things down.  But it is
the same function that speeds things up, because some later comparisons
can be answered by just comparing pointers!
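
To illustrate, here is a minimal Python sketch of such a fusing
comparison, under the assumption of handle objects that point at a
shared payload.  'Handle', 'compare' and 'deep_compare' are made-up
names, not GiNaC's real classes:

  class Handle:
      """Hypothetical refcounted handle; several handles may share one payload."""
      def __init__(self, payload):
          self.payload = payload

  def deep_compare(a, b):
      """Stand-in for a full structural comparison of two expression trees."""
      return (a > b) - (a < b)

  def compare(x, y):
      """Three-way comparison in the spirit of '__cmp__' or Perl's '<=>'."""
      if x.payload is y.payload:     # fused earlier: answered by pointer identity
          return 0
      result = deep_compare(x.payload, y.payload)
      if result == 0:
          y.payload = x.payload      # fuse: both handles now share one payload;
                                     # the duplicate dies once nothing refers to it
      return result

The deep comparison is what slows things down; the pointer check and
the fusion are what answer later comparisons almost for free.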

:        If the objects aren't complex (or large) then you haven't gained 
: much, if they aren't created independently then they should already refer 
: to the same object. I would think that in most situations you would get 
: almost all of the benefit by simply applying Python's rule of requiring 
: copy to be explicit.

If the objects aren't complex then we haven't gained much on a
per-object basis.  But for lots of them we might still gain something.
Sometimes even atomic objects, once reference-counted, have a larger
memory footprint than the handles referring to them, since they have
to account for the refcount variable and possibly more.  Again, it
depends on the application we have in mind.

: Another assumption that is being made here is that the b objects are 
: effectively immutable (if either can change in the future then you cannot 
: do this optimisation).

If they can change in the future we can still do this optimization,
but it becomes costly afterwards: we then have to split them again
("do fission").  The question is hence a statistical one: How often
can we fuse?  How often do we have to split?
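
A copy-on-write sketch of that split, again in Python and again with
a made-up name ('write_payload' is not anything GiNaC actually has);
'sys.getrefcount' is merely a CPython-specific stand-in for an
explicit reference count:

  import copy
  import sys

  def write_payload(handle, mutate):
      """Split ("do fission") a shared payload before mutating it in place."""
      # getrefcount sees the handle's reference plus its own argument,
      # so > 2 means the payload is shared with at least one other handle.
      if sys.getrefcount(handle.payload) > 2:
          handle.payload = copy.deepcopy(handle.payload)  # private copy: the split
      mutate(handle.payload)                              # now safe to modify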

: If I had a situation where all of these conditions appeared to hold, then I 
: would probably try to ensure that I either avoided creating the duplicate 
: objects in the first place, or as soon as they were created collapsed them 
: together. In the Python universe I would implement this with a dictionary 
: using the objects or other suitable value as keys, so that whenever a new 
: object is to be created it is easily possible to check whether a matching 
: one already exists.

This is another statistical question: If creation of such objects is
*very* frequent, your approach is going to be way too costly.  I am
thinking of intermediate expressions in typical Computer Algebra
computations.

Henry wrote:
> As to doing it in comparisons, that would often still cost big-time.  For one
> thing,  most  of  the  time  comparisons  don't  have  to  look  at an entire
> structure.

This is a very valid objection.  However, I am under the impression
that it misses one critical point.  The scheme I suggested does not
interfere with comparisons that need not go deep, because they can be
answered early.  A simple list suffices to see this.  Let
  A = { a, b, c };
  B = { a, d, c };
where we assume that the two occurrences of 'a' are a priori
non-fused(*).  When comparing 'A' and 'B' we start from the beginning
and see that the first elements are equal, and since the fusion is
hidden in the comparison function that detected this equality, they
are subsequently fused.  Next, 'b' and 'd' are found to be different,
whereupon the comparison of 'A' and 'B' can return false.  We have
only had a partial effect here, since the two occurrences of 'c' are
still unshared, unless they happened to be fused before.  But since
comparison is the one crucial operation performed over and over in
such systems, unfused duplicates of equal objects are likely to be
depleted over time.
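
In the Python sketch from above the same example looks like this (the
payloads are single-element lists only so that they start out as
genuinely distinct objects):

  a1, a2 = Handle(["a"]), Handle(["a"])    # equal, but a priori non-fused
  c1, c2 = Handle(["c"]), Handle(["c"])
  A = [a1, Handle(["b"]), c1]
  B = [a2, Handle(["d"]), c2]

  def compare_lists(xs, ys):
      """Lexicographic comparison that stops at the first difference."""
      for x, y in zip(xs, ys):
          r = compare(x, y)
          if r != 0:
              return r
      return (len(xs) > len(ys)) - (len(xs) < len(ys))

  compare_lists(A, B)                      # non-zero: decided at 'b' vs 'd'
  assert a1.payload is a2.payload          # the two 'a's have been fused
  assert c1.payload is not c2.payload      # the 'c's were never even compared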

Regards
     -richy.

(*) I'm still fishing for words: I don't like 'to share' because of my
    German background.  The obvious translation would be 'teilen'.  That, 
    however, is semantically overloaded with 'to divide'.   :-(
-- 
Richard B. Kreckel
<Richard.Kreckel at Uni-Mainz.DE>
<http://wwwthep.physik.uni-mainz.de/~kreckel/>



