slowdown with massive memory usage

Hallvard B Furuseth h.b.furuseth at
Sun Aug 1 20:23:04 CEST 2004

Istvan Albert wrote:
>Hallvard B Furuseth wrote:
>> When I moved a function to create one such dict from near the beginning
>> of the program to a later time, that function slowed down by a factor
>> of 8-14: 38 sec at 15M memory usage, 570 sec at 144M, 330 sec at 200M.
> I suspect there is more to it than just "moving". There must be a reason
> for the reorganization and...

I ran it by hand since it was so slow - and then it wasn't slow.
So I timed it at different places in the program, and also checked
if it gave different results.  It didn't.

> check what other things are you doing and profile
> your program

Thanks.  I profiled the function with <hotshot.Profile>.runcall() and
compared the profiles from the fast and the slow run.

The same functions are called, and they are called the same number of
times.  A total of 2872425 function calls.  Only the run times differ.
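(hotshot has long since been deprecated; the same per-call profiling
can be sketched with cProfile's equivalent runcall().  The function
being profiled here is just an illustrative stand-in, not the real
dict-building code:)

```python
import cProfile
import io
import pstats

def build_dict(n):
    # stand-in for the dict-building function under investigation
    return {i: i * i for i in range(n)}

prof = cProfile.Profile()
result = prof.runcall(build_dict, 100000)   # profile just this one call

stream = io.StringIO()
stats = pstats.Stats(prof, stream=stream)
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
print(stream.getvalue())
```

Running this once early and once late in the program, then diffing the
two listings, is what shows whether the call counts or only the
per-call times changed.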

For example, this simple method (called 90440 times) slows down by a
factor of 7 according to the profiles:

  class PgNumeric:
    def __int__(self):
      return int(self.__v / self.__sf)

__v and __sf are longs, so there is little room to mess up that one:-)
Debug output shows the same sequence of input values in each run:
  self.__class__.__name__ = 'PgNumeric',
  self.__dict__ = {
    '_PgNumeric__v': <long integer increasing from 7735L to 260167L>,
    '_PgNumeric__p': 12,
    '_PgNumeric__sf': 1L,
    '_PgNumeric__s': 0}.
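To reproduce that measurement in isolation, the method can be timed
with the same call count the profile reports.  This is a Python 3
rendition with a hypothetical __init__; // replaces the floor division
that / performed on Python 2 longs:

```python
import time

class PgNumeric:
    def __init__(self, v, sf):
        self.__v = v     # mangled to _PgNumeric__v
        self.__sf = sf   # mangled to _PgNumeric__sf

    def __int__(self):
        return int(self.__v // self.__sf)

x = PgNumeric(7735, 1)
assert int(x) == 7735

start = time.perf_counter()
for _ in range(90440):       # same call count as in the profile
    int(x)
elapsed = time.perf_counter() - start
print(f"{elapsed:.3f}s for 90440 calls")
```

In isolation the per-call cost stays flat; it is only inside the large
program that the same calls slow down.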

It "only" slows down by 30% if I add
    class PgNumeric(object):
        __slots__ = ('_PgNumeric__p',  '_PgNumeric__s',
                     '_PgNumeric__sf', '_PgNumeric__v')
but I don't know if such a change to pyPgSQL will be accepted, since
new-style classes (subclasses of `object') never invoke the
__coerce__() method.  Well, I'll try.
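With __slots__ the instances drop their per-instance __dict__, which
is where the memory (and apparently part of the time) goes.  A minimal
sketch - the class name and __init__ are illustrative, and the mangled
slot names are spelled out explicitly because this class is not named
PgNumeric:

```python
class PgNumericSlotted:
    # Slot names must match the mangled names the original
    # PgNumeric code assigns via self.__v, self.__sf, etc.
    __slots__ = ('_PgNumeric__p', '_PgNumeric__s',
                 '_PgNumeric__sf', '_PgNumeric__v')

    def __init__(self, v, sf, p, s):
        self._PgNumeric__v = v
        self._PgNumeric__sf = sf
        self._PgNumeric__p = p
        self._PgNumeric__s = s

    def __int__(self):
        return int(self._PgNumeric__v // self._PgNumeric__sf)

y = PgNumericSlotted(260167, 1, 12, 0)
print(int(y))
print(hasattr(y, '__dict__'))   # False: no per-instance dict
```

Each attribute lives in a fixed slot instead of a dict entry, so
instances are both smaller and faster to read attributes from.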

