[ python-Bugs-1338264 ] Memory keeping
SourceForge.net
noreply at sourceforge.net
Mon Oct 31 21:00:50 CET 2005
Bugs item #1338264, was opened at 2005-10-26 05:37
Message generated for change (Comment added) made by tim_one
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1338264&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.4
>Status: Closed
>Resolution: Wont Fix
Priority: 5
Submitted By: sin (sin_avatar)
Assigned to: Nobody/Anonymous (nobody)
Summary: Memory keeping
Initial Comment:
I executed this code on Python 2.4.2 (verbatim copy from the
console):
Python 2.4.2 (#1, Oct 26 2005, 14:45:33)
[GCC 2.95.4 20020320 [FreeBSD]] on freebsd4
Type "help", "copyright", "credits" or "license" for more
information.
>>> a = range(1,10000000)
>>> del a
Before I typed del, I ran top and got (see console output
below):
16300 sin 2 0 162M 161M poll 0:02 35.76% 9.28% python2.4
After del (console output below):
16300 sin 2 0 162M 161M poll 0:03 7.18% 6.05% python2.4
I tried gc too, but Python didn't free the memory. I checked
this on Windows - there the memory was freed, but the
interpreter, with 0 variables defined, "eats" about 75 MB!
I think this is a bug in the interpreter core.
some text from dmesg for you:
Copyright (c) 1992-2003 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991,
1992, 1993, 1994
The Regents of the University of California. All rights
reserved.
FreeBSD 4.8-RELEASE #0: Thu Apr 3 10:53:38 GMT 2003
root at freebsd-stable.sentex.ca:/usr/obj/usr/src/sys/GENERIC
Timecounter "i8254" frequency 1193182 Hz
CPU: Pentium III/Pentium III Xeon/Celeron (499.15-MHz
686-class CPU)
Origin = "GenuineIntel" Id = 0x673 Stepping = 3
Features=0x387f9ff<FPU,VME,DE,PSE,TSC,MSR,
PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,
PSE36,PN,MMX,FXSR,SSE>
real memory = 268369920 (262080K bytes)
avail memory = 255901696 (249904K bytes)
----------------------------------------------------------------------
>Comment By: Tim Peters (tim_one)
Date: 2005-10-31 15:00
Message:
Logged In: YES
user_id=31435
sin_avatar, it's the number of integer objects _simultaneously
alive_ that matters, not the total number of integer objects
ever created. Creating a list (or any other in-memory
container) containing millions of integers can be a bad idea
on many counts.
I'm closing this as WontFix, as there are no plans to replace
the integer freelist. I think it would be good if an upper bound
were placed on its size (as most other internal freelists have),
but that's trickier than it sounds and there are no plans to do
that either.
> I can't understand why you don't care about this?
It's a tradeoff: it's much easier to avoid in real life than it is to
change the implementation in a way that would actually help
(the int freelist is an important speed optimization for most
integer-heavy apps). Changing your coding practices to live
with this is your only realistic hope for relief.
BTW, the idea that you might really need to create a list with
10 million integers in each thread of a threaded server is too
bizarre to argue about ;-)
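
To make the recommended coding practice concrete - iterate lazily so
only a handful of integer objects are simultaneously alive, instead of
materializing millions at once - here is a minimal sketch (the helper
name is illustrative, not from this thread):

```python
# Bad (on Python 2.x): sum(range(10000000)) first materializes
# 10 million int objects at once, and the freelist then pins that
# memory for the life of the process.

def lazy_sum(n):
    # Iterate lazily: xrange(n) on Python 2, range(n) on Python 3.
    # Only a few int objects exist at any given moment.
    total = 0
    for i in range(n):
        total += i
    return total

print(lazy_sum(10 * 1000 * 1000))
```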
----------------------------------------------------------------------
Comment By: Josiah Carlson (josiahcarlson)
Date: 2005-10-31 14:11
Message:
Logged In: YES
user_id=341410
Integers are immutable. That is, each integer with a
different value will be a different integer object (some
integers with the same value will also be different
objects). In CPython, id(obj) is the memory location of
that object.
>>> a = 9876
>>> id(a)
9121340
>>> a += 1
>>> id(a)
8738760
It is assumed by the CPython runtime, based on significant
experience by CPython programmers, that if you are using a
large number of integers now, you are probably going to
use a large number of integers in the future.
When an integer object is allocated, the object hangs around
for as long as it is needed, and when it is no longer
needed, it is placed on a list of allocated integer objects
which can be re-used, but which are never freed (the integer
freelist). This allows Python to allocate blocks of
integers at a time when necessary (which speeds up
allocations), re-use unused integer objects (which removes
additional allocations in the majority of cases), and
avoids the free() call entirely (on many platforms, free()
does not return memory to the operating system anyway).
Integer freelists are not going away, and the behavior you
are experiencing is a result of integer freelists (though
the 163M rather than 123M memory used is a result of
FreeBSD's memory manager not being perfect).
As Tim Peters suggested, if you need to iterate through 10
million unique integers, do so via xrange, but don't save
them all. If you can't think about how to do such a thing
in your application, you can ask users on the
comp.lang.python newsgroup, or via the
python-list at python.org mailing list, but remember to include
the code you are using now and a description of what you
want it to do.
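
To illustrate the immutability and object-reuse points above on a
current CPython (a sketch; small-integer caching is a CPython
implementation detail, not a language guarantee):

```python
# CPython pre-allocates the small integers -5..256, so every
# occurrence of e.g. 100 refers to the same immortal object.
a = 100
b = int("100")   # parsed at runtime, yet still the cached object
assert a is b

# Integers outside that range are allocated per use and recycled
# through internal free storage once they become unreferenced.
big_a = int("9876")
big_b = int("9876")  # a fresh, distinct object
assert big_a == big_b
print(big_a is big_b)  # False on CPython
```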
----------------------------------------------------------------------
Comment By: sin (sin_avatar)
Date: 2005-10-31 04:06
Message:
Logged In: YES
user_id=1368129
One more time.
>>> a = [1 for i in xrange(10000000)]
>>> del a
This memory is freed - it does not exhibit the "memory keeping" (!)
>>> a = [i for i in xrange(10000000)]
>>> del a
But this code takes about 163 MB on my FreeBSD. I can't
understand why you don't care about this?
P.S. Can somebody explain the phrase "a) not a bug (integer
freelists)"?
----------------------------------------------------------------------
Comment By: Josiah Carlson (josiahcarlson)
Date: 2005-10-31 01:56
Message:
Logged In: YES
user_id=341410
Suggested close for one or more of the following reasons:
a) not a bug (integer freelists)
b) platform specific malloc/free behavior on the
list->ob_item member (some platforms will drop to 121M
allocated memory after the deletion)
c) OP didn't listen when it was suggested they use xrange()
instead of range()
----------------------------------------------------------------------
Comment By: sin (sin_avatar)
Date: 2005-10-31 01:15
Message:
Logged In: YES
user_id=1368129
Certainly, I am not a C guru, but as I understand it: if the
interpreter keeps more than 100 MB while holding no useful
information, that's terrible. For example, if I used my script as
part of a Zope portal, it would be awful. Certainly my script was
just an example - but if I ran a multi-threaded server written in
Python and created a list in each thread, I would take memory
from the system and could not give it back.
----------------------------------------------------------------------
Comment By: Tim Peters (tim_one)
Date: 2005-10-27 15:38
Message:
Logged In: YES
user_id=31435
Space for integer objects in particular lives in an immortal
free list of unbounded size, so it's certain in the current
implementation that doing range(10000000) will hang on to
space for 10 million integers forever. If you don't want that,
don't do that ;-) Iterating over xrange(10000000) instead will
consume very little RAM.
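
The difference is easy to measure on Python 3, where range behaves
like 2.x's xrange - a sketch using sys.getsizeof (demo scaled down
from 10 million elements to 1000):

```python
import sys

# A materialized list owns a pointer per element, plus one int
# object per element.
many = list(range(1000))
list_size = sys.getsizeof(many) + sum(sys.getsizeof(i) for i in many)

# A lazy range object stores only start/stop/step, so its size is
# constant no matter how many values it describes.
lazy_size = sys.getsizeof(range(10 * 1000 * 1000))

print(list_size, lazy_size)
```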
----------------------------------------------------------------------
Comment By: Josiah Carlson (josiahcarlson)
Date: 2005-10-27 15:29
Message:
Logged In: YES
user_id=341410
From what I understand, whether or not the Python runtime
"frees" memory (which can be freed) is generally dependent
on the platform's malloc() and free().
----------------------------------------------------------------------