[Tutor] Python Memory Allocation -- deep learning
steve at pearwood.info
Mon Jul 30 12:58:56 EDT 2018
On Mon, Jul 30, 2018 at 06:50:59PM +0530, Sunil Tech wrote:
> Hi Team,
> I am investigating how the memory allocation happens in Python
You cannot investigate memory allocation in Python code, because the
Python execution model does not give you direct access to memory.
What you can investigate is:
- when the particular interpreter you are using creates a new
object, or re-uses an existing object;
- the *approximate* size of an object in bytes.
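The approximate size of an object can be queried with `sys.getsizeof`, which reports the object's own footprint in bytes, not counting anything it refers to. The exact numbers are implementation- and version-specific:

```python
import sys

# getsizeof reports only the object's own bytes, not the bytes of
# objects it references; the numbers vary between interpreters.
print(sys.getsizeof(10))         # a small int
print(sys.getsizeof("hello"))    # a short string
print(sys.getsizeof([1, 2, 3]))  # the list itself, not its elements
```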
Your example #1:
> >>> a = 10
> >>> b = 10
> >>> c = 10
> >>> id(a), id(b), id(c)
> (140621897573616, 140621897573616, 140621897573616)
tells you that the particular interpreter you are using happens to
re-use the int object 10. This is a version-specific,
interpreter-specific implementation detail, not a language feature.
Your example #2:
> >>> x = 500
> >>> y = 500
> >>> id(x)
> >>> id(y)
shows us that the particular interpreter you are using happens to *not*
re-use the int object 500, but create a new object each time it is
required. Again, this is not a language feature.
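You can see where the cache ends in the interpreter you happen to be running. Calling `int()` on a string defeats any compile-time sharing of literals, so only the runtime cache is being tested (the boundary shown here is what CPython currently does, not a guarantee):

```python
# int() on a string forces a fresh computation at run time
a = int("256")
b = int("256")
print(a is b)   # True in CPython: results fall in the small-int cache

c = int("257")
d = int("257")
print(c is d)   # False in CPython: each call creates a new object
```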
Your example #3:
> >>> s1 = 'hello'
> >>> s2 = 'hello'
> >>> id(s1), id(s2)
> (4454725888, 4454725888)
tells us that the particular interpreter you are using happens to re-use
the string object "hello", rather than create two different string
objects. Again, this is an implementation feature, not a language feature.
Another interpreter, or a different version, might behave differently.
> >>> s3 = 'hello, world!'
> >>> s4 = 'hello, world!'
> >>> id(s3), id(s4)
> (4454721608, 4454721664)
And this tells us that the particular interpreter you are using
*doesn't* re-use the string object "hello, world!" in this context.
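If you genuinely need two equal strings to share one object (say, to speed up repeated comparisons), the language offers an explicit mechanism, `sys.intern`, rather than relying on automatic caching. The strings below are built at run time so the compiler cannot merge them behind your back:

```python
import sys

part = "world!"    # built at run time to defeat compile-time merging
s3 = sys.intern("hello, " + part)
s4 = sys.intern("hello, " + part)
print(s3 is s4)    # True: intern returns one shared copy

# without interning, two runtime concatenations typically differ
print(("hello, " + part) is ("hello, " + part))
```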
*Everything* you have seen in these examples, with one exception, is
implementation-specific, depends on the interpreter and the version
you use, and could change without notice.
The *only* thing you have seen which is a language feature is this rule:
- if two objects, a and b, have the same ID *at the same time*, then
"a is b" will be true;
- if "a is b" is false, then a and b must have different IDs.
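That rule can be checked directly: `is` and `id` always agree while both objects are alive, whatever caching the interpreter does:

```python
a = [1, 2, 3]
b = a           # a second name for the same list
c = [1, 2, 3]   # an equal but distinct list

# "a is b" agrees with id(a) == id(b) while both objects are alive
assert (a is b) == (id(a) == id(b))
assert (a is c) == (id(a) == id(c))
print(a is b, a is c)   # True False
```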
In practice, most interpreters will follow rules something like this:
- small integers up to some convenient limit, like 20 or 100 or 256,
will be cached and re-used;
- integers larger than that will be created as needed;
- small strings that look like identifiers may be cached and re-used;
- large strings and those containing punctuation or spaces probably
won't be cached and re-used;
- the interpreter has the right and the ability to change these
rules any time it likes.
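You can probe the first rule empirically. This sketch walks upward until it finds the first non-negative integer that is *not* cached, again using `int()` on a string so compile-time constant sharing doesn't interfere; the result it prints is whatever your interpreter happens to do today:

```python
def first_uncached():
    """Return the first non-negative int the interpreter doesn't cache."""
    n = 0
    # cached ints compare identical across separate constructions
    while int(str(n)) is int(str(n)):
        n += 1
    return n

print(first_uncached())   # 257 on the CPython versions I have tried
```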
py> x = 1.5
py> y = 1.5
py> x is y  # the float object *is not* re-used
False
py> x = 1.5; y = 1.5
py> x is y  # the float object *is* re-used
True
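The difference between those two cases is a compilation detail: each interactive statement is compiled separately, but `x = 1.5; y = 1.5` is a single statement, so both literals land in one code object's constant table, where CPython stores the value only once. You can see this by compiling the statement yourself (the exact contents of `co_consts` are implementation-specific):

```python
# compile the one-line version as a single code object
one_stmt = compile("x = 1.5; y = 1.5", "<demo>", "exec")
print(one_stmt.co_consts)   # 1.5 appears once: both names share it
```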
> Python memory allocation is varying in all these use cases.
Nothing in any of those examples shows you anything about memory
allocation. It only shows you the existence of objects, and whether
the interpreter happens to re-use them.