[Numpy-discussion] size of arrays
robert.kern at gmail.com
Sat Mar 13 16:15:15 EST 2021
On Sat, Mar 13, 2021 at 4:02 PM <klark--kent at yandex.ru> wrote:
> Dear colleagues!
> Size of np.float16(1) is 26
> Size of np.float64(1) is 32
> 32 / 26 = 1.23
Note that `sys.getsizeof()` is returning the size of the given Python
object in bytes. `np.float16(1)` and `np.float64(1)` are so-called "numpy
scalar objects" that wrap up the raw `float16` (2 bytes) and `float64` (8
bytes) values with the necessary information to make them Python objects.
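For example, the raw widths are visible via `.itemsize`, while `sys.getsizeof()` adds the fixed per-object overhead on top (the exact totals are platform- and build-dependent, e.g. the 26 and 32 reported above):

```python
import sys
import numpy as np

# The raw values themselves are 2 and 8 bytes wide:
print(np.float16(1).itemsize)  # 2
print(np.float64(1).itemsize)  # 8

# getsizeof() reports raw width plus the fixed Python-object
# overhead, so the totals are larger and their ratio is well
# below the 8/2 = 4 ratio of the raw values:
print(sys.getsizeof(np.float16(1)))
print(sys.getsizeof(np.float64(1)))
```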
The extra 24 bytes of overhead is _not_ paid per value when you have
`float16` and `float64` arrays of many elements. There is still some fixed
overhead to make the array of numbers into a Python object, but it does not
grow with the number of array elements. This is what you are seeing below
when you compute the sizes of the Python objects that are the arrays: as
the arrays get larger, the sizes approach the ideal ratio of 4, because
`float64` values take up 4 times as many bytes as `float16` values, as the
names suggest.
The ratio of 1.23 that you get from comparing the scalar objects reflects
that the overhead for making a single value into a Python object takes up
significantly more memory than the actual single number itself.
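A quick sketch of that convergence (the small constant overhead per
array is implementation-dependent, but the trend is not):

```python
import sys
import numpy as np

# As the arrays grow, the fixed per-object overhead is amortized
# and the getsizeof() ratio approaches 8/2 = 4, the ratio of the
# raw element widths.
for n in (1, 100, 10_000, 1_000_000):
    a16 = np.ones(n, dtype=np.float16)
    a64 = np.ones(n, dtype=np.float64)
    ratio = sys.getsizeof(a64) / sys.getsizeof(a16)
    print(n, round(ratio, 3))
```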