Program uses twice as much memory in Python 3.6 than in Python 3.5
songofacandy at gmail.com
Tue Mar 28 14:21:13 EDT 2017
On Wed, Mar 29, 2017 at 12:29 AM, Jan Gosmann <jan at hyper-world.de> wrote:
> On 28 Mar 2017, at 6:11, INADA Naoki wrote:
>> I managed to install pyopencl and run the script. It takes more than
>> 2 hours, and uses only 7GB RAM.
>> Maybe, some faster backend for OpenCL is required?
>> I used Microsoft Azure Compute, Standard_A4m_v2 (4 cores, 32 GB
>> memory) instance.
> I suppose that the computing power of the Azure instance might not be
> sufficient and it takes much longer to get to the phase where the memory
> requirements increase? Have you access to the output that was produced?
I think a smaller and faster benchmark would be better for others looking into this.
I have already stopped the Azure instance.
> By the way, this has nothing to do with OpenCL. OpenCL isn't used by the
> log_reduction.py script at all. It is listed in the dependencies because
> some other things use it.
Oh, running the benchmark took a lot of my time...
All of the difficult dependencies are still required just to run the benchmark.
>> An easier way to reproduce this is needed...
> Yes, I agree, but it's not super easy (all the smaller existing examples
> don't exhibit the problem so far), but I'll see what I can do.
There is no need to cause swapping. Looking at the difference in normal RAM
usage is enough.
Is there really no maxrss difference in the "smaller existing examples"?
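For reference, both the peak RSS (maxrss) and the number of major page faults
mentioned later in this thread can be read from the standard library on Unix.
A minimal sketch (note that ru_maxrss is reported in kilobytes on Linux but in
bytes on macOS):

```python
import resource

def report_memory():
    """Print peak RSS and major page faults for the current process (Unix only)."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    # ru_maxrss: peak resident set size (kB on Linux, bytes on macOS)
    print(f"maxrss: {usage.ru_maxrss}")
    # ru_majflt: page faults that required reading a page in from disk
    print(f"major page faults: {usage.ru_majflt}")

report_memory()
```

Comparing this output between a 3.5 run and a 3.6 run would show the
regression without needing to push the machine into swap.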
> I suppose you are right that from the VMM and RSS numbers one cannot deduce
> fragmentation. But I think RSS in this case might not be meaningful either.
> My understanding from [the Wikipedia description] is that it doesn't account
> for parts of the memory that have been written to the swap. Or in other
> words RSS will never exceed the size of the physical RAM. VSS is also only
> partially useful because it just gives the size of the address space of
> which not all might be used?
I just meant "VSS is not evidence of fragmentation. Don't assume
fragmentation causes the problem."
> Anyways, I'm getting a swap usage of about 30GB with Python 3.6 and zsh's
> time reports 2339977 page faults from disk vs. 107 for Python 3.5.
> I have some code to measure the unique set size (USS) and will see what
> numbers I get with that.
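The original code for measuring USS is not shown in the thread, but on Linux
the unique set size (pages private to the process, which swapping does not
hide the way RSS does) can be computed from /proc. A minimal sketch, assuming
a kernel recent enough to provide smaps_rollup:

```python
def uss_kb():
    """Unique set size in kB: sum of private pages from /proc smaps_rollup (Linux)."""
    uss = 0
    with open("/proc/self/smaps_rollup") as f:
        for line in f:
            # Private_Clean + Private_Dirty = memory unique to this process
            if line.startswith(("Private_Clean:", "Private_Dirty:")):
                uss += int(line.split()[1])  # field is already in kB
    return uss

print(uss_kb(), "kB")
```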
I want to investigate RAM usage without any swapping.