On Wed, Feb 27, 2019 at 9:59 PM Victor Stinner <vstinner@redhat.com> wrote:
Maybe pickle is inefficient in its memory management and causes a lot
of memory fragmentation?

No, it is not related to pickle efficiency or memory fragmentation.
This problem happens because pymalloc has no hysteresis between
mapping and unmapping arenas.
Any workload that creates some objects and releases them soon after may be
affected by this problem.

When there are no free pools, pymalloc uses mmap to create a new arena (256 KiB).
pymalloc then allocates new pools (= pages) from the arena.  Touching each new
page causes a minor page fault; Linux assigns real memory to the page and RSS
increases.

Then, when all of the objects newly created in those pools are destroyed, every
pool in the arena becomes free, and pymalloc calls munmap() immediately.
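This grow-then-shrink behavior can be observed directly.  Below is a minimal
sketch of my own (not part of the patch, and Linux-specific because it reads
VmRSS from /proc/self/status) that watches RSS around an allocate/free burst:

```python
import re

def rss_kib():
    # Parse the "VmRSS:  12345 kB" line from /proc/self/status (Linux only).
    with open("/proc/self/status") as f:
        return int(re.search(r"VmRSS:\s+(\d+)", f.read()).group(1))

before = rss_kib()
objs = [[] for _ in range(100_000)]  # many small objects -> new pools/arenas,
                                     # each first touch is a minor page fault
grown = rss_kib()
del objs                             # all pools become free; empty arenas are
                                     # munmap'ed right away, shrinking RSS
after = rss_kib()
print(before, grown, after)
```

Running a loop of this allocate/free pattern is what makes the repeated
mmap/munmap (and the page faults on every iteration) show up as overhead.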

unpickle is affected by this problem more than pure Python code because:

* unpickle creates Python objects quickly, so the fault overhead is
  relatively large.
* Python code may create junk memory blocks (e.g. cached frame objects,
  freeslots, etc.), but the C pickle code doesn't create such junk.  So
  newly allocated pools are freed very easily.
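For illustration, here is a hypothetical workload in the spirit of the above
(I'm not claiming this is the actual m2.py): each iteration unpickles a large
object graph and drops it immediately, so the freshly allocated pools all
become free again and, without hysteresis, their arenas are munmap'ed on
every cycle.

```python
import pickle

# Build a payload once; unpickling it creates many small objects quickly.
data = pickle.dumps([{"k": i, "v": list(range(10))} for i in range(10_000)])

for _ in range(5):
    obj = pickle.loads(data)  # C-level unpickling: rapid object creation
    del obj                   # no Python-side "junk" keeps the new pools
                              # alive, so the arenas go straight to munmap()
```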

I think this issue can be avoided easily: when an arena is empty but it is
the head of usable_arenas, don't call munmap() for it.
I confirmed that m2.py no longer reproduces the issue with this patch.

diff --git a/Objects/obmalloc.c b/Objects/obmalloc.c
index 1c2a32050f..a19b3aca06 100644
--- a/Objects/obmalloc.c
+++ b/Objects/obmalloc.c
@@ -1672,7 +1672,7 @@ pymalloc_free(void *ctx, void *p)
      *    nfreepools.
      * 4. Else there's nothing more to do.
      */
-    if (nf == ao->ntotalpools) {
+    if (nf == ao->ntotalpools && ao != usable_arenas) {
         /* Case 1.  First unlink ao from usable_arenas.
          */
         assert(ao->prevarena == NULL ||

--
INADA Naoki  <songofacandy@gmail.com>