... Here are some stats from running [memcrunch.py] under my PR, but using 200 times the initial number of objects as the original script:
n = 20000000  # number of things
At the end, with 1M arena and 16K pool:
3362 arenas * 1048576 bytes/arena       =    3,525,312,512
# bytes in allocated blocks             =    1,968,233,888

... With the larger arenas, none [arenas] were ever released.
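Those counters come from CPython's `sys._debugmallocstats()`, which writes directly to the C-level stderr rather than to Python's `sys.stderr` object. A minimal sketch for capturing that output in-process (the fd-redirection trick is the assumption here, not anything from the PR):

```python
import os
import sys
import tempfile

def capture_malloc_stats():
    """Return the text sys._debugmallocstats() writes to C-level stderr."""
    with tempfile.TemporaryFile(mode="w+") as tmp:
        saved_fd = os.dup(2)          # keep the real stderr fd
        try:
            os.dup2(tmp.fileno(), 2)  # point fd 2 at the temp file
            sys._debugmallocstats()
        finally:
            os.dup2(saved_fd, 2)      # restore stderr
            os.close(saved_fd)
        tmp.seek(0)
        return tmp.read()

stats = capture_malloc_stats()
# Lines of interest look like "# arenas allocated total = ...".
arena_lines = [ln for ln in stats.splitlines() if "arenas" in ln]
```

From there the arena counters can be eyeballed or parsed however you like.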
BTW, anyone keen to complicate the mmap management should first take this recent change into account:
That appears to have killed off _the_ most overwhelmingly common cause of obmalloc counter-productively releasing an arena only to create a new one again milliseconds later.
My branch, and Neil's, both contain that change, which makes it much harder to compare our branches' obmalloc arena stats with 3.7. It turns out that a whole lot of "released arenas" under 3.7 (and this will still be the case in 3.8) were due to that worse-than-useless arena thrashing.
To illustrate, I reverted that change in my PR and ran exactly the same thing. Wow - _then_ the 1M-arena-16K-pool PR reclaimed 1135(!) arenas instead of none. Almost all worse than uselessly. The only one that "paid" was the last: the run ended with 3361 arenas still in use instead of 3362. Because with the change, one entirely empty arena remained on the usable_arenas list.
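A toy model of that thrashing (a hypothetical sketch, nothing like obmalloc's actual code): an allocator that unmaps an arena the instant it empties pays a map/unmap pair every time usage dips back and forth across an arena boundary, while keeping (at most) one empty arena in reserve absorbs the churn almost entirely:

```python
def count_unmaps(usage_deltas, keep_empty=0):
    """Count arena maps/unmaps for a stream of arena-occupancy changes.

    usage_deltas: sequence of +1 ("need one more arena") and -1
    ("one arena just became empty") events.
    keep_empty: how many empty arenas to keep cached instead of
    unmapping immediately (bpo-37257 keeps at most one).
    """
    empty = 0            # empty arenas held in reserve
    maps = unmaps = 0
    for d in usage_deltas:
        if d > 0:
            if empty:
                empty -= 1       # reuse a cached empty arena
            else:
                maps += 1        # must mmap a fresh one
        else:
            empty += 1
            if empty > keep_empty:
                empty -= 1
                unmaps += 1      # release it back to the OS

    return maps, unmaps

# Usage hovering right at an arena boundary: grow, shrink, grow, shrink...
deltas = [+1, -1] * 1000

print(count_unmaps(deltas, keep_empty=0))  # → (1000, 1000): pure thrash
print(count_unmaps(deltas, keep_empty=1))  # → (1, 0): churn absorbed
```

Same workload, same peak memory; the only difference is whether one empty arena is allowed to linger.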
So, word to the wise: when looking at _debugmallocstats() output, like:
# arenas allocated total           =            4,496
# arenas reclaimed                 =            1,135
# arenas highwater mark            =            3,362
# arenas allocated current         =            3,361
3361 arenas * 1048576 bytes/arena  =    3,524,263,936
the number "reclaimed" isn't really telling us much: before 3.9, it may be telling us only how many times obmalloc wasted time on useless arena thrashing.
The _important_ bit is the difference between "highwater mark" and "allocated current". That's how much peak arena address reservation declined. In this run, it only managed to release one empty arena from the peak (which the actual PR does not release, because bpo-37257 changed this to keep (at most) one empty arena available for reuse).
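The arithmetic on those two counters is trivial (values plugged in from the stats above):

```python
ARENA_SIZE = 1 << 20  # 1 MiB arenas, as in this PR

highwater = 3362  # "# arenas highwater mark"
current = 3361    # "# arenas allocated current"

# Peak arena address reservation actually given back to the OS:
released_bytes = (highwater - current) * ARENA_SIZE
print(f"{released_bytes:,}")  # → 1,048,576
```

One lonely megabyte, out of ~3.5 GB reserved at peak.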