"High water" Memory fragmentation still a thing?

bryanjugglercryptographer at yahoo.com bryanjugglercryptographer at yahoo.com
Wed Oct 8 19:28:39 CEST 2014


Chris Angelico wrote:
> Sure, and that's all well and good. But what I just cited there *is* a
> shipping product. That's a live server that runs a game that I'm admin
> of. So it's possible to do without the resource safety net of periodic
> restarts.

Nice that the non-Python server you administer stayed up for 88 weeks, but that doesn't really bear on the issue here. The answer to the OP's title question is "yes": high-water memory fragmentation is a real thing, on most platforms including CPython.

The cited article tells of Celery hitting the problem, and the working solution was to "roll the celery worker processes". That doesn't mean to tell a human administrator to regularly restart the server. It's programmatic and it's a reasonably simple and well-established design pattern.
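
If memory serves, the rolling is a one-line configuration knob. (Caveat: the setting name below is the modern spelling; older Celery releases called it CELERYD_MAX_TASKS_PER_CHILD, so check the docs for your version. "proj" is just a placeholder app name.)

```python
# Hypothetical Celery app config sketch: recycle each worker process
# after it has handled 100 tasks, so the OS reclaims any fragmented
# heap the worker accumulated.
from celery import Celery

app = Celery("proj")
app.conf.worker_max_tasks_per_child = 100
```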

> > For an example see the Apache HTTP daemon, particularly the classic pre-forking server. There's a configuration parameter, "MaxRequestsPerChild", that sets how many requests a process should answer before terminating.
> 
> That assumes that requests can be handled equally by any server
> process - and more so, that there are such things as discrete
> requests. That's true of HTTP, but not of everything. 

It's true of HTTP and many other protocols because they were designed to support robust operation even as individual components may fail.

> And even with
> HTTP, if you do "long polls" [1] then clients might remain connected
> for arbitrary lengths of time; either you have to cut them off when
> you terminate the server process (hopefully that isn't too often, or
> you lose the benefit of long polling), or you retain processes for
> much longer.

If you look at actual long-polling protocols, you'll see that the server occasionally closing connections is no problem at all. They're actually designed to be robust even against connections that drop without proper shutdown.

> Restarting isn't necessary. It's like rebooting a computer: people get
> into the habit of doing it, because it "fixes problems", but all that
> means is that it allows you to get sloppy with resource management.

CPython, and for that matter malloc/free, have known problems in resource management, such as the fragmentation issue noted here. There are more. Try a Google site search for "memory leak" on http://bugs.python.org/. Do you think the last memory leak is fixed now?
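
The high-water pattern is easy to provoke. Here's a sketch (Linux-only, since it reads /proc/self/status): allocate a burst of small objects, keep a scattered minority alive, and drop the rest. Whether RSS falls back toward the starting point depends on the allocator; scattered survivors can pin pages across the heap, holding RSS near its peak even though most objects are gone.

```python
def rss_kb():
    """Current resident set size in kB, read from the Linux proc fs."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

def high_water_demo(n=100_000, keep_every=100):
    before = rss_kb()
    # Allocate a burst of small objects...
    burst = [bytearray(512) for _ in range(n)]
    peak = rss_kb()
    # ...keep every 100th one alive, scattered across the heap...
    survivors = burst[::keep_every]
    # ...and drop the rest.
    del burst
    after = rss_kb()
    return before, peak, after, survivors

if __name__ == "__main__":
    before, peak, after, _ = high_water_demo()
    print(f"before={before} kB  peak={peak} kB  after={after} kB")
```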

From what I've seen, planned process replacement is the primary technique for keeping long-lived, mission-critical services running in the face of resource-management flaws. Upon process termination the OS recovers the resources. I love CPython, but on this point I trust the Linux kernel much more.
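
The pattern is even in the standard library. A sketch: multiprocessing's maxtasksperchild is the analogue of Apache's MaxRequestsPerChild, and setting it to 1 makes the replacement visible, since every task is served by a fresh PID.

```python
import os
from multiprocessing import Pool

def handle(_):
    # Stand-in for a request handler; report which process served it.
    return os.getpid()

def roll_demo():
    # maxtasksperchild=1: each worker exits after one task and a fresh
    # process takes its place, returning everything the old worker
    # held -- leaked, fragmented, or otherwise -- to the OS.
    with Pool(processes=2, maxtasksperchild=1) as pool:
        return pool.map(handle, range(6), chunksize=1)

if __name__ == "__main__":
    pids = roll_demo()
    print(f"{len(set(pids))} distinct worker processes served 6 tasks")
```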

-- 
--Bryan
