"High water" Memory fragmentation still a thing?

bryanjugglercryptographer at yahoo.com
Mon Oct 6 18:21:50 CEST 2014

Chris Angelico wrote:
> This is why a lot of long-duration processes are built to be restarted
> periodically. It's not strictly necessary, but it can be the most
> effective way of solving a problem. I tend to ignore that, though, and
> let my processes just keep on running... for 88 wk 4d 23:56:27 so far,
> on one of those processes. It's consuming less than half a gig of
> virtual memory, quarter gig resident, and it's been doing a fair bit [...]

A shipping product has to meet a higher standard. Planned process mortality is a reasonably simple strategy for building robust services from tools that have flaws in resource management. It assumes only that the operating system reliably reclaims resources from dead processes.

The usual pattern is to have one or two parent processes that keep several worker processes running but do not themselves directly serve clients. The workers do the heavy lifting and are programmed to eventually die, letting younger workers take over.
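A minimal sketch of that pattern in Python, using multiprocessing (the function and parameter names here are illustrative, not from any particular server): the parent hands work to a child, joins it after the child has served its quota, and starts a fresh replacement. Any memory the child leaked or fragmented dies with it.

```python
import math
import multiprocessing as mp

# "fork" is POSIX-only; it keeps the example self-contained by
# avoiding the re-import of __main__ that "spawn" would do.
ctx = mp.get_context("fork")

def worker(tasks, results, max_tasks_per_child):
    """Serve at most max_tasks_per_child requests, then exit
    voluntarily; the parent starts a replacement. None is a
    shutdown sentinel."""
    for _ in range(max_tasks_per_child):
        item = tasks.get()
        if item is None:
            break
        results.put(item * 2)   # stand-in for real request handling

def run(n_tasks, max_tasks_per_child):
    """Parent loop: run workers one after another until all tasks
    are served, replacing each worker after its quota."""
    tasks, results = ctx.Queue(), ctx.Queue()
    for i in range(n_tasks):
        tasks.put(i)
    n_workers = math.ceil(n_tasks / max_tasks_per_child)
    # Pad with sentinels so the last worker never blocks on an empty queue.
    for _ in range(n_workers * max_tasks_per_child - n_tasks):
        tasks.put(None)
    for _ in range(n_workers):
        p = ctx.Process(target=worker,
                        args=(tasks, results, max_tasks_per_child))
        p.start()
        p.join()   # child dies here; the OS reclaims everything it held
    return sorted(results.get() for _ in range(n_tasks))
```

A real pre-forking server keeps several workers alive concurrently and replaces them as they retire; running them one at a time just keeps the sketch short.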

For an example, see the Apache HTTP Server, particularly the classic pre-forking MPM. A configuration directive, "MaxRequestsPerChild", sets how many requests a child process answers before terminating.
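In an Apache prefork configuration that looks roughly like this (the numbers are illustrative, and in Apache 2.4 the directive was renamed MaxConnectionsPerChild, with the old name kept as an alias):

```apacheconf
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    # Each child exits after serving this many requests;
    # 0 means children never retire.
    MaxRequestsPerChild  10000
</IfModule>
```

Setting it to a finite value bounds how much leaked or fragmented memory any one child can accumulate before the parent replaces it.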
