PC locks up with list operations
nobody at nowhere.com
Mon Sep 12 08:19:25 EDT 2011
On Wed, 31 Aug 2011 22:47:59 +1000, Steven D'Aprano wrote:
>> Linux seems to fare badly when programs use more memory than physically
>> available. Perhaps there's some per-process thing that can be used to
>> limit things on Linux?
> As far as I know, ulimit ("user limit") won't help. It can limit the amount
> of RAM available to a process, but that just makes the process start using
> virtual memory more quickly. It can also limit the amount of virtual memory
> used by the shell, but not of other processes. In other words, Linux will
> try really, really, really hard to give you the 84 gigabytes you've asked
> for on a 2 GB system, even if it means DOSing your system for a month.
> Of course, I would be very happy to learn I'm wrong.
Resource limits set by ulimit are inherited by any processes spawned from
the shell. They also affect the shell itself, but a shell process
shouldn't require a lot of resources. You can use a subshell if you want
to impose limits on a specific process.
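For example (a sketch; bash's "ulimit -v" takes KiB, and the 2 GiB figure here is an assumed cap, not anything from the original post):

```shell
# Apply a 2 GiB virtual-memory cap to one command only by running it
# in a subshell; the parent shell's own limits are untouched.
(ulimit -v 2097152; python3 -c "print('limited run')")

# Back in the parent shell, the limit is still whatever it was before:
ulimit -v
```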
For Python, setting the limit on virtual memory (RAM + swap) to no more
than the amount of physical RAM is probably a wise move. Some processes
can use swap effectively, but the typical Python program probably can't.
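The same cap can be set from inside Python with the stdlib resource module. A minimal sketch, run in a child process so the current interpreter's limits are left alone (the 2 GiB figure stands in for the physical RAM of the hypothetical 2 GB machine):

```python
import resource
import subprocess
import sys
import textwrap

# Assumed cap: 2 GiB, standing in for physical RAM on a 2 GB machine.
LIMIT = 2 * 1024**3

# Run the demonstration in a child process so this interpreter's own
# limits are not modified.
child = textwrap.dedent(f"""
    import resource
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, ({LIMIT}, hard))
    try:
        big = bytearray(80 * 1024**3)   # the "84 gigabytes" scenario
    except MemoryError:
        print("allocation refused")
""")

result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout.strip())
```

With the limit in place, the oversized allocation fails promptly with MemoryError instead of dragging the machine into swap.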
There are exceptions, e.g. if most of the memory is accounted for by large
NumPy arrays and you're careful about the operations which are performed
upon them. But using large amounts of memory for many small objects is
almost bound to result in swap-thrashing.
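The overhead behind that warning is easy to measure with sys.getsizeof; a stdlib-only sketch, using array.array as a stand-in for a NumPy array:

```python
import array
import sys

n = 1_000_000

# A million small integers as ordinary Python objects: each costs a
# full object header, plus a pointer slot in the list.
objects = list(range(n))
object_bytes = sys.getsizeof(objects) + sum(map(sys.getsizeof, objects))

# The same values packed into a typed array: 8 bytes per element.
packed = array.array("q", range(n))
packed_bytes = sys.getsizeof(packed)

print(object_bytes // packed_bytes)   # several-fold overhead on CPython
```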
One problem with doing this automatically (e.g. in .pythonrc) is the
inheritance issue: any processes spawned from the interpreter will also be
resource-limited. Similarly, any binary libraries loaded into the
interpreter will be subject to the process's resource limits. Consequently,
there would be some advantages to the Python interpreter having its own
mechanism for limiting the resources which it uses.
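One way around the inheritance problem, sketched below, is to leave the interpreter unrestricted and impose the limit only on a spawned child, via subprocess's preexec_fn hook (which runs in the child between fork() and exec()). The 1 GiB figure is an assumed value for illustration:

```python
import resource
import subprocess
import sys

CHILD_LIMIT = 1 * 1024**3   # 1 GiB; an assumed figure for illustration

def limit_child():
    # Runs in the child after fork() but before exec(), so the cap
    # applies only to the spawned process, not to this interpreter.
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (CHILD_LIMIT, hard))

result = subprocess.run(
    [sys.executable, "-c",
     "import resource; print(resource.getrlimit(resource.RLIMIT_AS)[0])"],
    preexec_fn=limit_child, capture_output=True, text=True)
print(result.stdout.strip())
```

The child reports the capped soft limit, while the parent's own limits are unchanged.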