Re: [Python-Dev] stack check on Unix: any suggestions?
I'm confused now: how is this counting-stack-limit different from the maximum recursion depth we already have? The whole point of PyOS_StackCheck is to do an _actual_ check of whether there's space left for the stack, so we can hopefully have an orderly cleanup before we hit the hard limit.

If computing it is too difficult because getrlimit isn't available or doesn't do what we want, we should probe it, as the Windows code does or as my example code posted yesterday does. Note that the testing only has to be done the *first* time the stack goes past a given boundary: the probing can remember the deepest currently known valid stack location, and everything that is shallower is okay from that point on (making PyOS_StackCheck a subroutine call and a compare in the normal case).

--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
Jack Jansen wrote:
I'm confused now: how is this counting-stack-limit different from the maximum recursion depth we already have?
The whole point of PyOS_StackCheck is to do an _actual_ check of whether there's space left for the stack so we can hopefully have an orderly cleanup before we hit the hard limit.
If computing it is too difficult because getrlimit isn't available or doesn't do what we want, we should probe it, as the Windows code does or as my example code posted yesterday does. Note that the testing only has to be done the *first* time the stack goes past a given boundary: the probing can remember the deepest currently known valid stack location, and everything that is shallower is okay from that point on (making PyOS_StackCheck a subroutine call and a compare in the normal case).
getrlimit() will not always work: when no limit is imposed on the stack, it returns huge numbers (e.g. 2GB) from which no valid assumption can be derived.

Note that you can't probe for this either, since you cannot be sure whether the OS overcommits memory. Linux does this heavily, and I haven't yet found out why my small C program happily consumes 20MB of memory without segfaulting at recursion level 60000, while Python already segfaults at recursion level 9xxx with a memory footprint of around 5MB.

So, at least for Linux, the only safe way seems to be to make the limit a user option and to set a reasonably low default.

--
Marc-Andre Lemburg
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
Jack Jansen writes:
I'm confused now: how is this counting-stack-limit different from the maximum recursion depth we already have?
Because on Unix the maximum allowable stack space is not fixed (it can be controlled by "ulimit" or "setrlimit"), a hard-coded maximum recursion depth is not appropriate.
The whole point of PyOS_StackCheck is to do an _actual_ check of whether there's space left for the stack so we can hopefully have an orderly cleanup before we hit the hard limit.
If computing it is too difficult because getrlimit isn't available or doesn't do what we want we should probe it
getrlimit is available and works fine. It's getrusage that is problematic. I seriously think that instead of trying to slip this in 'under the wire' we should defer it past 2.0b1 and try to do it right for the next 2.0.x. Getting this stuff right on Unix, portably, is tricky; a lot of different tricks may be required to make this work right on different flavors of Unix.
Jack Jansen wrote:
I'm confused now: how is this counting-stack-limit different from the maximum recursion depth we already have?
The whole point of PyOS_StackCheck is to do an _actual_ check of whether there's space left for the stack so we can hopefully have an orderly cleanup before we hit the hard limit.
If computing it is too difficult because getrlimit isn't available or doesn't do what we want, we should probe it, as the Windows code does or as my example code posted yesterday does. Note that the testing only has to be done the *first* time the stack goes past a given boundary: the probing can remember the deepest currently known valid stack location, and everything that is shallower is okay from that point on (making PyOS_StackCheck a subroutine call and a compare in the normal case).
The point is that there's no portable way to do PyOS_CheckStack(), not even for Unix. So we use a double strategy:

(1) Use a user-settable recursion limit with a conservative default. This can be done portably. It is set low by default so that under reasonable assumptions it will stop runaway recursion long before the stack is actually exhausted. Note that Emacs Lisp has this feature and uses a default of 500. I would set it to 1000 in Python. The occasional user who is fond of deep recursion can set it higher and tweak his ulimit -s to provide the actual stack space if necessary.

(2) Where implementable, use actual stack probing with PyOS_CheckStack(). This provides an additional safeguard for e.g. (1) extensions allocating lots of C stack space during recursion; (2) users who set the recursion limit too high; (3) long-running server processes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
participants (4)
- Charles G Waldman
- Guido van Rossum
- Jack Jansen
- M.-A. Lemburg