Safe destruction of recursive objects

Hi Guido,

When a user does the following with standard Python:

    tup = ()
    for i in xrange(100000):
        tup = (tup, i)
    del tup   # ka-boom

he will get a core dump due to stack limitations. Recently, I changed Stackless Python to be safe for any recursive object built from lists, tuples, dictionaries, tracebacks and frames. The implementation is not Stackless Python dependent and very efficient (to my eyes at least).

For efficiency, locality and minimal changes to the five modules, it is implemented as two embracing macros which are stuffed around the bodies of the deallocator methods; that makes just 3-4 lines of change for every module. (Well, the macro *can* be expanded if you like that more.)

I can submit patches, but please have a look at the example below, to save me the time in case you don't like it. It works great for SLP.

cheers - chris

--------------------------------------
Example of the modified list deallocator:

    /* Methods */

    static void
    list_dealloc(op)
            PyListObject *op;
    {
            int i;
            Py_TRASHCAN_SAFE_BEGIN(op)
            if (op->ob_item != NULL) {
                    /* Do it backwards, for Christian Tismer.
                       There's a simple test case where somehow this reduces
                       thrashing when a *very* large list is created and
                       immediately deleted. */
                    i = op->ob_size;
                    while (--i >= 0) {
                            Py_XDECREF(op->ob_item[i]);
                    }
                    free((ANY *)op->ob_item);
            }
            free((ANY *)op);
            Py_TRASHCAN_SAFE_END(op)
    }

This is the original 1.5.2+ code, with two macro lines added.
--------------------------------------
Here is the macro code (which may of course be expanded):

    #define PyTrash_UNWIND_LEVEL 50

    #define Py_TRASHCAN_SAFE_BEGIN(op) \
            { \
                    ++_PyTrash_delete_nesting; \
                    if (_PyTrash_delete_nesting < PyTrash_UNWIND_LEVEL) { \

    #define Py_TRASHCAN_SAFE_END(op) \
                    ;} \
                    else { \
                            if (!_PyTrash_delete_later) \
                                    _PyTrash_delete_later = PyList_New(0); \
                            if (_PyTrash_delete_later) \
                                    PyList_Append(_PyTrash_delete_later, (PyObject *)op); \
                    } \
                    --_PyTrash_delete_nesting; \
                    while (_PyTrash_delete_later && _PyTrash_delete_nesting <= 0) { \
                            PyObject *shredder = _PyTrash_delete_later; \
                            _PyTrash_delete_later = NULL; \
                            ++_PyTrash_delete_nesting; \
                            Py_DECREF(shredder); \
                            --_PyTrash_delete_nesting; \
                    } \
            }

    extern DL_IMPORT(int) _PyTrash_delete_nesting;
    extern DL_IMPORT(PyObject *) _PyTrash_delete_later;

-- 
Christian Tismer             :^)   <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9  9D15 D3CC D4D7 93E2 1FAE F6DF
     we're tired of banana software - shipped green, ripens at home

[Christian Tismer]
It's a nice approach, but I'd rather see you put the bulk of the Py_TRASHCAN_SAFE_END macro into a real function or two, invoked from the macro. This code is only going to get hairier if GregS takes up his free-threading quest again. Like:

    #define Py_TRASHCAN_SAFE_END(op) \
                    ;} \
                    else \
                            /* call a function to enqueue op, & maybe create list */ \
                    --_PyTrash_delete_nesting; \
                    if (_PyTrash_delete_later && _PyTrash_delete_nesting <= 0) \
                            /* call a function to (maybe) clean up */ \
            }

The first function only gets called when the nesting level hits (exactly) 50, and the 2nd function only when something got enqueued and the stack has completely unwound again. These should be infrequent enough that inline expansion doesn't buy much -- except the inability to set a useful breakpoint when the code fails to work <wink>.

I encourage Christian to submit a patch, taking Tim's modification into account.

--Guido van Rossum (home page: http://www.python.org/~guido/)

Guido van Rossum wrote:
I encourage Christian to submit a patch, taking Tim's modification into account.
This is wonderful, since I now have six fewer files to maintain. Also, I think Tim's comment is very valid: not that the inlined code costs much, but the debugging issue really counts.

That means I will keep the macro style for speed, not expanding it since this is easier to maintain, and I will add an extra function for "the" action. Takes a day or two.

cheers - chris

On Sun, 27 Feb 2000, Tim Peters wrote:
Call it a guarantee that I'll be doing the free-threading again. I won't start on it until, say, June or so, but I've outlined a number of tasks in private email to Guido that could be done before the "hard core" stuff starts. Over the weekend, I realized that I should post that "plan of attack" to the thread-sig. Others may be interested in helping to move the free-threading along.

Note that it is unclear whether free-threading will be a patch set against 1.6, or a ./configure option (and a set of #ifdefs) within the standard distribution. I believe that mostly depends on the timing of completing the work vs. the timing of the 1.6 release (if I understand Guido properly). This is part of the reason why I came up with an attack plan: there are things that can be done, tested, and integrated into Python without going full-on free-threading (and thus minimizing any post-1.6 patch set).

I'll compose the email later this week...

Cheers,
-g

p.s. I'm personally motivated to do the free-threading again because I'm going to write a mod_python for Apache 2.0. Apache 2.0 uses a threading model (rather than a forking model) whenever it can. Thus, a mod_python built against a free-threaded Python will offer far superior performance compared to the global-lock version.

-- 
Greg Stein, http://www.lyra.org/

participants (4)
- Christian Tismer
- Greg Stein
- Guido van Rossum
- Tim Peters