I've just been looking at how numpy handles changing the behaviour that is triggered when there are numeric error conditions (overflow, underflow, etc.). If I understand it correctly, and that's a big if, I don't think I like it nearly as much as what numarray has in place.

It appears that numpy uses the two functions, seterr and geterr, to set and query the error handling. These set/read a secret variable stored in the local scope. I assume that the various ufuncs then examine that value to determine how to handle errors. The secret-variable approach is a little clunky, but that's not what concerns me. What concerns me is that this approach is *only* useful for built-in numpy functions and falls down if we call any user-defined functions. Suppose we want to be warned on underflow. Setting this is as simple as:

    def func(*args):
        numpy.seterr(under='warn')
        # do stuff with args
        return result

Since seterr is local to the function, we don't have to reset the error handling at the end, which is convenient. And this works fine if all we are doing is calling numpy functions and methods. However, if we are calling a function of our own devising we're out of luck, since the called function will not inherit the error settings that we have set. Thus we have no way to influence the error settings of functions downstream from us.

Compare this to the numarray approach, where you write code like the following:

    def func(*args):
        numarray.Error.pushMode(underflow='warn')
        try:
            # do stuff with args
            return result
        finally:
            numarray.Error.popMode()

At first glance this looks a lot more clunky. However, it's also a lot more useful. In this case, pushMode pushes a new error mode value onto a global[1] stack of mode values. Thus, any functions that func calls will inherit its error settings, unless they explicitly override them. As for the clunkiness factor, that's partly just the choice of camelCase versus smashwords, but also the try-finally. The second, however, will be addressed in Python 2.5 with the 'with' statement. At that point, we could have a numpy equivalent to the above numarray code that looked something like:

    def func(*args):
        with numpy.errors(underflow='warn'):
            # do stuff with args
            return result

I believe we should rework the error handling to follow the numarray model before things get set in stone at 1.0. The 'with' statement support can be worked in a bit later if need be, although I suspect that will be easy.

I also would prefer more verbose keys a la numarray (underflow, overflow, dividebyzero and invalid) than those currently used by numpy (under, over, divide and invalid). And (will he never stop) I like numarray's defaults better here too: overflow='warn', underflow='ignore', dividebyzero='warn', invalid='warn'. Currently, numpy defaults to ignore for all cases. These last points are relatively minor though.

Regards,

-tim

[1] I believe it's actually thread local, and if it's not it should be, but that detail is not important for our discussion here.
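To make the scoping complaint concrete, here is a rough pure-Python emulation of the lookup being described; the variable name (UFUNC_ERRMODE) is made up, and the real search happens in C inside the ufunc machinery:

    import sys
    import builtins  # __builtin__ on the Python 2.x of this thread

    _ERRVAR = "UFUNC_ERRMODE"  # hypothetical name of the "secret" variable

    def _current_errmode(default="ignore"):
        """Search the caller's local, then global, then builtin namespace."""
        frame = sys._getframe(2)  # frame of whoever called the fake ufunc
        for namespace in (frame.f_locals, frame.f_globals, vars(builtins)):
            if _ERRVAR in namespace:
                return namespace[_ERRVAR]
        return default

    def fake_ufunc(x):
        print("error mode seen by ufunc:", _current_errmode())
        return x

    def outer():
        UFUNC_ERRMODE = "warn"   # "seterr" in the local scope (read via f_locals)
        fake_ufunc(1)            # sees 'warn' -- same frame's locals
        helper()                 # the called function does NOT inherit it

    def helper():
        fake_ufunc(2)            # sees 'ignore' -- falls through to the default

    outer()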
Tim Hochberg wrote:
This approach was decided on after discussions with Guido, who didn't like the idea of pushing and popping from a global stack. I'm not sure I'm completely in love with it myself, but it is actually more flexible than the numarray approach. You can get the numarray approach back simply by setting the error in the builtin scope (instead of in the local scope, which is what's done by default). Then, at the end of the function, you can restore it. If it was felt useful to create a stack to handle this on the builtin level, then that is easily done as well: one could build on this and implement the pushMode / popMode interface of numarray.

But I think this question does deserve a bit of debate. I don't think there has been a serious discussion over the method. To help Tim and others understand what happens: when a ufunc is called, a specific variable name is searched for in the following name-spaces in the following order:

1) local
2) global
3) builtin

(There is a bit of an optimization in that when the error mode is the default mode --- do nothing --- a global flag is set which by-passes the search for the name.) The first time the variable name is found, the error mode is read from that variable. This error mode is placed as part of the ufunc loop object. At the end of each 1-d loop the IEEE error mode flags are checked (depending on the state of the error mode) and appropriate action taken.

By the way, it would not be too difficult to change how the error mode is set (probably an hour's worth of work), so concern over implementation changes should not be a factor right now. Currently the error mode is read from a variable using standard scoping rules. It would save the (not insignificant) name-space lookup time to instead use a global stack (i.e. a Python list) and just get the error mode from the top of that stack.
What's really the argument here is whether having the flexibility at the local and global namespace levels is really worth the extra name lookups for each ufunc. I've argued that the numarray behavior can result from using the builtin namespace for the error control (perhaps with better Python-side support for setting and retrieving it). What numpy has is control at the global and local namespace level as well, which can override the builtin-namespace behavior. So we should at least frame the discussion in terms of what is actually possible.
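To make the builtin-scope trick concrete, here is a sketch in the spirit of the toy emulation earlier in the thread; the variable name is again hypothetical, and the real machinery stores this differently:

    import builtins  # __builtin__ in the Python 2.x of this thread

    _ERRVAR = "UFUNC_ERRMODE"  # hypothetical name of the secret variable

    def set_builtin_errmode(mode):
        """Install `mode` process-wide and return whatever was there before."""
        previous = getattr(builtins, _ERRVAR, None)
        setattr(builtins, _ERRVAR, mode)
        return previous

    def restore_builtin_errmode(previous):
        if previous is None:
            if hasattr(builtins, _ERRVAR):
                delattr(builtins, _ERRVAR)
        else:
            setattr(builtins, _ERRVAR, previous)

    def func(*args):
        saved = set_builtin_errmode({"underflow": "warn"})
        try:
            pass  # do stuff; downstream functions now see the same setting
        finally:
            restore_builtin_errmode(saved)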
In my mind, verbose keys are just extra baggage unless they are really self documenting. You just need reminders and clues. It seems to be a preference thing. I guess I hate typing long strings when only the first few letters clue me in to what is being talked about.
Good feedback. Thanks again for taking the time to look at this and offer review. -Travis
Travis Oliphant wrote:
I saw that you could set it at different levels, but missed the implications. However, it's still missing one feature: thread-local storage. I would argue that the __builtin__ data should actually be stored in threading.local() instead of __builtin__. Then you could set up a stack system in numpy equivalent to numarray's.
I've used the numarray error handling stuff for some time. My experience with it has led me to the following conclusions:

1. You don't use it that often. I have about 26 KLOC that's "active" and in that I use pushMode just 15 times. For comparison, I use asarray a tad over 100 times.
2. pushMode and popMode, modulo spelling, is the way to set errors. Once the with statement is around, that will be even better.
3. I, personally, would be very unlikely to use the local and global error handling. I'd just as soon see them go away, particularly if it helps performance, but I won't lobby for it.
In numarray, the stack is in the numarray module itself (actually in the Error object). They base their threading local behaviour off of thread.get_ident, not threading.local. That's not clunky at all, although it's arguably wrong since thread.get_ident can reuse ids from dead threads. In practice it's probably hard to get into trouble doing this, but I still wouldn't emulate it. I think that this was written before thread local storage, so it was probably the best that could be done. However, if you use threading.local, it will be clunky in a similar sense. You'll be storing data in a global namespace you don't control and you've got to hope that no one stomps on your variable name. When you have local and module level secret storage names as well you're just doing a lot more of that and the chance of collision and confusion goes up from almost zero to very small.
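For what it's worth, here is a minimal sketch of what a threading.local-based stack along these lines might look like. The names and the defaults (lifted from numarray) are illustrative only, not an actual numpy API:

    import threading

    _DEFAULT = {"overflow": "warn", "underflow": "ignore",
                "dividebyzero": "warn", "invalid": "warn"}

    class _ErrState(threading.local):
        # __init__ runs once per thread on first access, so each thread
        # gets its own stack seeded with the defaults.
        def __init__(self):
            self.stack = [dict(_DEFAULT)]

    _state = _ErrState()

    def push_mode(**overrides):
        mode = dict(_state.stack[-1])
        mode.update(overrides)
        _state.stack.append(mode)

    def pop_mode():
        if len(_state.stack) == 1:
            raise RuntimeError("pop_mode() called more times than push_mode()")
        return _state.stack.pop()

    def current_mode():
        return _state.stack[-1]

    # usage, mirroring the numarray idiom from the first message:
    push_mode(underflow="warn")
    try:
        pass  # do stuff; called functions in this thread see the same settings
    finally:
        pop_mode()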
Yes. Modulo the thread local issue, I believe that this would indeed be easy.
Yes, sorry for spreading misinformation.
In this case, overflow, underflow and dividebyzero seem pretty self-documenting to me. And 'invalid' is pretty cryptic in both implementations. This may be a matter of taste, but I tend to prefer short pithy names for functions that I use a lot, or that are crammed a bunch to a line. In functions like this, that are more rarely used and get a full line to themselves, I lean towards the more verbose.
Can you elaborate on this a bit? Reading between the lines, there seem to be two issues related to speed here. One is the actual namespace lookup of the error mode -- there's a setting that says we are using the defaults, so don't bother to look. This saves the namespace lookup. Changing the defaults shouldn't affect the timing of that. I'm not sure how this would interact with thread local storage though. The second issue is that running the core loop with no checks in place is faster. That means that to get maximum performance you want to be running both at the default setting and with no checks, which implies that the default setting needs to be no checking. Is that correct? I think there should be a way to finesse this issue, but I'll wait for the dust to settle a bit on the local, global, builtin issue before I propose anything. Particularly since by finesse I mean: do something moderately unsavory.
So, I'm hesitant to change them based only on ambiguous preferences.
It's not entirely plucked out of the air. As I recall, the decision was arrived at something like this:

1. Errors should never pass silently (unless explicitly silenced).
2. Let's have everything raise by default.
3. In practice this was no good, because you often wanted to look at the results and see where the problem was.
4. OK, let's have everything warn.
5. This almost worked, but underflow was almost never a real error, so everyone always overrode underflow. A default that you always need to override is not a good default.
6. So, warn for everything except underflow. Ignore that.

And that's where numarray is today. I and others have been using that error system happily for quite some time now. At least I haven't heard any complaints for quite a while.
Good feedback. Thanks again for taking the time to look at this and offer review.
You're very welcome. Thanks for all of the work you've been putting in to make the grand numerification happen. -tim
Tim Hochberg wrote:
What about PyThreadState_GetDict()? And then default to using the builtin dictionary if this returns NULL? I'm actually not particularly enthused about the three name-space lookups. Changing it to only one place to look may be better. It would require a setting and restoring operation. A stack could be used, but why not just use local variables, i.e.:

    save = numpy.seterr(dividebyzero='warn')
    ...
    numpy.seterr(restore=save)
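A rough sketch of how such a single-location, save/restore seterr might be implemented in pure Python. The keyword names, the all-'ignore' defaults, and the thread-local detail are assumptions for illustration, not the shipped interface:

    import threading

    _DEFAULT = {"overflow": "ignore", "underflow": "ignore",
                "dividebyzero": "ignore", "invalid": "ignore"}
    _local = threading.local()

    def geterr():
        return dict(getattr(_local, "mode", _DEFAULT))

    def seterr(restore=None, **kwargs):
        """Change the error settings and return the previous ones, so the
        caller can stash them in a local variable and hand them back later
        via seterr(restore=save)."""
        old = geterr()
        if restore is not None:
            _local.mode = dict(restore)
        else:
            new = old.copy()
            new.update(kwargs)
            _local.mode = new
        return old

    # the pattern from the message above:
    save = seterr(dividebyzero='raise')
    try:
        pass  # ... code that should raise on divide-by-zero ...
    finally:
        seterr(restore=save)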
This is good feedback. I have almost zero experience with changing the error handling, so I'm not sure what features are desirable. Eliminating unnecessary name lookups is usually a good thing.
Right, but thread local storage is still Python 2.4 only.... What about PyThreadState_GetDict() ?
The PyThreadState_GetDict() documentation states that extension module writers should use a unique name based on their extension module.
But you did point out the very important thread-local storage fact that I had missed. This alone makes me willing to revamp what we are doing.
The rarely-used factor is a persuasive argument. (Right now, a global flag indicates whether any non-default error handling is in effect; if this is 0, then the error checking is not even done --- thus no error handling at all.) Yes, the name-lookup optimization could work with any defaults (though with thread-specific storage it couldn't work anyway).

One question I have about threads and error handling, though: right now, the ufuncs release the Python lock during computation (and re-acquire it to do error handling if needed). If another ufunc was started by another Python thread and ran with different error handling, wouldn't the IEEE flags get confused about which ufunc was setting what? The flags are only checked after each 1-d loop. If another thread set the processor flag, the current thread could get very confused. This seems like a problem that I'm not sure how to handle.
I can appreciate this choice, but I don't agree that errors should never pass silently. The fact that people disagree about this is the reason for the error handling.

Note that overflow is not detected everywhere for integers --- we have to simulate the floating-point errors for them. Only on integer multiply is it detected. Checking for it would slow down all other integer arithmetic --- one solution, of course, is to have two different integer additions (one that checks for overflow and another that doesn't). There is really a bit of work left here to do.

Best,

-Travis
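As a rough illustration of what "simulating" the check for integers involves --- purely a Python-level sketch that recomputes in a wider type and compares, not how the C inner loops would do it:

    import numpy

    def add_int32_checked(a, b):
        """Add two int32 arrays, raising if the result wrapped around."""
        a = numpy.asarray(a, dtype=numpy.int32)
        b = numpy.asarray(b, dtype=numpy.int32)
        result = a + b                                     # may wrap silently
        wide = a.astype(numpy.int64) + b.astype(numpy.int64)
        if (wide != result).any():
            raise OverflowError("integer overflow in int32 addition")
        return result

    add_int32_checked([1, 2], [3, 4])          # fine
    # add_int32_checked(2**31 - 1, 1)          # would raise OverflowError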
Travis Oliphant wrote:
That sounds reasonable. I've never used that, but the name sounds promising!
That would work as well, I think. It gets a little hairy if you want to nest error settings within a single function, but I've never done that, so I'm not too worried about it. Besides, what I really want to support is 'with', which I imagine we can support using the above as a base.
I hope some of the other numarray users chime in. A sample of one is not very good data!
That sounds reasonable. Essentially we would be rolling our own threading.local()
Yeah, me either. It seems that somehow we'll need to block until all current operations are done, but I don't know how to do that off the top of my head. Perhaps ufuncs need to lock the flags when they start and release them when they finish. This looks feasible, but I'm not sure of the proper incantation to get this right. The ufuncs would all need to be able to increment and decrement the lock, whatever it is, even though they are in different threads. Meanwhile the setting code should only be able to work when the lock is unheld. It's some sort of poly-thread recursive lock thing. I'll think about it; perhaps there's an obvious way.
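One shape such a "lock" could take, sketched with a condition variable (purely illustrative; nothing like this exists in the ufunc machinery): any number of loops may hold the gate at once, while the settings code waits for the active count to reach zero.

    import threading

    class LoopGate:
        """Counting gate: many ufunc loops may run concurrently, but the
        error-settings code must wait until no loops are active."""

        def __init__(self):
            self._cond = threading.Condition()
            self._active = 0

        def loop_started(self):
            with self._cond:
                self._active += 1

        def loop_finished(self):
            with self._cond:
                self._active -= 1
                if self._active == 0:
                    self._cond.notify_all()

        def change_settings(self, apply_new_settings):
            # Block until every in-flight loop has finished, then apply.
            with self._cond:
                while self._active:
                    self._cond.wait()
                apply_new_settings()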
You'll notice that we ended up with a slightly more nuanced choice. Besides, the full quote is important: "errors should not pass silently unless explicitly silenced". That's quite a bit different than a blanket "errors should never pass silently".
The fact that people disagree about this is the reason for the error handling.
Yes. While I like the above defaults, if we have a reasonable approach I can just set them at startup and forget about them. Let's try not to penalize me too much for that though.
Or just document it and don't worry about it. If I'm doing integer arithmetic and I need overflow detection, I can generally cast to doubles and do my math there, casting back at the end as needed. This doesn't seem worth too much extra complication. Is my floating point bias showing?
There is really a bit of work left here to do.
Yep. Looks like it, but nothing insurmountable. -tim
Tim Hochberg wrote:
I am also absolutely no expert in this area, but isn't this exactly what the kernel supports multiple threads for? In other words, I'm not sure we have to worry about it at all. I expect that the kernel sets/restores the CPU/FPU error flags on thread switches and this is part of the cost associated with switching threads. As I understand it, linux threads are actually implemented as new processes, so if we did have to be worried about this, wouldn't we also have to be worried that program A might alter the FPU error state while we're also using program B? This is just my unsophisticated and possibly wrong understanding of these things. If anyone can help clarify the issue, I'd be glad to be enlightened. Cheers! Andrew
Travis Oliphant wrote:

    save = numpy.seterr(dividebyzero='warn')
    ...
    numpy.seterr(restore=save)

Most of this discussion is outside of my scope, but I have programmed this kind of pattern in a different way before:

    save = context.push(something)
    ...
    del save

i.e. the destructor of the saved context object restores the old situation. In most cases it will be called by letting "save" go out of scope. I know that relying on timely object destruction can be troublesome when porting to Jython, but it is very convenient in CPython. If that goes too far, one could make a separate method on save:

    save.pop()

This can do sanity checking too (are we really at the top of the stack? Only called once?). The destructor should check whether pop has been called.

Rob

--
Rob W.W. Hooft || rob@hooft.net || http://www.hooft.net/people/rob/
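A small sketch of the pattern Rob describes; the geterr/seterr hooks passed in are placeholders, not a real numpy interface. The object restores the previous settings when pop() is called, or from its destructor as a fallback:

    import warnings

    class SavedErrMode:
        """Restore the previous error settings on pop() or, failing that,
        when the object is garbage collected."""

        def __init__(self, geterr, seterr, **new_settings):
            self._seterr = seterr
            self._saved = geterr()
            self._popped = False
            seterr(**new_settings)

        def pop(self):
            if self._popped:
                raise RuntimeError("pop() called more than once")
            self._seterr(**self._saved)
            self._popped = True

        def __del__(self):
            if not self._popped:
                warnings.warn("saved error mode was never popped; restoring now")
                self.pop()

    # usage, with some hypothetical geterr/seterr pair:
    #   save = SavedErrMode(geterr, seterr, dividebyzero='warn')
    #   ...
    #   del save          # or save.pop()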
Rob Hooft wrote:
Well, the syntax that *I* really want is this:

    class error_mode(object):
        def __init__(self, all=None, overflow=None, underflow=None,
                     dividebyzero=None, invalid=None):
            self._args = (all, overflow, underflow, dividebyzero, invalid)
        def __enter__(self):
            self._save = numpy.seterr(*self._args)
        def __exit__(self, exc_type, exc_value, traceback):
            numpy.seterr(self._save)

That way, in a few months, I can do this:

    with error_mode(overflow='raise'):
        # do stuff

and it will be almost impossible to mess up. This syntax is lighter and cleaner than a stack or relying on garbage collection to free the resources. So, for my purposes, the simple syntax Travis proposes is perfectly adequate and simpler to implement and get right than a stack-based approach. If 'with' wasn't coming down the pipe, I would push for a stack, but I like Travis' proposal just fine. YMMV of course.

-tim
participants (4)

- Andrew Straw
- Rob Hooft
- Tim Hochberg
- Travis Oliphant