Alternative to iterator unpacking that wraps iterator-produced ValueError
To put it simply, unpacking raises ValueError:
```
>>> x, = ()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: not enough values to unpack (expected 1, got 0)
>>> x, = (1, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: too many values to unpack (expected 1)
```
But if the iterator raises ValueError, there's no way to tell it apart from the unpacking:
```
>>> def foo():
...     yield None
...     raise ValueError
...
>>> foo()
<generator object foo at 0x7fa0e70e6430>
>>> x = foo()
>>> x, = foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in foo
ValueError
```
And the workaround for this is a bit ugly. We already convert e.g. StopIteration into RuntimeError in many cases; why can't we do so here too? For backwards compatibility, this should probably be an itertools utility, though.
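A sketch of what such an itertools-style utility might look like (the name `guard_valueerror` is made up for illustration; nothing like it exists in the stdlib):

```python
def guard_valueerror(iterable):
    """Hypothetical helper: re-raise a ValueError raised *inside* the
    iterable as RuntimeError, so that a ValueError escaping an
    unpacking statement can only mean an unpacking failure."""
    it = iter(iterable)
    while True:
        try:
            item = next(it)
        except StopIteration:
            return
        except ValueError as exc:
            raise RuntimeError("ValueError raised inside iterator") from exc
        yield item


def foo():
    yield None
    raise ValueError


try:
    x, = guard_valueerror(foo())
except ValueError:
    print("unpacking error")
except RuntimeError as exc:
    print("iterator bug:", exc)  # this branch runs
```

With the wrapper in place, `except ValueError` unambiguously means the unpacking itself failed, while the iterator's own ValueError surfaces as RuntimeError with the original chained via `from`.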
[Muchly snipped] On 09/04/2020 12:27, Soni L. wrote:
To put it simply, unpacking raises ValueError:
But if the iterator raises ValueError, there's no way to tell it apart from the unpacking:
I don't see how this is any different from any other case when you get the same exception for different errors. If for some reason you really care, subclass ValueError to make a finer-grained exception.
And the workaround for this is a bit ugly. We already convert e.g. StopIteration into RuntimeError in many cases, why can't we do so here too?
Surely the correct "workaround" is not to do the thing that raises the exception? -- Rhodri James *-* Kynesim Ltd
On 2020-04-09 8:48 a.m., Rhodri James wrote:
[Muchly snipped] On 09/04/2020 12:27, Soni L. wrote:
To put it simply, unpacking raises ValueError:
But if the iterator raises ValueError, there's no way to tell it apart from the unpacking:
I don't see how this is any different from any other case when you get the same exception for different errors. If for some reason you really care, subclass ValueError to make a finer-grained exception.
And the workaround for this is a bit ugly. We already convert e.g. StopIteration into RuntimeError in many cases, why can't we do so here too?
Surely the correct "workaround" is not to do the thing that raises the exception?
Technically, I consider it a bug that bugs can shadow API-level exceptions. Any API defining API-level exceptions must carefully control the exceptions it raises. In this case I'd like to use the ValueError from the iterator-unpacking API in my own API. But since the iterator-unpacking API doesn't control its exceptions, it just does a great job of masking bugs in the code instead.

Equally, anything doing computation in __get__ should not propagate LookupError except where explicitly intended. And that's not how those things are often implemented, unfortunately.

There's a reason people complain so much about certain frameworks "eating all the exceptions". They use exceptions as part of their API but let user code raise those API-level exceptions, which, because they're part of the API, get handled somewhere.
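A minimal illustration of the shadowing being described, using `__getattr__` and AttributeError, the classic concrete instance of this problem (the class and attribute names below are invented for the example):

```python
class Config:
    def __getattr__(self, name):
        # __getattr__ is supposed to raise AttributeError only to signal
        # "no such attribute".  Simulate an internal bug that raises
        # AttributeError for an unrelated reason:
        broken = None
        return broken.lookup(name)  # AttributeError: 'NoneType' object ...


c = Config()
# hasattr() treats *any* AttributeError as "attribute missing", so the
# internal bug is silently swallowed instead of surfacing:
print(hasattr(c, "host"))  # prints False, masking the bug
```

Because AttributeError is part of the attribute-access API, the protocol machinery "handles" the bug as if it were a routine missing attribute; the same dynamic applies to LookupError escaping `__get__`.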
Soni L. wrote:
[Muchly snipped]
Strictly speaking, there is no unpacking error in your example. Your example raises its own ValueError before any unpacking error can be raised. Indeed

```
x, y = foo()
```

also raises your own `ValueError`, and no unpacking error is involved. On the other hand, an alternative design that does not raise an exception of its own does raise the proper unpacking exception. For instance:

```
def foo():
    yield True
    return

x = foo()
x, = foo()
x, y = foo()
```

outputs:

```
Traceback (most recent call last):
  File "...", line 7, in <module>
    x, y = foo()
ValueError: not enough values to unpack (expected 2, got 1)
```

So, IMHO, you are mixing two different things here. Am I wrong? Are you talking about something different? Thank you.
On Thu, Apr 09, 2020 at 08:27:14AM -0300, Soni L. wrote:
To put it simply, unpacking raises ValueError: [...] But if the iterator raises ValueError, there's no way to tell it apart from the unpacking:
```
>>> def foo():
...     yield None
...     raise ValueError
```
You could start by reading the error message and the traceback; that will usually make clear the nature of the value error, and where it occurred (in the generator, or where the generator was consumed). For *debugging purposes* this is usually sufficient: the person reading the exception can tell the difference between an unpacking error:

```
# outside the iterator
a, b, c = iterator
ValueError: not enough values to unpack (expected 3, got 1)
```

and some other error:

```
# inside the iterator
yield int('aaa')
ValueError: invalid literal for int() with base 10: 'aaa'
```

There may be rare cases where it is difficult to tell. Perhaps the traceback is missing, or you are debugging a byte-code only library, or obfuscated code, say. But these are rare cases, and we don't have to solve those problems in the language.

Where this is not sufficient is for error recovery:

```
try:
    a, b, c = iterator
except ValueError:
    recover()
```

However, this is also true for every exception that Python might raise. There is no absolutely foolproof solution, but it is usually good enough to e.g.:

- include as little as possible inside the `try` block;
- carefully audit the contents of the `try` block to ensure it cannot raise the exception you want to catch;
- wrap the iterator in something that will convert ValueError to another exception.

At one point some years ago I attempted to write an "Exception Guard" object that caught an exception and re-raised it as another exception, so you could do this:

```
guard = ExceptionGuard(catch=ValueError, throw=RuntimeError)
try:
    a, b, c = guard(iterator)
except ValueError:
    print('unpacking error')
except RuntimeError:
    print('ValueError in iterator caught and converted')
```

but it collapsed under the weight of over-engineering and trying to support versions of Python back to 2.4, and I abandoned it. Perhaps you will have better luck, and when you have it as a mature, working object, you can propose it for the standard library.

-- Steven
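For modern Python only, a guard of the kind being described can be sketched in a few lines. This is a reconstruction from the usage shown above, not the original abandoned code; the class name and `catch`/`throw` parameters follow that usage:

```python
class ExceptionGuard:
    """Wrap an iterable so that a chosen exception type raised inside it
    is re-raised as a different type, with the original chained."""

    def __init__(self, catch, throw):
        self.catch = catch
        self.throw = throw

    def __call__(self, iterable):
        # A generator method: items pass through untouched, but `catch`
        # raised by the underlying iterator becomes `throw`.
        it = iter(iterable)
        while True:
            try:
                item = next(it)
            except StopIteration:
                return
            except self.catch as exc:
                raise self.throw(
                    f"{self.catch.__name__} raised inside iterator"
                ) from exc
            yield item


def iterator():
    yield 1
    raise ValueError("bug in the iterator")


guard = ExceptionGuard(catch=ValueError, throw=RuntimeError)
try:
    a, b, c = guard(iterator())
except ValueError:
    print('unpacking error')
except RuntimeError:
    print('ValueError in iterator caught and converted')
```

Since `__call__` contains a `yield`, calling the guard returns a lazy generator, so the conversion happens only when the wrapped iterator is actually consumed.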
participants (4)
-
jdveiga@gmail.com
-
Rhodri James
-
Soni L.
-
Steven D'Aprano