extended for-else, extended continue, and a rant about zip()
I wanna propose making generators even weirder!

so, extended continue is an oldie: https://www.python.org/dev/peps/pep-0342/#the-extended-continue-statement

it'd allow one to turn:

    yield from foo

into:

    for bar in foo: continue (yield bar)

but what's this extended for-else? well, currently you have for-else:

    for x, y, z in zip(a, b, c):
        ...
    else:
        pass

and this works. you get the stuff from the iterators, and if you break the loop, the else doesn't run. the else basically behaves like "except StopIteration:"...

so I propose an extended for-else, that behaves like "except StopIteration as foo:". that is, assuming we could get a zip() that returns partial results in the StopIteration (see other threads), we could do:

    for x, y, z in zip(a, b, c):
        do_stuff_with(x, y, z)
    else as partial_xy:
        if len(partial_xy) == 0:
            x = dummy
            try:
                y = next(b)
            except StopIteration:
                y = dummy
            try:
                z = next(c)
            except StopIteration:
                z = dummy
            if (x, y, z) != (dummy, dummy, dummy):
                do_stuff_with(x, y, z)
        if len(partial_xy) == 1:
            x, = partial_xy
            y = dummy
            try:
                z = next(c)
            except StopIteration:
                z = dummy
            do_stuff_with(x, y, z)
        if len(partial_xy) == 2:
            x, y = partial_xy
            z = dummy
            do_stuff_with(x, y, z)

(this example is better served by zip_longest. however, it's nevertheless a good way to demonstrate functionality, thanks to zip_longest's (and zip's) trivial/easy to understand behaviour.)

this would enable one to turn:

    return yield from foo

into:

    for bar in foo: continue (yield bar)
    else as baz:
        return baz

allowing one to pick apart and modify the yielded and sent parts, while still getting access to the return values. currently if you have an arbitrary generator, you can't modify the yielded values without breaking send or return.
in fact you can either pass through yield and send and collect the return, or modify yield and forget about send and the return, or use very ugly syntax that makes everything look lower-level than it should, to be able to pick apart everything:

    try:
        bar = next(foo)
    except StopIteration as exc:
        baz = exc.value
    else:
        while True:
            try:
                bar = foo.send((yield bar))
            except StopIteration as exc:
                baz = exc.value
                break
    return baz

(this is exactly equivalent (if I didn't overlook anything) to the previous endearingly simple loop with extended for-else and extended continue)
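To make that dilemma concrete, here's a minimal runnable sketch (the names `inner`, `passthrough`, and `delegate` are invented for illustration): a plain for-loop lets you transform the yields but loses both the sent values and the return value, while `yield from` preserves send and return but can't touch the yields.

```python
def inner():
    received = yield "a"
    return received

def passthrough():
    # naive re-yield: we can transform the yields, but values sent to us
    # never reach inner(), and inner's return value is dropped
    for v in inner():
        yield v.upper()

def delegate():
    # yield from: send and return flow through, but we can't modify the yields
    result = yield from inner()
    return result

g = passthrough()
first = next(g)           # "A" -- the yield was transformed
try:
    g.send("hello")       # inner() resumes with None and returns None
except StopIteration as exc:
    lost = exc.value      # None: both the sent value and the return are lost

g = delegate()
first2 = next(g)          # "a" -- untouched
try:
    g.send("hello")
except StopIteration as exc:
    kept = exc.value      # "hello": send and return survive intact

print(first, lost, first2, kept)
```

Driving both wrappers side by side shows exactly which halves of the generator protocol each one breaks.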
On Apr 27, 2020, at 12:49, Soni L. wrote:
I wanna propose making generators even weirder!
Why? Most people would consider that a negative, not a positive. Even if you demonstrate some useful functionality with realistic examples that benefit from it, all you’ve done here is set the bar higher for yourself to convince anyone that your change is worth it.
so, extended continue is an oldie: https://www.python.org/dev/peps/pep-0342/#the-extended-continue-statement
it'd allow one to turn:
yield from foo
into:
for bar in foo: continue (yield bar)
And what’s the advantage of that? It’s a lot more verbose, harder to read, probably easier to get wrong, and presumably less efficient. If this is your best argument for why we should revisit an old rejected idea, it’s not a very good one. (If you’re accepting that it’s a pointless feature on its own but proposing it because, together with your other proposed new feature, it would no longer be pointless, then say that, don’t offer an obviously bad argument for it on its own.)
but what's this extended for-else? well, currently you have for-else:
    for x, y, z in zip(a, b, c):
        ...
    else:
        pass
and this works. you get the stuff from the iterators, and if you break the loop, the else doesn't run. the else basically behaves like "except StopIteration:"...
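For reference, a minimal runnable sketch of today's for-else semantics (the `completed` list is invented for illustration): the else suite runs only when the loop exhausts its iterator, i.e. when the hidden next() raises StopIteration, and is skipped when you leave via break.

```python
completed = []

for i in range(3):
    pass
else:
    completed.append("exhausted")   # runs: the iterator raised StopIteration

for i in range(3):
    if i == 1:
        break
else:
    completed.append("broken")      # skipped: we left the loop via break

print(completed)
```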
so I propose an extended for-else, that behaves like "except StopIteration as foo:". that is, assuming we could get a zip() that returns partial results in the StopIteration (see other threads), we could do:
    for x, y, z in zip(a, b, c):
        do_stuff_with(x, y, z)
    else as partial_xy:
        if len(partial_xy) == 0:
            x = dummy
            try:
                y = next(b)
            except StopIteration:
                y = dummy
            try:
                z = next(c)
            except StopIteration:
                z = dummy
            if (x, y, z) != (dummy, dummy, dummy):
                do_stuff_with(x, y, z)
        if len(partial_xy) == 1:
            x, = partial_xy
            y = dummy
            try:
                z = next(c)
            except StopIteration:
                z = dummy
            do_stuff_with(x, y, z)
        if len(partial_xy) == 2:
            x, y = partial_xy
            z = dummy
            do_stuff_with(x, y, z)
(this example is better served by zip_longest. however, it's nevertheless a good way to demonstrate functionality, thanks to zip_longest's (and zip's) trivial/easy to understand behaviour.)
Would it always be this complicated and verbose to use this feature? I mean, compare it to the "roughly equivalent" zip_longest in the docs, which is a lot shorter, easier to understand, harder to get wrong, and more flexible (e.g., it works unchanged with any number of iterables, while yours would have to be rewritten for any different number of iterables, because it requires N! chunks of explicit boilerplate).

Are there any examples where it lets you do something useful that can't be done with existing features, so it's actually worth learning this weird new feature and requiring Python 3.10+ and writing 22 lines of extra code?

Even if there is such an example, if the code to deal with the post-for state is 11x as long and complicated as the for loop and can't be easily simplified or abstracted, is the benefit of using a for loop instead of manually nexting iterators still a net benefit? I don't know that manually nexting the iterators will always avoid the problem, but it certainly often does (again, look at many of the equivalents in the itertools docs that do it), and it definitely does in your emulating-zip_longest example, and that's the only example you've offered.

Also notice that many cases like this can be trivially solved by a simple peekable or unnextable (I believe more-itertools has both, and the first one is a recipe in itertools too, but I can't remember the names they use; if not, they're really easy to write) or tee. We don't even need any of that for your example, but if you can actually come up with another example, make sure it isn't already doable a lot more simply with peekable/etc.
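For comparison, here is a hedged sketch of what's being pointed at: the same dummy-filling behaviour written with zip_longest and a sentinel, which collapses the 22-line else suite to one loop and works unchanged for any number of iterables (`a`, `b`, `c`, and `do_stuff_with` are placeholders; here `do_stuff_with` just records its arguments).

```python
from itertools import zip_longest

dummy = object()
calls = []

def do_stuff_with(x, y, z):
    calls.append((x, y, z))

a, b, c = [1, 2, 3], [4, 5], [6]

# each slot is padded with dummy once its iterable runs out --
# exactly what the hand-written else-as suite was emulating
for x, y, z in zip_longest(a, b, c, fillvalue=dummy):
    do_stuff_with(x, y, z)

print(len(calls))  # 3 rows: (1, 4, 6), (2, 5, dummy), (3, dummy, dummy)
```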
this would enable one to turn:
return yield from foo
into:
    for bar in foo: continue (yield bar)
    else as baz:
        return baz
allowing one to pick apart and modify the yielded and sent parts, while still getting access to the return values.
Again, this is letting you turn something simple into something more complicated, and it's not at all clear why you want to do that. What exactly are you trying to pick apart that makes that necessary, that can't be written better today?

I'll grant that writing something fully general that supports all the different things that could theoretically be done with your desired feature requires the ugly mess that you posted below. (I'm not sure it's true, but it seems at least possible, so let's go with it.) But that doesn't mean that writing something appropriate for any particular realistic example requires that. And without seeing any such examples, or even getting a vague description of them, nobody has any reason to believe there's an actual problem to be solved for any of them.

While we're at it: what does "else as" do when you're looping over a sequence until IndexError instead of looping over an iterator? What does it do on while loops? What does it do even on for loops over an iterator whose StopIteration has no value?

Also, unless you have some additional proposal that you haven't mentioned here, it seems like as soon as you add any further transformation after the zip (a comprehension, many uses of map, most functions out of itertools or a third-party library, …) you've either lost the magic StopIteration value or made it incorrect. To demonstrate what I mean:

    def gen():
        yield 1
        return 2

    it = gen()
    next(it)  # yields 1
    next(it)  # raises StopIteration(2)

    it = (x*3 for x in gen())
    next(it)  # yields 1*3 == 3
    next(it)  # raises StopIteration()

    it = chain([0], gen())
    next(it)  # yields 0
    next(it)  # yields 1
    next(it)  # raises StopIteration()

Most places where you need to do fancy stuff with iteration, you expect to be able to transform iterators in ways like this without breaking everything. But your new feature won't allow that. It can only be used if you're directly iterating on the result of zip.
Which seems to imply there probably aren’t any good use cases at all other than ones you can already handle better with zip_longest, in which case what’s the point of any of this?
On 2020-04-27 6:11 p.m., Andrew Barnert wrote:
[snipping the inline reply, quoted in full above]
The explicit case for zip is if you *don't* want it to consume anything after the stop.

btw: I suggest reading the whole post as one rather than trying to pick it apart. the purpose of the proposal, as a whole, is to make it easier to pick things - generators in particular - apart. I tried to make that clear but clearly I failed.

- - -

Side note, here's one case where it'd be better than using zip_longest:

    for a, b, c, d, e, f, g in zip(*[iter(x)]*7):
        # this pattern is suggested by the zip() docs, btw.
        use_7x_algorithm(a, b, c, d, e, f, g)
    else as x:
        # leftovers that didn't fit the 7-tuple.
        use_slow_variable_arity_algorithm(*x)

I haven't found a real use-case for this yet, tho. SIMD is handled by numpy, which does a better job than you could ever hope for in plain python, and for SIMD you could use zip_longest with a suitable dummy instead. but... yeah, not really useful.

(actually: why do the docs for zip() even suggest this stuff anyway? seems like something nobody would actually use.)
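Under existing Python, the same shape can be had with a small helper; this is a hedged sketch (`chunked_with_leftover` is an invented name, built on itertools.islice): full n-tuples come out of the loop, and the leftovers are just the short final block.

```python
from itertools import islice

def chunked_with_leftover(iterable, n):
    # yield full n-tuples; a shorter final tuple carries the leftovers
    it = iter(iterable)
    while block := tuple(islice(it, n)):
        yield block

blocks = list(chunked_with_leftover(range(10), 7))
print(blocks)  # [(0, 1, 2, 3, 4, 5, 6), (7, 8, 9)]

for block in chunked_with_leftover(range(10), 7):
    if len(block) == 7:
        pass  # use_7x_algorithm(*block)
    else:
        pass  # use_slow_variable_arity_algorithm(*block)
```

The caller distinguishes the fast path from the leftover path by the block's length, rather than by which suite the loop ended in.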
the point of posting here is to find other use-cases! there's a reason this isn't a PEP!

On 2020-04-27 7:14 p.m., Chris Angelico wrote:
On Tue, Apr 28, 2020 at 7:41 AM Soni L. wrote:

I haven't found a real use-case for this yet, tho.
So.... maybe hold off until you have one?
ChrisA

_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-leave@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/6PP3NQ...
Code of Conduct: http://python.org/psf/codeofconduct/
I doubt that you will find many people on this list who are willing to do your homework for you. It will be very hard to convince most of the people on this list if the only reason you can give is "I think this would look great". Great ideas are based on real needs, not on flights of fancy.

On 4/27/2020 7:15 PM, Soni L. wrote:
the point of posting here is to find other use-cases! there's a reason this isn't a PEP!
the point of posting here is that someone else may have a similar existing use-case where this would make things better. I can't take a look at proprietary code so I post about stuff in the hopes that the ppl who can will back this stuff up. (doesn't proprietary software make things so much harder? :/)

I don't want ppl to do homework for me. I want to access the inaccessible. (but also yes, it'd be cool if this list was about cooperating on ideas.)

On 2020-04-27 8:26 p.m., Edwin Zimmerman wrote:
I doubt that you will find many people on this list who are willing to do your homework for you. It will be very hard to convince most of the people on this list if the only reason you can give is "I think this would look great". Great ideas are based on real needs, not on flights of fancy.
On Apr 27, 2020, at 16:35, Soni L. wrote:
the point of posting here is that someone else may have a similar existing use-case
Similar to *what*? It can’t be similar to your use case if you don’t have a use case for it to be similar to. If you really can’t imagine why something might be useful, and nobody else has ever asked for it, it probably isn’t actually needed. Sure, there are rare exceptions to that, but that shouldn’t be your default assumption for everything that could ever conceivably be done.
where this would make things better. I can't take a look at proprietary code so I post about stuff in the hopes that the ppl who can will back this stuff up.
(doesn't proprietary software make things so much harder? :/)
A little bit, but not nearly as much as you seem to be thinking. There are zillions of lines of open source Python code easily searchable. There may be a few kinds of problems that are likely to only come up in proprietary code, but something generic like this is just as likely to be useful to Django or MusicBrainz or Jupyter or DNF or even the Python stdlib as to some internal Dropbox service or the guts of the Civ V scripting engine. So the fact that you can’t search the Dropbox or Firaxis source is not actually a big problem.
On Tue, Apr 28, 2020 at 9:16 AM Soni L. wrote:
the point of posting here is to find other use-cases! there's a reason this isn't a PEP!
Once again, a massive misunderstanding of the PEP process. Ideas have to actually be worth pursuing long before they become PEPs, and not all ideas need PEP documents. Please, figure out what the idea is *first*, and don't even post it here unless there is an actual idea that you're presenting - and that means an actual use-case. Otherwise, all you're doing is making more and more of us start out by reading the 'From' name before we consider the actual content of the proposal.

ChrisA
On Apr 27, 2020, at 14:38, Soni L. wrote:
The explicit case for zip is if you *don't* want it to consume anything after the stop.
Sure, but *when do you want that*? What’s an example of code you want to write that would be more readable, or easier to write, or whatever, if you could work around consuming anything after the stop?
btw: I suggest reading the whole post as one rather than trying to pick it apart.
I did read the whole post, and then went back to reply to each part in-line. You can tell by the fact that I refer to things later in the post. For example, when I refer to your proposed code being better than "the ugly mess that you posted below" as the current alternative, it should be pretty clear that I've already read the ugly mess that you posted below.

So why did I format it as replies inline? Because that's standard netiquette that goes back to the earliest days of email lists. Most people find it confusing (and sometimes annoying) to read a giant quote and then a giant reply and try to figure out what's being referred to where, so when you have a giant message to reply to, it's helpful to reply inline.

But as a bonus, writing a reply that way makes it clear to yourself if you've left out anything important. You didn't reply to multiple issues that I raised, and I doubt that it's because you don't have any answers and are just trying to hide that fact to trick people into accepting your proposal anyway, but rather that you just forgot to get to some things, because it's easy to miss important stuff when you're not replying inline.
the purpose of the proposal, as a whole, is to make it easier to pick things - generators in particular - apart. I tried to make that clear but clearly I failed.
No, you did make that part clear; what you didn’t make clear is (a) what exactly you’re trying to pick apart from the generators and why, (b) what actual problems look like, (c) how your proposal could make that code better, and (d) why existing solutions (like manually nexting iterators in a while loop, or using tools like peekable) don’t already solve the problem. Without any of that, all you’re doing is offering something abstract that might conceivably be useful, but it’s not clear where or why or even whether it would ever come up, so for all we know it’ll *never* actually be useful. Nobody’s likely to get on board with such a change.
Side note, here's one case where it'd be better than using zip_longest:
Your motivating example should not be a “side note”, it should be the core of any proposal. But it should also be a real example, not a meaningless toy example. Especially not one where even you can’t imagine an actual similar use case. “We should add this feature because it would let you write code that I can’t imagine ever wanting to write” isn’t a rationale that’s going to attract much support.
    for a, b, c, d, e, f, g in zip(*[iter(x)]*7):
        # this pattern is suggested by the zip() docs, btw.
        use_7x_algorithm(a, b, c, d, e, f, g)
    else as x:
        # leftovers that didn't fit the 7-tuple.
        use_slow_variable_arity_algorithm(*x)
Why do you want to unpack into 7 variables with meaningless names just to pass those 7 variables? And if you don’t need that part, why can’t you just write this with zip_skip (which, as mentioned in the other thread, is pretty easy to write around zip_longest)? The best guess I can come up with is that in a real life example maybe that would have some performance cost that’s hard to see in this toy. But then if that’s the case, given that x is clearly not an iterator, is it a sequence? You could then presumably get much more optimization by looping over slices instead of using the grouper idiom in the first place. Or, as you say, by using numpy.
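For completeness, here is a hedged sketch of the zip_skip being referred to (the name comes from the other thread; this particular implementation is an illustration, not the one discussed there): like zip_longest, but exhausted slots are dropped from each tuple instead of being filled.

```python
from itertools import zip_longest

def zip_skip(*iterables):
    # yield progressively shorter tuples as the iterables run dry
    sentinel = object()
    for tup in zip_longest(*iterables, fillvalue=sentinel):
        yield tuple(v for v in tup if v is not sentinel)

rows = list(zip_skip([1, 2, 3], "ab"))
print(rows)  # [(1, 'a'), (2, 'b'), (3,)]
```

Note this drops positional information once an iterable is exhausted, which is fine for the grouper use above but not for every use.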
I haven't found a real use-case for this yet, tho. SIMD is handled by numpy, which does a better job than you could ever hope for in plain python, and for SIMD you could use zip_longest with a suitable dummy instead. but... yeah, not really useful.
(actually: why do the docs for zip() even suggest this stuff anyway? seems like something nobody would actually use.)
That grouping idiom is useful for all kinds of things that _aren't_ about optimization. Maybe the zip docs aren't the best place for it (but it's also in the itertools recipes, which probably is the best place for it), but it's definitely useful. In fact, I used it less than a week ago. We've got this tool that writes a bunch of 4-line files, and someone concatenated a bunch of them together and wrote this horrible code to pull them back apart in another language I won't mention here, and rather than debug their code, I just rewrote it in Python like this:

    with open(path) as f:
        for entry in chunkify(f, 4):
            process(entry)

I used a function called chunkify because I think that's a lot easier to understand (especially for colleagues who don't use Python very often), and we already had it lying around in a utils module, but it's just implemented as zip(*[iter(it)]*n).

Also, compare this other example for processing a different file format:

    with open(path) as f:
        for entry in split(f, '\n'):
            process(entry)

It's pretty obvious what the difference is here: one is reading entries that are groups of 4 lines; the other is reading entries that are groups of arbitrary numbers of lines but separated by blank lines. At most you might need to look at the help for chunkify and split to be absolutely sure they mean what you think they mean. (Although maybe I should have used functions from more-itertools rather than our own custom functions that do effectively the same thing but are kind of weird and probably not so well tested and whose names don't come up in a web search.)
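A minimal runnable sketch of that chunkify utility (the file contents here are invented; real input would come from the concatenated 4-line files described above):

```python
def chunkify(it, n):
    # the zip()-docs idiom: zip n references to a single shared iterator,
    # so each output tuple consumes n consecutive items
    return zip(*[iter(it)] * n)

lines = ["name1\n", "addr1\n", "note1\n", "\n",
         "name2\n", "addr2\n", "note2\n", "\n"]

entries = list(chunkify(lines, 4))
print(len(entries))  # 2 entries of 4 lines each
```

Because all n positions share one iterator, each call to the zip advances it n times, which is what turns a flat line stream into fixed-size records.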
On 2020-04-27 8:37 p.m., Andrew Barnert wrote:
On Apr 27, 2020, at 14:38, Soni L. wrote:

[snipping a long unanswered reply]
The explicit case for zip is if you *don't* want it to consume anything after the stop.
Sure, but *when do you want that*? What’s an example of code you want to write that would be more readable, or easier to write, or whatever, if you could work around consuming anything after the stop?
so here's one example, let's say you want to iterate multiple things (like with zip), get a count out of it, as well as partially consume an external iterator without swallowing any extra values from it. it'd look something like this:

    def foo(self, other_things):
        for x in zip(range(sys.maxsize), self.my_things, other_things):
            do_stuff
        else as y:
            return y[0]  # count using extended for-else + partial-zip.

it stops as soon as self.my_things stops. and then the caller can do whatever else it needs with other_things. (altho maybe it's considered unpythonic to reuse iterators like this? I like it tho.)
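For comparison, a hedged sketch of the same thing under current Python (`count_pairs` and its arguments are invented names): manually nexting the external iterator gives the same no-overconsumption guarantee and the count, at the cost of dropping out of zip().

```python
def count_pairs(my_things, other_it, do_stuff):
    # pair items until either side runs out; never swallows an extra
    # value from other_it, and returns how many pairs were processed
    count = 0
    for mine in my_things:
        try:
            other = next(other_it)
        except StopIteration:
            break
        do_stuff(mine, other)
        count += 1
    return count

pairs = []
it = iter([10, 20, 30, 40])
n = count_pairs([1, 2], it, lambda a, b: pairs.append((a, b)))
leftover = list(it)
print(n, leftover)  # 2 [30, 40] -- nothing extra was consumed
```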
btw: I suggest reading the whole post as one rather than trying to pick it apart.
I did read the whole post, and then went back to reply to each part in-line. You can tell by the fact that I refer to things later in the post. For example, when I refer to your proposed code being better than “the ugly mess that you posted below“ as the current alternative, it should be pretty clear that I’ve already read the ugly mess that you posted below.
So why did I format it as replies inline? Because that’s standard netiquette that goes back to the earliest days of email lists. Most people find it confusing (and sometimes annoying) to read a giant quote and then a giant reply and try to figure out what’s being referred to where, so when you have a giant message to reply to, it’s helpful to reply inline.
But as a bonus, writing a reply that way makes it clear to yourself if you’ve left out anything important. You didn’t reply to multiple issues that I raised, and I doubt that it’s because you don’t have any answers and are just trying to hide that fact to trick people into accepting your proposal anyway, but rather than you just forgot to get to some things because it’s easy to miss important stuff when you’re not replying inline.
you kept bringing up how I should talk about things first and break them down, rather than build them up and expand on them as the post goes on. I prefer the latter. I don't mind inline replies, and in fact I prefer them (altho I'm not always great at that), and that's not what I raise an issue with.
the purpose of the proposal, as a whole, is to make it easier to pick things - generators in particular - apart. I tried to make that clear but clearly I failed.
No, you did make that part clear; what you didn’t make clear is (a) what exactly you’re trying to pick apart from the generators and why, (b) what actual problems look like, (c) how your proposal could make that code better, and (d) why existing solutions (like manually nexting iterators in a while loop, or using tools like peekable) don’t already solve the problem.
Without any of that, all you’re doing is offering something abstract that might conceivably be useful, but it’s not clear where or why or even whether it would ever come up, so for all we know it’ll *never* actually be useful. Nobody’s likely to get on board with such a change.
Side note, here's one case where it'd be better than using zip_longest:
Your motivating example should not be a “side note”, it should be the core of any proposal.
that is not my motivating example. if anything my motivating example is because I wanna do some very unpythonic things. like this:

    for x in things:
        yield Wrap(x)
    else with y:
        yield y
        return len(things)

and then we nest this and we get a nice wrap of wraps wrapped in wraps with lengths at the end. why? ... because I want it to work like this, tbh. .-.
But it should also be a real example, not a meaningless toy example. Especially not one where even you can’t imagine an actual similar use case. “We should add this feature because it would let you write code that I can’t imagine ever wanting to write” isn’t a rationale that’s going to attract much support.
    for a, b, c, d, e, f, g in zip(*[iter(x)]*7):
        # this pattern is suggested by the zip() docs, btw.
        use_7x_algorithm(a, b, c, d, e, f, g)
    else as x:
        # leftovers that didn't fit the 7-tuple.
        use_slow_variable_arity_algorithm(*x)
Why do you want to unpack into 7 variables with meaningless names just to pass those 7 variables? And if you don’t need that part, why can’t you just write this with zip_skip (which, as mentioned in the other thread, is pretty easy to write around zip_longest)?
The best guess I can come up with is that in a real life example maybe that would have some performance cost that’s hard to see in this toy. But then if that’s the case, given that x is clearly not an iterator, is it a sequence? You could then presumably get much more optimization by looping over slices instead of using the grouper idiom in the first place. Or, as you say, by using numpy.
I haven't found a real use-case for this yet, tho. SIMD is handled by numpy, which does a better job than you could ever hope for in plain python, and for SIMD you could use zip_longest with a suitable dummy instead. but... yeah, not really useful.
(actually: why do the docs for zip() even suggest this stuff anyway? seems like something nobody would actually use.)
That grouping idiom is useful for all kinds of things that _aren’t_ about optimization. Maybe the zip docs aren’t the best place for it (but it’s also in the itertools recipes, which probably is the best place for it), but it’s definitely useful. In fact, I used it less than a week ago. We’ve got this tool that writes a bunch of 4-line files, and someone concatenated a bunch of them together and wrote this horrible code to pull them back apart in another language I won’t mention here, and rather than debug their code, I just rewrote it in Python like this:
    with open(path) as f:
        for entry in chunkify(f, 4):
            process(entry)
I used a function called chunkify because I think that’s a lot easier to understand (especially for colleagues who don’t use Python very often), and we already had it lying around in a utils module, but it’s just implemented as zip(*[iter(it)]*n).
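Spelled out, that one-liner behaves like this (a sketch; the swallowing of any final partial group is inherited from zip() and is exactly what the rest of the thread argues about):

```python
def chunkify(it, n):
    # zip() is handed n references to one shared iterator, so each output
    # tuple advances that iterator n times. A final partial group is
    # consumed but silently dropped when the iterator runs dry mid-tuple.
    return zip(*[iter(it)] * n)

print(list(chunkify("abcdefgh", 3)))
# → [('a', 'b', 'c'), ('d', 'e', 'f')] -- 'g' and 'h' are swallowed
```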
see: why are we perfectly happy with ignoring extra lines at the end? an "else" would serve you well, even if it's just to "assert len(remaining) == 0". but we can't do that, can we? because zip swallows the extras. :/
Also, compare this other example for processing a different file format:
    with open(path) as f:
        for entry in split(f, '\n'):
            process(entry)
It’s pretty obvious what the difference is here: one is reading entries that are groups of 4 lines; the other is reading entries that are groups of arbitrary numbers of lines but separated by blank lines. At most you might need to look at the help for chunkify and split to be absolutely sure they mean what you think they mean. (Although maybe I should have used functions from more-itertools rather than our own custom functions that do effectively the same thing but are kind of weird and probably not so well tested and whose names don’t come up in a web search.)
and... well I'm assuming this one just yields the extras at the end of the file/iterator? (I hope? or maybe it'd also benefit from an "else", even if it was just an assert.) (and yeah this does make me uncomfortable. *please* verify your data! I learned this from rust tbh but I apply it everywhere.)
On Apr 27, 2020, at 17:01, Soni L.
On 2020-04-27 8:37 p.m., Andrew Barnert wrote: On Apr 27, 2020, at 14:38, Soni L.
wrote: [snipping a long unanswered reply]

The explicit case for zip is if you *don't* want it to consume anything after the stop.

Sure, but *when do you want that*? What’s an example of code you want to write that would be more readable, or easier to write, or whatever, if you could work around consuming anything after the stop?

so here's one example, let's say you want to iterate multiple things (like with zip), get a count out of it, as well as partially consume an external iterator without swallowing any extra values from it.
What do you want to do that for? This still isn’t a concrete use case, so it’s still not much more of a rationale than “let’s say you want to intermingle the bits of two 16-bit integers into a 32-bit integer”. Sure, that’s something that’s easy to do in some other languages (it’s the builtin $ operator in INTERCAL) but very hard to do readably or efficiently in Python. If we added a $ operator with a __bigmoney__ protocol and made int.__bigmoney__ implement this operation in C, that would definitely solve the problem. But it’s only worth proposing that solution if anyone actually needs a solution to the problem in the first place. When’s the last time anyone ever needed to efficiently intermingle bits? (Except in INTERCAL, where the language intentionally leaves out useful operators like +, |, and << and even 32-bit literals to force you to write things in clever ways around $ and ~ instead). On top of that, this abstract example you want can already be written today.
it'd look something like this:
    def foo(self, other_things):
        for x in zip(range(sys.maxsize), self.my_things, other_things):
            do_stuff
        else as y:
            return y[0]  # count
using extended for-else + partial-zip. it stops as soon as self.my_things stops. and then the caller can do whatever else it needs with other_things. (altho maybe it's considered unpythonic to reuse iterators like this? I like it tho.)
Here are four ways of doing this today:

    def foo(self, other_things):
        for x in zip(count(1), self.my_things, other_things):
            do_stuff
        return x[0]

    def foo(self, other_things):
        c = count(-1)
        for x in zip(c, self.my_things, other_things):
            do_stuff
        return next(c)

    def foo(self, other_things):
        c = count()
        for x in zip(self.my_things, other_things, c):
            do_stuff
        return next(c)

    def foo(self, other_things):
        c = lastable(count())
        for x in zip(c, self.my_things, other_things):
            do_stuff
        return c.last

So, why do we need another way to do something that’s probably pretty uncommon and can already be done pretty easily? Especially if that new way isn’t more readable or more powerful?
if anything my motivating example is because I wanna do some very unpythonic things.
Then you should have given that example in the first place. Sure, the fact that it’s unpythonic might mean it’s not very convincing, but it doesn’t become more convincing after multiple people have to go back and forth to drag it out of you. All that means is that everyone else has already tuned out and won’t even see your example, so your proposal has basically zero chance instead of whatever chance it should have had. And sometimes unpythonic things really do get into the language—sometimes because they’re just so useful, but more often, because they point to a reason for changing what everyone’s definition of “pythonic” is. Think of the abc module. Or, better, if you can dig up the 3.1-era vs. 3.3-era threads on the original coroutine PEP 3152, you can see how the consensus changed from “wtf, that doesn’t look like Python at all and nobody will ever understand it” to “this is obviously the pythonic way to write reactors (modulo a bunch of bikeshedding)”. That wouldn’t have happened if Greg Ewing had refused to tell anyone that he wanted coroutines to provide a better, if unfamiliar, way to write things like reactors, and instead tried to come up with less-unpythonic-looking but completely useless examples.
That grouping idiom is useful for all kinds of things that _aren’t_ about optimization. Maybe the zip docs aren’t the best place for it (but it’s also in the itertools recipes, which probably is the best place for it), but it’s definitely useful. In fact, I used it less than a week ago. We’ve got this tool that writes a bunch of 4-line files, and someone concatenated a bunch of them together and wrote this horrible code to pull them back apart in another language I won’t mention here, and rather than debug their code, I just rewrote it in Python like this:

    with open(path) as f:
        for entry in chunkify(f, 4):
            process(entry)

I used a function called chunkify because I think that’s a lot easier to understand (especially for colleagues who don’t use Python very often), and we already had it lying around in a utils module, but it’s just implemented as zip(*[iter(it)]*n).
see: why are we perfectly happy with ignoring extra lines at the end?
Because there aren’t any. The file was made by catting together 2022 4-line files, so it’s 8088 lines long. It will always be 8088 lines long. If I really thought that was important to check, surely I’d want to check 8088 rather than just divisible by 4. But I didn’t think it was worth checking either of those—or that the text is pure ASCII, or that the newlines are \n, etc. For a more general purpose script (especially if it had to accept input from potentially stupid or malicious end users and produce useful error responses instead of just punting), I would have checked many of those things and more, but for this script, it wasn’t worth it.
an "else" would serve you well, even if it's just to "assert len(remaining) == 0". but we can't do that, can we? because zip swallows the extras. :/
Sure we can, because the exact same grouper idiom works just as well with zip_equal (which is available in more-itertools, and really easy to write yourself around zip_longest, even if the other thread attempting to add it to the stdlib fails) or zip_longest as with zip. If you understand the grouper idiom, the question “how do I check for a leftover partial group” is just obviously the same question as it is with every other use of zip. So if I wanted to check for exact multiples of 4, I would just use the same code but with zip_equal instead of zip (or wrap that up in a chunkify_equal and use that instead of chunkify), and it would raise a ValueError if there were leftover extras instead of swallowing them. So there’s no need for a new language feature here. An explicit test and assert that adds 2 lines of boilerplate to a 3-line function and obscured the main point of the function would be a worse solution than the one I can already write today. Even if you think Python should be doing more to encourage such checks, your proposal doesn’t help that at all—what you want is something like Serhiy’s proposal in the other thread (to eventually rename zip to zip_shortest and either get rid of plain zip or make it an alias for zip_equal).
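For reference, the “really easy to write yourself” zip_equal might look like this sketch built on zip_longest with a sentinel (not the actual more-itertools implementation, which raises its own ValueError subclass):

```python
from itertools import zip_longest

_MISSING = object()  # sentinel that can't collide with real data

def zip_equal(*iterables):
    # zip_longest pads exhausted inputs with the sentinel; seeing the
    # sentinel anywhere means the inputs had unequal lengths.
    for combo in zip_longest(*iterables, fillvalue=_MISSING):
        if any(item is _MISSING for item in combo):
            raise ValueError("iterables have different lengths")
        yield combo
```

A chunkify_equal would then just be zip_equal(*[iter(it)]*n), and a leftover partial group turns into a ValueError instead of vanishing.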
Also, compare this other example for processing a different file format:

    with open(path) as f:
        for entry in split(f, '\n'):
            process(entry)

It’s pretty obvious what the difference is here: one is reading entries that are groups of 4 lines; the other is reading entries that are groups of arbitrary numbers of lines but separated by blank lines. At most you might need to look at the help for chunkify and split to be absolutely sure they mean what you think they mean. (Although maybe I should have used functions from more-itertools rather than our own custom functions that do effectively the same thing but are kind of weird and probably not so well tested and whose names don’t come up in a web search.)
and... well I'm assuming this one just yields the extras at the end of the file/iterator?
No, because in this case it’s not even theoretically possible for there to be extras. It’s like asking what happens to the extra characters in str.split. By definition, there aren’t any—the last element is everything after the last separator, and there can never be anything left over after everything.
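A minimal sketch of a split() with the behaviour described (hypothetical implementation; the real utility function isn’t shown in the thread). The final group is whatever follows the last separator, so there is never anything left over:

```python
def split(lines, sep='\n'):
    # Group lines into entries delimited by separator lines. Like
    # str.split, the last entry is everything after the last separator,
    # so by construction no lines can be left over at the end.
    group = []
    for line in lines:
        if line == sep:
            yield group
            group = []
        else:
            group.append(line)
    yield group

print(list(split(["a\n", "b\n", "\n", "c\n"])))
# → [['a\n', 'b\n'], ['c\n']]
```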
On 2020-04-28 12:28 a.m., Andrew Barnert wrote:
On Apr 27, 2020, at 17:01, Soni L.
wrote: On 2020-04-27 8:37 p.m., Andrew Barnert wrote: On Apr 27, 2020, at 14:38, Soni L.
wrote: [snipping a long unanswered reply]

The explicit case for zip is if you *don't* want it to consume anything after the stop.

Sure, but *when do you want that*? What’s an example of code you want to write that would be more readable, or easier to write, or whatever, if you could work around consuming anything after the stop?

so here's one example, let's say you want to iterate multiple things (like with zip), get a count out of it, as well as partially consume an external iterator without swallowing any extra values from it.
What do you want to do that for? This still isn’t a concrete use case, so it’s still not much more of a rationale than “let’s say you want to intermingle the bits of two 16-bit integers into a 32-bit integer”. Sure, that’s something that’s easy to do in some other languages (it’s the builtin $ operator in INTERCAL) but very hard to do readably or efficiently in Python. If we added a $ operator with a __bigmoney__ protocol and made int.__bigmoney__ implement this operation in C, that would definitely solve the problem. But it’s only worth proposing that solution if anyone actually needs a solution to the problem in the first place. When’s the last time anyone ever needed to efficiently intermingle bits? (Except in INTERCAL, where the language intentionally leaves out useful operators like +, |, and << and even 32-bit literals to force you to write things in clever ways around $ and ~ instead).
(OT: Z-order curves. they're amazing.)
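(Tangent, but since Z-order came up: that bit-intermingling is just the Morton/Z-order encoding. A naive, clarity-over-speed sketch:)

```python
def interleave_bits(x, y):
    # Morton/Z-order encoding: spread two 16-bit ints across a 32-bit
    # result, x on the even bit positions and y on the odd ones.
    # Naive loop for clarity; fast versions use magic-mask bit tricks.
    out = 0
    for i in range(16):
        out |= ((x >> i) & 1) << (2 * i)
        out |= ((y >> i) & 1) << (2 * i + 1)
    return out

print(hex(interleave_bits(0xFFFF, 0x0000)))  # → 0x55555555
```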
On top of that, this abstract example you want can already be written today.
it'd look something like this:
    def foo(self, other_things):
        for x in zip(range(sys.maxsize), self.my_things, other_things):
            do_stuff
        else as y:
            return y[0]  # count
using extended for-else + partial-zip. it stops as soon as self.my_things stops. and then the caller can do whatever else it needs with other_things. (altho maybe it's considered unpythonic to reuse iterators like this? I like it tho.)
Here are four ways of doing this today:
    def foo(self, other_things):
        for x in zip(count(1), self.my_things, other_things):
            do_stuff
        return x[0]
    def foo(self, other_things):
        c = count(-1)
        for x in zip(c, self.my_things, other_things):
            do_stuff
        return next(c)
    def foo(self, other_things):
        c = count()
        for x in zip(self.my_things, other_things, c):
            do_stuff
        return next(c)
    def foo(self, other_things):
        c = lastable(count())
        for x in zip(c, self.my_things, other_things):
            do_stuff
        return c.last
So, why do we need another way to do something that’s probably pretty uncommon and can already be done pretty easily? Especially if that new way isn’t more readable or more powerful?
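The lastable used in the last variant isn’t a stdlib or itertools helper; presumably it’s a small wrapper along these lines, remembering the most recent value pulled through it:

```python
from itertools import count

class lastable:
    # Wrap an iterator and remember the most recent value it produced.
    # Because zip() pulls one extra value from its first argument before
    # noticing a later argument is exhausted, .last ends up holding the
    # number of completed loop iterations when used as below.
    def __init__(self, iterable):
        self._it = iter(iterable)

    def __iter__(self):
        return self

    def __next__(self):
        self.last = next(self._it)
        return self.last

c = lastable(count())
for x in zip(c, "ab", "xyz"):
    pass
print(c.last)  # → 2, i.e. the number of tuples the loop produced
```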
the only one with equivalent semantics is the last one.
if anything my motivating example is because I wanna do some very unpythonic things.
Then you should have given that example in the first place.
Sure, the fact that it’s unpythonic might mean it’s not very convincing, but it doesn’t become more convincing after multiple people have to go back and forth to drag it out of you. All that means is that everyone else has already tuned out and won’t even see your example, so your proposal has basically zero chance instead of whatever chance it should have had.
And sometimes unpythonic things really do get into the language—sometimes because they’re just so useful, but more often, because they point to a reason for changing what everyone’s definition of “pythonic” is. Think of the abc module. Or, better, if you can dig up the 3.1-era vs. 3.3-era threads on the original coroutine PEP 3152, you can see how the consensus changed from “wtf, that doesn’t look like Python at all and nobody will ever understand it” to “this is obviously the pythonic way to write reactors (modulo a bunch of bikeshedding)”. That wouldn’t have happened if Greg Ewing had refused to tell anyone that he wanted coroutines to provide a better, if unfamiliar, way to write things like reactors, and instead tried to come up with less-unpythonic-looking but completely useless examples.
tbh my particular case doesn't make a ton of practical sense. I have config files and there may be errors opening or deserializing them, and I have a system to manage configs and overrides. which means you can have multiple config files, and you may wanna log errors. you can also use a config manager as a config file in another config manager, which is where the error logging gets a bit weirder. I'm currently returning lists of errors, but another option would be to yield the errors instead. but, I'm actually not sure what the best approach here is. so yeah. I can't *use* that motivating example, because if I did, everyone would dismiss me as crazy. (which I am, but please don't dismiss me based on that :/)
That grouping idiom is useful for all kinds of things that _aren’t_ about optimization. Maybe the zip docs aren’t the best place for it (but it’s also in the itertools recipes, which probably is the best place for it), but it’s definitely useful. In fact, I used it less than a week ago. We’ve got this tool that writes a bunch of 4-line files, and someone concatenated a bunch of them together and wrote this horrible code to pull them back apart in another language I won’t mention here, and rather than debug their code, I just rewrote it in Python like this:

    with open(path) as f:
        for entry in chunkify(f, 4):
            process(entry)

I used a function called chunkify because I think that’s a lot easier to understand (especially for colleagues who don’t use Python very often), and we already had it lying around in a utils module, but it’s just implemented as zip(*[iter(it)]*n).
see: why are we perfectly happy with ignoring extra lines at the end?
Because there aren’t any. The file was made by catting together 2022 4-line files, so it’s 8088 lines long. It will always be 8088 lines long. If I really thought that was important to check, surely I’d want to check 8088 rather than just divisible by 4. But I didn’t think it was worth checking either of those—or that the text is pure ASCII, or that the newlines are \n, etc. For a more general purpose script (especially if it had to accept input from potentially stupid or malicious end users and produce useful error responses instead of just punting), I would have checked many of those things and more, but for this script, it wasn’t worth it.
that's what assert is for - making assumptions that you know are correct now, but might not remain so in the future!
an "else" would serve you well, even if it's just to "assert len(remaining) == 0". but we can't do that, can we? because zip swallows the extras. :/
Sure we can, because the exact same grouper idiom works just as well with zip_equal (which is available in more-itertools, and really easy to write yourself around zip_longest, even if the other thread attempting to add it to the stdlib fails) or zip_longest as with zip. If you understand the grouper idiom, the question “how do I check for a leftover partial group” is just obviously the same question as it is with every other use of zip. So if I wanted to check for exact multiples of 4, I would just use the same code but with zip_equal instead of zip (or wrap that up in a chunkify_equal and use that instead of chunkify), and it would raise a ValueError if there were leftover extras instead of swallowing them.
So there’s no need for a new language feature here. An explicit test and assert that adds 2 lines of boilerplate to a 3-line function and obscured the main point of the function would be a worse solution than the one I can already write today.
Even if you think Python should be doing more to encourage such checks, your proposal doesn’t help that at all—what you want is something like Serhiy’s proposal in the other thread (to eventually rename zip to zip_shortest and either get rid of plain zip or make it an alias for zip_equal).
... why not? I know assert is discouraged by many, but I wouldn't say enabling ppl to do these checks doesn't help ppl do these checks...? unless I misunderstand what you mean by this?
Also, compare this other example for processing a different file format:

    with open(path) as f:
        for entry in split(f, '\n'):
            process(entry)

It’s pretty obvious what the difference is here: one is reading entries that are groups of 4 lines; the other is reading entries that are groups of arbitrary numbers of lines but separated by blank lines. At most you might need to look at the help for chunkify and split to be absolutely sure they mean what you think they mean. (Although maybe I should have used functions from more-itertools rather than our own custom functions that do effectively the same thing but are kind of weird and probably not so well tested and whose names don’t come up in a web search.)
and... well I'm assuming this one just yields the extras at the end of the file/iterator?
No, because in this case it’s not even theoretically possible for there to be extras. It’s like asking what happens to the extra characters in str.split. By definition, there aren’t any—the last element is everything after the last separator, and there can never be anything left over after everything.
that's what I was asking :p a naive implementation would just collect things and yield on separator. if yours also yields the extras on StopIteration then it's fine.
On Apr 27, 2020, at 20:48, Soni L.
Here are four ways of doing this today:
…
So, why do we need another way to do something that’s probably pretty uncommon and can already be done pretty easily? Especially if that new way isn’t more readable or more powerful?
the only one with equivalent semantics is the last one.
I won’t argue about whether two functions that give the exact same results in every case but get there in different ways are “equivalent” or not, since one is already good enough. If you agree that there is obvious code that works in Python 3.8 (and even in Python 2.7, for that matter) to get the semantics you want, why should we add a new language feature that gives you a less readable, more verbose, and more complicated way to do the same thing?
tbh my particular case doesn't make a ton of practical sense.
That’s hardly a good argument for your proposal. Do you actually want the things you propose to be added to the language, or even to be seriously considered? If not, why are you proposing them?
see: why are we perfectly happy with ignoring extra lines at the end?
Because there aren’t any. The file was made by catting together 2022 4-line files, so it’s 8088 lines long. It will always be 8088 lines long. If I really thought that was important to check, surely I’d want to check 8088 rather than just divisible by 4. But I didn’t think it was worth checking either of those—or that the text is pure ASCII, or that the newlines are \n, etc. For a more general purpose script (especially if it had to accept input from potentially stupid or malicious end users and produce useful error responses instead of just punting), I would have checked many of those things and more, but for this script, it wasn’t worth it.
that's what assert is for - making assumptions that you know are correct now, but might not remain so in the future!
Would you want to read, or maintain, code like this:

    import sys

    s = "spam"
    assert isinstance(s, str)
    assert isinstance(type(s), type)
    assert len(s) == 4
    assert len(set(s)) == len(s)
    for c in s:
        assert type(c) == type(s)
        assert c is not None
        assert len(c) == 1
        assert s.count(c) == 1
        assert 0 <= ord(c) < 0x110000
        assert len(c.encode()) <= 4
        assert not sys.stdout.closed
        print(f"{c}...")
        if sys.implementation.name == "cpython":
            assert chr(ord(c)) is c
    assert c == s[-1]
    assert s == "spam"

I’m assuming all of those things are true, and hundreds more (from the fact that s was unbound before the assignment to the fact that nobody has modified the interned 0 value to mean 1), but that doesn’t mean they’re all worth testing. Trying to test absolutely everything just means you’re more likely to forget to test one of the important things, and more likely to miss it if you do forget. (And that’s even assuming all of your tests are correct, which they almost certainly won’t be if you’re trying to test everything you can imagine. So you’ll also waste time debugging useless tests that could have been spent verifying, debugging or improving the useful tests and/or the actual functionality.)

On top of that, if my input file doesn’t have 8088 lines, that’s almost certainly not a bug in my code, but either user error (I put the wrong file at that path) or corrupted data (I accidentally truncated the file). So testing it with an assert would actually be misleading myself; it should be something like a ValueError. Even if you never programmatically handle the error, having the right error makes a big difference to ease of debugging.
Even if you think Python should be doing more to encourage such checks, your proposal doesn’t help that at all—what you want is something like Serhiy’s proposal in the other thread (to eventually rename zip to zip_shortest and either get rid of plain zip or make it an alias for zip_equal).
... why not? I know assert is discouraged by many, but I wouldn't say enabling ppl to do these checks doesn't help ppl do these checks...? unless I misunderstand what you mean by this?
Because people already are enabled to check, and they’re just choosing not to. Giving them a harder and less discoverable way isn’t going to change that. Anyone who’s decided it’s not worth using zip_equal instead of zip is not going to think it’s worth adding an else and a test to the loop around that zip.
On 4/27/2020 11:47 PM, Soni L. wrote: [snip]
tbh my particular case doesn't make a ton of practical sense. I have config files and there may be errors opening or deserializing them, and I have a system to manage configs and overrides. which means you can have multiple config files, and you may wanna log errors. you can also use a config manager as a config file in another config manager, which is where the error logging gets a bit weirder. I'm currently returning lists of errors, but another option would be to yield the errors instead. but, I'm actually not sure what the best approach here is. so yeah. I can't *use* that motivating example, because if I did, everyone would dismiss me as crazy. (which I am, but please don't dismiss me based on that :/)
Maybe you could start by actually writing the code, and then demonstrate how your idea would make the code easier to read or understand, or less error prone, or whatever other improvement it would bring.
On 2020-04-28 7:50 a.m., Edwin Zimmerman wrote:
On 4/27/2020 11:47 PM, Soni L. wrote: [snip]
Maybe you could start by actually writing the code, and then demonstrate how your idea would make the code easier to read or understand, or less error prone, or whatever other improvement it would bring.
that thing about wrapping things from other generators and returning the number of generators I messed with? yeah. that's what I wanted to do. it doesn't do any of those things, it's just fun.
On April 28, 2020 9:38 AM Soni L. wrote: [snip]
that thing about wrapping things from other generators and returning the number of generators I messed with? yeah. that's what I wanted to do. it doesn't do any of those things, it's just fun.
I think I will stop replying to this thread, since we aren't getting anywhere. Either I'm a poor communicator or someone else is a poor listener. Either way, my attempts to constructively approach this thread are failing. --Edwin
On 2020-04-28 12:32 p.m., Edwin Zimmerman wrote:
[snip]
I think I will stop replying to this thread, since we aren't getting anywhere. Either I'm a poor communicator or someone else is a poor listener. Either way, my attempts to constructively approach this thread are failing. --Edwin
this code:

    def foo(self):
        for x in self.things:
            for y in x.foo():
                yield Wrap(y)
            else with z:
                yield z
        return len(self.things)

this is what it'd look like. and it'd be fun. the point of it is to be fun. it's a nuisance to actually use the results but if you're just printing them out (which is what I'd do) it's fine.

if you don't want me telling you, maybe don't ask.
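For comparison, here is roughly what that toy does when spelled with today's tools, driving the inner generators by hand so their return values survive the wrapping. Wrap and the inner generator are stand-ins, and send() pass-through (which the proposal would additionally preserve) is ignored:

```python
class Wrap:
    def __init__(self, value):
        self.value = value

def inner():
    # stand-in for x.foo(): a generator with a return value
    yield 1
    yield 2
    return "done"

def foo(things):
    # Drive each inner generator manually: wrap every yielded value,
    # then yield the generator's return value (caught from the
    # StopIteration), and finally return how many generators ran.
    for g in things:
        it = iter(g)
        while True:
            try:
                y = next(it)
            except StopIteration as exc:
                yield exc.value  # what 'else with z: yield z' would do
                break
            yield Wrap(y)
    return len(things)

out = list(foo([inner()]))
print([v.value if isinstance(v, Wrap) else v for v in out])
# → [1, 2, 'done']
```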
On Wed, Apr 29, 2020 at 1:50 AM Soni L.
[snip]
this code:
    def foo(self):
        for x in self.things:
            for y in x.foo():
                yield Wrap(y)
            else with z:
                yield z
        return len(self.things)
this is what it'd look like. and it'd be fun. the point of it is to be fun. it's a nuisance to actually use the results but if you're just printing them out (which is what I'd do) it's fine.
if you don't want me telling you, maybe don't ask.
I suggest forking CPython and implementing the feature. If the entire value of it is so you can write "fun" code, then there's no reason to have it in the core language. ChrisA
On Apr 28, 2020, at 09:18, Chris Angelico
I suggest forking CPython and implementing the feature.
I’d suggest trying MacroPy first. There’s no way to get the desired syntax with macros, but at least at first glance it seems like you should be able to get the desired semantics with something that’s only kind of ugly and clumsy, rather than totally hideous. And if so, that’s usually good enough for playing around with fun ideas to see where they can lead, and a lot less work. Plus, playing with MacroPy is actually fun in itself; playing with the CPython parser is kind of the opposite of fun. :)
participants (4):
- Andrew Barnert
- Chris Angelico
- Edwin Zimmerman
- Soni L.