On Wed, Oct 18, 2017 at 10:21 PM, Koos Zevenhoven <k7hoven@gmail.com> wrote:
On Wed, Oct 18, 2017 at 10:13 PM, Serhiy Storchaka <storchaka@gmail.com> wrote:
18.10.17 17:48, Nick Coghlan wrote:
1. It will make those loops slower, due to the extra overhead of checking for signals (even the opcode eval loop includes all sorts of tricks to avoid actually checking for new signals, since doing so is relatively slow)
2. It will make those loops harder to maintain, since the high cost of checking for signals means the existing flat loops will need to be replaced with nested ones to reduce the per-iteration cost of the more expensive checks (see the sketch below)
3. It means making the signal checking even harder to reason about than it already is, since even C implemented methods that avoid invoking arbitrary Python code could now still end up checking for signals
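
To make point 2 concrete, the nested-loop restructuring would look roughly like the following. This is only a sketch under the assumption of a batch size of 0x10000, not actual CPython code; drain_with_checks and the processing step are made up for illustration:

    #include <Python.h>

    /* Sketch of the nested-loop restructuring: the inner loop runs a
     * fixed batch with no signal check; the outer loop checks once per
     * batch.  Illustrative only, not actual CPython code. */
    #define BATCH 0x10000

    static int
    drain_with_checks(PyObject *it)
    {
        for (;;) {
            for (long i = 0; i < BATCH; i++) {
                PyObject *item = PyIter_Next(it);
                if (item == NULL)
                    return PyErr_Occurred() ? -1 : 0;  /* error, or exhausted */
                /* ... process item ... */
                Py_DECREF(item);
            }
            if (PyErr_CheckSignals() < 0)
                return -1;  /* a signal handler raised, e.g. KeyboardInterrupt */
        }
    }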

I have implemented signal checking for itertools iterators. [1] The overhead is insignificant because signals are checked only once every 0x10000 items (100-4000 times/sec). The consuming loops need no changes because the checks happen on the producer's side.

[1] https://bugs.python.org/issue31815
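
For reference, the producer-side technique amounts to keeping a counter in the iterator's C struct and letting its low bits gate the signal check. A minimal sketch of such a tp_iternext follows; the names (myiterobject, produce_next_item) are hypothetical and this is not the actual patch:

    #include <Python.h>

    /* Hypothetical iterator state; only the counter matters here. */
    typedef struct {
        PyObject_HEAD
        unsigned long counter;
        /* ... other per-iterator state ... */
    } myiterobject;

    static PyObject *
    myiter_next(myiterobject *it)
    {
        /* Poll for signals only once every 0x10000 items, so the
         * amortized per-item cost is negligible. */
        if ((++it->counter & 0xFFFF) == 0 && PyErr_CheckSignals() < 0)
            return NULL;  /* e.g. KeyboardInterrupt raised by a handler */
        return produce_next_item(it);  /* hypothetical production step */
    }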


Nice! Though I'd really like a general solution that other code can easily adopt, even third-party extension libraries.


By the way, now that I actually read the BPO issue, it looks like the benchmarks were for 0x1000 (i.e. 2**12, every 4096 items)? And why is everyone using powers of two anyway?

Anyway, I still don't think infinite iterators are the most common case where this problem occurs. Solving it in the most common consuming loops would make it possible to break out of many long-running loops, regardless of which iterable type (if any) is being consumed. So I'm still asking which side should solve the problem: the producers or the consumers.
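
To illustrate the consumer-side alternative, here is a sketch of a generic C-level draining loop using the same masking trick; consume_all and the processing step are made up for illustration, not actual CPython code:

    #include <Python.h>

    /* Sketch of a consumer-side check: drain any iterable, polling for
     * signals once every 0x10000 items.  Names are illustrative. */
    static PyObject *
    consume_all(PyObject *iterable)
    {
        PyObject *it = PyObject_GetIter(iterable);
        if (it == NULL)
            return NULL;
        unsigned long count = 0;
        PyObject *item;
        while ((item = PyIter_Next(it)) != NULL) {
            /* ... process item ... */
            Py_DECREF(item);
            if ((++count & 0xFFFF) == 0 && PyErr_CheckSignals() < 0)
                break;  /* error set by the signal handler */
        }
        Py_DECREF(it);
        if (PyErr_Occurred())
            return NULL;
        Py_RETURN_NONE;
    }

A check like this would interrupt any long loop driven by that consumer, no matter which iterable feeds it.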

-- Koos


--
+ Koos Zevenhoven + http://twitter.com/k7hoven +