> So the Python version is about 13 times slower, and 10 million iterations
> (quite plausible) adds about 2 seconds.
Adds two seconds to *what*, though? That's why I care more about
real-world benchmarks than micro-benchmarks. In real-world code, you are
going to be processing the data somehow. Adds two seconds to an hour's
processing time? I couldn't care less. Adds two seconds to a second? Now
I'm interested.
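For what it's worth, here is roughly how one might reproduce that kind of
micro-measurement (a sketch only; py_zip is a hypothetical stand-in for a
pure-Python zip, and the exact numbers will vary by machine):

from timeit import timeit

def py_zip(*its):
    # stand-in for a pure-Python zip, to expose per-iteration overhead
    its = [iter(it) for it in its]
    while True:
        row = []
        for it in its:
            try:
                row.append(next(it))
            except StopIteration:
                return
        yield tuple(row)

n = 10_000_000
xs = range(n)
c_time = timeit(lambda: sum(1 for _ in zip(xs, xs)), number=1)
py_time = timeit(lambda: sum(1 for _ in py_zip(xs, xs)), number=1)
print(f"builtin: {c_time:.2f}s  pure Python: {py_time:.2f}s")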
It seems to me that a Python implementation of zip_equal() shouldn't do the check inside the loop, the way the version I've seen does (from more-itertools, I guess). More obvious is the following, which carries only a small constant speed penalty.
_sentinel = object()
class ZipLengthError(Exception): pass

def zip_equal(*its):
    its = [iter(it) for it in its]  # accept arbitrary iterables, not just iterators
    yield from zip(*its)
    # lengths were unequal iff some iterator still yields a value after zip() stops
    if any(next(it, _sentinel) is not _sentinel for it in its):
        raise ZipLengthError
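For instance, as a quick sanity check of the sketch above:

list(zip_equal([1, 2, 3], "abc"))   # [(1, 'a'), (2, 'b'), (3, 'c')]
list(zip_equal("ab", [1, 2, 3]))    # raises ZipLengthError

One honest caveat: if an *earlier* argument is longer by exactly one,
zip() has already consumed its extra element, so something like
zip_equal([1, 2, 3], "ab") slips through undetected; the in-loop check
does not have that blind spot.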
I still like zip_strict() better as a name, but whatever. And I don't care what the exception is, or what the sentinel is called.
... and yes, I realize you can quibble about exactly how many of the multiple iterators passed in might consume an extra element, which is possibly more with this code. But honestly, if your use case is "unequal lengths are an error," then you simply do not care about that. *MY* use, to the contrary, seems more like Steven's. I.e., I fairly often do something like zip(a_few_things, lots_available): not necessarily infinite, but where I expect "more than enough" of the longer iterator (see the sketch below).
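A minimal sketch of that pattern (the names are just illustrative):

from itertools import count

a_few_things = ["a", "b", "c"]
lots_available = count(1)   # effectively unbounded

# plain zip() truncating at the shorter input is exactly what I want here
pairs = list(zip(a_few_things, lots_available))
# [('a', 1), ('b', 2), ('c', 3)]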