On May 15, 2020, at 21:35, Steven D'Aprano <email@example.com> wrote:
On Fri, May 15, 2020 at 05:44:59PM -0700, Andrew Barnert wrote:
Once you go with a separate view-creating function (or type), do we even need the dunder?
Possibly not. But the beauty of a protocol is that it can work even if the object doesn't define a `__view__` dunder.
Sure, but if there’s no good reason for any class to provide a __view__ dunder, it’s better not to call one.
Which is why I asked—in the message you’re replying to—a bunch of questions to try to determine whether there’s any reason for a class to want to provide an override. I’m not going to repeat the whole thing here; it’s all still in that same message you replied to.
- If the object defines `__view__`, call it; this allows objects to return an optimized view, if it makes sense to them; e.g. bytes might simply return a memoryview.
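The protocol being proposed can be sketched in a few lines. This is purely illustrative: the names `view` and `SequenceView` are made up for this sketch, not an actual or proposed stdlib API.

```python
from collections.abc import Sequence

class SequenceView(Sequence):
    """Generic read-only view over any sequence (illustrative fallback)."""
    def __init__(self, seq):
        self._seq = seq
    def __getitem__(self, i):
        return self._seq[i]
    def __len__(self):
        return len(self._seq)

def view(obj):
    # The protocol: prefer the object's own __view__ hook if it has one,
    # so e.g. bytes could hand back something optimized; otherwise fall
    # back to a generic wrapper.
    hook = getattr(type(obj), "__view__", None)
    if hook is not None:
        return hook(obj)
    return SequenceView(obj)
```

The question in this thread is whether the `getattr` branch earns its keep, or whether the fallback alone suffices.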
Not if memoryview doesn’t have the right API, as we discussed earlier in this thread.
But more importantly, if it’s only builtins that will likely ever need an optimization, we can do that inside the functions. That’s exactly what we do in hundreds of places already. Even the one optimization that’s exposed as part of the public C API, PySequence_Fast, isn’t hookable, much less all the functions that fast-path directly on the array in list/tuple or on the split hash table in set/dict/dict_keys and so on. It seems to work well enough in practice, and it’s simpler, and faster for the builtins, and it means we don’t have hundreds of extra dunders (and type slots in CPython) that will almost never be used, and PyPy doesn’t need to write hooks that are actually pessimizations just because they’re optimizations in CPython, and so on.
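The "optimize inside the function" pattern looks like this in Python terms; the function name here is made up, and the point is only the shape: an exact-type check selecting a fast path, with no hookable dunder involved.

```python
def count_matches(seq, value):
    # Fast path: for the concrete builtins we know about, iterate
    # directly (analogous to CPython fast-pathing on the internal
    # array of a list/tuple, as PySequence_Fast does).
    if type(seq) in (list, tuple):
        return sum(1 for item in seq if item == value)
    # Generic path: go through the sequence protocol index by index.
    n = 0
    for i in range(len(seq)):
        if seq[i] == value:
            n += 1
    return n
```

Because the check is internal, third-party types pay nothing for it and PyPy is free to ignore it entirely.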
Of course there might be a reason that doesn’t apply in this case (there obviously is a good reason for non-builtin types to optimize __contains__, for example), but “there might be” isn’t an answer to YAGNI. Especially since we can add the dunder later if someone finds a need for it.
And honestly, I’m not sure even list and tuple are worth optimizing here. After all, you can’t do the index arithmetic and the call to sq_item significantly faster than a generic C function can; it only helps if you can avoid the call to sq_item entirely, and I think we can’t do that in any of the most useful cases (at least not without patching up a whole lot more code than we want). But I’ll try it and see if I’m wrong.
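To make the cost argument concrete, here is a minimal sketch of what a generic slice view does per item: some index arithmetic, then one call through the item-getting slot (sq_item at the C level, `__getitem__` here). The class name is invented for this sketch.

```python
class SliceView:
    """Illustrative read-only view of seq[start:stop:step], step > 0."""
    def __init__(self, seq, start, stop, step=1):
        self._seq, self._start, self._stop, self._step = seq, start, stop, step
    def __len__(self):
        return max(0, (self._stop - self._start + self._step - 1) // self._step)
    def __getitem__(self, i):
        if i < 0:
            i += len(self)
        if not 0 <= i < len(self):
            raise IndexError(i)
        # The arithmetic above is cheap; the unavoidable cost is this
        # one indexing call on the underlying object. A per-type
        # optimization only pays off if it can skip this call.
        return self._seq[self._start + i * self._step]
```

Since the arithmetic is trivial next to the indexing call, a specialized list/tuple view would have to bypass the call itself to win, which is the doubt being expressed above.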