On Thu, 12 Mar 2020 at 21:22, Chris Angelico email@example.com wrote:
They actually ARE already discarded
O____O You're right. So *how* could juliantaylor have measured a 2x speedup for large ndarrays? He also added benchmarks, which are still in the numpy code. Furthermore, he stated that what he did is also done by Python for strings. Maybe it's an old "feature"? It seems this happens for _every_ object, after all.
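For what it's worth, both optimizations hinge on the same CPython-specific observation: a temporary produced mid-expression has no other references, so its buffer can be reused in place. Here's a minimal sketch of that refcount check using sys.getrefcount (the helper name `refs` is mine, and the absolute counts are implementation details, so only the difference between the two calls is compared):

```python
import sys

def refs(obj):
    # sys.getrefcount also counts its own temporary argument reference,
    # so the absolute number is implementation-dependent; only the
    # *difference* between otherwise-identical calls is meaningful.
    return sys.getrefcount(obj)

named = [1, 2, 3]
# A freshly created list reaching refs() as a temporary carries exactly
# one reference fewer than the same list also reachable via 'named' --
# that "no one else can see this object" condition is what elision-style
# optimizations test for before reusing the buffer.
assert refs(named) == refs([1, 2, 3]) + 1
```

This is only an illustration of the condition, of course, not of the elision itself, which happens in C inside ceval (for str) and inside numpy's ufunc machinery.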
I tried to install numpy version 1.14.0rc1, the last version before the patch, but it's not compatible with Python 3.8, since PyThreadState was changed in Python 3.7 (hey, what about "no more breaking changes"? :-P)