
On Thu, 2022-11-17 at 17:48 -0800, Stephan Hoyer wrote:
On Thu, Nov 17, 2022 at 5:29 PM Scott Ransom <sransom@nrao.edu> wrote:
<snip>
Hi Scott,
Thanks for sharing your feedback!
Would you or some of your colleagues be open to helping maintain a library that adds the 80-bit extended precision dtype into NumPy? This would be a variation of Ralf's "option A."
As you know, I am hesitant, mainly because we don't clearly know how many users are affected. To explain the hesitation:

* This change is "critical": for affected users, their code simply breaks without even a clear path to fix it.
* The number of affected users is rather limited, but at this time we do not know that it is very, very small :(. (Not counting users who could modify their code to use doubles.)

Given that, it falls into a category that, to me, makes the break big enough to be a tough choice. We don't normally break users in a critical way, and when we do, it is either for an important bug fix elsewhere or because we suspect there are so few users that we could practically help each of them individually to find a solution.

Fortunately, I do think either (A) or (C) would make the situation much simpler:

(A) is great because it should give an alternative to many (maybe not all) users. For them the change would have a big impact, but hopefully not a critical one. Considering this discussion, we might need the longdouble DType and not just a quad-precision one to actually achieve that.

(C) would effectively reduce the number of affected users to an exceedingly small group (I think).

So to me personally, given how tricky continuing support seems to be, either (A) or (C) seems sufficient to make this a pretty clear-cut decision (it is still a large compatibility change that should be formally accepted as a brief NEP, similar to yanking financial). Without any "mitigation" it is a tough decision to make. Maybe there is no decision to make, because longdouble is a house of cards bound to collapse unless dedicated maintainers speak up...

Cheers,

Sebastian

PS: One problem we may have is API/ABI compatibility when we yank things out. I do think NumPy 2.0 is on the horizon, so that should help.
ABI incompatibility should (IMO) be avoided at almost all cost. That means we may need a way to make sure that if you compile for NumPy 2.0 but against an old NumPy version, a compatibility header is in place which ensures that e.g. `npy_longdouble` is not even defined (or that its use is at least carefully curated). So I do suspect and hope that we can pull off this (and other such changes) as an "API break", but in a way that allows compiling once and being ABI-compatible with both old and new NumPy.
Best, Stephan
Scott
NANOGrav Chair
www.nanograv.org
--
Scott M. Ransom
NRAO, 520 Edgemont Rd., Charlottesville, VA 22903 USA
Phone: (434) 296-0320
email: sransom@nrao.edu
GPG Fingerprint: A40A 94F2 3F48 4136 3AC4 9598 92D5 25CB 22A6 7B65

_______________________________________________
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-leave@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: shoyer@gmail.com