Hi Scott - Thanks for providing input! Do you have a minimal example that shows the kind of calculations you would like to run faster with extended precision? Just so we are clear on the goal for this use case.
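For instance, a rough sketch of the sort of thing I have in mind (the numbers are purely illustrative, not taken from PINT, and the long double spacing assumes an x86 80-bit long double; on platforms where long double is just double, the two spacings would match):

import numpy as np

# A ~20-year span of time differences in seconds, where ~1 ns resolution is needed.
span64 = np.float64(20 * 365.25 * 86400)     # ~6.3e8 s
print(np.spacing(span64))                    # ~1.2e-7 s: float64 cannot resolve 1 ns here

span80 = np.longdouble(20 * 365.25 * 86400)
print(np.spacing(span80))                    # ~5.8e-11 s: 80-bit extended precision can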

Would you or some of your colleagues be open to helping maintain a library that adds an 80-bit extended-precision dtype to NumPy? This would be a variation of Ralf's "option A."
The NumPy community has resources for onboarding new contributors, and I'm happy to provide additional help.


Ralf - Are there other use cases that you have already identified?

Best,
Mike



Mike McCarty
Software Engineering Manager
NVIDIA

From: Scott Ransom <sransom@nrao.edu>
Sent: Thursday, November 17, 2022 9:04 PM
To: numpy-discussion@python.org <numpy-discussion@python.org>
Subject: [Numpy-discussion] Re: status of long double support and what to do about it
 


On 11/17/22 8:53 PM, Charles R Harris wrote:
>
>
> On Thu, Nov 17, 2022 at 6:30 PM Scott Ransom <sransom@nrao.edu> wrote:
>
>
>
>     On 11/17/22 7:13 PM, Charles R Harris wrote:
>      >
>      >
>      > On Thu, Nov 17, 2022 at 3:15 PM Ralf Gommers <ralf.gommers@gmail.com> wrote:
>      >
>      >     Hi all,
>      >
>      >     We have to do something about long double support. This is something I wanted to propose a long
>      >     time ago already, and moving build systems has resurfaced the pain yet again.
>      >
>      >     This is not a full proposal yet, but the start of a discussion and gradual plan of attack.
>     <snip>
>      > I would agree that extended precision is pretty useless, IIRC, it was mostly intended as an accurate
>      > way to produce double precision results. That idea was eventually dropped as not very useful. I'd
>      > happily do away with subnormal doubles as well, they were another not very useful idea. And strictly
>      > speaking, we should not support IBM double-double either, it is not in the IEEE standard.
>      >
>      > That said, I would like to have a quad precision type. That precision is useful for some things, and
>      > I have a dream that someday it can be used for a time type. Unfortunately, last time I looked
>      > around, none of the available implementations had a NumPy compatible license.
>      >
>      > The tricky thing here is to not break downstream projects, but that may be unavoidable. I suspect
>      > the fallout will not be that bad.
>      >
>      > Chuck
>
>     A quick response from one of the leaders of a team that requires 80-bit extended precision for
>     astronomical work...
>
>     "extended precision is pretty useless" unless you need it. And the high-precision pulsar timing
>     community needs it. Standard double precision (64-bit) values do not contain enough precision for us
>     to pass relative astronomical times via a single float without extended precision (the precision
>     ends up being at the ~1 microsec level over decades of time differences, and we need it at the
>     ~1-10ns level) nor can we store the measured spin frequencies (or do calculations on them) of our
>     millisecond pulsars with enough precision. Those spin frequencies can have 16-17 digits of base-10
>     precision (i.e. we measure them to that precision). This is why we use 80-bit floats (usually via
>     Linux, but also on non-M1 Mac hardware if you use the correct compilers) extensively.
>
>     Numpy is a key component of the PINT software to do high-precision pulsar timing, and we use it
>     partly *because* it has long double support (with 80-bit extended precision):
>     https://github.com/nanograv/PINT
>     And see the published paper here, particularly Sec 3.3.1 and footnote #42:
>     https://ui.adsabs.harvard.edu/abs/2021ApJ...911...45L/abstract
>
>     Going to software quad precision would certainly work, but it would definitely make things much
>     slower for our matrix and vector math.
>
>     We would definitely love to see a solution for this that allows us to get the extra precision we
>     need on other platforms besides Intel/AMD64+Linux (primarily), but giving up extended precision on
>     those platforms would *definitely* hurt. I can tell you that the pulsar community would definitely
>     be against option "B". And I suspect that there are other users out there as well.
>
>     Scott
>     NANOGrav Chair
>     http://www.nanograv.org/
>
>
>
> Pulsar timing is one reason I wanted a quad precision type. I thought Astropy was using a
> self-implemented double-double type to work around that?

That is correct. For non-compute-intensive time calculations, Astropy has a Time object that
internally uses two 64-bit floats. We use it, and it works great for high-precision timekeeping over
astronomical timescales.
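Roughly like this, if I remember the interface right (a quick sketch, not an excerpt from our code):

from astropy.time import Time

t = Time("2004-09-16T00:00:00", format="isot", scale="utc")
# The epoch is carried internally as a pair of 64-bit Julian dates:
print(t.jd1, t.jd2)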

*However*, it ain't fast. So you can't do fast matrix/vector math on time differences where the
precision you need exceeds a single 64-bit float. That's exactly where we are with extended precision
for our pulsar timing work.
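To make the cost concrete, here is the kind of error-free-transformation trick that two-float
("double-double") arithmetic is built on (just a sketch, not Astropy's actual implementation). A
single addition already expands to roughly a dozen float64 operations, versus one hardware
instruction for an 80-bit long double:

def two_sum(a, b):
    # Knuth's error-free transformation: s + err == a + b exactly
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

def dd_add(x_hi, x_lo, y_hi, y_lo):
    # Add two double-double numbers (each a high/low float64 pair).
    s, e = two_sum(x_hi, y_hi)
    e += x_lo + y_lo
    return two_sum(s, e)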

Scott

--
Scott M. Ransom            Address:  NRAO
Phone:  (434) 296-0320               520 Edgemont Rd.
email:  sransom@nrao.edu             Charlottesville, VA 22903 USA
GPG Fingerprint: A40A 94F2 3F48 4136 3AC4  9598 92D5 25CB 22A6 7B65
_______________________________________________
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-leave@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: mmccarty@nvidia.com