Hi,
I've got a project in which it turns out we need much higher precision
than even __float128, and after playing around with a few alternatives,
MPFR seems to be the highest-performing option. So I've started
writing a Cythonized class, mpfr_array, which provides array-like
functionality but with mpfr_t as a "primitive" type.
While this is doable, it feels very much like reinventing the wheel, as
I'm basically rewriting part of numpy's functionality. So I was
wondering, since I think this is a fairly standard use case: is there any
interest in adding mpfr_t as a "primitive" dtype to numpy? I know one
can define new dtypes, but I don't want it to be a Python object, since
then there would be a large Python overhead for all operations (e.g. dot
products would happen in Python, not C). mpfr_t is a very natural dtype
to add, as it's fast, C-based, and supports arbitrary precision.
I have to admit complete ignorance of numpy's internals, but as I'm
writing my own version of such a class, I would be happy to work with
anyone more versed in numpy than myself to extend numpy with built-in
mpfr_t support.
On a related note, if this were done, would it automatically work with
functionality such as numpy.linalg.inv(), etc.? In principle such
functions could have been written with macros to be more type-flexible
(e.g. an ADD(a, b, c) macro expanding to a = b + c for floats or to
mpfr_add(a, b, c) for mpfr_t), but I suspect this is not the case.
thanks,
Sheer