Plans for a new sparse compilation backend for PyData/Sparse
Hello everyone,

The stated goal for sparse is to provide a NumPy-like API backed by sparse representations of arrays. To this end, Quansight and I have been collaborating with researchers at MIT CSAIL <https://www.csail.mit.edu/>, in particular Prof. Amarasinghe's group <https://www.csail.mit.edu/research/commit-group> and the TACO <https://github.com/tensor-compiler/taco/> team, to develop a performant and production-ready package for N-dimensional sparse arrays. Several attempts were made to explore this over the last couple of years, including an LLVM back-end <https://github.com/Quansight-Labs/taco/pulls?q=is%3Apr+llvm> for TACO and a pure-C++ template-metaprogramming approach called XSparse <https://github.com/hameerabbasi/xsparse>.

We at Quansight are happy to announce that we have received funding from DARPA, together with our partners from MIT, under their Small Business Innovation Research (SBIR) program <https://www.darpa.mil/work-with-us/for-small-businesses/HR0011SB20234-06> to build out sparse using state-of-the-art just-in-time compilation strategies to boost performance for users. Additionally, as an interface, we'll adopt the Array API standard <https://data-apis.org/array-api/latest/>, which is championed by major libraries such as NumPy, PyTorch and CuPy.

More details about the plan are posted on GitHub <https://github.com/pydata/sparse/discussions/618>; please join the discussion there, to keep it all in one place.

Best regards,
Hameer Abbasi