On 5/31/07, Martin Ünsal <martinunsal@gmail.com> wrote:
I was wondering if anyone has thought about accelerating NumPy with a GPU. For example, nVidia's CUDA SDK provides a feasible way to offload vector math onto the very fast SIMD processors available on the GPU. Current GPUs primarily support single-precision floats and are not IEEE compliant, but they could still be useful for some applications.
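[To make Martin's suggestion concrete, here is a minimal CUDA sketch of offloading a single-precision elementwise multiply, the kind of ufunc-style vector math he describes. All names (vec_mul, n) are hypothetical, not from any proposed NumPy code; the host-side copies also show the bus traffic a real backend would have to amortize.]

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // One thread per element: out[i] = a[i] * b[i] in single precision.
    __global__ void vec_mul(const float *a, const float *b, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a[i] * b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        /* host buffers */
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *ho = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        /* device buffers: data must cross the PCIe bus both ways, which is
           why shuttling small arrays back and forth can erase any speedup */
        float *a, *b, *out;
        cudaMalloc(&a, bytes);
        cudaMalloc(&b, bytes);
        cudaMalloc(&out, bytes);
        cudaMemcpy(a, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(b, hb, bytes, cudaMemcpyHostToDevice);

        vec_mul<<<(n + 255) / 256, 256>>>(a, b, out, n);
        cudaMemcpy(ho, out, bytes, cudaMemcpyDeviceToHost);

        printf("out[0] = %f\n", ho[0]);
        cudaFree(a); cudaFree(b); cudaFree(out);
        free(ha); free(hb); free(ho);
        return 0;
    }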
I've thought about it, but I think it would be a heck of a lot of work. NumPy works with subarrays a lot, and I suspect that would make it tricky to stream data through a GPU. Making good use of the GPU's several pipelines would also require a degree of parallelism that NumPy doesn't have now. We would also need GPU computation of sin, cos, and the other ufunc functions, and those might not be accurate enough. For ordinary matrix/array arithmetic, the shortest route might be a version of ATLAS/BLAS, some of LAPACK, and maybe an FFT library written to use the GPU.

Chuck
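[A sketch of the subarray problem Chuck raises: NumPy's ufunc inner loops walk arrays with arbitrary strides (views, slices, transposes), and a GPU kernel doing the same loses coalesced memory access. The kernel below applies sinf over a strided view; names are hypothetical, strides are in elements for simplicity, and it uses CUDA unified memory for brevity, a feature from later CUDA releases than the 2007-era SDK under discussion.]

    #include <cuda_runtime.h>
    #include <math.h>
    #include <stdio.h>

    // Elementwise sin over a strided array, mimicking a NumPy ufunc loop.
    __global__ void sin_strided(const float *in, long in_stride,
                                float *out, long out_stride, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            // Non-unit strides break coalesced memory access on the device,
            // which is what makes streaming arbitrary NumPy views hard.
            // sinf is fast but not IEEE-exact, matching the accuracy worry
            // about sin/cos ufuncs on this class of hardware.
            out[i * out_stride] = sinf(in[i * in_stride]);
        }
    }

    int main(void)
    {
        const int n = 1024, stride = 2;   // every other element of a view
        float *in, *out;
        cudaMallocManaged(&in, n * stride * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        for (int i = 0; i < n * stride; i++) in[i] = 0.5f;

        sin_strided<<<(n + 255) / 256, 256>>>(in, stride, out, 1, n);
        cudaDeviceSynchronize();

        printf("out[0] = %f\n", out[0]);
        cudaFree(in); cudaFree(out);
        return 0;
    }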