[Numpy-discussion] Direct GPU support on NumPy
matthieu.brucher at gmail.com
Tue Jan 2 16:36:30 EST 2018
Let's say that NumPy provided a GPU version of ndarray. How would that work
with all the packages that expect the memory to be allocated on the CPU?
It's not that NumPy refuses a GPU implementation; it's that such an
implementation wouldn't solve the problem of the GPU and CPU having separate
memory. When/if Nvidia (finally) decides that GPU memory should also be
accessible from the CPU (as on AMD APUs), then this argument becomes void.
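To make the separate-memory problem concrete, here is a minimal sketch (all names hypothetical; `DeviceArray` is an invented toy class, not a real NumPy or CUDA type, and device memory is only simulated by a private host buffer): code written against NumPy's host-memory assumption cannot consume a device-resident array without an explicit copy.

```python
import numpy as np

class DeviceArray:
    """Toy stand-in for a GPU-resident array: the data lives in a separate
    memory space, simulated here by a private host buffer."""
    def __init__(self, host_data):
        self._device_buf = np.array(host_data)  # pretend this is device memory

    def to_host(self):
        # Explicit device -> host transfer: the only way host code can see it.
        return self._device_buf.copy()

def host_only_sum(a):
    # Stand-in for library code built on the raw-pointer C API: it assumes
    # the buffer is host-addressable.
    if not isinstance(a, np.ndarray):
        raise TypeError("expected host memory (numpy.ndarray)")
    return a.sum()

d = DeviceArray([1, 2, 3])
try:
    host_only_sum(d)                    # fails: memory is in the wrong space
except TypeError as e:
    print("error:", e)
print(host_only_sum(d.to_host()))       # works only after an explicit copy
```

The point of the sketch: a GPU-backed ndarray would type-check in pure Python, but every downstream consumer that dereferences the data pointer would still need the explicit transfer step.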
2018-01-02 22:21 GMT+01:00 Yasunori Endo <jo7ueb at gmail.com>:
> Hi all
> Numba looks like a nice library to try.
> Thanks for the information.
> This suggests a new, higher-level data model which supports replicating
>> data into different memory spaces (e.g. host and GPU). Then users (or some
>> higher layer in the software stack) can dispatch operations to suitable
>> implementations to minimize data movement.
>> Given NumPy's current raw-pointer C API this seems difficult to
>> implement, though, as it is very hard to track memory aliases.
> I understand that modifying numpy.ndarray for the GPU is technically
> difficult. So my next basic question is: why doesn't NumPy offer an
> ndarray-like interface for the GPU (e.g. numpy.gpuarray)?
> I wonder why everybody is making a *separate* library, which confuses users.
> Is there any policy that NumPy refuses a standard GPU implementation?
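The higher-level data model quoted above (replicating data across memory spaces and dispatching to minimize movement) might be sketched roughly as follows. All names are hypothetical, and the "gpu" space is simulated with a plain host copy; this is an illustration of the idea, not a proposed API.

```python
import numpy as np

class MultiSpaceArray:
    """Toy array whose contents can be replicated across named memory
    spaces, dispatching work to a space that already holds a valid copy."""
    def __init__(self, data, space="host"):
        self._copies = {space: np.array(data)}  # one buffer per space
        self.transfers = 0                      # cross-space movements made

    def _ensure(self, space):
        # Replicate into `space` only if no copy exists there yet.
        if space not in self._copies:
            src = next(iter(self._copies.values()))
            self._copies[space] = src.copy()    # simulated transfer
            self.transfers += 1
        return self._copies[space]

    def sum(self, prefer="gpu"):
        # Dispatch: reuse an existing replica when available, so data
        # moves only when the preferred space genuinely needs it.
        space = prefer if prefer in self._copies else next(iter(self._copies))
        return self._ensure(space).sum()

a = MultiSpaceArray([1, 2, 3, 4])   # data lives on the host
print(a.sum())                      # runs where the data already is
print(a.transfers)                  # no transfer was needed
```

The difficulty the quoted message points at is exactly the `_copies` bookkeeping: with NumPy's raw-pointer C API, nothing stops external code from aliasing one buffer and silently invalidating the other replicas.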
> Yasunori Endo
Quantitative analyst, Ph.D.