[Numpy-discussion] Direct GPU support on NumPy

Matthieu Brucher matthieu.brucher at gmail.com
Tue Jan 2 16:36:30 EST 2018


Hi,

Let's say that NumPy provided a GPU version of its arrays. How would that
work with all the packages that expect the memory to be allocated on the
CPU? It's not that NumPy refuses a GPU implementation; it's that such an
implementation wouldn't solve the underlying problem: the GPU and the CPU
have separate memory. When/if nVidia (finally) decides that GPU memory
should also be accessible from the CPU (as on AMD APUs), this argument
becomes void.
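
As a minimal illustration of the memory-space issue (this sketch assumes
CuPy as the GPU array library, which is not part of NumPy, and that a CUDA
device is available), code that expects CPU memory cannot touch a GPU array
until it has been copied back to the host explicitly:

    import numpy as np
    import cupy as cp   # GPU array library with a NumPy-like interface

    x_gpu = cp.arange(1_000_000, dtype=cp.float32)  # allocated in GPU memory
    y_gpu = cp.sqrt(x_gpu)                          # computed on the GPU

    # A package built on NumPy's C API holds a raw pointer into host memory;
    # it cannot dereference a device pointer, so an explicit device -> host
    # copy is needed before such a package can consume the result.
    y_cpu = cp.asnumpy(y_gpu)                       # explicit copy to the CPU
    print(float(np.mean(y_cpu)))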

Matthieu

2018-01-02 22:21 GMT+01:00 Yasunori Endo <jo7ueb at gmail.com>:

> Hi all
>
> Numba looks like a nice library to try.
> Thanks for the information.
>
> This suggests a new, higher-level data model which supports replicating
>> data into different memory spaces (e.g. host and GPU). Then users (or some
>> higher layer in the software stack) can dispatch operations to suitable
>> implementations to minimize data movement.
>>
>> Given NumPy's current raw-pointer C API this seems difficult to
>> implement, though, as it is very hard to track memory aliases.
>>
> I understand that modifying numpy.ndarray to support GPUs is technically difficult.
>
> So my next, more basic question is: why doesn't NumPy offer an
> ndarray-like GPU interface (e.g. numpy.gpuarray)?
> I wonder why everybody is making a *separate* library, which confuses users.
> Is there a policy that NumPy refuses a standard GPU implementation?
>
> Thanks.
>
>
> --
> Yasunori Endo
>
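
The replication-and-dispatch model described in the quoted paragraph above
could look, very roughly, like the sketch below. This is purely hypothetical
(matmul_anywhere is not a real NumPy or CuPy function, and CuPy is assumed
as the GPU backend); the point is only that the choice of implementation is
driven by where the operands already live, so no implicit copies happen:

    import numpy as np

    try:
        import cupy as cp   # assumed GPU backend; optional
    except ImportError:
        cp = None

    def matmul_anywhere(a, b):
        """Hypothetical dispatcher: pick the backend matching the memory
        space the operands already live in, never copying implicitly."""
        if cp is not None and isinstance(a, cp.ndarray) and isinstance(b, cp.ndarray):
            return cp.matmul(a, b)   # both on the GPU: compute on the GPU
        if isinstance(a, np.ndarray) and isinstance(b, np.ndarray):
            return np.matmul(a, b)   # both on the host: compute on the CPU
        raise TypeError("operands live in different memory spaces; "
                        "copy one of them explicitly first")

The hard part, as the quoted message notes, is doing this under NumPy's
raw-pointer C API, where it is very difficult to know which memory space a
given pointer aliases.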


-- 
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher