[Numpy-discussion] Direct GPU support on NumPy

Stefan Seefeld stefan at seefeld.name
Tue Jan 2 17:04:14 EST 2018

On 02.01.2018 16:36, Matthieu Brucher wrote:
> Hi,
> Let's say that NumPy provided a GPU implementation. How would that
> work with all the packages that expect the memory to be allocated on the CPU?
> It's not that NumPy refuses a GPU implementation; it's that one
> wouldn't solve the problem of the GPU and CPU having separate memory. When/if
> nVidia (finally) decides that memory should also be accessible from
> the CPU (as with AMD APUs), then this argument becomes void.

I actually doubt that. Sure, unified memory is convenient for the
programmer. But as long as copying data between host and GPU is
orders of magnitude slower than copying data locally, performance will
suffer. Addressing this performance issue requires some NUMA-like
approach: moving the operation to where the data resides, rather than
treating all data locations as equal.
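The NUMA-like idea above can be sketched in a few lines. All names here are assumptions made for illustration, not any real API: arrays are tagged with the device that owns their buffer, and the dispatcher runs the operation on that device instead of pulling the data to the host. The "gpu:0" executor is a placeholder that runs on the host; a real system would launch a device kernel there.

```python
import numpy as np

class Located:
    """An array tagged with the device that owns its buffer (hypothetical)."""
    def __init__(self, data, device):
        self.data = np.asarray(data)
        self.device = device  # e.g. "cpu" or "gpu:0"

# One executor per device. In a real system the "gpu:0" entry would
# launch a kernel on the device; here it is a host-side placeholder.
EXECUTORS = {
    "cpu": lambda f, a, b: f(a, b),
    "gpu:0": lambda f, a, b: f(a, b),
}

def add(x, y):
    # Move the *operation* to the operands' device; only fall back to a
    # host transfer when the operands live on different devices.
    if x.device == y.device:
        run = EXECUTORS[x.device]
        return Located(run(np.add, x.data, y.data), x.device)
    # Cross-device case: the expensive transfer is explicit and visible.
    return Located(np.add(x.data, y.data), "cpu")

a = Located([1, 2], "gpu:0")
b = Located([3, 4], "gpu:0")
c = add(a, b)  # dispatched to "gpu:0"; no host transfer needed
```

The design point is that the dispatch decision is made from the data's location, so the common same-device case never pays the host/device copy cost.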



      ...I still have a suitcase in Berlin...

