<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 02.01.2018 16:36, Matthieu Brucher
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAHCaCk+M0gAz95XYQhRPTT0PgRqtcvhfLCs9jQmsfOyyAOMwFA@mail.gmail.com">
<div dir="ltr">Hi,
<div><br>
</div>
<div>Let's say that Numpy provides a version that runs on the
GPU. How would that work with all the packages that expect the
memory to be allocated on the CPU?</div>
<div>It's not that Numpy refuses a GPU implementation; it's that
one wouldn't solve the problem of the GPU and CPU having
separate memories. When/if nVidia (finally) decides that memory
should also be accessible from the CPU (as with AMD APUs), then
this argument becomes void.</div>
</div>
</blockquote>
<br>
I actually doubt that. Sure, unified memory is convenient for the
programmer. But as long as copying data between host and GPU is
orders of magnitude slower than copying data locally, performance
will suffer. Addressing this performance issue requires a
NUMA-like approach: moving the operation to where the data
resides, rather than treating all data locations as equal.<br>
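To make the "move the operation to the data" idea concrete, here is a minimal, purely illustrative Python sketch (the `Array` class and `device` tag are hypothetical, not any real NumPy or GPU API): operations dispatch to the device that already holds the operands, and the host&lt;-&gt;device copy is only paid when the operands' locations disagree.

```python
class Array:
    """Toy array tagged with the device that holds its data."""
    def __init__(self, data, device):
        self.data = list(data)
        self.device = device  # e.g. "cpu" or "gpu0"

def add(a, b):
    """Run the addition where the first operand's data lives."""
    if a.device != b.device:
        # A real library would pay the slow host<->device transfer
        # here -- exactly the cost a NUMA-like placement avoids by
        # keeping computation next to the data.
        b = Array(b.data, a.device)
    return Array([x + y for x, y in zip(a.data, b.data)], a.device)

x = Array([1, 2, 3], "gpu0")
y = Array([10, 20, 30], "gpu0")
z = add(x, y)  # both operands already on gpu0: no transfer needed
```

The point of the sketch is that the dispatch decision belongs to the operation, not the caller: code written against such an interface works regardless of where the data sits, while the runtime keeps the expensive copies to a minimum.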
<br>
<div class="moz-signature">
<div class="moz-signature"><img moz-do-not-send="false"
src="cid:part1.2623F533.1BF95598@seefeld.name" alt="Stefan"
width="73" height="45"><br>
<pre>--
...ich hab' noch einen Koffer in Berlin...
</pre>
</div>
</div>
</body>
</html>