Parallelization of Python on GPU?

Jason Swails jason.swails at gmail.com
Thu Feb 26 10:06:06 EST 2015


On Thu, 2015-02-26 at 14:02 +1100, Steven D'Aprano wrote:
> John Ladasky wrote:
> 
> 
> > What I would REALLY like to do is to take advantage of my GPU.
> 
> I can't help you with that, but I would like to point out that GPUs 
> typically don't support IEEE-754 maths, which means that while they are 
> likely significantly faster, they're also likely significantly less 
> accurate. And any two different brands/models of GPU are likely to give 
> different results. (Possibly not *very* different, but considering the mess 
> that floating point maths was prior to IEEE-754, possibly *very* different.)

This hasn't been true of NVidia GPUs manufactured since ca. 2008 -- they
do support IEEE-754 maths.
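
(A quick, concrete illustration for the Python side of this thread --
this sketch is mine, not something from the original discussion.  It
assumes an NVidia card plus the numba and numpy packages, runs the same
double-precision arithmetic on the device and on the host, and prints
the largest difference between the two results.)

import numpy as np
from numba import cuda

@cuda.jit
def axpy(a, x, y, out):
    i = cuda.grid(1)                   # absolute thread index
    if i < x.size:
        out[i] = a * x[i] + y[i]       # float64 math on the device

n = 1000000
x = np.random.rand(n)                  # float64 by default
y = np.random.rand(n)
gpu_out = np.empty_like(x)

threads = 256
blocks = (n + threads - 1) // threads
axpy[blocks, threads](2.5, x, y, gpu_out)   # numba copies the arrays for us

cpu_out = 2.5 * x + y
print(np.abs(gpu_out - cpu_out).max())

Whether the printed difference is exactly zero depends on whether the
compiler emits a fused multiply-add in the kernel, but either way the
device arithmetic is IEEE-754 double precision.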

> Personally, I wouldn't trust GPU floating point for serious work. Maybe for 
> quick and dirty exploration of the data, but I'd then want to repeat any 
> calculations using the main CPU before using the numbers anywhere :-)

There is a *huge* push toward GPU computing in the scientific computing
sector.  Since I started as a graduate student in computational
chemistry/physics in 2008, I watched as state-of-the-art supercomputers
running tens of thousands to hundreds of thousands of cores were
overtaken in performance by a $500 GPU (today the GTX 780 or 980) you
can put in a desktop.  I went from running all of my calculations on a
CPU cluster in 2009 to running 90% of my calculations on a GPU by the
time I graduated in 2013... and for people without such ready access to
supercomputers, the move was even more pronounced.

This work is very serious, and numerical precision is typically of
immense importance.  See, e.g.,
http://www.sciencedirect.com/science/article/pii/S0010465512003098 and
http://pubs.acs.org/doi/abs/10.1021/ct400314y

In our software, we can run simulations on a GPU or a CPU and the
results are *literally* indistinguishable.  The transition to GPUs was
accompanied by a series of studies that investigated precisely your
concerns... we would never have started using GPUs if we didn't trust
the GPU numbers as much as the ones from the CPU.
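
(As a rough illustration only -- the file names below are made up --
this is roughly what such a comparison boils down to in Python:)

import numpy as np

# Hypothetical data: the same observable (e.g. single-point energies)
# computed once with the CPU code and once with the GPU code.
cpu_energies = np.loadtxt("energies_cpu.dat")
gpu_energies = np.loadtxt("energies_gpu.dat")

print("max abs difference: %.3e"
      % np.abs(gpu_energies - cpu_energies).max())

# The tolerance here is illustrative; the point is that the two code
# paths agree far more tightly than any physically meaningful quantity.
print("agree to tight tolerance:",
      np.allclose(cpu_energies, gpu_energies, rtol=1e-12, atol=0.0))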

And NVidia is embracing this revolution (obviously) -- they are putting
a lot of time, effort, and money into ensuring the success of GPU high
performance computing.  It is here to stay for the foreseeable future,
and refusing to use the technology will leave those who *could* benefit
from it at a severe disadvantage. (That said, GPUs aren't good at
everything, and CPUs are also here to stay.)

And GPU performance gains are outpacing CPU performance gains -- I've
seen about two orders of magnitude improvement in computational
throughput over the past 6 years through the introduction of GPU
computing and improvements in GPU hardware.

All the best,
Jason

-- 
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher



