Not a question really but just for discussion/pie-in-the-sky etc.... This is a news item on vizworld about getting Matlab code to run on a CUDA-enabled GPU. http://www.vizworld.com/2009/05/cuda-enable-matlab-with-gpumat/ If the use of GPUs for numerical tasks takes off (has it already?) then I'd be interested to know the views of the numpy experts out there. Cheers Brennan
Brennan Williams wrote:
Not a question really but just for discussion/pie-in-the-sky etc....
This is a news item on vizworld about getting Matlab code to run on a CUDA-enabled GPU.
http://www.vizworld.com/2009/05/cuda-enable-matlab-with-gpumat/
There is this, which looks similar for numpy: http://kered.org/blog/2009-04-13/easy-python-numpy-cuda-cublas/ I have never used it; I just saw it mentioned somewhere. Cheers, David
Also note: nvidia is about to release the first implementation of an OpenCL runtime based on CUDA. OpenCL is an open standard like OpenGL, but for numerical computing on stream platforms (GPUs, Cell BE, Larrabee, ...). -- Olivier
On Tue, May 26, 2009 at 07:43:02AM -0400, Neal Becker wrote:
Olivier Grisel wrote:
Also note: nvidia is about to release the first implementation of an OpenCL runtime based on CUDA. OpenCL is an open standard like OpenGL, but for numerical computing on stream platforms (GPUs, Cell BE, Larrabee, ...).
You might be interested in pycuda.
I am sure Olivier knows about pycuda :). However, the big deal with OpenCL, compared to CUDA, is that it is an open standard. With CUDA, you are bound to nvidia's future policies. Gaël
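For anyone who has not seen pycuda: below is a minimal sketch (not from this thread) of what it looks like to push a numpy array to the GPU, do some elementwise work there, and pull the result back. It assumes pycuda and a CUDA-capable device are installed; the array size and the float32 dtype are arbitrary choices for the example.

    import numpy as np
    import pycuda.autoinit              # creates a CUDA context on the default device
    import pycuda.gpuarray as gpuarray

    a = np.random.randn(1024, 1024).astype(np.float32)  # float32: what GPUs of this era handle fast
    a_gpu = gpuarray.to_gpu(a)          # copy the numpy array into device memory
    b_gpu = 2 * a_gpu + 1               # elementwise arithmetic runs on the GPU
    b = b_gpu.get()                     # copy the result back into a numpy array

    assert np.allclose(b, 2 * a + 1)

The numpy-facing code barely changes; the main added cost is the host-to-device and device-to-host copies around the computation.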
2009/5/26 Gael Varoquaux <gael.varoquaux@normalesup.org>:
On Tue, May 26, 2009 at 07:43:02AM -0400, Neal Becker wrote:
Olivier Grisel wrote:
Also note: nvidia is about to release the first implementation of an OpenCL runtime based on CUDA. OpenCL is an open standard like OpenGL, but for numerical computing on stream platforms (GPUs, Cell BE, Larrabee, ...).
You might be interested in pycuda.
I am sure Olivier knows about pycuda :). However, the big deal with OpenCL, compared to CUDA, is that it is an open standard. With CUDA, you are bound to nvidia's future policies.
Gaël
The issue with OpenCL is that there will be some extensions for each supported architecture, which means that generic OpenCL code will never be very fast, or more precisely, never close to the optimum. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher
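To make the point about architecture-specific extensions concrete, here is a small sketch (assuming pyopencl and at least one OpenCL implementation are installed) that lists the vendor extensions each device advertises; this is where the per-architecture features (cl_nv_*, cl_amd_*, ...) appear alongside the portable core API.

    import pyopencl as cl

    # Print every OpenCL platform/device visible on this machine together
    # with the extension strings it advertises.
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print("%s / %s" % (platform.name, device.name))
            print("    extensions: %s" % device.extensions)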
On Tuesday 26 May 2009 14:08:32 Matthieu Brucher wrote:
2009/5/26 Gael Varoquaux <gael.varoquaux@normalesup.org>:
On Tue, May 26, 2009 at 07:43:02AM -0400, Neal Becker wrote:
Olivier Grisel wrote:
Also note: nvidia is about to release the first implementation of an OpenCL runtime based on CUDA. OpenCL is an open standard like OpenGL, but for numerical computing on stream platforms (GPUs, Cell BE, Larrabee, ...).
You might be interested in pycuda.
I am sure Olivier knows about pycuda :). However, the big deal with OpenCL, compared to CUDA, is that it is an open standard. With CUDA, you are bound to nvidia's future policies.
Gaël
The issue with OpenCL is that there will be some extensions for each supported architecture, which means that generic OpenCL code will never be very fast, or more precisely, never close to the optimum.
what's the difference w/ OpenGL ? i.e. isn't the job of the "underlying" library to provide the best algorithm-freakingly-optimized-bare-to-the-metal-whatever-opcode, hidden away from the user's face ? OpenCL is just an API (modeled after the CUDA one AFAICT) so implementers can use whatever trick they want, right ? my 2 euro-cents. cheers, sebastien. -- ######################################### # Dr. Sebastien Binet # Laboratoire de l'Accelerateur Lineaire # Universite Paris-Sud XI # Batiment 200 # 91898 Orsay #########################################
The issue with OpenCL is that there will be some extensions for each supported architecture, which means that generic OpenCL code will never be very fast, or more precisely, never close to the optimum.
what's the difference w/ OpenGL ? i.e. isn't the job of the "underlying" library to provide the best algorithm-freakingly-optimized-bare-to-the-metal-whatever-opcode, hidden away from the user's face ?
It's like OpenGL: you have to fall back to simpler functions if you want to support every platform. If you target only one specific platform, you can use custom, optimized functions.
OpenCL is just an API (modeled after the CUDA one AFAICT) so implementers can use whatever trick they want, right ?
Implementers can't know, for instance, how the data domain must be split (1D, 2D, 3D, ...? what if the underlying tool doesn't provide all of them?). OpenCL will have ways to say that some data must be stored in local or shared memory (for the GPU), and so on. There are some companies that provide ways to do this with pragmas in C and Fortran (e.g. CAPS), but even with pragmas dedicated to CUDA, the generated code is not optimal. So I don't think it is reasonable to expect the implementers to provide, in the common API, the tools to generate truly optimal code. You will have to use additional, vendor-specific APIs, just as you do for state-of-the-art OpenGL.
my 2 euro-cents.
my 2 euro-cents ;) Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher
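For completeness, a minimal pyopencl sketch (again, not from the thread) of the host-side code being discussed: the kernel source is compiled at run time for whatever device the context wraps, while the device-specific tuning Matthieu mentions (work-group sizes, __local memory usage) is left to the programmer and is not shown here. It assumes pyopencl and one working OpenCL implementation are installed.

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()      # picks whatever OpenCL device is available
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(50000).astype(np.float32)
    b = np.random.rand(50000).astype(np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel is plain OpenCL C, built at run time for the chosen device.
    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out)
    {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    # global size = a.shape, local (work-group) size left to the runtime (None)
    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)    # copy the result back to the host

    assert np.allclose(out, a + b)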
participants (7)
- Brennan Williams
- David Cournapeau
- Gael Varoquaux
- Matthieu Brucher
- Neal Becker
- Olivier Grisel
- Sebastien Binet