Recommended way to utilize GPUs via OpenCL, ROCm
Is there an officially recommended way to utilize AMD GPUs via OpenCL or ROCm? I came across the ROCm website, https://rocm.github.io/, which has TensorFlow and PyTorch versions for using AMD GPUs. I just wanted to know if there is a way to use my AMD GPUs for NumPy calculations. -- Regards, Pankaj Jangid
Hello Pankaj,
There's ClPy for OpenCL: https://github.com/fixstars/clpy
Also this pull request for CuPy (merged, but as yet unreleased): https://github.com/cupy/cupy/pull/1094
Best regards,
Hameer Abbasi
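[Editorial sketch: both ClPy and CuPy aim to mirror NumPy's API, so existing array code can often switch backends by changing one import. A minimal illustration, assuming CuPy is installed with a supported GPU (it falls back to plain NumPy otherwise; the `to_host` helper is my addition):

```python
import numpy as np

try:
    import cupy as xp  # GPU-backed, NumPy-compatible array library
except ImportError:
    xp = np  # no GPU stack available: fall back to plain NumPy

def to_host(arr):
    """Copy a device array back to host memory (a no-op for NumPy arrays)."""
    return arr.get() if hasattr(arr, "get") else arr

# The same NumPy-style expressions run on whichever backend is active.
a = xp.arange(6, dtype=xp.float64).reshape(2, 3)
b = (a * 2).sum(axis=0)
print(to_host(b))  # [ 6. 10. 14.]
```

The explicit host-copy step matters: device arrays live in GPU memory, so results must be transferred back before mixing with host-only code.]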
Hameer Abbasi
There's ClPy for OpenCL: https://github.com/fixstars/clpy. Also this pull request for CuPy (merged, but as yet unreleased): https://github.com/cupy/cupy/pull/1094
This gives great hope. Thanks for sharing. I wonder why NVIDIA's approach is so widely accepted. Sometimes I regret purchasing AMD GPUs; there is not much support for them. -- Regards, Pankaj Jangid
I have also used PyOpenCL quite profitably: https://github.com/inducer/pyopencl. I philosophically prefer it to ROCm because it targets *all* GPUs, including the Intel integrated graphics on most laptops, which can actually get quite decent (30x) speedups.
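[Editorial sketch: for illustration, a minimal PyOpenCL-style snippet. The `pyopencl` calls (`create_some_context`, `CommandQueue`, `pyopencl.array.to_device`) are standard API; the NumPy fallback path is my addition so the snippet still runs on machines without an OpenCL runtime or device:

```python
import numpy as np

try:
    import pyopencl as cl
    import pyopencl.array as cl_array
    HAVE_CL = True
except ImportError:
    HAVE_CL = False

def double_on_device(x):
    """Double an array on an available OpenCL device, else on the CPU."""
    if HAVE_CL:
        ctx = cl.create_some_context(interactive=False)
        queue = cl.CommandQueue(ctx)
        x_dev = cl_array.to_device(queue, x)   # host -> device copy
        return (2 * x_dev).get()               # compute on device, copy back
    return 2 * x  # plain NumPy fallback

print(double_on_device(np.arange(4)))  # [0 2 4 6]
```

Because OpenCL enumerates whatever platforms are installed (AMD, NVIDIA, Intel, CPU runtimes), the same script runs unmodified across vendors.]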
On 19 Oct 2019, at 3:39 am, Pankaj Jangid wrote:
I wonder why NVIDIA's approach is so widely accepted. Sometimes, I regret purchasing AMD GPUs. Not much support for them.
I agree. I am very disappointed by the NVIDIA monopoly in scientific computing. Resist! Juan.
Juan Nunez-Iglesias
I have also used PyOpenCL quite profitably:
https://github.com/inducer/pyopencl
I philosophically prefer it to ROCm because it targets *all* GPUs, including intel integrated graphics on most laptops, which can actually get quite decent (30x) speedups.
This is a good find. There is some work involved, but it is good: it gives transparent access to the underlying hardware. I wish NumPy operations would automatically use the available resources; that would be more concise and would give the scientific community an edge. I am not saying they are not good programmers, but it would let them focus on the main problem at hand. Let me explore it further. Thanks for sharing.
On 19 Oct 2019, at 3:39 am, Pankaj Jangid wrote:
I wonder why NVIDIA's approach is so widely accepted. Sometimes, I regret purchasing AMD GPUs. Not much support for them. I agree. I am very disappointed by the NVIDIA monopoly in scientific computing. Resist!
Really, very disappointing. :-( Regards, -- Pankaj Jangid
participants (3)
- Hameer Abbasi
- Juan Nunez-Iglesias
- Pankaj Jangid