Compiling CUDA code
Hello,

I would like to use distutils to compile CUDA code. CUDA is an extension of C and C++ for using GPU capabilities. It requires replacing the gcc compiler with the nvcc command line on Unix, and appending or modifying some other parameters.

I use the following command to generate my VolumeCUDA module before linking it into a shared object:

    /usr/local/share/cuda/bin/nvcc -Xcompiler -fno-strict-aliasing,-fPIC -o VolumeCUDA.o -c src/VolumeCUDA.cu -I. -I/usr/local/share/cuda/inc -I/usr/include/python2.5 -O3

Is there any easy way to do that?

Thanks,
Cristian.

--
Lic. Cristian S. Rocha [http://www.dc.uba.ar/~crocha]
Grupo de Modelado Molecular (http://gmm.qi.fcen.uba.ar/)
Departamento de Computación
Departamento de Química Inorgánica y Física Química
Facultad de Ciencias Exactas y Naturales,
Universidad de Buenos Aires.
Pabellón I - Ciudad Universitaria
(C1428EGA) - Buenos Aires - Argentina
Tel./Fax: +54-11-4576-3359
Conmutador: +54-11-4576-3390 al 96, int. 701/702
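For reference, a minimal way to drive that exact compile step from Python is distutils' own spawn helper. This is only the command above restated as a Python call, not yet the setup.py integration being asked about:

    from distutils.spawn import spawn

    # The exact nvcc invocation from the message above, driven from Python.
    spawn(['/usr/local/share/cuda/bin/nvcc',
           '-Xcompiler', '-fno-strict-aliasing,-fPIC',
           '-o', 'VolumeCUDA.o',
           '-c', 'src/VolumeCUDA.cu',
           '-I.', '-I/usr/local/share/cuda/inc', '-I/usr/include/python2.5',
           '-O3'])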
Hello,

Cristian S. Rocha wrote:
I would like to use distutils to compile CUDA code. CUDA is an extension of C and C++ for using GPU capabilities. It requires replacing the gcc compiler with the nvcc command line on Unix, and appending or modifying some other parameters.
I haven't tried this, but see psycopg2's setup.py:

    class psycopg_build_ext(distutils.command.build_ext.build_ext):
        ...
        def get_compiler(self):
            return self.compiler or get_default_compiler()

Maybe you can use distutils.ccompiler.new_compiler() or subclass distutils.unixccompiler.UnixCCompiler. Then return that instance from psycopg_build_ext.get_compiler() and use this in setup.py:

    setup(
        ...
        cmdclass=dict(build_ext=psycopg_build_ext),
        ...
    )

Again, I haven't tried this; these are just some pointers I stumbled upon. Please tell me if it serves your purposes!

http://docs.python.org/dist/dist.html
http://docs.python.org/dist/module-distutils.ccompiler.html

G'luck,
--
Luis Bruno
Python charmer, trapped inside a Windows sysadmin body. Help!
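To make those pointers concrete, here is an untested sketch of a build_ext subclass that teaches the Unix compiler about .cu sources and swaps its compile command for nvcc, reusing the flags from Cristian's manual command line. The class name cuda_build_ext and the exact flag handling are only illustrative, and the link step (including the CUDA runtime library) is not covered:

    from distutils.core import setup, Extension
    from distutils.command.build_ext import build_ext
    from distutils.unixccompiler import UnixCCompiler

    class cuda_build_ext(build_ext):
        """Untested sketch: compile .cu sources with nvcc instead of gcc."""
        def build_extensions(self):
            if isinstance(self.compiler, UnixCCompiler):
                # Let distutils accept .cu files as compilable sources.
                self.compiler.src_extensions.append('.cu')
                # Swap the compile command for nvcc, passing the host-compiler
                # flags from the manual command line through -Xcompiler.
                # The link step is left untouched, so the CUDA runtime library
                # would still need to be added to the Extension separately.
                self.compiler.set_executables(
                    compiler_so=['/usr/local/share/cuda/bin/nvcc',
                                 '-Xcompiler', '-fno-strict-aliasing,-fPIC'])
            build_ext.build_extensions(self)

    setup(
        name='VolumeCUDA',
        ext_modules=[Extension('VolumeCUDA',
                               sources=['src/VolumeCUDA.cu'],
                               include_dirs=['.', '/usr/local/share/cuda/inc'],
                               extra_compile_args=['-O3'])],
        cmdclass={'build_ext': cuda_build_ext},
    )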