[Neuroimaging] parallel computation of bundle_distances_mam/mdf ?
olivetti at fbk.eu
Wed Dec 14 06:28:59 EST 2016
Thank you for pointing me to the MDF example. From what I see the Cython
syntax is not complex, which is good.
My only concern is the availability of OpenMP on the systems where DiPy is
used. On a reasonably recent GNU/Linux machine it seems straightforward to
have libgomp and the proper version of gcc. On other systems - say OSX -
the situation is less clear to me. From what I read, the OSX installation
steps are not meant for standard end users. Are those steps expected to
work for them?
As a test of that, we've just tried to skip the steps described above and
instead install gcc with conda on OSX ("conda install gcc"). In the
process, conda installed the recent gcc-4.8 with libgomp, which seems like
good news. Unfortunately, when we tried to compile a simple example of
Cython code using parallelization (see below), the process failed (fatal
error: limits.h: No such file or directory)...
For the reasons above, I am wondering whether the very simple solution of
using the "multiprocessing" module, available from the standard Python
library, may be an acceptable first step towards the more efficient
multithreading of Cython/libgomp. With "multiprocessing", there is no extra
dependency on libgomp, or recent gcc or else. Moreover, multiprocessing
does not require to have Cython code, because it works on plain Python too.
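To make the idea concrete, here is a minimal sketch of the "multiprocessing"
approach: each row of the distance matrix is an independent task handed to a
process pool. The function names, the row-wise chunking, and the toy
closest-point metric are assumptions for illustration only - in practice the
worker would call DiPy's bundle_distances_mam/mdf instead:

```python
# Sketch: embarrassingly parallel distance matrix via the standard-library
# multiprocessing module. `row_distances` is a stand-in for a call into
# DiPy; the metric below is a toy mean closest-point distance.
from multiprocessing import Pool

import numpy as np


def row_distances(args):
    """Distances from one streamline s to every streamline in tracks."""
    s, tracks = args
    return [float(np.mean(np.min(
        np.linalg.norm(s[:, None, :] - t[None, :, :], axis=2), axis=1)))
        for t in tracks]


def parallel_distance_matrix(tracksA, tracksB, n_jobs=4):
    """Each row is an independent task, so a plain Pool.map suffices."""
    with Pool(processes=n_jobs) as pool:
        rows = pool.map(row_distances, [(s, tracksB) for s in tracksA])
    return np.array(rows)


if __name__ == "__main__":
    rng = np.random.RandomState(0)
    tracksA = [rng.rand(10, 3) for _ in range(5)]
    tracksB = [rng.rand(12, 3) for _ in range(7)]
    print(parallel_distance_matrix(tracksA, tracksB).shape)  # (5, 7)
```

Because the rows are independent, the speedup is roughly linear in the number
of processes, at the cost of pickling the streamlines to each worker.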
---- test.pyx ----
from cython import parallel
from libc.stdio cimport printf

def print_thread_ids():
    # thread_id is assigned inside the parallel block, so each
    # thread gets its own private copy
    cdef int thread_id = -1
    with nogil, parallel.parallel(num_threads=10):
        thread_id = parallel.threadid()
        printf("Thread ID: %d\n", thread_id)
----- setup.py -----
from distutils.core import setup, Extension
from Cython.Build import cythonize

extensions = [Extension(
    "test", ["test.pyx"],
    extra_compile_args=["-fopenmp"],
    extra_link_args=["-fopenmp"],
)]

setup(ext_modules=cythonize(extensions))
----- build command -----
python setup.py build_ext --inplace
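For reference, the MDF (minimum average direct-flip) distance that
bundle_distances_mdf computes over all pairs can be sketched in plain
Python/NumPy. This is a simplified version - it assumes both streamlines are
already resampled to the same number of points, and it is not DiPy's actual
implementation:

```python
import numpy as np


def mdf(s, t):
    """Minimum average direct-flip distance between two streamlines
    with the same number of points (simplified sketch, not DiPy's code)."""
    direct = np.mean(np.linalg.norm(s - t, axis=1))
    # A streamline has no preferred orientation, so also compare
    # against the point-order-reversed version and keep the minimum.
    flipped = np.mean(np.linalg.norm(s - t[::-1], axis=1))
    return min(direct, flipped)
```

Since each pairwise mdf() call is independent of the others, the full
distance matrix is exactly the kind of embarrassingly parallel workload
discussed above.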
On Tue, Dec 13, 2016 at 11:17 PM, Eleftherios Garyfallidis <elef at indiana.edu> wrote:
> Hi Emanuele,
> Here is an example of how we calculated the distance matrix in parallel
> (for the MDF) using OpenMP
> You can just add another function that does the same using mam. It should
> be really easy to implement as we have
> already done it for the MDF for speeding up SLR.
> Then we need to update the bundle_distances* functions to use the parallel
> versions.
> I'll be happy to help you with this. Let's try to schedule some time to
> look at this together.
> Best regards,
On Mon, Dec 12, 2016 at 11:16 AM Emanuele Olivetti <olivetti at fbk.eu> wrote:
>> I usually compute the distance matrix between two lists of streamlines
>> using bundle_distances_mam() or bundle_distances_mdf(). When the lists are
>> large, it is convenient and easy to exploit the multiple cores of the CPU
>> because such computation is intrinsically (embarrassingly) parallel. At the
>> moment I'm doing it through the multiprocessing or the joblib modules,
>> because I cannot find a way to do it directly from DiPy, at least according
>> to what I see in dipy/tracking/distances.pyx. But consider that I am not
>> proficient in cython.parallel.
>> Is there a preferable way to perform such parallel computation? I plan to
>> prepare a pull request in future and I'd like to be on the right track.
>> Neuroimaging mailing list
>> Neuroimaging at python.org