From jdgispert at fpmaragall.org Fri Oct 7 10:29:55 2016 From: jdgispert at fpmaragall.org (=?UTF-8?Q?Juan_Domingo_Gispert_L=C3=B3pez?=) Date: Fri, 7 Oct 2016 16:29:55 +0200 Subject: [Neuroimaging] Research Position in Genetics and Neuroimaging Message-ID: The Barcelonaβeta Brain Research Centre (BBRC), the research institute of the Pasqual Maragall Foundation (PMF), and the Centre for Genomic Regulation (CRG) invite applications for a full-time research position to study the genetic modulation of cerebral phenotypes based on genome-wide associations with neuroimaging data, as part of its clinical research program. The primary responsibility of this position is the analysis and interpretation of genome-wide association study (GWAS) data in association with quantitative phenotypes extracted from neuroimaging modalities. The candidate will work under the guidance of the Principal Investigators Roderic Guigó and Stephan Ossowski of the CRG and the Neuroimaging Group Principal Investigator of the BBRC, Juan-Domingo Gispert. The aim of this project is to improve our understanding of the genetic modulation of cerebral phenotypes in healthy subjects and to identify spatially varying patterns of genetic control that may not be evident from summary variables. For additional details please visit: https://fpmaragall.org/wp-content/uploads/2014/07/Postdoc_CRG_BBRC_Oct2016.pdf Deadline: Please submit your application by November 18th, 2016. -- *Juan D Gispert* Head of Neuroimaging Research *Barcelonaβeta Brain Research Center - Fundació Pasqual Maragall* T. (+34) 93 326 31 90 C/ Wellington, 30 08005 Barcelona *www.fpmaragall.org * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abdulghzain at gmail.com Wed Oct 12 15:19:06 2016 From: abdulghzain at gmail.com (Abdul Ghaffar Zain) Date: Wed, 12 Oct 2016 15:19:06 -0400 Subject: [Neuroimaging] GSOC 2017 Message-ID: Hello, I was looking at some projects I could get familiar with for Google Summer of Code 2017, and I came across DIPY. I found the ideas that DIPY is working on quite interesting, and I like the idea of making health care both better and more affordable. Contributing to such a project would make me feel really good! I was wondering whether DIPY will be participating in GSoC 2017, as working on DIPY for GSoC 2017 would be awesome for me! I know that Google hasn't announced the projects for 2017, but does DIPY plan on applying through the PSF? Can't wait to get started on this project, -Abdul Z -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Wed Oct 12 16:38:43 2016 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 12 Oct 2016 13:38:43 -0700 Subject: [Neuroimaging] GSOC 2017 In-Reply-To: References: Message-ID: Hi Abdul, Thanks for your interest. It's a bit early for this, and we don't yet know whether we will participate in GSoC in 2017, and what projects we will have then. 
But we will announce them here in the fullness of time; please do check our wiki pages (https://github.com/nipy/dipy/wiki), where we should have information as next summer approaches. Cheers, Ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.moulton.jr at gmail.com Wed Oct 19 03:55:08 2016 From: eric.moulton.jr at gmail.com (Eric Moulton) Date: Wed, 19 Oct 2016 09:55:08 +0200 Subject: [Neuroimaging] [Dipy] discrepancies of SH coeffs with MRtrix Message-ID: Hello Dipy contributors, I have been playing with Dipy recently in hopes that I can use it as the main tool to process my data. In particular, I have been interested in using MRtrix's mrregister, which is a linear/non-linear registration technique based on the SH coefficients (http://www.sciencedirect.com/science/article/pii/S1053811911001534#bb0285). The paper's results report that the best registration is obtained with lmax = 4, so I have calculated the SH coefficient volumes with both Dipy and MRtrix. In Dipy, I did this with the peaks_from_model function, setting sh_basis_type='mrtrix' and then using the shm_coeff attribute. In MRtrix, I used the dwi2fod function. I can expand on this if you require more details. In short, both gave me 4D volumes with 15 volumes in the 4th dimension, as expected for lmax = 4. The first thing I noticed when I compared the two outputs was that the Y(0,0) volume in Dipy was a single-value brain volume where every voxel coefficient was equal to 0.2821 - which is literally the equation for Y(0,0) = 1/sqrt(4*pi) - whereas the MRtrix Y(0,0) volume had different coefficients for each voxel. 
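For reference, that constant follows directly from the math: Y(0,0) is constant over the sphere, so any ODF normalized to unit integral has the same first SH coefficient everywhere. A minimal plain-numpy check of the number itself (illustrative only; no Dipy or MRtrix calls, and no diffusion data needed):

```python
import numpy as np

# The l = 0 real spherical harmonic is the same at every point on the sphere:
y00 = 1.0 / np.sqrt(4.0 * np.pi)
print(round(y00, 4))  # 0.2821, the value reported above for every voxel

# For any f with unit integral over the sphere, the first SH coefficient is
#   c00 = integral(f * Y00) = Y00 * integral(f) = Y00,
# so a unit-integral normalization forces the same c00 in every voxel.
print(np.isclose(y00, 0.28209479, atol=1e-8))
```

A flat Y(0,0) volume is thus exactly what a unit-integral normalization of the ODFs would produce.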
I contacted them about this, and they did a really good job explaining it to me (http://community.mrtrix.org/t/first-spherical-harmonic-coefficient-y-0-0-meaning/507/1). In short, they said that Dipy normalizes to the unit integral whereas MRtrix doesn't. As I said, my main goal is to use mrregister, in particular with the IIT HARDI template (https://www.nitrc.org/frs/?group_id=432), which has the same MRtrix SH coefficient system (i.e. Y(0,0) is different for every voxel). But to do that with Dipy, I imagine that I would need to have similar SH coeff outputs for my data. I have tried playing with the Dipy code to obtain similar results as with MRtrix (taking out the normalization term in spherical_harmonics(m,n,theta,phi) in dipy/reconst/shm.py, etc.) but I still keep getting a single-valued brain for the Y(0,0) term that equals 1/sqrt(4*pi). I was wondering if someone knows where this difference in behavior is coming from and where I could look to try to get MRtrix-type outputs for the SH coeffs. Thank you for your time, Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From demian.wassermann at inria.fr Wed Oct 19 04:28:45 2016 From: demian.wassermann at inria.fr (Demian Wassermann) Date: Wed, 19 Oct 2016 11:28:45 +0300 Subject: [Neuroimaging] PhD position in Big Data Approaches to Clinical Diffusion MRI Analysis (Inria / Harvard joint grant) Message-ID: <4DA20A92-802E-4393-822F-80B3705E81E8@inria.fr> A full-time 3-year doctoral position is available under the supervision of Demian Wassermann. The position is a funded grant for joint research between the Athena team of Inria Sophia Antipolis, France and the Psychiatry Neuroimaging Lab of Harvard Medical School, Boston, USA. The goal of the project is to develop a new generation of statistical tools for the analysis of brain tissue across research and clinical hospitals. 
These techniques will be applied to shed new light on the characteristics of traumatic brain injury (TBI). The amount and variability of data is considerable, and Big Data approaches will be required for this enterprise. More information at http://www-sop.inria.fr/members/Demian.Wassermann/phd-position-in-big-data-approaches-to-clinical-diffusion-mri-analysis.html For any inquiries, please don't hesitate to contact me. All the best! Demian -- Demian Wassermann, PhD demian.wassermann at inria.fr Associate Research Professor (CR1) Athena Project Team INRIA Sophia Antipolis - Méditerranée 2004 route des lucioles - FR-06902 From arokem at gmail.com Wed Oct 19 12:27:56 2016 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 19 Oct 2016 09:27:56 -0700 Subject: [Neuroimaging] [Dipy] discrepancies of SH coeffs with MRtrix In-Reply-To: References: Message-ID: Hi Eric, Thank you for your message, and for taking the time to figure these things out together. On Wed, Oct 19, 2016 at 12:55 AM, Eric Moulton wrote: > Hello Dipy contributors, > > I have been playing with Dipy recently in hopes that I can use it as the > main tool to process my data. In particular, I have been interested in > using MRtrix's mrregister, which is a linear/non-linear registration technique > based on the SH coefficients (http://www.sciencedirect.com/science/article/pii/S1053811911001534#bb0285). > > The paper's results report that the best registration is obtained > with lmax = 4, so I have calculated the SH coefficient volumes with Dipy > and MRtrix. In Dipy, I did this with the peaks_from_model function, setting > sh_basis_type='mrtrix' and then using the shm_coeff attribute. In MRtrix, I > used the dwi2fod function. I can expand on this if you require more > details. In short, both gave me 4D volumes with 15 volumes in the 4th > dimension, as expected for lmax = 4. 
> > The first thing I noticed when I compared the two outputs was that the > Y(0,0) volume in Dipy was a single-value brain volume where every voxel > coefficient was equal to 0.2821 - which is literally the equation for > Y(0,0) = 1/sqrt(4*pi) - whereas the MRtrix Y(0,0) volume had different > coefficients for each voxel. I contacted them about this, and they did a > really good job explaining it to me (http://community.mrtrix.org/t/first-spherical-harmonic-coefficient-y-0-0-meaning/507/1). In short, > they said that Dipy normalizes to the unit integral whereas MRtrix doesn't. > I'm sorry -- I might be misunderstanding you -- but I am unable to reproduce this. When I run this example: http://nipy.org/dipy/examples_built/reconst_csd.html setting the `sh_order` keyword argument of `ConstrainedSphericalDeconvModel` to 4 (line 170), and then plotting the first slice of the `sh_coeff` attribute of the fit object, corresponding to Y(0, 0), I get a spatially varying function, as you would expect. Maybe you could tell me what exactly you are doing? Are you by any chance looking at the basis set itself (`csd_model.B_dwi[..., 0]`)? That should be a constant for Y(0,0) -- it's a sphere, after all. And indeed, it's all equal to 1/sqrt(4pi). Cheers, Ariel As I said, my main goal is to use mrregister, in particular with the > IIT HARDI template (https://www.nitrc.org/frs/?group_id=432), which has > the same MRtrix SH coefficient system (i.e. Y(0,0) is different for every > voxel). But to do that with Dipy, I imagine that I would need to have > similar SH coeff outputs for my data. I have tried playing with the Dipy > code to obtain similar results as with MRtrix (taking out the normalization > term in spherical_harmonics(m,n,theta,phi) in dipy/reconst/shm.py, etc.) > but I still keep getting a single-valued brain for the Y(0,0) term that > equals 1/sqrt(4*pi). 
> > I was wondering if someone knows where this difference in behavior is > coming from and where I could look to try to get MRtrix-type outputs for > the SH coeffs. > > Thank you for your time, > Eric > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stjeansam at gmail.com Wed Oct 19 12:48:48 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Wed, 19 Oct 2016 18:48:48 +0200 Subject: [Neuroimaging] [Dipy] discrepancies of SH coeffs with MRtrix In-Reply-To: References: Message-ID: <9fadabb3-ea96-f46f-6d36-543ef4d54140@gmail.com> You might both be right on that: csdeconv itself does not normalize, but peaks_from_model does offer that option, so that could be it, depending on how you called the function. As a side note, the 'mrtrix' basis is the MRtrix2 basis (it was written before MRtrix3), and MRtrix3 now uses an orthonormal basis, so the two differ by a small normalization factor, sqrt(2) if memory serves, but their wiki explains everything as well [1]. One other small thing: the way the coefficients are ordered in fnav/dipy and MRtrix is different (both for the tensor and the SH), so simply using one in the other will not work directly, but someone might have the required permutation figured out already, since the conversion is offered in various software packages. [1] It seems they changed the location of the doc; here it is: https://mrtrix.readthedocs.io/en/latest/concepts/orthonormal_basis.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrbago at gmail.com Thu Oct 20 17:36:49 2016 From: mrbago at gmail.com (Bago) Date: Thu, 20 Oct 2016 21:36:49 +0000 Subject: [Neuroimaging] [Dipy] discrepancies of SH coeffs with MRtrix In-Reply-To: References: Message-ID: Hi Eric, Just to be clear: when you call peaks_from_model, which model are you using? I'm asking because I believe the CsaOdfModel, the "constant solid angle" q-ball model [1], does produce normalized ODFs (Y(0, 0) = 1 / sqrt(4 * pi)). This model is conceptually a little different from the model implemented in MRtrix. Dipy also has a `ConstrainedSphericalDeconvModel`, dipy.reconst.csdeconv.ConstrainedSphericalDeconvModel. 
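The distinction can also be sanity-checked numerically without any diffusion data: project a function on the sphere onto Y(0,0) before and after normalizing it to unit integral (a plain-numpy sketch with a made-up test function; not Dipy or MRtrix code):

```python
import numpy as np

# Simple quadrature grid over the sphere (theta: azimuth, phi: inclination).
n_theta, n_phi = 400, 400
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
phi = np.linspace(0.0, np.pi, n_phi)
_, P = np.meshgrid(theta, phi)
dA = np.sin(P) * (2.0 * np.pi / n_theta) * (np.pi / (n_phi - 1))  # area element

f = 1.0 + 0.5 * np.cos(P) ** 2      # a made-up smooth, positive "ODF"
y00 = 1.0 / np.sqrt(4.0 * np.pi)    # the constant l = 0 harmonic

integral = np.sum(f * dA)           # total mass of f on the sphere
c00_raw = np.sum(f * y00 * dA)      # first SH coefficient of the raw f
c00_norm = np.sum((f / integral) * y00 * dA)  # same, after unit-integral normalization

print(np.isclose(c00_raw, y00 * integral))  # c00 tracks the integral of f
print(np.isclose(c00_norm, y00))            # normalized ODFs all share c00
```

So a model that returns unit-integral ODFs will always show a flat Y(0,0) volume, while the first coefficient of an unnormalized FOD varies from voxel to voxel with the integral of the ODF.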
This model is going to be more similar to MRtrix. Hope that helps. Bago [1] Aganj et al., "Reconstruction of the orientation distribution function in single- and multiple-shell q-ball imaging within constant solid angle." -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From delacyn at u.washington.edu Thu Oct 20 15:30:50 2016 From: delacyn at u.washington.edu (Nina de Lacy) Date: Thu, 20 Oct 2016 12:30:50 -0700 (PDT) Subject: [Neuroimaging] Python 2x or 3x? Message-ID: Hi there: I'm a new-to-Python visitor from MATLAB/SPM land. I'd like to build an application that is essentially a Python wrapper that integrates several APIs and can take SPM-generated masks as input. Does the group/list have any thoughts about Python 2.x vs 3.x and NiBabel in terms of integration with other APIs, integration with SPM, and what the community is using out there? Thanks Nina This message and any attached files might contain confidential information protected by federal and state law. The information is intended only for the use of the individual(s) or entities originally named as addressees. The improper disclosure of such information may be subject to civil or criminal penalties. If this message reached you in error, please contact the sender and destroy this message. Disclosing, copying, forwarding, or distributing the information by unauthorized individuals or entities is strictly prohibited by law. From grlee77 at gmail.com Fri Oct 21 12:00:59 2016 From: grlee77 at gmail.com (Gregory Lee) Date: Fri, 21 Oct 2016 12:00:59 -0400 Subject: [Neuroimaging] Python 2x or 3x? In-Reply-To: References: Message-ID: Hi Nina, I would recommend starting with Python 3.x at this point. Although it took several years, most scientific Python packages now support both 2.7 and 3.x, so there is no real downside to starting with 3. Nibabel is definitely the way to go for working with a range of neuroimaging file formats on the Python side. In terms of working across different APIs, that sounds like a fit for nipype, which already has wrappers for many software packages. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Fri Oct 21 18:41:54 2016 From: arokem at gmail.com (Ariel Rokem) Date: Fri, 21 Oct 2016 15:41:54 -0700 Subject: [Neuroimaging] Postdoc in data science and neuroengineering at the University of Washington, Seattle Message-ID: Jason Yeatman (http://depts.washington.edu/bdelab/) and Ariel Rokem (http://arokem.org/) are seeking scientists with a PhD in neuroscience, computer science, electrical engineering, statistics, psychology or related fields, for a collaborative postdoctoral position at the intersection of human neuroscience, data science and neuroengineering. 
Suitable candidates will have the opportunity to apply for the WRF postdoctoral fellowships in data science: http://escience.washington.edu/get-involved/postdoctoral-fellowships and/or in neuroengineering: http://uwin.washington.edu/post-docs/apply-post-docs/ Fellows appointed through these programs will be provided with annual salary support of $65,000 for up to two years and an additional stipend of $25,000 over the total period of the appointment that can be used for travel, equipment, software, undergraduate research assistants, or other research costs. The project focuses on the development of methods for analyzing multi-modal MRI data, and the application of these methods to questions pertaining to human brain development. The long-term goals of the project include development and maintenance of software for the analysis of large openly available datasets of human MRI, and the extraction of valuable information about the biological basis of human cognitive abilities from these data. This involves developing new algorithms for the analysis of diffusion MRI, tools for harnessing cloud-computing to analyze these datasets at scale, and interactive data visualizations (e.g., http://viz.afq-browser.org). The postdoc would have the opportunity to work within a large and international open-source development community (http://dipy.org), and would be encouraged to develop a portfolio of open and reproducible science. For inquiries please contact Prof. Yeatman (jyeatman at uw.edu) and Ariel Rokem (arokem at uw.edu). We will both be available to meet with potential candidates at the Society for Neuroscience meeting in San Diego. *The University of Washington is an affirmative action and equal opportunity employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, national origin, age, protected veteran or disabled status, or genetic information.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbpoline at gmail.com Fri Oct 21 12:20:50 2016 From: jbpoline at gmail.com (JB Poline) Date: Fri, 21 Oct 2016 09:20:50 -0700 Subject: [Neuroimaging] Fwd: 2 Post-Docs at University of Minnesota (USA) and ARAMISLab (France) In-Reply-To: References: Message-ID: Although not specifically Python, this may be of interest to some of you. Cheers, JB ---------- Forwarded message ---------- From: Olivier Colliot Date: 21 October 2016 at 05:37 Subject: 2 Post-Docs at University of Minnesota (USA) and ARAMISLab (France) To: Dear colleagues, Could you please help us spread the word about these positions? 1 postdoc in Medical Image Computing in Paris, France 1 postdoc in MRI physics and MRI acquisition in Minneapolis, USA Best regards Olivier and Pierre-François -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PostDoc_Brain7T_CMRR_CRCNS.pdf Type: application/pdf Size: 123892 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: postdoc_HIPLAY7.pdf Type: application/pdf Size: 123244 bytes Desc: not available URL: From eric.moulton.jr at gmail.com Tue Oct 25 05:28:51 2016 From: eric.moulton.jr at gmail.com (Eric Moulton) Date: Tue, 25 Oct 2016 11:28:51 +0200 Subject: [Neuroimaging] [Dipy] discrepancies of SH coeffs with MRtrix In-Reply-To: References: Message-ID: Hi everyone, Thanks to all those who answered me. It cleared a lot of stuff up for me. I spent some time playing with these functions to get the hang of it. 
Indeed, I performed the ConstrainedSphericalDeconvModel model and it does give me a spatially varying Y(0,0) volume, just like in MRtrix. That being said, I tried computing the SH_coeff volumes using similar algorithms in MRtrix and Dipy, and although I get close results, I do not get the same. From my understanding, Dipy seems to employ the Tax algorithm for calculating the response function, so that's what I tried to do in MRtrix. I started out with a 4D DWI file (eddy corrected, etc.), a brain mask, and the bvecs and bvals. The parameters marked with comments below are the ones I tried to set identically in each program. The only extra options that could be responsible for the differences are in Dipy, where I have to give an initial FA and MD value.

*************************
MRtrix code:

mrconvert dwi.nii.gz dwi.mif
mrconvert dwi_mask.nii.gz dwi_mask.mif
# matched parameters: -peak_ratio, -max_iters, -convergence, -lmax, -mask
dwi2response tax -peak_ratio 0.01 -max_iters 8 -convergence 0.001 -lmax 4 -mask dwi_mask.mif -fslgrad subj.bvecs subj.bvals -force dwi.mif response_tax.txt
dwi2fod -fslgrad subj.bvecs subj.bvals -lmax 4 -mask dwi_mask.mif -force csd dwi.mif response_tax.txt odf_tax.mif

Dipy code:

import os
import numpy as np
import nibabel as nib
from dipy.data import get_sphere
from dipy.reconst.shm import CsaOdfModel
from dipy.direction import peaks_from_model
from dipy.tracking import utils
from dipy.io import read_bvals_bvecs
from dipy.core.gradients import gradient_table
from dipy.reconst.csdeconv import recursive_response, ConstrainedSphericalDeconvModel

base_rep = '/path/to/files'
fimg = os.path.join(base_rep, 'dwi.nii.gz')
fbvecs = os.path.join(base_rep, 'subj.bvecs')
fbvals = os.path.join(base_rep, 'subj.bvals')
img = nib.load(fimg)
dwi = img.get_data()
fdwi_mask = os.path.join(base_rep, 'dwi_mask.nii.gz')
dwi_mask = nib.load(fdwi_mask).get_data()
dwi_mask = dwi_mask > 0
affine = img.affine
header = img.header
bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs)
gtab = gradient_table(bvals, bvecs)
sphere = get_sphere('symmetric724')

# matched parameters: mask, sh_order, peak_thr, iter, convergence
response = recursive_response(gtab, dwi, mask=dwi_mask, sh_order=4,
                              peak_thr=0.01, iter=8, convergence=0.001,
                              init_fa=0.05, init_trace=0.0021,
                              parallel=True, sphere=sphere)

csdmodel = ConstrainedSphericalDeconvModel(gtab, response, sh_order=4)
SH_coeff = csdmodel.fit(dwi, mask=dwi_mask).shm_coeff

# some trickery to put the coefficients in the same order as MRtrix
SH_coeff_mrtrix = np.zeros(SH_coeff.shape)
SH_coeff_mrtrix[..., 0] = SH_coeff[..., 0]
SH_coeff_mrtrix[..., 1:6] = SH_coeff[..., 5:0:-1]
SH_coeff_mrtrix[..., 6:15] = SH_coeff[..., 14:5:-1]
nib.save(nib.Nifti1Image(SH_coeff_mrtrix, affine, header),
         os.path.join(base_rep, 'odf_tax_dipy.nii.gz'))
*************************

When I open odf_tax.mif and odf_tax_dipy.nii.gz in mrview, I don't get the exact same thing, but both volumes are fairly similar upon visual inspection of each component and also of the ODFs (although in areas of crossing fibers they can sometimes be more or less "sharp", and sometimes MRtrix OR Dipy produces the sharper one depending on the location). Attached is a histogram of the differences of the images (MRtrix - Dipy), masked by the brain mask so we don't get any 0 voxels outside the brain. Each component is one of the Y(l,m) volumes in the MRtrix3 basis (i.e. 0th = Y(0,0), 1st = Y(2,-2), 2nd = Y(2,-1), etc.). I was wondering what you thought about my code to generate the spherical deconvolutions and whether these differences are worrying. Also, is it unrealistic to hope to get the exact same result using two different software packages with lots of behind-the-scenes calculations? Best regards, Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: mrtrix-dipy_lmax4.png
Type: image/png
Size: 164416 bytes
Desc: not available
URL: 

From stjeansam at gmail.com Tue Oct 25 11:42:33 2016
From: stjeansam at gmail.com (Samuel St-Jean)
Date: Tue, 25 Oct 2016 17:42:33 +0200
Subject: [Neuroimaging] [Dipy] discrepancies of SH coeffs with MRtrix
In-Reply-To: References: Message-ID:

Regarding that last comment, it is unfortunately unrealistic to get *exactly* the same result from two different software packages in general, and in this specific case there is also a genuine difference between the MRtrix and Dipy implementations (and in everything they use under the hood, of course). For the recursive calibration of the response function, the MRtrix implementation deliberately deviates from Chantal Tax's original paper (they describe the differences, and the reasons for them, in the relevant section of [1]), whereas the one in Dipy was written by Chantal herself, so it should match the theoretical description and the implementation used in ExploreDTI (which I assume was also written by Chantal).

[1] http://mrtrix.readthedocs.io/en/latest/concepts/response_function_estimation.html

2016-10-25 11:28 GMT+02:00 Eric Moulton :
> [quoted message trimmed]
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrei.ioan.barsan at gmail.com Tue Oct 25 14:29:41 2016
From: andrei.ioan.barsan at gmail.com (=?UTF-8?Q?Ioan=2DAndrei_B=C3=A2rsan?=)
Date: Tue, 25 Oct 2016 18:29:41 +0000
Subject: [Neuroimaging] Strange offset plotting MRIs with plot_glass_brain
Message-ID:

Dear members of the Python Neuroimaging list,

I'm having some difficulties analyzing some MRI data using nilearn, and I would be very grateful for some pointers.

I'm using nilearn (nipype==0.12.1).
Right now, I'm working on visualizing some things, and later I plan on doing age prediction using regression.

However, when I plot my own data using e.g. `nilearn.plotting.plot_glass_brain` I notice a strange offset. I've attached a screenshot showing what happens. This does not happen when using e.g. data from the sample OASIS dataset, in which case the brain aligns correctly with the template.

Could it be that the images use different coordinate system conventions? I've read about MNI vs. Talairach coordinates, and how they differ, but I'm not sure whether this really is the problem I'm having, or how I should proceed in order to gain more information.

My .nii files' headers don't seem to specify what kind of coordinates are being used. I've attached the contents of a sample header in case there's something there that I'm missing.

I would be very grateful for some advice. Please let me know if I should provide any other information!

Kind regards,
Andrei Bârsan

[image: Screen Shot 2016-10-25 at 7.41.55 PM.png]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screen Shot 2016-10-25 at 7.41.55 PM.png
Type: image/png
Size: 728007 bytes
Desc: not available
URL: 
-------------- next part --------------
object, endian='<'
sizeof_hdr     : 348
data_type      : b''
db_name        : b''
extents        : 16384
session_error  : 0
regular        : b'r'
dim_info       : 0
dim            : [  4 176 208 176   1   1   1   1]
intent_p1      : 0.0
intent_p2      : 0.0
intent_p3      : 0.0
intent_code    : none
datatype       : int16
bitpix         : 16
slice_start    : 0
pixdim         : [ 1.  1.  1.  1.  0.  1.  1.  1.]
vox_offset     : 0.0
scl_slope      : nan
scl_inter      : nan
slice_end      : 0
slice_code     : unknown
xyzt_units     : 0
cal_max        : 0.0
cal_min        : 0.0
slice_duration : 0.0
toffset        : 0.0
glmax          : 2965
glmin          : 0
descrip        : b' '
aux_file       : b' '
qform_code     : unknown
sform_code     : unknown
quatern_b      : 0.0
quatern_c      : 0.0
quatern_d      : 0.0
qoffset_x      : 0.0
qoffset_y      : 0.0
qoffset_z      : 0.0
srow_x         : [ 0.  0.  0.  0.]
srow_y         : [ 0.  0.  0.  0.]
srow_z         : [ 0.  0.  0.  0.]
intent_name    : b''
magic          : b'n+1'

From krzysztof.gorgolewski at gmail.com Tue Oct 25 14:43:36 2016
From: krzysztof.gorgolewski at gmail.com (Chris Gorgolewski)
Date: Tue, 25 Oct 2016 11:43:36 -0700
Subject: [Neuroimaging] Strange offset plotting MRIs with plot_glass_brain
In-Reply-To: References: Message-ID:

Yeah, it seems that this file is not in MNI space - did you do any coregistration/normalization prior to plotting?

On Tue, Oct 25, 2016 at 11:29 AM, Ioan-Andrei Bârsan <andrei.ioan.barsan at gmail.com> wrote:
> [quoted message trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bertrand.thirion at inria.fr Tue Oct 25 14:45:06 2016
From: bertrand.thirion at inria.fr (bthirion)
Date: Tue, 25 Oct 2016 20:45:06 +0200
Subject: [Neuroimaging] Strange offset plotting MRIs with plot_glass_brain
In-Reply-To: References: Message-ID: <65050d7b-b0cf-d204-41fb-96c019c7c4f7@inria.fr>

Dear Ioan-Andrei,

Your images are not spatially normalized, i.e. the positions in mm correspond neither to MNI nor to Talairach coordinates. They need at least a non-trivial translation to reach one of those coordinate systems; you probably want to revise the pipeline used to obtain these images.

Best,
Bertrand

On 25/10/2016 20:29, Ioan-Andrei Bârsan wrote:
> [quoted message trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ibmalone at gmail.com Mon Oct 31 11:54:38 2016
From: ibmalone at gmail.com (Ian Malone)
Date: Mon, 31 Oct 2016 15:54:38 +0000
Subject: [Neuroimaging] Slow node starts with nipype
Message-ID:

Hi,

I've been given some nipype workflows to run, and I find that they seem to go very slowly in between execution steps. For example, I'm currently looking at the message:

161031-15:42:50,278 workflow INFO:
Running: pm_scale -i /var/lib/midas/data/lha1946/images/nii/original/7938.nii.gz -o /var/drc/scratch1/malone/dti-test-masks-20161031/outgif/13839424_01_PETMR_20161019-wd/dmri_workflow/susceptibility_correction_with_fm/pm_scale/7938_scaled.nii.gz

But the command in question doesn't seem to be running (this is a fairly quick command if executed directly).
There's a similar hold-up for other steps and, with quite a long multi-step pipeline, a lot of time seems to be spent doing nothing (watching top, similarly a list of fslmaths nodes all launches, the fslmaths processes show up briefly in top, then there's a delay before anything more happens).

Does anyone have suggestions for what might be happening? This is using the MultiProc engine; storage is NFS.

-- 
imalone

From satra at mit.edu Mon Oct 31 12:02:27 2016
From: satra at mit.edu (Satrajit Ghosh)
Date: Mon, 31 Oct 2016 12:02:27 -0400
Subject: [Neuroimaging] Slow node starts with nipype
In-Reply-To: References: Message-ID:

dear ian,

there is a config option for how long to sleep between nodes, whose default was changed from 2 s to 60 s. you can reset that option to speed things up:

poll_sleep_duration

http://nipype.readthedocs.io/en/latest/users/config_file.html

you can set it like this:

wf_variable.config['execution']['poll_sleep_duration'] = 2

cheers,

satra

On Mon, Oct 31, 2016 at 11:54 AM, Ian Malone wrote:
> [quoted message trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jbpoline at gmail.com Mon Oct 31 12:07:59 2016
From: jbpoline at gmail.com (JB Poline)
Date: Mon, 31 Oct 2016 09:07:59 -0700
Subject: [Neuroimaging] Slow node starts with nipype
In-Reply-To: References: Message-ID:

Hi Satra,

I'm just curious: what was the rationale for increasing this time from 2 s to 60 s?

cheers,
JB

On 31 October 2016 at 09:02, Satrajit Ghosh wrote:
> [quoted message trimmed]

From satra at mit.edu Mon Oct 31 12:26:53 2016
From: satra at mit.edu (Satrajit Ghosh)
Date: Mon, 31 Oct 2016 12:26:53 -0400
Subject: [Neuroimaging] Slow node starts with nipype
In-Reply-To: References: Message-ID:

hi jb,

it was because of people's usage patterns on clusters. however, we will likely change the default back down in the upcoming release.

cheers,

satra

On Mon, Oct 31, 2016 at 12:07 PM, JB Poline wrote:
> [quoted message trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ibmalone at gmail.com Mon Oct 31 12:37:21 2016
From: ibmalone at gmail.com (Ian Malone)
Date: Mon, 31 Oct 2016 16:37:21 +0000
Subject: [Neuroimaging] Slow node starts with nipype
In-Reply-To: References: Message-ID:

On 31 October 2016 at 16:02, Satrajit Ghosh wrote:
> [quoted message trimmed]

That is so much better, thank you!

-- 
imalone
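[Editor's note: the fix discussed in this thread can be sketched as below. Only the `poll_sleep_duration` key and the `['execution']` config section come from the thread; the workflow name and the commented-out plugin arguments are hypothetical placeholders for your own pipeline, so treat this as an illustrative config fragment rather than a tested recipe.]

```python
# Sketch: lowering nipype's polling interval between node executions.
import nipype.pipeline.engine as pe

wf = pe.Workflow(name='dmri_workflow')  # stand-in for your real workflow

# Restore the old 2-second polling interval (the default became 60 s).
wf.config['execution'] = {'poll_sleep_duration': 2}

# Alternatively, set it globally in ~/.nipype/nipype.cfg so it applies
# to every run:
#
#   [execution]
#   poll_sleep_duration = 2

# ... build the workflow, then run as before, e.g.:
# wf.run(plugin='MultiProc', plugin_args={'n_procs': 4})
```

Setting the option on the workflow object keeps the change scoped to one pipeline, while the config-file route avoids editing scripts you were handed.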