From tyarkoni at gmail.com Mon Apr 2 11:23:49 2018 From: tyarkoni at gmail.com (Tal Yarkoni) Date: Mon, 2 Apr 2018 10:23:49 -0500 Subject: [Neuroimaging] Multiple postdoctoral positions available at the University of Texas at Austin Message-ID: Dear All, Please see the ad below. All of these positions are expected to involve extensive data analysis and/or scientific software development in Python. ----------------------------------------------------------------- Up to three postdoctoral fellow positions are available in the Department of Psychology at UT-Austin. Positions will primarily be based in the Psychoinformatics Lab (PI, Dr. Tal Yarkoni; http://pilab.psy.utexas.edu), but postdocs will have extensive opportunities to collaborate with other members of the neuroimaging, data science, and informatics communities at UT-Austin. Collaborating PIs include Dr. Cameron Craddock (Diagnostic Medicine, Dell Medical School), Dr. Alex Huth (Neuroscience and Computer Science), and Dr. Elliot Tucker-Drob (Psychology), among others. The specific duties associated with these positions are flexible, but each will likely include at least one of the following areas of primary focus: * Development and application of the next generation of the Neurosynth framework for large-scale meta-analysis (neurosynth.org); * Development of new methods and tools for studying the neural bases of language and conceptual understanding using naturalistic fMRI paradigms (in collaboration with Dr. Alex Huth); * Application of machine learning techniques to large neuroimaging and genomic databases in order to advance prediction and understanding of age-related changes in brain structure and function (in collaboration with Dr. Elliot Tucker-Drob); * Development of data sharing infrastructure and reproducible neuroimaging analysis pipelines for the UT and broader neuroimaging communities (in collaboration with Dr. Cameron Craddock). 
Applicants should have a strong background in fMRI methods, scientific software development, and/or machine learning. Prior background in neuroscience or psychology is strongly preferred, as is experience working in Python. Funding is available immediately, but start dates are flexible and may be as late as January 2019. Funding is available for up to five years. Applicants must formally hold a Ph.D. degree before commencing their duties or else UT-Austin will refuse to pay you, and then you would be sad. To apply, please email a CV, a brief (1-page) statement of research interests, and contact information for at least 3 references to Tal Yarkoni (tyarkoni at utexas.edu). Questions about the positions are welcome. Applications will be considered until all positions are filled. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryuvaraj at ntu.edu.sg Mon Apr 2 22:25:32 2018 From: ryuvaraj at ntu.edu.sg (Yuvaraj Rajamanickam (Dr)) Date: Tue, 3 Apr 2018 02:25:32 +0000 Subject: [Neuroimaging] PRNI 2018: THE SUBMISSION DEADLINE HAS BEEN EXTENDED TO 18 APRIL, 2018 !!!! Message-ID: <3E9B0165C01BA047A1AFFBA5B9161C415E37D3CF@EXCHMBOX34.staff.main.ntu.edu.sg> PRNI 2018: THE SUBMISSION DEADLINE HAS BEEN EXTENDED TO 18th APRIL, 2018 !!!! 
******* please accept our apologies for cross-posting ******* -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CALL FOR PAPERS AND TUTORIALS PRNI 2018: 8th International Workshop on Pattern Recognition in Neuroimaging to be held 12-14 June 2018 at the National University of Singapore, Singapore www.prni.org - @PRNIworkshop - www.facebook.com/PRNIworkshop/ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The 8th International Workshop on Pattern Recognition in Neuroimaging (PRNI) will be held at the Centre for Life Sciences Auditorium, National University of Singapore, Singapore on June 12-14, 2018. Pattern recognition techniques are an important tool for neuroimaging data analysis. These techniques help to elucidate normal and abnormal brain function, cognition and perception, and anatomical and functional brain architecture; to identify biomarkers for diagnosis and personalized medicine; and to serve as a scientific tool for deciphering the neural mechanisms underlying human cognition. The International Workshop on Pattern Recognition in Neuroimaging (PRNI) aims to: (1) foster dialogue between developers and users of cutting-edge analysis techniques in order to find matches between analysis techniques and neuroscientific questions; (2) showcase recent methodological advances in pattern recognition algorithms for neuroimaging analysis; and (3) identify challenging neuroscientific questions in need of new analysis approaches. Authors should prepare full papers with a maximum length of 4 pages (two-column IEEE style) for double-blind review. The PAPER submission deadline has been extended until WEDNESDAY, 18 APRIL 2018, 11:59 pm SGT. Accepted manuscripts will be assigned to either an oral or a poster session.
All accepted manuscripts will be included in the IEEE Xplore digital library. As in previous years, in addition to full-length papers PRNI will also accept short abstracts (word count not including the title, author list, tables, captions, or references) for poster presentation. Closing date: 04-MAY-2018 Open call for tutorial proposals: Finally, this year PRNI has an open call for tutorial proposals. A tutorial can take the form of a 2h, 4h, or whole-day event aimed at demonstrating a computational technique, software tool, or specific concept. Tutorial proposals featuring hands-on demonstrations and promoting diversity (e.g. gender, background, institution) will be preferred. PRNI will cover conference registration fees for up to two tutors per accepted program. The TUTORIAL submission deadline has been extended until WEDNESDAY, 18 APRIL 2018, 11:59 pm SGT. Please see www.prni.org and follow @PRNIworkshop and www.facebook.com/PRNIworkshop/ for news and details. ________________________________ CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents. Towards a sustainable earth: Print only when necessary. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruben.branco at di.fc.ul.pt Tue Apr 3 09:03:56 2018 From: ruben.branco at di.fc.ul.pt (Ruben Branco) Date: Tue, 3 Apr 2018 14:03:56 +0100 Subject: [Neuroimaging] Troubles with plotting of data from fMRI Message-ID: Dear all, I was recently handed a task with a neuroimaging component, completely outside my own field, which is NLP. I have been trying to plot activation in the brain from fMRI data using the package Nilearn, but to no avail.
The data structure is in an old format, SPM99, containing an array of vectors, each vector holding 3 integers, which are the coordinates in voxel space of a voxel. I also have the affine matrix and the activation values for each voxel. When trying to visualize it with glass brain plotting, I am unsure where to supply the activation values, and surprisingly, even without accounting for them, providing just the voxel data and the affine matrix yields a single line, as if the way I am providing the image only accounts for one slice. Despite it being an SPM99 image, I have tried converting it to Nifti, and the output was the same line. I was wondering whether anyone could spot the culprit in the process that is causing these issues, and perhaps point me in the right direction with relevant articles. The articles I have read regarding this issue, and the specification of the Nifti format, were unhelpful, perhaps (or definitely!) because of my inexperience with the field. I apologize if this sounds like a very naive plea for help; it's something I've never dealt with before, that's for sure! Thank you so much for your time, Ruben Branco University of Lisbon NLX - Natural Language and Speech Group, Department of Informatics Faculdade de Ciências From christophe at pallier.org Tue Apr 3 09:23:36 2018 From: christophe at pallier.org (Christophe Pallier) Date: Tue, 3 Apr 2018 15:23:36 +0200 Subject: [Neuroimaging] Troubles with plotting of data from fMRI In-Reply-To: References: Message-ID: If I understand correctly, you do not have the map as an image (analyze: img, hdr, or nifti: nii), but only the values of a stat for suprathreshold voxels, as a series of triplets (coordinates) and values. It looks like you only have the SPM.mat file, but it would be nicer to have the spmT*.{img,hdr} and con*.{img,hdr} files. Check first if you can have those.
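A minimal sketch of rebuilding a plottable volume from such coordinate/value triplets (the coords, values, and affine arrays here are made up for illustration; the nibabel/nilearn step is shown as comments because it needs those packages and a display):

```python
import numpy as np

# Made-up stand-ins for the data described above (assumed names):
# coords: (N, 3) integer voxel indices; values: (N,) stat values;
# affine: 4x4 voxel-to-world matrix
coords = np.array([[10, 12, 14], [11, 12, 14], [12, 13, 15]])
values = np.array([3.1, 4.2, 2.7])
affine = np.eye(4)

# Fill an empty 3D volume at the listed voxel indices
shape = tuple(coords.max(axis=0) + 1)
vol = np.zeros(shape)
vol[tuple(coords.T)] = values

# With the volume in hand, wrap it as a Nifti image and plot:
# import nibabel as nib
# from nilearn import plotting
# img = nib.Nifti1Image(vol, affine)
# plotting.plot_glass_brain(img, colorbar=True)
```

With real data, the volume shape should come from the original image dimensions (if known) rather than from the maximum coordinate.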
If you only have the SPM.mat file, you could create a Nifti image by creating a 3D numpy array and filling the voxels with the values. Then, once you have the image, you will be able to use plot_glass_brain. Hope this helps and is not too off the mark. On Tue, Apr 3, 2018 at 3:03 PM, Ruben Branco wrote: > [...]
-- Christophe Pallier INSERM-CEA Cognitive Neuroimaging Lab, Neurospin, bat 145, 91191 Gif-sur-Yvette Cedex, France Tel: 00 33 1 69 08 79 34 Personal web site: http://www.pallier.org Lab web site: http://www.unicog.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruben.branco at di.fc.ul.pt Tue Apr 3 10:29:10 2018 From: ruben.branco at di.fc.ul.pt (Ruben Branco) Date: Tue, 3 Apr 2018 15:29:10 +0100 Subject: [Neuroimaging] Troubles with plotting of data from fMRI In-Reply-To: References: Message-ID: <83c10925-d91c-f08e-fe42-67b1d2e57629@di.fc.ul.pt> Dear Christophe, I indeed only have the SPM.mat and no access to the maps you mentioned. I did, however, revisit the documentation and realized that I was misinterpreting some of the information provided with the SPM.mat; I built the 3D array just as you recommended, and it indeed worked. Thank you very much for your help! Best Regards, Ruben Branco University of Lisbon NLX - Natural Language and Speech Group, Department of Informatics Faculdade de Ciências On 04/03/2018 02:23 PM, Christophe Pallier wrote: > [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ksitek at mit.edu Tue Apr 3 14:03:01 2018 From: ksitek at mit.edu (Kevin R Sitek) Date: Tue, 3 Apr 2018 18:03:01 +0000 Subject: [Neuroimaging] dipy target streamline filtering Message-ID: <95229795-5100-4D44-8B55-5E71EDB2DBE5@mit.edu> Hello, I'm generating whole-sample streamlines with dipy's CSD model. I'd like to also filter streamlines by region of interest, but when running dipy.tracking.utils.target I'm getting "IndexError: streamline has points that map to negative voxel indices" in the helper function _to_voxel_coordinates. [I originally posted this issue on neurostars: https://neurostars.org/t/filtering-streamlines-with-dipy-target/1493] Since I'm using the same affine as the DWI data the streamlines were generated from, and since the target mask looks fine relative to the diffusion image, I'm wondering how the negative voxel indices could arise. I am running target like this:

from dipy.tracking.utils import target

# read in streamlines
from nibabel import trackvis as tv
streams_in_orig, hdr = tv.read(streamlines)
streams_in = list(streams_in_orig)  # streams_in is [[array, None, None], ...]
streams = []
for s in streams_in:
    streams.append(s[0])

target_mask_bool = np.array(target_mask.get_data(), dtype=bool, copy=True)
target_sl_generator = target(streams, target_mask_bool, affine, include=True)
target_streamlines = list(target_sl_generator)

where the streamlines .trk file had been generated as below:

eu = EuDX(peaks.gfa, peaks.peak_indices[..., 0], odf_vertices=sphere.vertices, seeds=10**6, ang_thr=45)
streamlines = ((sl, None, None) for sl in eu)
hdr = nib.trackvis.empty_header()
hdr['voxel_size'] = fa_img.get_header().get_zooms()[:3]
hdr['voxel_order'] = 'LAS'
hdr['dim'] = FA.shape[:3]
sl_fname = os.path.abspath('streamline.trk')
nib.trackvis.write(sl_fname, streamlines, hdr, points_space='voxel')

I have the same issue when hdr['voxel_order'] = 'RAS' and when points_space=None. Any pointers would be appreciated! Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryuvaraj at ntu.edu.sg Wed Apr 4 10:23:48 2018 From: ryuvaraj at ntu.edu.sg (Yuvaraj Rajamanickam (Dr)) Date: Wed, 4 Apr 2018 14:23:48 +0000 Subject: [Neuroimaging] PRNI 2018: THE SUBMISSION DEADLINE HAS BEEN EXTENDED TO 18 APRIL, 2018 !!!! Message-ID: <3E9B0165C01BA047A1AFFBA5B9161C415E37DC44@EXCHMBOX34.staff.main.ntu.edu.sg> PRNI 2018: THE SUBMISSION DEADLINE HAS BEEN EXTENDED TO 18th APRIL, 2018 !!!!
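Regarding the negative-voxel-index IndexError in the dipy target question above: this error typically signals a coordinate-space mismatch between the streamline points and the affine passed to target, since target maps each point to voxel indices through the inverse of that affine. A synthetic illustration (every shape and number here is invented, not taken from the data in the thread):

```python
import numpy as np

# An invented 2 mm isotropic image affine (voxel indices -> world mm)
affine = np.array([[2., 0., 0., -62.],
                   [0., 2., 0., -62.],
                   [0., 0., 2., -38.],
                   [0., 0., 0., 1.]])

# A streamline point expressed in world (mm) coordinates
pt_world = np.array([-30.0, 10.0, 4.0])

# Consistent spaces: invert the affine to recover voxel indices
inv = np.linalg.inv(affine)
idx_ok = inv[:3, :3] @ pt_world + inv[:3, 3]  # [16., 36., 21.] -- all valid

# Mismatched spaces: treating world coordinates as if they were already
# voxel indices leaves a negative value, the situation the helper
# _to_voxel_coordinates rejects
idx_bad = pt_world
```

So one thing worth checking is whether the points returned by trackvis.read are in the same space (voxel vs. scanner mm) as the affine handed to target; trackvis stores points in voxel-mm by default, so the read and write points_space settings need to agree with the affine used downstream.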
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From ryuvaraj at ntu.edu.sg Sat Apr 7 09:47:30 2018 From: ryuvaraj at ntu.edu.sg (Yuvaraj Rajamanickam (Dr)) Date: Sat, 7 Apr 2018 13:47:30 +0000 Subject: [Neuroimaging] PRNI 2018: THE SUBMISSION DEADLINE HAS BEEN EXTENDED TO 18th APRIL, 2018 Message-ID: <3E9B0165C01BA047A1AFFBA5B9161C415E37EE3B@EXCHMBOX34.staff.main.ntu.edu.sg> PRNI 2018: THE SUBMISSION DEADLINE HAS BEEN EXTENDED TO 18th APRIL, 2018 ________________________________ CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents.
Towards a sustainable earth: Print only when necessary. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ludob60 at gmail.com Mon Apr 9 05:10:08 2018 From: ludob60 at gmail.com (ludovico coletta) Date: Mon, 9 Apr 2018 11:10:08 +0200 Subject: [Neuroimaging] Funded PhD @IIT/CIMeC (Rovereto) Message-ID: Dear Python users, The Functional Neuroimaging lab at the Istituto Italiano di Tecnologia (https://www.iit.it/research/lines/functional-neuroimaging), Rovereto (Italy), invites applications for one PhD scholarship to investigate the dynamics of functional connectivity under resting conditions and upon cell-type-selective neurostimulation. The successful candidate will have an MSc in neuroscience, biotechnology, computer science, physics, or equivalent. Proficiency in image processing and analysis (Matlab, R, Python) and/or in vivo electrophysiology is highly recommended. This four-year studentship aims to provide the student with thorough training in conducting research at the interface of biomedical imaging, computational image analysis, and experimental neuroscience. The studentship is part of the international doctoral school in cognitive and brain sciences, in partnership with the University of Trento (http://web.unitn.it/en/cimec/). Final admission to the doctoral school entails a competitive selection process, as per the school regulations (http://web.unitn.it/en/drcimec/10140/admission-doctoral-school-cognitive-and-brain-sciences). The Istituto Italiano di Tecnologia (IIT) is a private-law foundation created with the objective of promoting Italy's technological development and higher education in science and technology. Research at IIT is interdisciplinary and addresses basic and applied science through the development of novel technical applications.
The Functional Neuroimaging lab is located at the Center for Neuroscience and Cognitive Sciences (CNCS) @UNITN in Rovereto, Italy, one of the research nodes set up by IIT. The CNCS is an interdisciplinary research center dedicated to the investigation of the brain at multiple scales. Please send your application (full CV, two academic referees, copy of master's degree thesis, statement of research interest) by email to alessandro.gozzi at iit.it *no later than May 26th, 2018*. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fick.rutger at gmail.com Wed Apr 11 04:27:26 2018 From: fick.rutger at gmail.com (Rutger Fick) Date: Wed, 11 Apr 2018 10:27:26 +0200 Subject: [Neuroimaging] [DIPY] propagator anisotropy estimation using MAP(L)MRI In-Reply-To: References: Message-ID: Hi Ping, Clearly the plain GCV is not regularizing sufficiently in the very anisotropic areas (e.g. corpus callosum). It looks like fixing the regularization weight to 0.2 (the PA_laplacian_weighted0.2 map) is sufficient to fix this problem. Be sure to also include positivity. Since fixing the weight is also the fastest approach, I suggest you proceed to fit both your populations this way and see if you are able to answer your research questions. Otherwise, your approach of running GCV with a minimum weight will also work, but you'll have to find what minimum weight threshold works for your subjects. Other suggestions: - Denoising your data and correcting for the Rician noise bias is good practice.
The currently available state-of-the-art MP-PCA approach that I know of is in MRtrix: http://mrtrix.readthedocs.io/en/latest/dwi_preprocessing/denoising.html - If you don't like using MRtrix for some reason, Dipy also has other denoising approaches: http://nipy.org/dipy/examples_built/denoise_localpca.html Finally, if you're interested in trying other microstructure estimation methods on your data, I suggest you also take a look at our recently released "diffusion microstructure imaging in python" (dmipy) package: https://github.com/AthenaEPI/dmipy/ Using dmipy, you can design and fit basically any diffusion microstructure model in the literature to your data in a few lines of code. I suggest you try, for example, Kaden et al.'s recent Multi-Compartment Microscopic Diffusion Imaging with your data, see example, which is very fast to fit as a quick experiment. Kind regards and let me know how it goes, Rutger On 29 March 2018 at 18:02, Ping-Hong Yeh wrote: > Hi Rutger, > > We have some bad PA maps created using default settings, and I would like > to hear your opinions on improving the fitting. > > Attached are the screenshots of the PA_GCV, norm_laplacian, L_opt and > PA_laplacian_weighted0.2 maps. > I am currently running the fitting using 0.05 for the minimum bound of > the GCV, but I am not sure if that would help. > > In order to do comparisons between controls and the disease population, we > need to make sure that the same fitting parameters are applied for the > MAPMRI fitting, to avoid any biases. Do you have suggestions regarding > this matter? > > Thank you. > > Ping > > On Tue, Jan 23, 2018 at 7:42 AM, Rutger Fick > wrote: > >> Hi Ping, >> >> Salt and pepper noise is not a good sign (I just didn't see it so much >> on the second set of slices you sent). Spotting badly estimated voxels is >> typically pretty easy - RTOP and many others can have negative or huge >> values, which typically come from oscillations in the signal extrapolation.
>> You can often see these as bright spots in the laplacian norm. >> >> If you go through the data and see that the salt and pepper noise corresponds >> to unusually high norms, increasing the laplacian minimum weight in the >> code as I told you will usually resolve it (or fixing it to a value like >> 0.05 or 0.1, see what works without overdoing it). >> >> Best, >> Rutger >> >> >> >> >> On 23 Jan 2018 03:06, "Ping-Hong Yeh" wrote: >> >> Hi Rutger, >> >> Thank you very much for the detailed reply. >> >> I guess I do not need to worry about those salt-pepper dots? >> >> Would you recommend outputting the laplacian norm and laplacian_weighted maps >> and going through the images for each data set? Any tips for realizing that something >> has really gone wrong when looking at the propagator anisotropy map? >> >> Best, >> >> Ping >> >> On Jan 22, 2018 6:55 PM, "Rutger Fick" wrote: >> >>> Hi Ping, >>> >>> In my experience, badly estimated voxels typically have a super high >>> laplacian norm and a very low estimated laplacian weight (lopt). >>> Looking at these results I would say things actually look pretty good! >>> >>> Getting the best results is always a tricky balance of optimally >>> regularizing: not fitting the noise but also not over-regularizing, which >>> is why the GCV option is nice. >>> But, in rare cases it does mess up. So, if you want to give the GCV a >>> bit less freedom to go low (to be on the safe side) you can increase the >>> minimum bound of the GCV optimization in line 2272 of the code. >>> >>> There are many ways to speed up the code I gave you if you want to put in >>> the effort ;-) Using parallel processing is not standardly implemented in >>> dipy, but maybe you can hack it somehow. >>> You can also set laplacian_weight = 0.1 or something to avoid GCV, >>> but it won't make a huge difference. I only ever used this code to do >>> research - so speed was not really a concern. >>> >>> Anyway, hope this all helped!
Let me know if everything works out, >>> >>> Best, >>> Rutger >>> >>> On 19 January 2018 at 22:03, Ping-Hong Yeh >>> wrote: >>> >>>> Hi Rutger, >>>> >>>> Attached please find the snapshots of the norm_of_laplacian_signal, lopt, >>>> and pa maps of the same data set I used earlier. >>>> >>>> BTW, is there a way to speed up the mapmri_pa processing? Will >>>> OpenMP help? >>>> >>>> Thank you, >>>> >>>> ping >>>> >>>> On Fri, Jan 19, 2018 at 1:25 PM, Rutger Fick >>>> wrote: >>>> >>>>> Hi Ping, >>>>> >>>>> So far, so good. >>>>> In my opinion the TORTOISE PA reconstruction looks a bit >>>>> flat/over-regularized - but then again I don't know what kind of >>>>> regularization they implemented for themselves. >>>>> The PA of the implementation I gave you seems to give more consistent >>>>> contrast for different tissue configurations - which is good - but looks >>>>> like it under-regularizes in some individual voxels (the salt-pepper noise >>>>> in the PA/RTOP). >>>>> >>>>> To check if this is the case, can you show me the >>>>> mapfit_L.norm_of_laplacian_signal() and mapfit_L.lopt maps? >>>>> >>>>> Rutger >>>>> >>>>> >>>>> >>>>> >>>>> On 19 January 2018 at 17:43, Ping-Hong Yeh >>>>> wrote: >>>>> >>>>>> Hi Rutger, >>>>>> >>>>>> Just to give you an update of the results (see the attached snapshots) >>>>>> using GCV-weighted Laplacian regularization for the MAPMRI >>>>>> estimation. >>>>>> >>>>>> The other PA map was calculated using TORTOISE. I have also >>>>>> attached RTOP maps calculated using DIPY with and without >>>>>> GCV-weighted Laplacian regularization. >>>>>> >>>>>> Compared to TORTOISE, the PA values from the GCV-weighted >>>>>> Laplacian regularization method are relatively smaller, >>>>>> particularly over the regions with less dense white matter.
>>>>>> >>>>>> For RTOP images, I am not sure whether the GCV-weighted Laplacian >>>>>> regularization method outperforms the one without GCV-weighted >>>>>> Laplacian regularization. >>>>>> >>>>>> Any comments? >>>>>> Thank you, >>>>>> >>>>>> ping >>>>>> >>>>>> On Wed, Jan 17, 2018 at 7:48 PM, Rutger Fick >>>>>> wrote: >>>>>> >>>>>>> Hi Ping, >>>>>>> >>>>>>> If it's still running and gave only that error then probably it was >>>>>>> just a single voxel that failed and the rest is working. However, I >>>>>>> recommend you first try to fit a smaller dataset (just a few voxels or a >>>>>>> single slice) just to check that the results make sense. >>>>>>> >>>>>>> I should mention that the code I gave you is slower than Dipy's >>>>>>> public version for reasons I won't get into, so don't worry if you have to >>>>>>> wait longer than before. >>>>>>> >>>>>>> Best, >>>>>>> Rutger >>>>>>> >>>>>>> On 18 Jan 2018 00:58, "Ping-Hong Yeh" wrote: >>>>>>> >>>>>>>> Hi Rutger, >>>>>>>> >>>>>>>> Thanks again for the prompt reply. >>>>>>>> >>>>>>>> Adding "mask" to mapmri has fixed the error; however, another >>>>>>>> error shows up, >>>>>>>> >>>>>>>> mapfit_L = map_model_L.fit(data,mask=data[..., 0]>0) >>>>>>>> dipy/core/geometry.py:129: RuntimeWarning: invalid value >>>>>>>> encountered in true_divide >>>>>>>> theta = np.arccos(z / r) >>>>>>>> dipy/reconst/mapmri_pa.py:364: UserWarning: The MAPMRI positivity >>>>>>>> constraint depends on CVXOPT (http://cvxopt.org/). CVXOPT is >>>>>>>> licensed under the GPL (see: http://cvxopt.org/copyright.html) and >>>>>>>> you may be subject to this license when using the positivity constraint. >>>>>>>> warn(w_s) >>>>>>>> dipy/reconst/mapmri_pa.py:413: UserWarning: Optimization did not >>>>>>>> find a solution >>>>>>>> warn('Optimization did not find a solution') >>>>>>>> Error: Couldn't find per display information >>>>>>>> >>>>>>>> >>>>>>>> It is still running though. Should I stop the run? >>>>>>>> >>>>>>>> Thank you.
>>>>>>>> ping >>>>>>>> >>>>>>>> On Tue, Jan 16, 2018 at 7:18 PM, Rutger Fick >>>>>>> > wrote: >>>>>>>> >>>>>>>>> Hi Ping, >>>>>>>>> >>>>>>>>> Reading the error messages, it looks like you're fitting a masked >>>>>>>>> voxel. The following error: >>>>>>>>> >>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:389: >>>>>>>>> RuntimeWarning: invalid value encountered in divide >>>>>>>>> data = np.asarray(data / data[self.gtab.b0s_mask].mean()) >>>>>>>>> >>>>>>>>> says you're dividing by either zero or NaN, which means your b0 >>>>>>>>> value of that voxel was zero (or you had no b0 values possibly). Note that >>>>>>>>> mapmri needs at least one b0 measurement. >>>>>>>>> I recommend you check if it works when you fit a voxel that you >>>>>>>>> know for sure is in white matter. If it works, you can do something like >>>>>>>>> map_model_L.fit(data, mask=data[..., 0]>0) to use a mask that >>>>>>>>> only fits if the first measured DWI is positive (assuming your first >>>>>>>>> measurement is a b0). >>>>>>>>> >>>>>>>>> Best, >>>>>>>>> Rutger >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On 16 January 2018 at 23:46, Ping-Hong Yeh >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hi Rutger, >>>>>>>>>> >>>>>>>>>> I got an error running the map_model.fit using mapmri_pa. 
Here is >>>>>>>>>> the script I used, >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> map_model_L = mapmri_pa.MapmriModel(gtab, >>>>>>>>>> radial_order=radial_order, >>>>>>>>>> laplacian_regularization=True, # >>>>>>>>>> this regularization enhances reproducibility of estimated q-space indices >>>>>>>>>> by imposing smoothness >>>>>>>>>> laplacian_weighting="GCV", # this >>>>>>>>>> makes it use generalized cross-validation to find the best regularization >>>>>>>>>> weight >>>>>>>>>> positivity_constraint=True) # >>>>>>>>>> this ensures the estimated PDF is positive >>>>>>>>>> >>>>>>>>>> mapfit_L = map_model_L.fit(data) >>>>>>>>>> >>>>>>>>>> , and the error message, >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> /Library/Python/2.7/site-packages/dipy/core/geometry.py:129: >>>>>>>>>> RuntimeWarning: invalid value encountered in true_divide >>>>>>>>>> theta = np.arccos(z / r) >>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:364: >>>>>>>>>> UserWarning: The MAPMRI positivity constraint depends on CVXOPT >>>>>>>>>> (http://cvxopt.org/). CVXOPT is licensed under the GPL (see: >>>>>>>>>> http://cvxopt.org/copyright.html) and you may be subject to this >>>>>>>>>> license when using the positivity constraint.
>>>>>>>>>> warn(w_s) >>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:389: >>>>>>>>>> RuntimeWarning: invalid value encountered in divide >>>>>>>>>> data = np.asarray(data / data[self.gtab.b0s_mask].mean()) >>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:413: >>>>>>>>>> UserWarning: Optimization did not find a solution >>>>>>>>>> warn('Optimization did not find a solution') >>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:444: >>>>>>>>>> UserWarning: Optimization did not find a solution >>>>>>>>>> warn('Optimization did not find a solution') >>>>>>>>>> Traceback (most recent call last): >>>>>>>>>> File "", line 1, in >>>>>>>>>> File "/Library/Python/2.7/site-packages/dipy/reconst/multi_voxel.py", line 33, in new_fit >>>>>>>>>> fit_array[ijk] = single_voxel_fit(self, data[ijk]) >>>>>>>>>> File "/Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py", line 465, in fit >>>>>>>>>> coef_iso = coef_iso / sum(coef_iso * self.Bm_iso) >>>>>>>>>> UnboundLocalError: local variable 'coef_iso' referenced before >>>>>>>>>> assignment >>>>>>>>>> >>>>>>>>>> Any suggestions? >>>>>>>>>> >>>>>>>>>> Thank you. >>>>>>>>>> >>>>>>>>>> ping >>>>>>>>>> >>>>>>>>>> On Fri, Jan 12, 2018 at 6:24 PM, Rutger Fick < >>>>>>>>>> fick.rutger at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Ping, >>>>>>>>>>> >>>>>>>>>>> Attached is the mapmri code that also has PA, just put it in the >>>>>>>>>>> dipy/reconst/ folder (where the current mapmri.py file also is) and run >>>>>>>>>>> "python setup.py install" from dipy's main folder. That should make it >>>>>>>>>>> usable in the same way as the current mapmri module. >>>>>>>>>>> Note that it's based on an old implementation that still works >>>>>>>>>>> with the "cvxopt" optimizer package, so you'll have to install cvxopt to >>>>>>>>>>> make it run.
>>>>>>>>>>> >>>>>>>>>>> I recommend you use the model with both laplacian regularization >>>>>>>>>>> and positivity constraint, as this gives the best results in general. >>>>>>>>>>> >>>>>>>>>>> from dipy.reconst import mapmri_pa >>>>>>>>>>> mapmod = mapmri_pa.MapmriModel(gtab, >>>>>>>>>>> laplacian_regularization=True, >>>>>>>>>>> # this regularization enhances reproducibility of estimated q-space indices >>>>>>>>>>> by imposing smoothness >>>>>>>>>>> laplacian_weighting="GCV", # >>>>>>>>>>> this makes it use generalized cross-validation to find the best >>>>>>>>>>> regularization weight >>>>>>>>>>> positivity_constraint=True) # >>>>>>>>>>> this ensures the estimated PDF is positive >>>>>>>>>>> mapfit = mapmod.fit(data) >>>>>>>>>>> pa = mapfit.pa() >>>>>>>>>>> >>>>>>>>>>> Aside from the original MAPMRI citation of Ozarslan et al. >>>>>>>>>>> (2013), note that the relevant citation for dipy's laplacian-regularized >>>>>>>>>>> MAP-MRI implementation is [1]. >>>>>>>>>>> [1] Fick, Rutger HJ, et al. "MAPL: Tissue microstructure >>>>>>>>>>> estimation using Laplacian-regularized MAP-MRI and its application to HCP >>>>>>>>>>> data." *NeuroImage* 134 (2016): 365-385. >>>>>>>>>>> >>>>>>>>>>> Hope it helps and let me know if you need anything else, >>>>>>>>>>> Rutger >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On 12 January 2018 at 21:48, Ping-Hong Yeh < >>>>>>>>>>> pinghongyeh at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Rutger, >>>>>>>>>>>> >>>>>>>>>>>> Thanks for the prompt reply. >>>>>>>>>>>> May I have the code for estimating PA? >>>>>>>>>>>> >>>>>>>>>>>> Ping >>>>>>>>>>>> >>>>>>>>>>>> On Jan 12, 2018 3:21 PM, "Rutger Fick" >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Ping, >>>>>>>>>>>>> >>>>>>>>>>>>> MAPL is just a name for using laplacian-regularized MAP-MRI. >>>>>>>>>>>>> If you're using the dipy mapmri implementation then you're using MAPL by >>>>>>>>>>>>> default.
>>>>>>>>>>>>> From a fitted mapmri model you can estimate overall >>>>>>>>>>>>> non-Gaussianity using fitted_model.ng(), and parallel and perpendicular >>>>>>>>>>>>> non-Gaussianity using ng_parallel() and ng_perpendicular(). >>>>>>>>>>>>> Propagator Anisotropy is not included in the current dipy >>>>>>>>>>>>> implementation. However, I do have a personal version of dipy's mapmri >>>>>>>>>>>>> implementation that includes it, if you're interested. >>>>>>>>>>>>> >>>>>>>>>>>>> Best, >>>>>>>>>>>>> Rutger >>>>>>>>>>>>> >>>>>>>>>>>>> On 12 January 2018 at 16:49, Ping-Hong Yeh < >>>>>>>>>>>>> pinghongyeh at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi DIPY users, >>>>>>>>>>>>>> >>>>>>>>>>>>>> I would like to know the way of estimating non-Gaussianity and >>>>>>>>>>>>>> PA, mentioned in the Avram et al. "Clinical feasibility of >>>>>>>>>>>>>> using mean apparent propagator (MAP) MRI to characterize brain tissue >>>>>>>>>>>>>> microstructure" paper, using the MAPMRI or MAPL model. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thank you.
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Ping >>>>>>>>>>>>>> >>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>> Neuroimaging mailing list >>>>>>>>>>>>>> Neuroimaging at python.org >>>>>>>>>>>>>> https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From avesani at fbk.eu (Paolo Avesani) Date: Wed, 11 Apr 2018 17:34:53 +0200 Subject: [Neuroimaging] [nibabel] Reading '.bshort' file format Message-ID: The file format denoted by the suffix ".bshort" should be a variation of the ".mgh" file format. Nibabel supports the ".mgh" file format, while ".bshort" is not supported. Is there any trick or workaround for loading ".bshort" files? I tried to use mri_convert to transform ".bshort" files into NIfTI files, but it doesn't work. References to other solutions for converting ".bshort" files are welcome. Thanks in advance Paolo -- The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. If you received this in error, please contact the sender and delete the material. -------------- next part -------------- An HTML attachment was scrubbed... URL: From effigies at bu.edu (Christopher Markiewicz) Date: Wed, 11 Apr 2018 12:02:08 -0400 Subject: [Neuroimaging] [nibabel] Reading '.bshort' file format In-Reply-To: References: Message-ID: Hi Paolo, Nibabel does not currently support the bshort or bfloat formats. If you are unable to convert with mri_convert, I would consider the possibility that you have corrupted files. Chris Markiewicz On Wed, Apr 11, 2018 at 11:34 AM, Paolo Avesani wrote: > The file format denoted by the suffix ".bshort" should be a variation of the ".mgh" > file format. Nibabel supports the ".mgh" file format, while ".bshort" is not supported. > Is there any trick or workaround for loading ".bshort" files? > > I tried to use mri_convert to transform ".bshort" files into NIfTI files > but it doesn't work. > References to other solutions for converting ".bshort" files are welcome. > > Thanks in advance > Paolo >
> -- > The information transmitted is intended only for the person or entity to > which it is addressed and may contain confidential and/or privileged > material. If you received this in error, please contact the sender and > delete the material. > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.cote.19 at gmail.com (Marc-Alexandre Côté) Date: Wed, 11 Apr 2018 20:56:54 -0400 Subject: [Neuroimaging] dipy target streamline filtering In-Reply-To: References: <95229795-5100-4D44-8B55-5E71EDB2DBE5@mit.edu> Message-ID: Great! :-) 2018-04-10 18:55 GMT-0400 Kevin R Sitek: > Thank you Marc! This worked for me. > > Kevin > >> On Apr 4, 2018, at 9:14 PM, Marc-Alexandre Côté >> > wrote: >> >> Hi Kevin, >> >> Can you try using Nibabel's new streamlines API >> (https://github.com/nipy/nibabel/tree/master/nibabel/streamlines)? A >> convenient way to use it is through the dipy.io.streamlines module >> >> https://github.com/nipy/dipy/blob/master/dipy/io/streamline.py >> >> Let me know if you still have negative coordinates. >> >> --Marc >> >> >> 2018-04-03 14:03 GMT-0400 Kevin R Sitek: >>> >>> Hello, >>> >>> I'm generating whole-sample streamlines with dipy's CSD model.
I'd >>> like to also filter streamlines by region of interest, but running >>> |dipy.tracking.utils.target| I'm getting >>> |IndexError: streamline has points that map to negative voxel >>> indices| in the helper function |_to_voxel_coordinates| >>> >>> [I originally posted this issue on neurostars: >>> https://neurostars.org/t/filtering-streamlines-with-dipy-target/1493] >>> >>> Since I'm using the same affine as the DWI data the streamlines were >>> generated from, and since the target mask looks fine relative to the >>> diffusion image, I'm wondering how the negative voxel indices could >>> arise. >>> >>> I am running |target| like this: >>> >>> |from dipy.tracking.utils import target # read in streamlines from >>> nibabel import trackvis as tv streams_in_orig, hdr = >>> tv.read(streamlines) streams_in = list(streams_in_orig) # streams_in >>> is [[array, None, None], ...] streams = [] for s in streams_in: >>> streams.append(s[0]) target_mask_bool = >>> np.array(target_mask.get_data(), dtype=bool, copy=True) >>> target_sl_generator = target(streams, target_mask_bool, affine, >>> include=True) target_streamlines = list(target_sl_generator) | >>> >>> where the streamlines |.trk| file had been generated as below: >>> >>> |eu = EuDX(peaks.gfa, peaks.peak_indices[..., 0], odf_vertices = >>> sphere.vertices, seeds=10**6, ang_thr=45) streamlines = ((sl, None, >>> None) for sl in eu) hdr = nib.trackvis.empty_header() >>> hdr['voxel_size'] = fa_img.get_header().get_zooms()[:3] >>> hdr['voxel_order'] = 'LAS' hdr['dim'] = FA.shape[:3] sl_fname = >>> os.path.abspath('streamline.trk') nib.trackvis.write(sl_fname, >>> streamlines, hdr, points_space='voxel') | >>> >>> I have the same issue when |hdr['voxel_order'] = 'RAS'| and when >>> |points_space=None|. >>> >>> Any pointers would be appreciated!
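The mapping that triggers this error can be reproduced with plain numpy, which helps diagnose where the negative indices come from. This is a rough stand-in for dipy's internal helper, not its actual code, and the 2 mm affine below is a made-up example:

```python
import numpy as np

def to_voxel_indices(points, affine):
    """Map streamline points (N x 3) to integer voxel indices by applying
    the inverse of `affine` (which maps voxel space to point space)."""
    inv = np.linalg.inv(affine)
    # apply the inverse affine: voxel = R @ point + t
    vox = points @ inv[:3, :3].T + inv[:3, 3]
    ijk = np.round(vox).astype(int)
    if (ijk < 0).any():
        raise IndexError("streamline has points that map to negative voxel indices")
    return ijk

# a toy 2 mm isotropic affine with a shifted origin
affine = np.array([[2., 0., 0., -10.],
                   [0., 2., 0., -10.],
                   [0., 0., 2., -10.],
                   [0., 0., 0., 1.]])
points_ok = np.array([[0., 0., 0.], [4., 4., 4.]])
ijk = to_voxel_indices(points_ok, affine)   # maps to voxels (5,5,5) and (7,7,7)

# if the streamline coordinates are already in voxel space (e.g. written
# with points_space='voxel') but a world-space affine is passed, points
# can land outside the grid and raise the IndexError above:
points_bad = np.array([[-12., 0., 0.]])
# to_voxel_indices(points_bad, affine) -> IndexError
```

In other words, the error usually means the streamline coordinates and the affine passed to target disagree about which space the points live in.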
>>> >>> Kevin >>> >>> >>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elef at indiana.edu Sun Apr 8 22:29:05 2018 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Mon, 09 Apr 2018 02:29:05 +0000 Subject: [Neuroimaging] Brainhack Global Indiana University - May 2 - 4 - Registration is opening in 2 days! Message-ID: Hello all, *Brainhack Global @ IU is back! * *https://brainhack.sice.indiana.edu * *As you can see in the link above, we have a crazy lineup of keynote speakers including Carlo Pierpaoli from the NIH!!! * This year's Brainhack is a bit different than last year's as we will run *unique hands on tutorials* in parallel with the *social coding sessions*. *Bring your notebooks!* The event will take place in a fantastic new building at Indiana University called Luddy Hall that includes a *sentient environment, VR *and* 3D Wall*!!! Registration is opening in 2 days. Please *forward* to interested parties. :) See you @ the Hack!, Eleftherios Garyfallidis, PhD Assistant Professor Intelligent Systems Engineering Indiana University Luddy Hall 700 N Woodlawn Bloomington, IN 47408 https://grg.sice.indiana.edu http://dipy.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From elef at indiana.edu Sat Apr 14 10:00:47 2018 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Sat, 14 Apr 2018 14:00:47 +0000 Subject: [Neuroimaging] Brainhack Global Indiana University - May 2 - 4 - Registration is open! Message-ID: *Hello all,* *Registration is now open! * *https://brainhack.sice.indiana.edu * *As you can see in the link above, we have a crazy lineup of keynote speakers including Carlo Pierpaoli from the NIH!!! 
* This year's Brainhack is a bit different from last year's as we will run *unique hands-on tutorials* in parallel with the *social coding sessions*. I will be running a completely new tutorial for performing *high quality automated tractography segmentation* using DIPY! Check also the other awesome tutorials in the link above. Most of these are not available in any other event this year. The Brainhack will take place in a fantastic new building at Indiana University called *Luddy Hall* that includes a *sentient environment, VR * and* 3D Wall*!!! *See you @ the Hack! The Wifi/Force is Strong there! Bring your notebooks and data! :D* Please *forward* to *all* interested parties!!! Best regards, *Eleftherios Garyfallidis, PhD* Assistant Professor *Intelligent Systems Engineering* *Indiana University* Luddy Hall 700 N Woodlawn Bloomington, IN 47408 *https://grg.sice.indiana.edu http://dipy.org * -------------- next part -------------- An HTML attachment was scrubbed... URL: From andersonwinkler at gmail.com (Anderson M. Winkler) Date: Tue, 17 Apr 2018 06:52:30 -0400 Subject: [Neuroimaging] Postdoctoral position at the NIH/NIMH Message-ID: We are seeking enthusiastic applicants for a Post-Doctoral Fellowship position to help with the collection and analysis of large brain-imaging datasets. The successful candidate will use state-of-the-art artificial intelligence methods, with the aim of better understanding psychiatric disorders in young people with mental illness, particularly anxiety and depression. Our goal is to better understand the causes and mechanisms of certain psychiatric disorders, improve their definition and classification, and ensure the best treatment can be offered to psychiatric patients.
The successful candidate will develop and apply deep learning algorithms to multi-modal imaging datasets that include MRI (functional, structural), EEG, MEG, and associated behavioral and clinical data. The methods developed by the successful candidate will be used to: - Integrate these diverse sources of information. - Inform the construction of computational models in psychiatry. - Test the validity of such models. Candidates with a strong computational background (e.g. PhD in Engineering, Physics, Computer Science, Mathematics, Statistics, Computational Neuroscience, and related areas) who are interested in brain development and psychopathology are particularly encouraged to apply. Requirements for this position include: - Strong machine learning experience; - Programming experience in Python (preferably), or in R/Matlab/Octave; - Experience with open source machine learning libraries such as Scikit-learn, Theano, and/or Tensorflow; - Excellent interpersonal and written (English) communication skills. Background experience in psychiatry or knowledge of neuroimaging software is not required. However, the candidate will be expected to learn some of these topics as part of their role in our research group. The successful candidate will work jointly with the laboratories of Drs Daniel Pine and Argyris Stringaris, and together with Dr Anderson Winkler, Staff Scientist. Please write to Drs Pine (pined at mail.nih.gov), Stringaris ( argyris.stringaris at nih.gov) or Winkler (anderson.winkler at nih.gov) with your application and CV. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From suryava.bhattacharya at kcl.ac.uk (Bhattacharya, Suryava) Date: Wed, 18 Apr 2018 10:03:11 +0000 Subject: [Neuroimaging] Writing a Gifti Label file Message-ID: Dear Neuroimaging at python, The Connectome Workbench is throwing up this error: ERROR: Parse error while reading: Decompression of Binary data failed. Uncompressed 0 bytes but should be 129968 bytes., line number: 17 column number: 9935 This occurs when I try to use wb_command -label-resample to resample label files that I have created. I write the label files using the nibabel module in Python, where I import an array of int32 values and I load a .label gifti image and copy it to a new gifti image variable. On this new gifti image variable, I replace the darrays[0].data (which is int32) with the new label data array (which is also int32). Then I output using nibabel.gifti.giftiio.write to a .label.gii file. I have attached my code and an example label.gii file. Any suggestions on how to fix this would be appreciated. Thanks and Regards, Suryava -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Parcellation.py Type: application/octet-stream Size: 1957 bytes Desc: Parcellation.py URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 40.R.label_1_for_40_regions_drawem.label.gii Type: application/octet-stream Size: 14236 bytes Desc: 40.R.label_1_for_40_regions_drawem.label.gii URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: 40.L.label_1_for_40_regions_drawem.label.gii Type: application/octet-stream Size: 13454 bytes Desc: 40.L.label_1_for_40_regions_drawem.label.gii URL: From pinghongyeh at gmail.com (Ping-Hong Yeh) Date: Fri, 20 Apr 2018 10:52:00 -0400 Subject: [Neuroimaging] [DIPY] propagator anisotropy estimation using MAP(L)MRI In-Reply-To: References: Message-ID: Hi Rutger, Thank you for the details. I've just tried dmipy but hit a snag converting bvecs to a scheme file using gtab_dipy2mipy. Here is the error message, acq_scheme_mipy = gtab_dipy2mipy(gtab) Traceback (most recent call last): File "", line 1, in File "/Users/yehp/Downloads/dmipy-master/dmipy/core/acquisition_scheme.py", line 714, in gtab_dipy2mipy bvalues=bvals, gradient_directions=bvecs, delta=delta, Delta=Delta) File "/Users/yehp/Downloads/dmipy-master/dmipy/core/acquisition_scheme.py", line 437, in acquisition_scheme_from_bvalues check_acquisition_scheme(bvalues, gradient_directions, delta_, Delta_, TE_) File "/Users/yehp/Downloads/dmipy-master/dmipy/core/acquisition_scheme.py", line 696, in check_acquisition_scheme raise ValueError(msg) ValueError: gradient orientations n are not unit vectors. gtab_dipy2mipy does not like the bvecs I have; it appears that some decimals are off. I had applied the normalization beforehand using MATLAB. Please find the attached bvecs table. Ping On Wed, Apr 11, 2018 at 4:27 AM, Rutger Fick wrote: > Hi Ping, > > Clearly the plain GCV is not regularizing sufficiently in the very > anisotropic areas (e.g. corpus callosum). > It looks like fixing the regularization weight to 0.2 > (PA_laplacian_weighted0.2 map) is sufficient to fix this problem. Be sure > to also include positivity. > Since fixing the weight is also the fastest approach, I suggest you > proceed to fit both your populations with this approach and see if you are
> Otherwise, your approach of running GCV with a minimum weight will also > work, but you'll have to find what minimum weight threshold works for your > subjects. > > Other suggestions: > - Denoising your data and correcting for the rician noise bias is good > practice. The current available state-of-the-art MP-PCA approach that I > know of is in MRtrix: http://mrtrix.readthedocs.io/ > en/latest/dwi_preprocessing/denoising.html > - If you don't like using MRtrix for some reason, Dipy also has other > denoising approaches: http://nipy.org/dipy/examples_ > built/denoise_localpca.html > > Finally, if you're interested in trying other microstructure estimation > methods on your data, I suggest you also take a look at our recently > released "diffusion microstructure imaging in python" (dmipy) package: > https://github.com/AthenaEPI/dmipy/ > Using dmipy, you can design and fit basically any diffusion microstructure > model in literature to your data in a few lines of code. > I suggest you try for example Kaden et al.'s recent Multi-Compartment > Microscopic Diffusion Imaging with your data, see example > , > which is very fast to fit as a quick experiment. > > Kind regards and let me know how it goes, > Rutger > > > On 29 March 2018 at 18:02, Ping-Hong Yeh wrote: > >> Hi Rutger, >> >> We have some bad PA maps created using default settings, and I would like >> to hear your opinions on improving the fitting. >> >> Attached are the screenshots of PA_GCV, norm_laplacian, L_opt and >> PA_laplacian_weighted0.2 maps. >> I am currently running the fitting using 0.05 for the minimum bound of >> the GCV, but I am not sure if that would help. >> >> In order to do comparisons between controls and disease population, we >> need to make sure that the same fitting parameters are applied for the >> MAPMRI fitting for avoiding any biases. Do you have suggestions regarding >> this matter? >> >> Thank you. 
>> Ping >> >> On Tue, Jan 23, 2018 at 7:42 AM, Rutger Fick >> wrote: >> >>> Hi Ping, >>> >>> Salt and pepper noise is not a good sign (I just didn't see it so much >>> on the second set of slices you sent). Spotting badly estimated voxels is >>> typically pretty easy - RTOP and many others can have negative or huge >>> values, which typically come from oscillations in the signal extrapolation. >>> You can often see these as bright spots in the laplacian norm. >>> >>> If you go through the data and see that salt and pepper noise >>> corresponds to unusually high norms, increasing the laplacian minimum >>> weight in the code as I told you will usually resolve it (or fixing it to a >>> value like 0.05 or 0.1 or something, see what works without overdoing it). >>> >>> Best, >>> Rutger >>> >>> >>> >>> >>> On 23 Jan 2018 03:06, "Ping-Hong Yeh" wrote: >>> >>> Hi Rutger, >>> >>> Thank you very much for the detailed reply. >>> >>> I guess I do not need to worry about those salt-pepper dots? >>> >>> Would you recommend outputting the laplacian norm and laplacian_weighted maps >>> and going through the images for each data set? Any tips for recognizing that something >>> has really gone wrong when looking at the propagator anisotropy map? >>> >>> Best, >>> >>> Ping >>> >>> >>> On Jan 22, 2018 6:55 PM, "Rutger Fick" wrote: >>> >>>> Hi Ping, >>>> >>>> In my experience, badly estimated voxels typically have a super high >>>> laplacian norm and a very low estimated laplacian weight (lopt). >>>> Looking at these results I would say things actually look pretty good! >>>> >>>> Getting the best results is always a tricky balance of >>>> regularizing optimally: not fitting the noise but also not >>>> over-regularizing, which is why the GCV option is nice. >>>> But, in rare cases it does mess up. So, if you want to give the GCV a >>>> bit less freedom to go low (to be on the safe side) you can increase the >>>> minimum bound of the GCV optimization in line 2272 of the code.
>>>> >>>> There are many ways to speed up the code I gave you if you want to put in >>>> the effort ;-) Using parallel processing is not implemented in >>>> dipy by default, but maybe you can hack it somehow. >>>> You can also set laplacian_weight = 0.1 or something to avoid GCV, >>>> but it won't make a huge difference. I only ever used this code to do >>>> research - so speed was not really a concern. >>>> >>>> Anyway, hope this all helped! Let me know if everything works out, >>>> >>>> Best, >>>> Rutger >>>> >>>> On 19 January 2018 at 22:03, Ping-Hong Yeh >>>> wrote: >>>> >>>>> Hi Rutger, >>>>> >>>>> Attached please find the snapshot of norm_of_laplacian_signal, lopt, >>>>> and pa maps of the same data set I used earlier. >>>>> >>>>> BTW, is there a way to speed up the mapmri_pa processing? Will >>>>> OpenMP help? >>>>> >>>>> Thank you, >>>>> >>>>> ping >>>>> >>>>> On Fri, Jan 19, 2018 at 1:25 PM, Rutger Fick >>>>> wrote: >>>>> >>>>>> Hi Ping, >>>>>> >>>>>> So far, so good. >>>>>> In my opinion the TORTOISE PA reconstruction looks a bit >>>>>> flat/over-regularized - but then again I don't know what kind of >>>>>> regularization they implemented for themselves. >>>>>> The PA of the implementation I gave you seems to give more consistent >>>>>> contrast for different tissue configurations - which is good - but looks >>>>>> like it under-regularizes in some individual voxels (the salt-pepper noise >>>>>> in the PA/RTOP). >>>>>> >>>>>> To check if this is the case, can you show me the >>>>>> mapfit_L.norm_of_laplacian_signal() and mapfit_L.lopt maps? >>>>>> >>>>>> Rutger >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On 19 January 2018 at 17:43, Ping-Hong Yeh >>>>>> wrote: >>>>>> >>>>>>> Hi Rutger, >>>>>>> >>>>>>> Just to give you an update of the results (see the attached snapshots) >>>>>>> using GCV-weighted Laplacian regularization for MAPMRI >>>>>>> estimation. >>>>>>> >>>>>>> The other PA mapping was calculated using TORTOISE.
I have also >>>>>>> attached RTOP mapping calculated using DIPY with and without GCV >>>>>>> weighted and Laplacian regularization. >>>>>>> >>>>>>> Compared to TORTOISE, the PA values from the GCV weighted >>>>>>> and Laplacian regularization method are relatively smaller, >>>>>>> particularly over the regions with less dense white matter. >>>>>>> >>>>>>> For the RTOP images, I am not sure whether the GCV weighted and Laplacian >>>>>>> regularization method outperforms the one without GCV >>>>>>> weighting and Laplacian regularization. >>>>>>> >>>>>>> Any comments? >>>>>>> Thank you, >>>>>>> >>>>>>> ping >>>>>>> >>>>>>> On Wed, Jan 17, 2018 at 7:48 PM, Rutger Fick >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Ping, >>>>>>>> >>>>>>>> If it's still running and gave only that error then probably it was >>>>>>>> just a single voxel that failed and the rest is working. However, I >>>>>>>> recommend you first try to fit a smaller dataset (just a few voxels or a >>>>>>>> single slice) just to check the results make sense. >>>>>>>> >>>>>>>> I should mention that the code I gave you is slower than Dipy's >>>>>>>> public version for reasons I won't get into, so don't worry if you have to >>>>>>>> wait longer than before. >>>>>>>> >>>>>>>> Best, >>>>>>>> Rutger >>>>>>>> >>>>>>>> On 18 Jan 2018 00:58, "Ping-Hong Yeh" >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi Rutger, >>>>>>>>> >>>>>>>>> Thanks again for the prompt reply. >>>>>>>>> >>>>>>>>> Adding "mask" to mapmri has fixed the error; however, another >>>>>>>>> error shows up, >>>>>>>>> >>>>>>>>> mapfit_L = map_model_L.fit(data,mask=data[..., 0]>0) >>>>>>>>> dipy/core/geometry.py:129: RuntimeWarning: invalid value >>>>>>>>> encountered in true_divide >>>>>>>>> theta = np.arccos(z / r) >>>>>>>>> dipy/reconst/mapmri_pa.py:364: UserWarning: The MAPMRI positivity >>>>>>>>> constraint depends on CVXOPT (http://cvxopt.org/).
CVXOPT is >>>>>>>>> licensed under the GPL (see: http://cvxopt.org/copyright.html) >>>>>>>>> and you may be subject to this license when using the positivity constraint. >>>>>>>>> warn(w_s) >>>>>>>>> dipy/reconst/mapmri_pa.py:413: UserWarning: Optimization did not >>>>>>>>> find a solution >>>>>>>>> warn('Optimization did not find a solution') >>>>>>>>> Error: Couldn't find per display information >>>>>>>>> >>>>>>>>> >>>>>>>>> It is still running though. Should i stop the running? >>>>>>>>> >>>>>>>>> Thank you. >>>>>>>>> ping >>>>>>>>> >>>>>>>>> On Tue, Jan 16, 2018 at 7:18 PM, Rutger Fick < >>>>>>>>> fick.rutger at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Ping, >>>>>>>>>> >>>>>>>>>> Reading the error messages, it looks like you're fitting a masked >>>>>>>>>> voxel. The following error: >>>>>>>>>> >>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:389: >>>>>>>>>> RuntimeWarning: invalid value encountered in divide >>>>>>>>>> data = np.asarray(data / data[self.gtab.b0s_mask].mean()) >>>>>>>>>> >>>>>>>>>> says you're dividing by either zero or NaN, which means your b0 >>>>>>>>>> value of that voxel was zero (or you had no b0 values possibly). Note that >>>>>>>>>> mapmri needs at least one b0 measurement. >>>>>>>>>> I recommend you check if it works when you fit a voxel that you >>>>>>>>>> know for sure is in white matter. If it works, you can do something like >>>>>>>>>> map_model_L.fit(data, mask=data[..., 0]>0) to use a mask that >>>>>>>>>> only fits if the first measured DWI is positive (assuming your first >>>>>>>>>> measurement is a b0). >>>>>>>>>> >>>>>>>>>> Best, >>>>>>>>>> Rutger >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 16 January 2018 at 23:46, Ping-Hong Yeh >>>>>>>>> > wrote: >>>>>>>>>> >>>>>>>>>>> Hi Rutger, >>>>>>>>>>> >>>>>>>>>>> I got an error running the map_model.fit using mapmri_pa. 
Here >>>>>>>>>>> is the scripts i used, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> map_model_L = mapmri_pa.MapmriModel(gtab, >>>>>>>>>>> radial_order=radial_order, >>>>>>>>>>> laplacian_regularization=True, >>>>>>>>>>> # this regularization enhances reproducibility of estimated q-space indices >>>>>>>>>>> by imposing smoothness >>>>>>>>>>> laplacian_weighting="GCV", # >>>>>>>>>>> this makes it use generalized cross-validation to find the best >>>>>>>>>>> regularization weight >>>>>>>>>>> positivity_constraint=True) # >>>>>>>>>>> this ensures the estimated PDF is positive >>>>>>>>>>> >>>>>>>>>>> mapfit_L = map_model_L.fit(data) >>>>>>>>>>> >>>>>>>>>>> , and the error message, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> /Library/Python/2.7/site-packages/dipy/core/geometry.py:129: >>>>>>>>>>> RuntimeWarning: invalid value encountered in true_divide >>>>>>>>>>> theta = np.arccos(z / r) >>>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:364: >>>>>>>>>>> UserWarning: The MAPMRI positivity constraint depends on CVXOPT (http: >>>>>>>>>>> xopt.org/). CVXOPT is licensed under the GPL (see: >>>>>>>>>>> http://cvxopt.org/copyright.html) and you may be subject to >>>>>>>>>>> this license when using positivity constraint. 
>>>>>>>>>>> warn(w_s) >>>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:389: >>>>>>>>>>> RuntimeWarning: invalid value encountered in divide >>>>>>>>>>> data = np.asarray(data / data[self.gtab.b0s_mask].mean()) >>>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:413: >>>>>>>>>>> UserWarning: Optimization did not find a solution >>>>>>>>>>> warn('Optimization did not find a solution') >>>>>>>>>>> /Library/Python/2.7/site-packages/dipy/reconst/mapmri_pa.py:444: >>>>>>>>>>> UserWarning: Optimization did not find a solution >>>>>>>>>>> warn('Optimization did not find a solution') >>>>>>>>>>> Traceback (most recent call last): >>>>>>>>>>> File "", line 1, in >>>>>>>>>>> File "/Library/Python/2.7/site-pack >>>>>>>>>>> ages/dipy/reconst/multi_voxel.py", line 33, in new_fit >>>>>>>>>>> fit_array[ijk] = single_voxel_fit(self, data[ijk]) >>>>>>>>>>> File "/Library/Python/2.7/site-pack >>>>>>>>>>> ages/dipy/reconst/mapmri_pa.py", line 465, in fit >>>>>>>>>>> coef_iso = coef_iso / sum(coef_iso * self.Bm_iso) >>>>>>>>>>> UnboundLocalError: local variable 'coef_iso' referenced before >>>>>>>>>>> assignment >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Any suggestions? >>>>>>>>>>> >>>>>>>>>>> Thank you. >>>>>>>>>>> >>>>>>>>>>> ping >>>>>>>>>>> >>>>>>>>>>> On Fri, Jan 12, 2018 at 6:24 PM, Rutger Fick < >>>>>>>>>>> fick.rutger at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Ping, >>>>>>>>>>>> >>>>>>>>>>>> Attached is the mapmri code that also has PA, just put it in >>>>>>>>>>>> the dipy/reconst/ folder (where also the current mapmri.py file is) and run >>>>>>>>>>>> "python setup.py install" from dipy's main folder. That should make it >>>>>>>>>>>> usable in the same way as the current mapmri module. >>>>>>>>>>>> Note that its based on an old implementation that still works >>>>>>>>>>>> with the "cvxopt" optimizer package, so you'll have to install cvxopt to >>>>>>>>>>>> make it run. 
>>>>>>>>>>>> >>>>>>>>>>>> I recommend you use the model with both laplacian >>>>>>>>>>>> regularization and positivity constraint, as this gives the best results in >>>>>>>>>>>> general. >>>>>>>>>>>> >>>>>>>>>>>> from dipy.reconst import mapmri_pa >>>>>>>>>>>> mapmod = mapmri_pa.MapmriModel(gtab, >>>>>>>>>>>> laplacian_regularization=True, >>>>>>>>>>>> # this regularization enhances reproducibility of estimated q-space indices >>>>>>>>>>>> by imposing smoothness >>>>>>>>>>>> laplacian_weighting="GCV", # >>>>>>>>>>>> this makes it use generalized cross-validation to find the best >>>>>>>>>>>> regularization weight >>>>>>>>>>>> positivity_constraint=True) # >>>>>>>>>>>> this ensures the estimated PDF is positive >>>>>>>>>>>> mapfit = mapmod.fit(data) >>>>>>>>>>>> pa = mapfit.pa() >>>>>>>>>>>> >>>>>>>>>>>> Aside from the original MAPMRI citation of Ozarslan et al. >>>>>>>>>>>> (2013), note that the relevant citation for dipy's laplacian-regularized >>>>>>>>>>>> MAP-MRI implementation is [1]. >>>>>>>>>>>> [1] Fick, Rutger HJ, et al. "MAPL: Tissue microstructure >>>>>>>>>>>> estimation using Laplacian-regularized MAP-MRI and its application to HCP >>>>>>>>>>>> data." *NeuroImage* 134 (2016): 365-385. >>>>>>>>>>>> >>>>>>>>>>>> Hope it helps and let me know if you need anything else, >>>>>>>>>>>> Rutger >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On 12 January 2018 at 21:48, Ping-Hong Yeh < >>>>>>>>>>>> pinghongyeh at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Rutger, >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks for the prompt reply. >>>>>>>>>>>>> May I have the code for estimating PA? >>>>>>>>>>>>> >>>>>>>>>>>>> Ping >>>>>>>>>>>>> >>>>>>>>>>>>> On Jan 12, 2018 3:21 PM, "Rutger Fick" >>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Hi Ping, >>>>>>>>>>>>>> >>>>>>>>>>>>>> MAPL is just a name for using laplacian-regularized MAP-MRI. >>>>>>>>>>>>>> If you're using the dipy mapmri implementation then you're using MAPL by >>>>>>>>>>>>>> default.
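For reference, "GCV" here stands for generalized cross-validation: the regularization weight alpha is chosen to minimize GCV(alpha) = N * ||y - S_alpha y||^2 / (N - tr(S_alpha))^2, where S_alpha is the smoother matrix of the regularized least-squares fit. A minimal NumPy sketch of the principle on a generic ridge problem; dipy's actual implementation penalizes with the MAP-MRI Laplacian matrix rather than the identity used here:

```python
import numpy as np

def gcv_weight(M, y, alphas):
    """Pick a regularization weight by generalized cross-validation.

    For each alpha, solve the ridge problem and score
    GCV(alpha) = N * ||y - S y||^2 / (N - trace(S))^2,
    where S = M (M'M + alpha*I)^{-1} M' is the smoother matrix."""
    N = len(y)
    best_alpha, best_score = None, np.inf
    for a in alphas:
        S = M @ np.linalg.solve(M.T @ M + a * np.eye(M.shape[1]), M.T)
        resid = y - S @ y
        score = N * (resid @ resid) / (N - np.trace(S)) ** 2
        if score < best_score:
            best_alpha, best_score = a, score
    return best_alpha

# toy regression: 50 noisy measurements of a 10-coefficient model
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 10))
coef = rng.standard_normal(10)
y = M @ coef + 0.1 * rng.standard_normal(50)
alpha = gcv_weight(M, y, alphas=10.0 ** np.arange(-4, 2))
```

Restricting `alphas` from below is exactly the "raise the minimum bound of the GCV optimization" advice from earlier in the thread: it stops GCV from choosing a weight so small that the fit starts tracking noise.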
>>>>>>>>>>>>>> From a fitted mapmri model you can estimate overall >>>>>>>>>>>>>> non-Gaussianity using fitted_model.ng(), and parallel and perpendicular >>>>>>>>>>>>>> non-Gaussianity using ng_parallel() and >>>>>>>>>>>>>> ng_perpendicular(). >>>>>>>>>>>>>> Propagator Anisotropy is not included in the current dipy >>>>>>>>>>>>>> implementation. However, I do have a personal version of dipy's mapmri >>>>>>>>>>>>>> implementation that includes it, if you're interested. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Best, >>>>>>>>>>>>>> Rutger >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 12 January 2018 at 16:49, Ping-Hong Yeh < >>>>>>>>>>>>>> pinghongyeh at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi DIPY users, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I would like to know the way of estimating non-Gaussianity and >>>>>>>>>>>>>>> PA, mentioned in the Avram et al. "Clinical feasibility of >>>>>>>>>>>>>>> using mean apparent propagator (MAP) MRI to characterize brain tissue >>>>>>>>>>>>>>> microstructure" paper, using the MAPMRI or MAPL model. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thank you.
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Ping >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>> Neuroimaging mailing list >>>>>>>>>>>>>>> Neuroimaging at python.org >>>>>>>>>>>>>>> https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: data_DIFFPREP_t2acpc_proc_DRBUDDI_up_final_no7b0_invY_norm_long.bvecs Type: application/octet-stream Size: 11546 bytes Desc: not available URL: From pinghongyeh at gmail.com Sat Apr 21 13:54:07 2018 From: pinghongyeh at gmail.com (Ping-Hong Yeh) Date: Sat, 21 Apr 2018 13:54:07 -0400 Subject: [Neuroimaging] [DIPY] propagator anisotropy estimation using MAP(L)MRI In-Reply-To: References: Message-ID: Hi Rutger, The failure was caused by multiple [ 0 0 0] arrays in the gradient table that were used for acquiring non-diffusion weighted volumes. It started running after I modified the gradient table by adding 1 to the z-direction of [ 0 0 0] to become [ 0 0 1]. Can dmipy import the gradient DWI volumes with multiple non-diffusion weighted volumes interspersed in-between? Thank you. Ping -------------- next part -------------- An HTML attachment was scrubbed... URL: From fick.rutger at gmail.com Sun Apr 22 18:10:06 2018 From: fick.rutger at gmail.com (Rutger Fick) Date: Mon, 23 Apr 2018 00:10:06 +0200 Subject: [Neuroimaging] [DIPY] propagator anisotropy estimation using MAP(L)MRI In-Reply-To: References: Message-ID: Hi Ping, Great to hear you're trying the toolbox! Thanks for pointing out the bug, I just fixed it in the repository, so you should be able to load the gradient directions without having the error now. Dmipy is completely general in that it can import any PGSE-based acquisition scheme with any number of non-diffusion weighted volumes. Dmipy internally normalizes the signal according to the mean of all b0-values, and automatically detects which measurements belong to the same acquisition shell. Let me know if you have any more questions or just generally what your experience is using dmipy :-) Best, Rutger On 21 April 2018 at 19:54, Ping-Hong Yeh wrote: > Hi Rutger, > > The failure was caused by multiple [ 0 0 0] arrays in the gradient table > that were used for acquiring non-diffusion weighted volumes. 
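Once the fix is in, the [0 0 0] rows should be handled as-is (a zero gradient direction simply marks a b0 volume), so the [0 0 1] workaround should no longer be needed. What Rutger describes can be sketched in NumPy: normalize by the mean over all interspersed b0 volumes, then group the b-values into shells. Rounding to the nearest 100 is an illustrative stand-in here, not dmipy's exact shell-detection heuristic:

```python
import numpy as np

# toy multi-shell acquisition with b0 volumes interspersed in-between,
# as in the DWI series discussed above
bvals = np.array([0, 1000, 1000, 0, 2000, 2000, 0])
signal = np.array([1.00, 0.60, 0.55, 0.98, 0.30, 0.35, 1.02])

# normalize by the mean over ALL b0 measurements, wherever they occur;
# a b0 mean of zero (fully masked voxel) would produce the NaN warnings
# seen earlier in the thread
b0_mask = bvals == 0
signal_norm = signal / signal[b0_mask].mean()

# group measurements into shells, e.g. by rounding b-values to the nearest 100
shells = np.unique(np.round(bvals, -2))
```

With this scheme the normalized b0 signal averages to exactly 1, and the detected shells are b = 0, 1000, and 2000.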
It started > running after I modified the gradient table by adding 1 to the z-direction > of [ 0 0 0] to become [ 0 0 1]. > Can dmipy import the gradient DWI volumes with multiple non-diffusion > weighted volumes interspersed in-between? > > Thank you. > > Ping > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinghongyeh at gmail.com Mon Apr 23 11:33:56 2018 From: pinghongyeh at gmail.com (Ping-Hong Yeh) Date: Mon, 23 Apr 2018 11:33:56 -0400 Subject: [Neuroimaging] [DIPY] propagator anisotropy estimation using MAP(L)MRI In-Reply-To: References: Message-ID: Hi Rutger, Thanks for the fix. Do have the estimate of approximate time needed for doing mcdmi_fod_fit on a data of 240*240*187 with 1mm in resolution for total 289 volumes? It has been running for more than 2 days on a MAC OS with 2 X 2.66 GHz 6-Core, 96GB memory machine. Thank you. Ping On Sun, Apr 22, 2018 at 6:10 PM, Rutger Fick wrote: > Hi Ping, > > Great to hear you're trying the toolbox! > > Thanks for pointing out the bug, I just fixed it in the repository, so you > should be able to load the gradient directions without having the error now. > > Dmipy is completely general in that it can import any PGSE-based > acquisition scheme with any number of non-diffusion weighted volumes. Dmipy > internally normalizes the signal according to the mean of all b0-values, > and automatically detects which measurements belong to the same acquisition > shell. > > Let me know if you have any more questions or just generally what your > experience is using dmipy :-) > > Best, > Rutger > > On 21 April 2018 at 19:54, Ping-Hong Yeh wrote: > >> Hi Rutger, >> >> The failure was caused by multiple [ 0 0 0] arrays in the gradient table >> that were used for acquiring non-diffusion weighted volumes. 
It started >> running after I modified the gradient table by adding 1 to the z-direction >> of [ 0 0 0] to become [ 0 0 1]. >> Can dmipy import the gradient DWI volumes with multiple non-diffusion >> weighted volumes interspersed in-between? >> >> Thank you. >> >> Ping >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryuvaraj at ntu.edu.sg Mon Apr 23 22:58:50 2018 From: ryuvaraj at ntu.edu.sg (Yuvaraj Rajamanickam (Dr)) Date: Tue, 24 Apr 2018 02:58:50 +0000 Subject: [Neuroimaging] PRNI 2018: CALL FOR ABSTRACTS AND TUTORIALS (DEADLINE: 4 MAY 2018) Message-ID: <3E9B0165C01BA047A1AFFBA5B9161C415E389620@EXCHMBOX34.staff.main.ntu.edu.sg> PRNI 2018: CALL FOR ABSTRACTS AND TUTORIALS (DEADLINE: 4 MAY 2018) ******* please accept our apologies for cross-posting ******* -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CALL FOR ABSTRACTS AND TUTORIALS PRNI 2018: 8th International Workshop on Pattern Recognition in Neuroimaging to be held 12-14 June 2018 at the National University of Singapore, Singapore www.prni.org - @PRNIworkshop - www.facebook.com/PRNIworkshop/ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The 8th International Workshop on Pattern Recognition in Neuroimaging (PRNI) will be held at the Centre for Life Sciences Auditorium, National University of Singapore, Singapore on June 12-14, 2018.Pattern recognition 
techniques are an important tool for neuroimaging data analysis. These techniques are helping to elucidate normal and abnormal brain function, cognition and perception, anatomical and functional brain architecture, biomarkers for diagnosis and personalized medicine, and as a scientific tool to decipher neural mechanisms underlying human cognition. The International Workshop on Pattern Recognition in Neuroimaging (PRNI) aims to: (1) foster dialogue between developers and users of cutting-edge analysis techniques in order to find matches between analysis techniques and neuroscientific questions; (2) showcase recent methodological advances in pattern recognition algorithms for neuroimaging analysis; and (3) identify challenging neuroscientific questions in need of new analysis approaches. Open call for abstracts: We are still accepting short abstracts for poster presentation. Closing date: 04-MAY-2018. Email the abstract as a pdf attachment to info at prni.org. Open call for tutorial proposals: This year PRNI also has an open call for tutorial proposals. A tutorial can take a form of 2h, 4h or whole day event aimed at demonstrating a computational technique, software tool, or specific concept. Tutorial proposals featuring hands-on demonstrations and promoting diversity (e.g. gender, background, institution) will be preferred. PRNI will cover conference registration fees for up to two tutors per accepted program. The TUTORIAL submission deadline is 4 MAY 2018, 11:59 pm SGT. Email the tutorial proposal as a pdf attachment to info at prni.org. Plenary speakers: Dr. Javeria Hashmi, Dalhousie University, Halifax, Canada Dr. Thomas Yeo, NUS, Singapore Dr. Georg Langs, Medical University of Vienna Dr. Yukiyasu Kamitani, Kyoto University, Japan Dr. Helen Zhou, Duke-NUS, Singapore Please see www.prni.org and follow @PRNIworkshop and www.facebook.com/PRNIworkshop/ for news and details. 
________________________________ CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents. Towards a sustainable earth: Print only when necessary. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fick.rutger at gmail.com Wed Apr 25 06:06:28 2018 From: fick.rutger at gmail.com (Rutger Fick) Date: Wed, 25 Apr 2018 12:06:28 +0200 Subject: [Neuroimaging] [DIPY] propagator anisotropy estimation using MAP(L)MRI In-Reply-To: References: Message-ID: Hi Ping, That's a great dataset you're using! Estimating the MC-MDI model itself is quite fast, I think a full HCP subject takes less than an hour. However, the secondary parametric FOD estimation in the example (optimizing 2 bundles with slow but accurate MIX approach [1]) takes more than a second per voxel. You can speed this up by using more cores to parallelize the optimization more, but it will inherently not be very fast [1]. Alternatively, you can choose to only fit one bundle and use "brute2fine" optimization, which is much faster and probably feasible to fit to all your volumes in a more reasonable time. Moreover, investigating the dispersion in crossings using this 2-bundle MC-MDI model is very interesting (and has not been done as far as I know), but I don't recommend to fit this model to your data as a whole. Fitting a 2-bundle model to single-bundle data is a degenerate problem (many solutions with similar fitting error), so the dispersion parameters outside crossing bundles won't be meaningful (see our NODDIx example ). To still use this 2-bundle model, I suggest to make a mask where you know there is more than 1 peak (using CSD for example), and only fit the multi-bundle model inside these ROIs. 
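The masking strategy above reduces to a single comparison once you have a per-voxel peak-count map (for example derived from a CSD fit via dipy's peaks_from_model; the array below is a synthetic stand-in):

```python
import numpy as np

# stand-in for a peak-count map derived from CSD (e.g. via peaks_from_model)
num_peaks = np.zeros((5, 5, 5), dtype=int)
num_peaks[1:4, 1:4, 2] = 1        # single-bundle white matter
num_peaks[2, 2, 2] = 2            # a crossing region

# fit the 2-bundle dispersion model only where the problem is well-posed,
# i.e. where more than one peak was detected
crossing_mask = num_peaks >= 2
```

Passing `crossing_mask` as the estimation mask keeps the degenerate single-bundle voxels out of the multi-bundle fit, so the dispersion parameters stay meaningful.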
If you're also interested in using tractography-based comparison on your dataset, we'll also soon release CSD-based FOD estimation for the MC-MDI model (winning method of 2017 ISMRM tractography competition ). Let me know how it goes :-) Rutger [1] Farooq, Hamza, et al. "Microstructure imaging of crossing (MIX) white matter fibers from diffusion MRI." *Scientific reports* 6 (2016): 38927. On 23 Apr 2018 17:34, "Ping-Hong Yeh" wrote: Hi Rutger, Thanks for the fix. Do have the estimate of approximate time needed for doing mcdmi_fod_fit on a data of 240*240*187 with 1mm in resolution for total 289 volumes? It has been running for more than 2 days on a MAC OS with 2 X 2.66 GHz 6-Core, 96GB memory machine. Thank you. Ping On Sun, Apr 22, 2018 at 6:10 PM, Rutger Fick wrote: > Hi Ping, > > Great to hear you're trying the toolbox! > > Thanks for pointing out the bug, I just fixed it in the repository, so you > should be able to load the gradient directions without having the error now. > > Dmipy is completely general in that it can import any PGSE-based > acquisition scheme with any number of non-diffusion weighted volumes. Dmipy > internally normalizes the signal according to the mean of all b0-values, > and automatically detects which measurements belong to the same acquisition > shell. > > Let me know if you have any more questions or just generally what your > experience is using dmipy :-) > > Best, > Rutger > > On 21 April 2018 at 19:54, Ping-Hong Yeh wrote: > >> Hi Rutger, >> >> The failure was caused by multiple [ 0 0 0] arrays in the gradient table >> that were used for acquiring non-diffusion weighted volumes. It started >> running after I modified the gradient table by adding 1 to the z-direction >> of [ 0 0 0] to become [ 0 0 1]. >> Can dmipy import the gradient DWI volumes with multiple non-diffusion >> weighted volumes interspersed in-between? >> >> Thank you. 
>> >> Ping >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinghongyeh at gmail.com Wed Apr 25 09:15:16 2018 From: pinghongyeh at gmail.com (Ping-Hong Yeh) Date: Wed, 25 Apr 2018 13:15:16 +0000 Subject: [Neuroimaging] [DIPY] propagator anisotropy estimation using MAP(L)MRI In-Reply-To: References: Message-ID: Hi Rutger, Thank you for the details. Could you elaborate on doing the parallel processing using multi-threads, multi-cores? For the FODs estimation, is the 2-bundle fitting a default model? how could the ROI mask be created based on the CSD parameters? Can the FODs be saved and called for later fiber tracking use? Thank you for providing this great tool. Ping On Wed, Apr 25, 2018, 6:06 AM Rutger Fick wrote: > Hi Ping, > > That's a great dataset you're using! > > Estimating the MC-MDI model itself is quite fast, I think a full HCP > subject takes less than an hour. > However, the secondary parametric FOD estimation in the example > (optimizing 2 bundles with slow but accurate MIX approach [1]) takes more > than a second per voxel. > You can speed this up by using more cores to parallelize the optimization > more, but it will inherently not be very fast [1]. > Alternatively, you can choose to only fit one bundle and use "brute2fine" > optimization, which is much faster and probably feasible to fit to all your > volumes in a more reasonable time. 
> > Moreover, investigating the dispersion in crossings using this 2-bundle > MC-MDI model is very interesting (and has not been done as far as I know), > but I don't recommend to fit this model to your data as a whole. > Fitting a 2-bundle model to single-bundle data is a degenerate problem > (many solutions with similar fitting error), so the dispersion parameters > outside crossing bundles won't be meaningful (see our NODDIx example > > ). > To still use this 2-bundle model, I suggest to make a mask where you know > there is more than 1 peak (using CSD for example), and only fit the > multi-bundle model inside these ROIs. > > If you're also interested in using tractography-based comparison on your > dataset, we'll also soon release CSD-based FOD estimation for the MC-MDI > model (winning method of 2017 ISMRM tractography competition > ). > > Let me know how it goes :-) > Rutger > > [1] Farooq, Hamza, et al. "Microstructure imaging of crossing (MIX) white > matter fibers from diffusion MRI." *Scientific reports* 6 (2016): 38927. > > > On 23 Apr 2018 17:34, "Ping-Hong Yeh" wrote: > > Hi Rutger, > > Thanks for the fix. > > Do have the estimate of approximate time needed for doing mcdmi_fod_fit on > a data of 240*240*187 with 1mm in resolution for total 289 volumes? > > It has been running for more than 2 days on a MAC OS with 2 X 2.66 GHz > 6-Core, 96GB memory machine. > > > Thank you. > > Ping > > On Sun, Apr 22, 2018 at 6:10 PM, Rutger Fick > wrote: > >> Hi Ping, >> >> Great to hear you're trying the toolbox! >> >> Thanks for pointing out the bug, I just fixed it in the repository, so >> you should be able to load the gradient directions without having the error >> now. >> >> Dmipy is completely general in that it can import any PGSE-based >> acquisition scheme with any number of non-diffusion weighted volumes. 
Dmipy >> internally normalizes the signal according to the mean of all b0-values, >> and automatically detects which measurements belong to the same acquisition >> shell. >> >> Let me know if you have any more questions or just generally what your >> experience is using dmipy :-) >> >> Best, >> Rutger >> >> On 21 April 2018 at 19:54, Ping-Hong Yeh wrote: >> >>> Hi Rutger, >>> >>> The failure was caused by multiple [ 0 0 0] arrays in the gradient >>> table that were used for acquiring non-diffusion weighted volumes. It >>> started running after I modified the gradient table by adding 1 to the >>> z-direction of [ 0 0 0] to become [ 0 0 1]. >>> Can dmipy import the gradient DWI volumes with multiple non-diffusion >>> weighted volumes interspersed in-between? >>> >>> Thank you. >>> >>> Ping >>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fick.rutger at gmail.com Wed Apr 25 11:50:03 2018 From: fick.rutger at gmail.com (Rutger Fick) Date: Wed, 25 Apr 2018 17:50:03 +0200 Subject: [Neuroimaging] [DIPY] propagator anisotropy estimation using MAP(L)MRI In-Reply-To: References: Message-ID: Hi Ping, To answer your questions: On parallelization: Dmipy uses the pathos package to parallelize voxel-wise fitting of the voxels inside the estimation mask. 
It's doing multi-processing (not multi-threading), so it depends on the number of cores your computer/cluster has. It should use it automatically after you pip install pathos. On modeling: Dmipy's philosophy is that there is no 'default' model. The one in the example is just to show off dmipy's capabilities. You can generate the FOD model with only 1 bundle by just setting Ncompartments=1 in parametric_fod_model = mcmdi_fit_patch.return_parametric_fod_model( distribution='watson', Ncompartments=1) The fastest way to create a map of the number of estimated peaks per voxel is dipy's peaks_from_model example. Keep in mind that dipy's CSD model implementation is not appropriate for multi-shell data, so you need to fit it to only one of your higher b-value shells (or risk noisy results). Alternatively, you can directly use dmipy's (possibly multi-compartment) multi-shell implementation of CSD illustrated here, but then for the moment you need to pass the estimated FOD sphere function to dipy's "peak_directions" function to get the peaks. There's no direct function in dmipy (or dipy) to save the FODs to disk, but you can easily extract the FOD spherical harmonics or sphere function arrays and save them to disk as a nifti using nibabel (see the bottom of the example here), to be used whenever you want. Best, Rutger On 25 April 2018 at 15:15, Ping-Hong Yeh wrote: > Hi Rutger, > > Thank you for the details. > > Could you elaborate on doing the parallel processing using multi-threads, > multi-cores? > For the FODs estimation, is the 2-bundle fitting a default model? How > could the ROI mask be created based on the CSD parameters? Can the FODs be > saved and called for later fiber tracking use? > > Thank you for providing this great tool. > > Ping > > > On Wed, Apr 25, 2018, 6:06 AM Rutger Fick wrote: > >> Hi Ping, >> >> That's a great dataset you're using! >> >> Estimating the MC-MDI model itself is quite fast, I think a full HCP >> subject takes less than an hour. 
>> However, the secondary parametric FOD estimation in the example >> (optimizing 2 bundles with the slow but accurate MIX approach [1]) takes more >> than a second per voxel. >> You can speed this up by using more cores to parallelize the optimization, >> but it will inherently not be very fast [1]. >> Alternatively, you can choose to fit only one bundle and use "brute2fine" >> optimization, which is much faster and probably feasible to fit to all your >> volumes in a more reasonable time. >> >> Moreover, investigating the dispersion in crossings using this 2-bundle >> MC-MDI model is very interesting (and has not been done as far as I know), >> but I don't recommend fitting this model to your data as a whole. >> Fitting a 2-bundle model to single-bundle data is a degenerate problem >> (many solutions with similar fitting error), so the dispersion parameters >> outside crossing bundles won't be meaningful (see our NODDIx example >> >> ). >> To still use this 2-bundle model, I suggest making a mask where you know >> there is more than 1 peak (using CSD, for example), and only fitting the >> multi-bundle model inside these ROIs. >> >> If you're also interested in a tractography-based comparison on your >> dataset, we'll also soon release CSD-based FOD estimation for the MC-MDI >> model (the winning method of the 2017 ISMRM tractography competition >> ). >> >> Let me know how it goes :-) >> Rutger >> >> [1] Farooq, Hamza, et al. "Microstructure imaging of crossing (MIX) white >> matter fibers from diffusion MRI." *Scientific Reports* 6 (2016): 38927. >> >> >> On 23 Apr 2018 17:34, "Ping-Hong Yeh" wrote: >> >> Hi Rutger, >> >> Thanks for the fix. >> >> Do you have an estimate of the approximate time needed to run mcdmi_fod_fit >> on data of 240*240*187 voxels with 1 mm resolution and 289 volumes in total? >> >> It has been running for more than 2 days on a Mac OS machine with 2 x 2.66 GHz >> 6-core CPUs and 96 GB of memory. >> >> Thank you.
>> >> Ping >> >> On Sun, Apr 22, 2018 at 6:10 PM, Rutger Fick >> wrote: >> >>> Hi Ping, >>> >>> Great to hear you're trying the toolbox! >>> >>> Thanks for pointing out the bug, I just fixed it in the repository, so >>> you should be able to load the gradient directions without having the error >>> now. >>> >>> Dmipy is completely general in that it can import any PGSE-based >>> acquisition scheme with any number of non-diffusion weighted volumes. Dmipy >>> internally normalizes the signal according to the mean of all b0-values, >>> and automatically detects which measurements belong to the same acquisition >>> shell. >>> >>> Let me know if you have any more questions or just generally what your >>> experience is using dmipy :-) >>> >>> Best, >>> Rutger >>> >>> On 21 April 2018 at 19:54, Ping-Hong Yeh wrote: >>> >>>> Hi Rutger, >>>> >>>> The failure was caused by multiple [ 0 0 0] arrays in the gradient >>>> table that were used for acquiring non-diffusion weighted volumes. It >>>> started running after I modified the gradient table by adding 1 to the >>>> z-direction of [ 0 0 0] to become [ 0 0 1]. >>>> Can dmipy import the gradient DWI volumes with multiple non-diffusion >>>> weighted volumes interspersed in-between? >>>> >>>> Thank you. 
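The behaviour described in the quoted reply — normalizing by the mean of all b0 measurements and automatically grouping the remaining measurements into acquisition shells — can be illustrated with a small stdlib-only sketch. The function name, b0 threshold, and shell tolerance below are invented for illustration (dmipy's real implementation differs); note that multiple b0 volumes interspersed anywhere in the table are handled naturally:

```python
def normalize_and_find_shells(bvalues, signals, b0_threshold=50.0,
                              shell_tolerance=100.0):
    """Normalize a DWI signal by the mean of its b0 measurements and
    group the remaining b-values (s/mm^2) into acquisition shells.

    Returns (normalized_signals, shells), where shells maps a shell's
    first b-value to the indices of measurements assigned to it."""
    b0_signals = [s for b, s in zip(bvalues, signals) if b <= b0_threshold]
    if not b0_signals:
        raise ValueError("no b0 measurements found")
    mean_b0 = sum(b0_signals) / len(b0_signals)
    normalized = [s / mean_b0 for s in signals]

    # Greedy clustering: a measurement within shell_tolerance of an
    # existing shell centre joins that shell, otherwise it opens one.
    shells = {}
    for i, b in enumerate(bvalues):
        if b <= b0_threshold:
            continue  # b0 volumes may appear anywhere in the table
        for centre in shells:
            if abs(b - centre) <= shell_tolerance:
                shells[centre].append(i)
                break
        else:
            shells[b] = [i]
    return normalized, shells

bvals = [0, 0, 1000, 995, 2000, 2005, 0]  # interspersed b0s are fine
sigs = [200.0, 180.0, 95.0, 90.0, 60.0, 55.0, 190.0]
norm, shells = normalize_and_find_shells(bvals, sigs)
print(shells)  # {1000: [2, 3], 2000: [4, 5]}
```

In real dmipy usage you would build an acquisition scheme from the bvals/bvecs and let the toolbox do this internally; the sketch only makes the idea concrete.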
>>>> >>>> Ping >>>> >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From asrozhnov_1 at edu.hse.ru Thu Apr 26 04:39:08 2018 From: asrozhnov_1 at edu.hse.ru (Alexander Rozhnov) Date: Thu, 26 Apr 2018 08:39:08 +0000 Subject: [Neuroimaging] Time series per voxel Message-ID: Hello! My name is Alexander, I am a student from Russia interested in MRI research. I have been struggling with a problem that I cannot solve. Can you help me, please? My aim is to obtain the time series of each voxel within a given ROI. I have found methods that give me a single time series for a whole ROI, but that series is already preprocessed, which is not what I need. Is there any way to do this using existing functions?
Sincerely, Alexander Rozhnov From garyfallidis at gmail.com Fri Apr 27 22:24:56 2018 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Sat, 28 Apr 2018 02:24:56 +0000 Subject: [Neuroimaging] Brainhack Global at Indiana University Bloomington - 3 days left to register In-Reply-To: References: Message-ID: Dear all! 3 days left to register for the BrainHack Global Event @IU. Click below! *https://brainhack.sice.indiana.edu * This year's Brainhack Global @ IU will include: - *Amazing research talks* - *Unique hands-on tutorials* - *Social coding sessions* Speakers: - *Carlo Pierpaoli (NIH)* - *Justin Gardner (USC)* - *Kesshi Jordan (UCSF)* - *Dogu Baran Aydogan (USC)* - *Konstantinos Arfanakis (IIT)* - *Yaroslav Halchenko (DART)* - *Divya Varadarajan (USC)* Tutorials: - *Okan Iranoglu (NIH)*: Tortoise tutorials - *Eleftherios Garyfallidis (IU)*: DIPY tutorials - *Franco Pestilli (IU)*: BrainLife tutorials Projects: Automatic segmentation of bundles, mastering tracking, correcting motion for Eulerian video magnification, decoding fMRI data, and many more... Demos: Brain connectivity toolbox, BrainSuite, Amatria, VR, and many more... Q&A sessions: Bring your data and ask questions! *See you @ the Hack!* Please *forward* to *all* interested parties! *See program **here*!!! Best regards, *Eleftherios Garyfallidis, PhD* Assistant Professor *Intelligent Systems Engineering* *Indiana University* Luddy Hall 700 N Woodlawn Bloomington, IN 47408 *https://grg.sice.indiana.edu http://dipy.org * -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe at pallier.org Sat Apr 28 03:41:35 2018 From: christophe at pallier.org (Christophe Pallier) Date: Sat, 28 Apr 2018 09:41:35 +0200 Subject: [Neuroimaging] Time series per voxel In-Reply-To: References: Message-ID: What do you mean 'preprocessed'?
You can extract time-series from the original EPI files (although if they are not corrected for movement and slice-timing delays, it may not be a great idea). Christophe On Thu, Apr 26, 2018 at 10:39 AM, Alexander Rozhnov via Neuroimaging wrote: > Hello! > > My name is Alexander, I am a student from Russia interested in MRI > research. I have been struggling with a problem that I cannot solve. > Can you help me, please? > > My aim is to obtain the time series of each voxel within a given ROI. I have found > methods that give me a single time series for a whole ROI, but that series is already > preprocessed, which is not what I need. Is there any way to do this > using existing functions? > > Sincerely, > Alexander Rozhnov > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -- -- Christophe Pallier INSERM-CEA Cognitive Neuroimaging Lab, Neurospin, bat 145, 91191 Gif-sur-Yvette Cedex, France Tel: 00 33 1 69 08 79 34 Personal web site: http://www.pallier.org Lab web site: http://www.unicog.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From elef at indiana.edu Fri Apr 27 18:19:15 2018 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Fri, 27 Apr 2018 22:19:15 +0000 Subject: [Neuroimaging] Brainhack Global at Indiana University Bloomington - 3 days left to register Message-ID: Dear all! 3 days left to register for the BrainHack Global Event @IU. Click below!
*https://brainhack.sice.indiana.edu * This year's Brainhack Global @ IU will include: - *Amazing research talks* - *Unique hands-on tutorials* - *Social coding sessions* Speakers: - *Carlo Pierpaoli (NIH)* - *Justin Gardner (USC)* - *Kesshi Jordan (UCSF)* - *Dogu Baran Aydogan (USC)* - *Konstantinos Arfanakis (IIT)* - *Yaroslav Halchenko (DART)* - *Divya Varadarajan (USC)* Tutorials: - *Okan Iranoglu (NIH)*: Tortoise tutorials - *Eleftherios Garyfallidis (IU)*: DIPY tutorials - *Franco Pestilli (IU)*: BrainLife tutorials Projects: Automatic segmentation of bundles, mastering tracking, correcting motion for Eulerian video magnification, decoding fMRI data, and many more... Demos: Brain connectivity toolbox, BrainSuite, Amatria, VR, and many more... Q&A sessions: Bring your data and ask questions! *See you @ the Hack!* Please *forward* to *all* interested parties! See program attached! Best regards, *Eleftherios Garyfallidis, PhD* Assistant Professor *Intelligent Systems Engineering* *Indiana University* Luddy Hall 700 N Woodlawn Bloomington, IN 47408 *https://grg.sice.indiana.edu http://dipy.org * -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bh18_indiana_university_progam.pdf Type: application/pdf Size: 785733 bytes Desc: not available URL: From roman.rodionov at gmail.com Sat Apr 28 10:29:02 2018 From: roman.rodionov at gmail.com (Roman Rodionov) Date: Sat, 28 Apr 2018 15:29:02 +0100 Subject: [Neuroimaging] Time series per voxel In-Reply-To: References: Message-ID: Hi Alexander, MarsBaR (a toolbox for SPM) is an option if you are using Matlab. HTH Roman On Thu, Apr 26, 2018 at 9:39 AM, Alexander Rozhnov via Neuroimaging wrote: > Hello! > > My name is Alexander, I am a student from Russia interested in MRI > research. I have been struggling with a problem that I cannot solve. > Can you help me, please?
> > My aim is to obtain the time series of each voxel within a given ROI. I have found > methods that give me a single time series for a whole ROI, but that series is already > preprocessed, which is not what I need. Is there any way to do this > using existing functions? > > Sincerely, > Alexander Rozhnov > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davclark at gmail.com Sat Apr 28 17:51:11 2018 From: davclark at gmail.com (Dav Clark) Date: Sat, 28 Apr 2018 21:51:11 +0000 Subject: [Neuroimaging] Time series per voxel In-Reply-To: References: Message-ID: I'm pretty sure the OP means individual voxel time series, as opposed to the mean over an ROI? So the beginning of the choose-your-own-adventure is whether you understand numpy indexing. If so, you can easily grab a time series or a slice or whatever. Nibabel gives you the image as a numpy array, and if you're just getting started I'd encourage you to just grab each location / time series one at a time. See here: https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.indexing.html http://nipy.org/nibabel/gettingstarted.html Cheers, Dav On Sat, Apr 28, 2018, 3:42 AM Christophe Pallier wrote: > What do you mean 'preprocessed'? > > You can extract time-series from the original EPI files (although if they > are not corrected for movement and slice-timing delays, it may not be a > great idea). > > Christophe > > On Thu, Apr 26, 2018 at 10:39 AM, Alexander Rozhnov via > Neuroimaging wrote: > >> Hello! >> >> My name is Alexander, I am a student from Russia interested in >> MRI research. I have been struggling with a problem that I cannot solve. >> Can you help me, please? >> >> My aim is to obtain the time series of each voxel within a given ROI.
I have found >> methods that give me a single time series for a whole ROI, but that series is already >> preprocessed, which is not what I need. Is there any way to do this >> using existing functions? >> >> Sincerely, >> Alexander Rozhnov >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > > > -- > -- > Christophe Pallier > INSERM-CEA Cognitive Neuroimaging Lab, Neurospin, bat 145, > 91191 Gif-sur-Yvette Cedex, France > Tel: 00 33 1 69 08 79 34 > Personal web site: http://www.pallier.org > Lab web site: http://www.unicog.org > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sat Apr 28 18:10:26 2018 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 29 Apr 2018 00:10:26 +0200 Subject: [Neuroimaging] Time series per voxel In-Reply-To: References: Message-ID: <20180428221026.ueyu5nwkxhgqloyf@phare.normalesup.org> In that case, the simplest thing might be to use nilearn's NiftiMasker, which does pretty much this: http://nilearn.github.io/manipulating_images/masker_objects.html Gaël On Sat, Apr 28, 2018 at 09:51:11PM +0000, Dav Clark wrote: > I'm pretty sure the OP means individual voxel time series, as opposed to the mean over an ROI? > So the beginning of the choose-your-own-adventure is whether you understand > numpy indexing. If so, you can easily grab a time series or a slice or > whatever. Nibabel gives you the image as a numpy array, and if you're just > getting started I'd encourage you to just grab each location / time series one > at a time.
See here: > https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.indexing.html > http://nipy.org/nibabel/gettingstarted.html > Cheers, > Dav > On Sat, Apr 28, 2018, 3:42 AM Christophe Pallier > wrote: > What do you mean 'preprocessed'? > You can extract time-series from the original EPI files (although if they > are not corrected for movement and slice-timing delays, it may not be a > great idea). > Christophe > On Thu, Apr 26, 2018 at 10:39 AM, Alexander Rozhnov via > Neuroimaging wrote: > Hello! > My name is Alexander, I am a student from Russia interested in > MRI research. I have been struggling with a problem that I cannot > solve. Can you help me, please? > My aim is to obtain the time series of each voxel within a given ROI. I have > found methods that give me a single time series for a whole ROI, but that > series is already preprocessed, which is not what I need. Is there > any way to do this using existing functions? > Sincerely, > Alexander Rozhnov > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Gael Varoquaux Senior Researcher, INRIA Parietal NeuroSpin/CEA Saclay, Bat 145, 91191 Gif-sur-Yvette France Phone: ++ 33-1-69-08-79-68 http://gael-varoquaux.info http://twitter.com/GaelVaroquaux
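Putting the thread's advice together: extracting a per-voxel time series is just indexing a 4D array. A stdlib-only toy sketch of the idea (the data and function name here are made up for illustration; with nibabel + numpy, as suggested above, the equivalent operation is simply `img.get_fdata()[x, y, z, :]` per voxel):

```python
def roi_timeseries(data4d, roi):
    """Return {(x, y, z): time_series} for every voxel coordinate in roi.

    data4d is indexed as data4d[x][y][z][t] — the same layout as a
    nibabel 4D image array, where the last axis is time."""
    return {(x, y, z): list(data4d[x][y][z]) for (x, y, z) in roi}

# Toy 2x2x2 volume with 3 time points; each value encodes voxel and time
# so the extracted series are easy to check by eye.
data = [[[[100 * x + 10 * y + z + 1000 * t for t in range(3)]
          for z in range(2)]
         for y in range(2)]
        for x in range(2)]
roi = [(0, 0, 1), (1, 1, 0)]
series = roi_timeseries(data, roi)
print(series[(0, 0, 1)])  # [1, 1001, 2001]
```

For real NIfTI data, nilearn's NiftiMasker (as Gaël suggests) does the masking and per-voxel extraction in one step and returns a time-by-voxel matrix.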