From valeriehayot at gmail.com Fri Mar 3 13:33:40 2017
From: valeriehayot at gmail.com (Valerie Hayot-Sasson)
Date: Fri, 3 Mar 2017 13:33:40 -0500
Subject: [Neuroimaging] Saving large NIfTI1 images using NiBabel
Message-ID:

Hi,

I'm trying to create a large empty matrix of size 3850 x 3025 x 3500, load it into NiBabel and save it as a NIfTI1 image. Loading the matrix into NiBabel works great, but I am unable to save it afterwards using nibabel.save(matrix, filename), as it uses too many resources. Is there another way to go about this (e.g. save the NIfTI header alone and then append data to it without requiring as much memory)?

Thank you,
Valerie

From arokem at gmail.com Mon Mar 6 12:21:58 2017
From: arokem at gmail.com (Ariel Rokem)
Date: Mon, 6 Mar 2017 09:21:58 -0800
Subject: [Neuroimaging] Call for papers: Neuroscience in Python -- mini-symposium at the Scientific Computing in Python conference, 2017 (Scipy), July 10th-16th, UT Austin
Message-ID:

SciPy 2017, the sixteenth annual Scientific Computing with Python conference, will be held this July 10th-16th in Austin, Texas. SciPy is a community dedicated to the advancement of scientific computing through open source Python software for mathematics, science, and engineering. The annual SciPy Conference allows participants from all types of organizations to showcase their latest projects, learn from skilled users and developers, and collaborate on code development.

This year's conference will feature a mini-symposium on Neuroscience in Python. Developers and scientists using Python to answer questions about the brain at any scale, and using any experimental/computational method, are invited to submit their proposals to participate and present their work at the conference.

For details and submission: http://scipy2017.scipy.org/ehome/220975/493425/

Important dates:
Abstract submission deadline: March 27th
Notification of acceptance: May 2nd

Chairs:
Olivia Guest, University College London
Ariel Rokem, The University of Washington eScience Institute

From avesani at fbk.eu Thu Mar 9 12:57:47 2017
From: avesani at fbk.eu (Paolo Avesani)
Date: Thu, 9 Mar 2017 18:57:47 +0100
Subject: [Neuroimaging] [Dipy] VTK animation of tractography
Message-ID:

I need to prepare an animated visualization of tractography. The main objective is to show sequentially some intermediate steps of tractography processing. For example, a sequence of three scenes: (i) the visualization of the whole tractogram, (ii) the visualization of a ROI, and (iii) the visualization of the streamlines selected by the ROI, hiding the remaining ones.

I had in mind to use the VTK support of Dipy. The issue I'm facing is that once the scene has been rendered and shown, it is no longer possible to incrementally revise the objects to be rendered. I need to "destroy" the window before building the next scene, which breaks the perception of "animation".

Is there a "simple" way to show/hide objects in the window after the "show" command? Currently, after the "show" command, the listener doesn't release the prompt for new commands.

Other solutions are equally welcome.
Paolo
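[Regarding the large-NIfTI question at the top of this digest: one possible approach is to back the array with a file on disk via numpy's memmap, so the voxel data never has to live fully in RAM. This is a minimal sketch, not a verified recipe -- whether nibabel avoids temporary in-memory copies on write depends on the data dtype matching the header dtype, and the filenames, dtype and affine here are made-up placeholders.]

```
import numpy as np
import nibabel as nib

shape = (3850, 3025, 3500)

# Hypothetical scratch file; at uint8 this is ~40 GB on disk (sparse on
# most filesystems until written), rather than ~40 GB of RAM
data = np.memmap('scratch.dat', dtype=np.uint8, mode='w+', shape=shape)

img = nib.Nifti1Image(data, affine=np.eye(4))
img.set_data_dtype(np.uint8)  # match on-disk dtype to avoid a scaling cast

# Save uncompressed: .nii.gz would route everything through a gzip buffer
nib.save(img, 'big.nii')
```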
From elef at indiana.edu Thu Mar 9 13:06:34 2017
From: elef at indiana.edu (Eleftherios Garyfallidis)
Date: Thu, 09 Mar 2017 18:06:34 +0000
Subject: [Neuroimaging] [Dipy] VTK animation of tractography
In-Reply-To:
References:
Message-ID:

Hi Paolo,

The new user interface allows that. You could have buttons that hide objects or manipulate other objects. You can also have a timer callback, which allows you to animate.

How urgently do you need this feature? We can either give you some older branches that show you how to animate, or you can wait until all the UI components are in place. It all depends on whether you want a quick solution or a long-term solution.

Best regards,
Eleftherios

From avesani at fbk.eu Fri Mar 10 12:32:46 2017
From: avesani at fbk.eu (Paolo Avesani)
Date: Fri, 10 Mar 2017 18:32:46 +0100
Subject: [Neuroimaging] [Dipy] VTK animation of tractography
In-Reply-To:
References:
Message-ID:

I need to prepare the animation for the end of next week. I'm planning to work on it this weekend, so I would proceed with an older branch. The timer callback seems to fit my goal.

Best,
Paolo

--
Paolo Avesani
Fondazione Bruno Kessler
via Sommarive 18, 38050 Povo (TN) - I
phone: +39 0461 314336
fax: +39 0461 302040
email: avesani at fbk.eu
web: avesani.fbk.eu
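[Since the thread never shows code, here is a minimal sketch of the timer-callback idea Eleftherios mentions, written against the 2017-era dipy.viz API plus a raw VTK repeating timer. Treat the ShowManager details (in particular its iren attribute) as assumptions about that API version; the streamline data is a random stand-in.]

```
import numpy as np
from dipy.viz import window, actor

# Stand-in data: in practice these come from a loaded tractogram and a
# ROI-based selection of it
streamlines = [np.random.rand(20, 3) * 10 for _ in range(200)]
selected = streamlines[:50]

ren = window.Renderer()
full_actor = actor.line(streamlines)
sel_actor = actor.line(selected)
ren.add(full_actor)
ren.add(sel_actor)
sel_actor.SetVisibility(0)  # start by showing the whole tractogram

show_m = window.ShowManager(ren)
show_m.initialize()

state = {'step': 0}

def timer_cb(obj, event):
    # On each tick, advance the "scene" by toggling actor visibility
    # instead of destroying and rebuilding the window
    state['step'] += 1
    if state['step'] == 1:
        full_actor.SetVisibility(0)  # hide the rest, keep the selection
        sel_actor.SetVisibility(1)
    show_m.render()

show_m.iren.AddObserver('TimerEvent', timer_cb)
show_m.iren.CreateRepeatingTimer(2000)  # milliseconds between scenes
show_m.start()
```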
From nmzuo at nlpr.ia.ac.cn Fri Mar 17 08:29:49 2017
From: nmzuo at nlpr.ia.ac.cn (ZUO, Nianming)
Date: Fri, 17 Mar 2017 20:29:49 +0800 (GMT+08:00)
Subject: [Neuroimaging] [PySurfer] Display error: Black screen with sparse spots
Message-ID:

Dear All,

I have encountered a strange error when I run the demo plot_basics.py. It brings up a show window, but the window is almost black, with nothing except some white spots. Attached is a snapshot (but I am not sure it can be displayed on the mailing list).

My system is Ubuntu 14.04, Python 2.7.6, GCC 4.8.4, Mayavi 4.5.0. Mayavi itself works successfully (with all GUI windows, menus and functions).

I have tested in pysurfer (at the command line, with IPython 1.2.1-2 support) and in the python command line; both show the same window as in the snapshot. What's more, I have checked out (via git) several versions, including the newest dev version, maint/0.7 and v0.6, but without any success.

Could anyone give me some advice?

Thanks,
Nicozuo
--
PhD, Brainnetome Center
NLPR, Institute of Automation
Chinese Academy of Sciences
Tel, +86 10 8254 4768
Fax, +86 10 8254 4777
http://www.brainnetome.org/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PySurfer_show_error.png
Type: image/png
Size: 16993 bytes

From arokem at gmail.com Mon Mar 20 17:54:08 2017
From: arokem at gmail.com (Ariel Rokem)
Date: Mon, 20 Mar 2017 14:54:08 -0700
Subject: [Neuroimaging] Connectionists: Call for papers: Neuroscience in Python -- mini-symposium at the Scientific Computing in Python conference, 2017 (Scipy), July 10th-16th, UT Austin
In-Reply-To:
References:
Message-ID:

The deadline for submission of talk proposals for the *Neuroscience in Python* mini-symposium at *Scipy* has been extended to March 30th!

For details and submission: http://scipy2017.scipy.org/ehome/220975/493425/
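[An aside on the PySurfer black-screen report above: no resolution appears in this digest, but with Mayavi-based rendering a common first check is to pin the GUI toolkit via the ETS_TOOLKIT environment variable before anything touching Mayavi is imported. ETS_TOOLKIT is a real Enthought/Mayavi setting; that it fixes this particular symptom is only an assumption.]

```
import os

# Must be set before Mayavi/PySurfer imports; 'qt4' and 'wx' were the usual
# choices on 2017-era stacks
os.environ['ETS_TOOLKIT'] = 'qt4'

from surfer import Brain  # imported only after the toolkit is pinned

# Assumes FreeSurfer's fsaverage is available and SUBJECTS_DIR is set
brain = Brain('fsaverage', 'lh', 'inflated')
```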
From alexandre.gramfort at telecom-paristech.fr Fri Mar 24 05:13:34 2017
From: alexandre.gramfort at telecom-paristech.fr (Alexandre Gramfort)
Date: Fri, 24 Mar 2017 10:13:34 +0100
Subject: [Neuroimaging] [ANN] MNE-Python 0.14
Message-ID:

Hi,

We are pleased to announce the new 0.14 release of MNE-Python. As usual this release comes with new features, bug fixes, and many improvements to usability, visualization, and documentation.

A few highlights
============

- We have added I/O support for Artemis123 infant/toddler MEG data
- We no longer require MNE-C for BEM and scalp processing steps
- Interactive annotation mode is now available in raw plotting
- Dipole locations can now be visualized with MRI slice overlay
- Add minimum-phase filtering option in mne.io.Raw.filter()
- New mne.datasets.visual_92_categories dataset with an example of Representational Similarity Analysis (RSA)

Notable API changes
================

- Fix bug with DICS and LCMV (functions mne.beamformer.lcmv and mne.beamformer.dics) where regularization was done improperly. The default reg=0.01 has been changed to reg=0.05
- The filtering functions band_pass_filter, band_stop_filter, low_pass_filter, and high_pass_filter have been deprecated in favor of mne.filter.filter_data
- mne.decoding.Scaler now scales each channel independently using data from all time points (epochs and times) instead of scaling all channels for each time point. It also now accepts a scalings parameter to determine the data scaling method (the default is None, to use static channel-type-based scaling)
- The default tmax=60. in mne.io.Raw.plot_psd will change to tmax=np.inf in 0.15
- The mne.decoding.LinearModel class will no longer support plot_filters and plot_patterns; use mne.EvokedArray with mne.decoding.get_coef instead
- Made the functions mne.time_frequency.tfr_array_multitaper, mne.time_frequency.tfr_array_morlet, mne.time_frequency.tfr_array_stockwell, mne.time_frequency.psd_array_multitaper and mne.time_frequency.psd_array_welch public, to allow computing TFRs and PSDs on numpy arrays
- mne.preprocessing.ICA.fit now rejects data annotated bad by default when working with Raw

For a full list of improvements and API changes, see:
http://martinos.org/mne/stable/whats_new.html#version-0-14

To install the latest release, the following command should do the job:

pip install --upgrade --user mne

As usual we welcome your bug reports, feature requests, critiques, and contributions.
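[To make the filtering deprecation above concrete, a small before/after sketch of the new-style call. The array, sampling rate and band edges are made up for illustration; the old call in the comment is approximate.]

```
import numpy as np
import mne

sfreq = 250.0                    # hypothetical sampling rate, in Hz
data = np.random.randn(4, 2500)  # hypothetical (n_channels, n_times) array

# Deprecated in 0.14: mne.filter.band_pass_filter(data, sfreq, 1., 40.)
# New style: a single function, with band edges given as l_freq/h_freq
filtered = mne.filter.filter_data(data, sfreq=sfreq, l_freq=1., h_freq=40.)
```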
Some links:

- https://github.com/mne-tools/mne-python (code + readme on how to install)
- http://martinos.org/mne/stable/ (full MNE documentation)

Follow us on Twitter: https://twitter.com/mne_python

Regards,
The MNE-Python developers

People who contributed to this release (in alphabetical order):

* Alexander Rudiuk
* Alexandre Gramfort
* Annalisa Pascarella
* Antti Rantala
* Asish Panda
* Burkhard Maess
* Chris Holdgraf
* Christian Brodbeck
* Cristóbal Moënne-Loccoz
* Daniel McCloy
* Denis A. Engemann
* Eric Larson
* Erkka Heinila
* Hermann Sonntag
* Jaakko Leppakangas
* Jakub Kaczmarzyk
* Jean-Remi King
* Jon Houck
* Jona Sassenhagen
* Jussi Nurminen
* Keith Doelling
* Leonardo S. Barbosa
* Lorenz Esch
* Lorenzo Alfine
* Luke Bloy
* Mainak Jas
* Marijn van Vliet
* Matt Boggess
* Matteo Visconti
* Mikolaj Magnuski
* Niklas Wilming
* Paul Pasler
* Richard Höchenberger
* Sheraz Khan
* Stefan Repplinger
* Teon Brooks
* Yaroslav Halchenko

From bertrand.thirion at inria.fr Fri Mar 24 05:19:21 2017
From: bertrand.thirion at inria.fr (Bertrand Thirion)
Date: Fri, 24 Mar 2017 10:19:21 +0100 (CET)
Subject: [Neuroimaging] [ANN] MNE-Python 0.14
In-Reply-To:
Message-ID: <501591257.37739833.1490347161655.JavaMail.zimbra@inria.fr>

Congratulations!
B
From davclark at gmail.com Fri Mar 24 16:57:41 2017
From: davclark at gmail.com (Dav Clark)
Date: Fri, 24 Mar 2017 16:57:41 -0400
Subject: [Neuroimaging] Using nibabel to merge 3D image pairs into single-file 4D Nifti1
Message-ID:

So, this is a post on stack overflow:

http://stackoverflow.com/questions/33737282/copy-header-when-merging-multiple-mri-images-using-nibabel

I'm doing something more or less equivalent:

```
from glob import glob

import numpy as np
import nibabel as nib

sample_path = 'some_path/some_sub*.hdr'
fnames = glob(sample_path)
imgs = [nib.load(fn) for fn in fnames]

img_data = np.stack([img.dataobj for img in imgs], axis=-1)

converted = nib.Nifti1Image(img_data, imgs[0].affine, imgs[0].header)
converted.to_filename('some_filename.nii.gz')
```

But the output ends up having a severely restricted range (thus rendering by default as black in fslview) as compared to fslmerge -t. fslmerge gets the TR right, while the nibabel version sets TR (pixdim4) to 1.0. Moreover, imgs[0].header.get_slice_times() results in an error.

Also, while the dimensions appear to be the same, I get different shaped time-series (as well as different absolute values, of course) from the two different conversion paths. I suspect I'm missing a scaling parameter from the files.
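[One quick, hedged way to check that scaling hunch: nibabel exposes the NIfTI scl_slope/scl_inter pair both on the header and on the array proxy. The filename is a hypothetical match for the glob above.]

```
import nibabel as nib

img = nib.load('some_path/some_sub001.hdr')  # hypothetical input file

print(img.header.get_slope_inter())          # (slope, inter) in the header; may be (None, None)
print(img.dataobj.slope, img.dataobj.inter)  # scaling the proxy applies on read
```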
For now, I am just using fslmerge -t, but since at least some other person out there has also done this (see the SO question linked above), I figure it's worth asking.

If I could get this to work, it becomes very easy to implement in dask and take advantage of the spiffy computer I now have access to...

Thanks!
Dav

ps - is there a searchable archive of the list? If not, I have a buddy who is pretty facile with mailman listservs...

From matthew.brett at gmail.com Fri Mar 24 17:10:18 2017
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 24 Mar 2017 21:10:18 +0000
Subject: [Neuroimaging] Using nibabel to merge 3D image pairs into single-file 4D Nifti1
In-Reply-To:
References:
Message-ID:

Hi,

On Fri, Mar 24, 2017 at 8:57 PM, Dav Clark wrote:
> So, this is a post on stack overflow: [...]
> I'm doing something more or less equivalent: [...]

Did you try:

converted = nib.concat_images(fnames)

? I guess the correct TR and slice timing info is correctly set in the first image header - I mean:

first_header = nib.load(fnames[0]).header

> ps - is there a searchable archive of the list? If not, I have a buddy
> who is pretty facile with mailman listservs...

Actually, no, I don't think we have - we would be very glad of help. We're using the same python.org mailman hosting as scipy and numpy, so a general solution would be generally useful.

Cheers,

Matthew

From jcohen at polymtl.ca Sat Mar 25 13:04:38 2017
From: jcohen at polymtl.ca (Julien Cohen-Adad)
Date: Sat, 25 Mar 2017 13:04:38 -0400
Subject: [Neuroimaging] SCT 3.0.1
Message-ID:

Dear Neuroimaging community,

We are pleased to announce the 3.0.1 release of the Spinal Cord Toolbox (SCT), which can be downloaded here:
https://github.com/neuropoly/spinalcordtoolbox/releases

Changes in this release:
https://github.com/neuropoly/spinalcordtoolbox/blob/release/CHANGES.md

Installation instructions:
https://sourceforge.net/p/spinalcordtoolbox/wiki/installation/

If you have any questions or feature requests, please post on the forum:
https://sourceforge.net/p/spinalcordtoolbox/discussion/help/

Feedback is always appreciated :-)

Best regards,
The SCT Team
From alexandre.pron at gmail.com Mon Mar 27 09:46:33 2017
From: alexandre.pron at gmail.com (alexandre pron)
Date: Mon, 27 Mar 2017 15:46:33 +0200
Subject: [Neuroimaging] dipy: support of multishell CSD
Message-ID:

Hello everybody,

I would like to process some subjects of the HCP dataset using Dipy. I was wondering if the CSD method implemented in Dipy fully supports multi-shell data. By looking into the code, it seems that the algorithm does not make the difference between shells, but I am not sure I understand correctly what is done.

If you have any hints ^^

Thank you very much,
Alexandre

From elef at indiana.edu Mon Mar 27 10:35:38 2017
From: elef at indiana.edu (Eleftherios Garyfallidis)
Date: Mon, 27 Mar 2017 14:35:38 +0000
Subject: [Neuroimaging] dipy: support of multishell CSD
In-Reply-To:
References:
Message-ID:

Hi Alexandre,

Great question! Yes, we do have an implementation of the Multi-Tissue and Multi-Shell (MTMS) algorithm, and we do need beta testers. Here is the link:
https://github.com/nipy/dipy/pull/1168

I would suggest looking at the tests to understand how to call the function:
https://github.com/nipy/dipy/pull/1168/files#diff-33ad2cd2560268f467aba204977dbea4

If you use it, please do give feedback to us and to Bago Amirbekian, who wrote the code.

Best regards,
Eleftherios

From davclark at gmail.com Tue Mar 28 13:55:11 2017
From: davclark at gmail.com (Dav Clark)
Date: Tue, 28 Mar 2017 13:55:11 -0400
Subject: [Neuroimaging] Using nibabel to merge 3D image pairs into single-file 4D Nifti1
In-Reply-To:
References:
Message-ID:

Regarding archiving the mailing list, it'd actually be really easy to use this:

https://www.mail-archive.com/faq.html#newlist

I am almost certain that your proposed solution will work. I suppose this means the documentation could be improved, but I'm not sure how to address it. Maybe having a list with all functions and their short descriptions on one page?

I'll report back if there's a problem anyway. Otherwise, assume your solution worked.

Thanks for your quick response!
D
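[Returning to the multi-shell CSD thread above: for reference, this is roughly how the stable, single-shell CSD model was driven in Dipy at the time -- it fits all shells together, which is the behaviour Alexandre noticed. The multi-shell/multi-tissue variant in PR #1168 has its own interface, not guessed at here. A sketch using Dipy's bundled Stanford HARDI dataset; parameter values are just the commonly used ones.]

```
from dipy.data import read_stanford_hardi
from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
                                   auto_response)

# Small public dataset; swap in HCP data plus a gradient_table for real use
img, gtab = read_stanford_hardi()
data = img.get_data()

# Estimate the single-fiber response from a high-FA region
response, ratio = auto_response(gtab, data, roi_radius=10, fa_thr=0.7)

csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order=8)
csd_fit = csd_model.fit(data[:, :, 38:39])  # one slice, to keep it quick
```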
From davclark at gmail.com Tue Mar 28 14:45:37 2017
From: davclark at gmail.com (Dav Clark)
Date: Tue, 28 Mar 2017 14:45:37 -0400
Subject: [Neuroimaging] Using nibabel to merge 3D image pairs into single-file 4D Nifti1
In-Reply-To:
References:
Message-ID:

Unfortunately, there do seem to be some minor differences. I don't know if they matter, but I'm unsettled that they are different at all...

To clarify, I'm starting with nifti pair images for each TR, so I used your function, and then converted to Nifti1:

converted = nib.concat_images(fnames)
converted2 = nib.Nifti1Image.from_image(converted)

This version, upon saving with .to_filename(), comes up in fslview as totally black (the range is not calibrated for some reason - maybe there's an errant bright voxel somewhere?). Compared to the results from `fslmerge -t`, setting the range the same for voxel intensities, I get the same images and the same time-series shape. However, intensities are slightly off (even in the same TR) by values ranging from hundreds to thousands (the raw intensities are in the 100,000s). So, usually < 1%, I suspect.

Is this worth getting to the bottom of? Happy to share some EPIs with you somewhere... by default I would use JHU Box.

Best,
Dav
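[A hedged way to quantify "slightly off" when comparing the two merged outputs; the paths are hypothetical.]

```
import numpy as np
import nibabel as nib

a = nib.load('nib_merged.nii.gz').get_data().astype(np.float64)
b = nib.load('fsl_merged.nii.gz').get_data().astype(np.float64)

print(np.abs(a - b).max())                    # worst-case absolute difference
print(np.abs(a - b).max() / np.abs(b).max())  # relative to the data range
print(np.allclose(a, b, rtol=1e-2))           # agree to within 1%?
```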
From matthew.brett at gmail.com Tue Mar 28 16:47:30 2017
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 28 Mar 2017 21:47:30 +0100
Subject: [Neuroimaging] Using nibabel to merge 3D image pairs into single-file 4D Nifti1
In-Reply-To:
References:
Message-ID:

Hi,

On Tue, Mar 28, 2017 at 7:45 PM, Dav Clark wrote:
> Unfortunately, there do seem to be some minor differences. I don't know if
> they matter, but I'm unsettled that they are different at all... [...]
> To clarify, I'm starting with nifti pair images for each TR, so I used your
> function, and then converted to Nifti1: [...]

I'm sure you know, but you can always save directly to whatever format the extension implies, as in:

converted = nib.concat_images(fnames)
nib.save(converted, 'my_filename.nii')

> This version, upon saving with .to_filename(), comes up in fslview as
> totally black [...]

Just to clarify - the result of `fslmerge` does not look totally black in fslview? Do the images from fslmerge and `nib.concat_images` have the same data range? What range do they have when you load them into nibabel and do:

data = img.get_data()
print(data.min(), data.max())

?

Cheers,

Matthew

From davclark at gmail.com Tue Mar 28 17:22:42 2017
From: davclark at gmail.com (Dav Clark)
Date: Tue, 28 Mar 2017 17:22:42 -0400
Subject: [Neuroimaging] Using nibabel to merge 3D image pairs into single-file 4D Nifti1
In-Reply-To:
References:
Message-ID:

Yo!

On Tue, Mar 28, 2017 at 4:47 PM, Matthew Brett wrote:
> I'm sure you know, but you can always save directly to whatever format
> the extension implies [...]

I'm sure I could've figured that out. I appreciate your efforts to save me typing.

> Just to clarify - the result of `fslmerge` does not look totally black
> in fslview?

That's correct. Looks real pretty-like, from the deepest black to the whitest white my monitor can display.

> Do the images from fslmerge and `nib.concat_images` have the same data
> range?

This was the ticket, I think. Min values are both 0. But max are 1253294.0009593964 for the nib-merged version, and 1253294.0 for the FSL-merged version.
Turns out the dtypes are '<...' - the initial images are also '<...' - which would explain why the nib file is smaller (75MB vs 90MB). Neither of the concatenated images has the exact same max in the first slice as the first original image either.

Anyway, I'm happy to leave it at that - I think I prefer the floating point option, as it makes the image simpler to work with, and compression makes the size difference not so huge. My guess is that keeping the image in integer format also limits accuracy, as I'm pretty sure Nifti1 implements a single scaling constant for the whole image - 3D or 4D, and I guess we're round-tripping through float64 already.

From matthew.brett at gmail.com Tue Mar 28 17:27:25 2017
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 28 Mar 2017 22:27:25 +0100
Subject: [Neuroimaging] Using nibabel to merge 3D image pairs into single-file 4D Nifti1
In-Reply-To:
References:
Message-ID:

Yo,

On Tue, Mar 28, 2017 at 10:22 PM, Dav Clark wrote:
> [...]
You can always:

converted.set_data_dtype('f4')

before saving, if you want the floats in the nibabel output.

Actually, if you do that, do you still see the all-black image in fslview?

Cheers,

Matthew

From davclark at gmail.com Tue Mar 28 18:13:07 2017
From: davclark at gmail.com (Dav Clark)
Date: Tue, 28 Mar 2017 18:13:07 -0400
Subject: [Neuroimaging] Using nibabel to merge 3D image pairs into single-file 4D Nifti1
In-Reply-To:
References:
Message-ID:

That works - FSLview shows up sensibly scaled, and a subtraction of the FSL data array from the f4 nibabel version yields all 0.0.

Thanks Matthew,
D
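[Pulling the resolution of this thread together: a short sketch of the approach that ended up matching fslmerge exactly. The output filename is hypothetical; the rest is as discussed above.]

```
from glob import glob

import nibabel as nib

fnames = sorted(glob('some_path/some_sub*.hdr'))  # sort for a stable TR order

converted = nib.concat_images(fnames)  # keeps TR from the first header
converted.set_data_dtype('f4')         # write float32, as fslmerge does,
                                       # instead of scaled integers
nib.save(converted, 'merged_4d.nii.gz')
```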
From jetzel at wustl.edu Thu Mar 30 11:40:41 2017
From: jetzel at wustl.edu (Jo Etzel)
Date: Thu, 30 Mar 2017 10:40:41 -0500
Subject: [Neuroimaging] FINAL call for papers & tutorials: PRNI (Pattern Recognition in NeuroImaging)
Message-ID: <9ee2070f-1b37-d197-9926-ec420a71132b@wustl.edu>

******* please accept our apologies for cross-posting *******

-----------------------------------------------------------------------
PRNI 2017 FINAL CALL FOR PAPERS AND TUTORIALS
MANUSCRIPT SUBMISSION DEADLINE 14 APRIL 2017 (FINAL EXTENSION)

7th International Workshop on Pattern Recognition in Neuroimaging
to be held 21-23 June 2017 at the University of Toronto, Toronto, Canada

www.prni.org - @PRNIworkshop - www.facebook.com/PRNIworkshop/
-----------------------------------------------------------------------

Pattern recognition techniques are an important tool for neuroimaging data analysis. These techniques are helping to elucidate normal and abnormal brain function, cognition and perception, anatomical and functional brain architecture, and biomarkers for diagnosis and personalized medicine, and serve as a scientific tool to decipher neural mechanisms underlying human cognition.

The International Workshop on Pattern Recognition in Neuroimaging (PRNI) aims to: (1) foster dialogue between developers and users of cutting-edge analysis techniques in order to find matches between analysis techniques and neuroscientific questions; (2) showcase recent methodological advances in pattern recognition algorithms for neuroimaging analysis; and (3) identify challenging neuroscientific questions in need of new analysis approaches.

Authors should prepare full papers with a maximum length of 4 pages (two-column IEEE style) for double-blind review. The manuscript submission deadline has been extended to 14 April 2017, 11:59 pm EST. This is the final extension; the deadline will not be moved again. Accepted manuscripts will be assigned to either an oral or a poster session; all accepted manuscripts will be included in the workshop proceedings.

As in previous years, in addition to full-length papers PRNI will also accept short abstracts (500 words excluding the title, abstract, tables, figure and data legends, and references) for poster presentation.

Finally, this year PRNI has an open call for tutorial proposals. A tutorial can take the form of a 2h, 4h or whole-day event aimed at demonstrating a computational technique, software tool, or specific concept. Tutorial proposals featuring hands-on demonstrations and promoting diversity (e.g. gender, background, institution) will be preferred. PRNI will cover conference registration fees for up to two tutors per accepted program. The submission deadline is also 14 April 2017, 11:59 pm EST.

Please see www.prni.org for details, and follow @PRNIworkshop or www.facebook.com/PRNIworkshop/.

--
Joset A. Etzel, Ph.D.
Research Analyst
Cognitive Control & Psychopathology Lab
Washington University in St. Louis
http://mvpa.blogspot.com/