From bcipolli at ucsd.edu Tue Sep 1 02:47:20 2015 From: bcipolli at ucsd.edu (Ben Cipollini) Date: Mon, 31 Aug 2015 17:47:20 -0700 Subject: [Neuroimaging] Should we prefer nii.gz or nii? In-Reply-To: References: Message-ID: Would it be hard to add some benchmarking info and recommendations to that webpage? I'm trying to learn Sphinx (it hasn't been very easy!), so if people think this is a good idea and can roughly sketch out what they want and how to push it into a Sphinx build, I'd be glad to try. Also, are there ways to make dataobj clearer across the nipy documentation? I haven't seen it used in any of the example code I've come across (e.g. nilearn, nipy, etc)... Ben On Mon, Aug 31, 2015 at 1:04 PM, Michael Waskom wrote: > In a little informal testing, indexing the dataobj appears a lot faster > than loading the data from a gzip file. Cool trick! > > On Mon, Aug 31, 2015 at 5:48 AM, Chris Filo Gorgolewski < > krzysztof.gorgolewski at gmail.com> wrote: > >> >> On Mon, Aug 31, 2015 at 3:59 PM, Matthew Brett >> wrote: >> >>> http://nipy.org/nibabel/images_and_memory.html#saving-time-and-memory >> >> >> Interesting - does this work equally well for .nii as well as .nii.gz? >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
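[Editor's illustration] Michael's observation — that indexing `dataobj` is much faster than loading data from a gzip file — can be demonstrated without nibabel at all: an uncompressed array on disk can be memory-mapped so that indexing reads only the requested slice, whereas a gzip stream must be fully decompressed first. A minimal sketch using plain numpy (this is *not* nibabel's actual ArrayProxy machinery, just the underlying principle):

```python
# Illustration of why indexing img.dataobj on an uncompressed file is
# cheap: the file can be memory-mapped and only the requested slice is
# read from disk, while a gzipped file must be inflated in full.
import gzip
import os
import tempfile

import numpy as np

vol = np.arange(64, dtype=np.int16).reshape(4, 4, 4)

tmpdir = tempfile.mkdtemp()
raw_path = os.path.join(tmpdir, 'vol.npy')
gz_path = os.path.join(tmpdir, 'vol.npy.gz')

np.save(raw_path, vol)
with gzip.open(gz_path, 'wb') as f:
    np.save(f, vol)

# Uncompressed: memory-map the file and pull just one slice.
sl_mmap = np.load(raw_path, mmap_mode='r')[2]

# Compressed: the whole stream is decompressed before slicing.
with gzip.open(gz_path, 'rb') as f:
    sl_gz = np.load(f)[2]

assert np.array_equal(sl_mmap, sl_gz)
```

The two reads return identical data; the difference is purely in how many bytes have to be touched to get there, which is where the timing gap Michael saw comes from.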
URL: From bcipolli at ucsd.edu Tue Sep 1 23:25:20 2015 From: bcipolli at ucsd.edu (Ben Cipollini) Date: Tue, 1 Sep 2015 14:25:20 -0700 Subject: [Neuroimaging] 3d registration for data fusion in material science In-Reply-To: References: <03955F518517BD4DB8C6698E5ADAEC2A20C77D@EXDAG0-B1.intra.cea.fr> <03955F518517BD4DB8C6698E5ADAEC2A20C7FB@EXDAG0-B1.intra.cea.fr> <03955F518517BD4DB8C6698E5ADAEC2A20C86F@EXDAG0-B1.intra.cea.fr> Message-ID: Alexis, Thank you! I will try these out this week and follow up here or on github as needed. Ben On Mon, Aug 31, 2015 at 7:05 AM, Alexis Roche wrote: > Hi Ben, > > Updated examples can be found here: > https://github.com/nipy/nireg/tree/master/examples > > At the moment, you will be able to do functional/anatomical rigid > registration using nireg (see code sketch below), but it doesn't implement > nonrigid registration yet, so it's not recommended for whole-brain > subject-atlas realignment at the moment. > > > # Basic structural / functional rigid registration using nireg > > import nibabel as nb > import nireg as nr > > fun = nb.load('some_functional_image.nii') > anat = nb.load('some_T1w_image.nii') > > R = nr.HistogramRegistration(fun, anat) > T = R.optimize('rigid') > fun_t = nr.resample(fun, T.inv(), reference=anat) > > nb.save(fun_t, 'resampled_functional_image.nii') > Alexis > > On Mon, Aug 24, 2015 at 12:17 AM, Ben Cipollini wrote: > >> Alexis, >> >> I'm also interested in having this demo. Is this the right code-path to >> do functional/structural alignment within a subject as well as aligning >> subject and atlas data? >> >> Thanks, >> Ben >> >> On Fri, Aug 21, 2015 at 9:57 AM, Alexis Roche >> wrote: >> >>> Hi, >>> >>> I am realizing that the following example script is outdated as it still >>> relies on importing nipy: >>> >>> https://github.com/nipy/nireg/blob/master/examples/affine_registration.py >>> >>> I will submit an update in a few days.
The registration code was >>> previously part of the nipy package, which we have recently decided to >>> split into several standalone packages, including nireg. The idea is that >>> nireg only needs nibabel to run (in addition to the standard numpy/scipy >>> packages). >>> >>> >>> > - how to convert a numpy array to nipy-like image ? >>> >>> This is done using nibabel, see in particular the Nifti1Image class. >>> >>> > - should the two images have the same size ? >>> >>> No. >>> >>> > - the version of nipy that I should use ? >>> >>> As mentioned above, the latest version of the code does not require >>> nipy. >>> >>> I will let you know when I push an up-to-date version of the affine >>> registration example script. >>> >>> Best, >>> >>> Alexis >>> >>> >>> > >>> > >>> > >>> > >>> > >>> > thank you, really, for your attention, this is kind. >>> > >>> > >>> > >>> > Gael >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > De : Neuroimaging [mailto:neuroimaging-bounces+gael.goret= >>> cea.fr at python.org] De la part de Alexis Roche >>> > Envoyé : vendredi 21 août 2015 13:04 >>> > >>> > >>> > À : Neuroimaging analysis in Python >>> > Objet : Re: [Neuroimaging] 3d registration for data fusion in material >>> science >>> > >>> > >>> > >>> > Hi Gael, >>> > >>> > Besides the stuff in dipy that Elef mentioned, there is another brain >>> image registration package that has slowly developed over the years, and >>> could be useful to you (although it still crucially lacks documentation): >>> > >>> > https://github.com/nipy/nireg >>> > >>> > >>> > >>> > This is rigid/affine registration only for the time being and has a >>> BSD license too. >>> > >>> > >>> > >>> > Best, >>> > >>> > Alexis >>> > >>> > >>> > >>> > On Fri, Aug 21, 2015 at 9:57 AM, GORET Gael 246279 >>> wrote: >>> > >>> > Hi all, >>> > >>> > >>> > >>> > Thanks for your quick replies, >>> > >>> > This project is just starting, and there is not much material yet…
>>> > >>> > However, to answer your questions, I have just created a public repo >>> on my github account : >>> > >>> > >>> > >>> > https://github.com/ggoret/MUDRA >>> > >>> > >>> > >>> > - In /doc, I have placed a summary (a pdf slideshow) of the >>> project containing some pictures (and info on instruments) >>> > >>> > >>> > >>> > - In /examples I have put two data volumes (npy binary file >>> format), the kind of data I need to register. >>> > >>> > >>> > >>> > - In /mudra a first (naive) try using the Fourier shell >>> correlation as metric >>> > >>> > And you can also find >>> > >>> > - a Cython wrapping of an implementation of Malik and Perona's >>> algorithm, working (pretty quickly) on 3D volumes : >>> > >>> > o /mudra/extensions/non_linear_filtering.pyx >>> > >>> > >>> > >>> > - In /scripts you will find mainly converters simplifying I/O >>> > >>> > - In /tools 3 very nice visualization tools based on VTK (not >>> mayavi) working as standalone (npy input) : >>> > >>> > o elevation.py -> 2d image to 3d landscape >>> > >>> > o plan_interpolator.py -> 2d slicing of volume + isosurface >>> rendering for given seeds >>> > >>> > o scalar_field.py -> volume rendering (color and opacity gradient) >>> > >>> > >>> > >>> > Thanks in advance for your advice >>> > >>> > >>> > >>> > Cheers, >>> > >>> > >>> > >>> > Gael >>> > >>> > >>> > >>> > De : Neuroimaging [mailto:neuroimaging-bounces+gael.goret= >>> cea.fr at python.org] De la part de Eleftherios Garyfallidis >>> > Envoyé : jeudi 20 août 2015 16:35 >>> > À : Neuroimaging analysis in Python >>> > Objet : Re: [Neuroimaging] 3d registration for data fusion in material >>> science >>> > >>> > >>> > >>> > Hi Gael, >>> > >>> > >>> > >>> > Sounds exciting. There is no restriction to use our tools in other >>> libraries or domains.
>>> > >>> > Look at these tutorials please (you need the dipy development version) >>> > >>> > >>> > >>> > >>> https://github.com/nipy/dipy/blob/master/doc/examples/affine_registration_3d.py >>> > >>> > >>> https://github.com/nipy/dipy/blob/master/doc/examples/syn_registration_2d.py >>> > >>> > >>> https://github.com/nipy/dipy/blob/master/doc/examples/syn_registration_3d.py >>> > >>> > >>> > >>> > Do you have a github repo of your project? Do you have any example >>> pictures/volumes to show us that you would >>> > >>> > like to register? >>> > >>> > >>> > >>> > Cheers, >>> > >>> > Eleftherios >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > On Thu, Aug 20, 2015 at 9:31 AM, GORET Gael 246279 >>> wrote: >>> > >>> > Hi everybody, >>> > >>> > I am a (French) newcomer, working on the development of the data >>> fusion of Time-of-Flight Secondary Ion Mass Spectrometer (ToF-SIMS) and >>> X-ray Nano-Tomography (XuM) data at the CEA (the French state institute for >>> energy, high-tech, etc.). >>> > >>> > In terms of samples, we're a bit far from the neuro fields (solid >>> oxide fuel cells, 3D chips, Si/Li matrices, etc.), but in terms of methodology >>> I hope we have much to share. My project is to combine chemical information >>> (from ToF-SIMS) with X-ray absorption (given by XuM) for a given volume (at >>> the nano-scale). I am working on a Python module aimed at the registration of >>> 3D datasets. It seems that nipy includes such a capability (and a lot more…). >>> > >>> > I'm a computer guy, mostly a Pythonista, and I wonder if you would let me >>> adapt your code (mainly the registration part) to my problem; I >>> would be very grateful for this. >>> > >>> > In this context, would you have some advice for me? >>> > >>> > I'm looking forward to hearing from you. >>> > >>> > Gael >>> > >>> > >>> > >>> > >>> > >>> > Dr.
Gaël Goret >>> > >>> > Chercheur Postdoctoral >>> > >>> > Département des Technologies Silicium >>> > >>> > Service de Caractérisation des Matériaux & Composants >>> > >>> > Commissariat à l'énergie atomique et aux énergies alternatives >>> > >>> > MINATEC Campus | 17 rue des martyrs | F-38054 Grenoble Cedex >>> > >>> > T. +33 (0)4 38 78 49 29 | gael.goret at cea.fr >>> > >>> > >>> > >>> > >>> > _______________________________________________ >>> > Neuroimaging mailing list >>> > Neuroimaging at python.org >>> > https://mail.python.org/mailman/listinfo/neuroimaging >>> > >>> > >>> > >>> > >>> > _______________________________________________ >>> > Neuroimaging mailing list >>> > Neuroimaging at python.org >>> > https://mail.python.org/mailman/listinfo/neuroimaging >>> > >>> > >>> > >>> > >>> > -- >>> > >>> > Lead Clinical Research >>> > Advanced Clinical Imaging Technology >>> > Siemens/CHUV/EPFL >>> > 1015 Lausanne, Switzerland >>> > Phone: +41 21 545 9972 >>> > https://sites.google.com/site/alexisroche >>> > >>> > >>> > _______________________________________________ >>> > Neuroimaging mailing list >>> > Neuroimaging at python.org >>> > https://mail.python.org/mailman/listinfo/neuroimaging >>> > >>> >>> >>> >>> -- >>> Lead Clinical Research >>> Advanced Clinical Imaging Technology >>> Siemens/CHUV/EPFL >>> 1015 Lausanne, Switzerland >>> Phone: +41 21 545 9972 >>> https://sites.google.com/site/alexisroche >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > > -- > Lead Clinical Research > Advanced Clinical Imaging Technology > Siemens/CHUV/EPFL > 1015 Lausanne, Switzerland > Phone: +41 21 545 9972 > https://sites.google.com/site/alexisroche > >
_______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frakkopesto at gmail.com Tue Sep 1 23:44:41 2015 From: frakkopesto at gmail.com (Franco Pestilli) Date: Tue, 1 Sep 2015 17:44:41 -0400 Subject: [Neuroimaging] Postdoc Message-ID: <27D1E314-83CD-48B3-A29A-4B1698F676E7@gmail.com> Dear Community, We are looking for a talented individual to work on a large-scale neuroimaging project funded by the Indiana CTSI . The project aims at applying cutting-edge neuroimaging and connectomics methods (Pestilli et al., Nature Methods 2014; Mišić et al., Neuron 2015) on the latest generation of data from the Indiana Alzheimer Disease Center . The project is a collaboration between groups at Indiana University Bloomington (Franco Pestilli and Olaf Sporns), Indiana University School of Medicine, Indianapolis (Andrew Saykin, Li Shen, and Yu-Chien Wu) and Purdue University, Lafayette (Joaquín Goñi). The team has strong expertise in aging and Alzheimer's disease, imaging genetics, network science, magnetic resonance imaging measurements and computational modeling. The ideal candidate will have a PhD in Computer Science, Engineering, Neuroscience, Informatics or Cognitive Science. Strong programming skills (e.g., in Python, MATLAB, or C/C++) and previous background in neuroimaging, computational modeling, or machine learning will be highly valued. The project will involve developing and publishing software products as well as contributing to ongoing software projects such as: https://francopestilli.github.io/life and https://sites.google.com/site/bctnet/ . Preference will be given to candidates demonstrating successful productivity by means of scientific articles and software publishing.
Interested candidates should send CV, statement of research interests, and names of three references to Dr. Franco Pestilli franpest at indiana.edu . Review of the applications will start immediately and continue until the position is filled. Dr. Pestilli and Professor Sporns will be available at SfN in Chicago for informal meetings with potential candidates. Indiana University is an equal employment and affirmative action employer and a provider of ADA services. All qualified applicants will receive consideration for employment without regard to age, ethnicity, color, race, religion, sex, sexual orientation or identity, national origin, disability status or protected veteran status. Best regards, Franco Franco Pestilli, PhD Assistant Professor Psychology, Neuroscience and Cognitive Science Indiana Network Science Institute, Indiana University, Bloomington, IN 47405 francopestilli.com | franpest at indiana.edu Phone: +1 (812) 856 9967 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcohen at polymtl.ca Tue Sep 1 04:23:11 2015 From: jcohen at polymtl.ca (Julien Cohen-Adad) Date: Mon, 31 Aug 2015 22:23:11 -0400 Subject: [Neuroimaging] dipy installation Message-ID: Hi, we are trying to find a quick way for installing dipy, for quick testing on Travis. Currently, when using Pip, it takes about 3 minutes to run the setup.py. We tried with easy_install, but the installation failed: https://travis-ci.org/neuropoly/spinalcordtoolbox/builds/78146296 would you have some suggestion? thanks, julien -- Julien Cohen-Adad, PhD Assistant Professor, Polytechnique Montreal Associate Director, Functional Neuroimaging Unit, University of Montreal Phone: 514 340 5121 (office: 2264); Skype: jcohenadad Web: www.neuro.polymtl.ca -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matthew.brett at gmail.com Wed Sep 2 07:42:39 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 2 Sep 2015 06:42:39 +0100 Subject: [Neuroimaging] dipy installation In-Reply-To: References: Message-ID: Hi, On Tue, Sep 1, 2015 at 3:23 AM, Julien Cohen-Adad wrote: > Hi, > > we are trying to find a quick way for installing dipy, for quick testing on > Travis. Currently, when using Pip, it takes about 3 minutes to run the > setup.py. > > We tried with easy_install, but the installation failed: > https://travis-ci.org/neuropoly/spinalcordtoolbox/builds/78146296 > > would you have some suggestion? I just built some wheels for dipy that work on the Travis setup. They are here: http://travis-wheels.scikit-image.org/ This should be very quick to install. Examples of installing from the wheels in this directory are in dipy's own `.travis.yml` file, but let me know if it isn't clear what to do from there. Cheers, Matthew From bcipolli at ucsd.edu Wed Sep 2 17:50:48 2015 From: bcipolli at ucsd.edu (Ben Cipollini) Date: Wed, 2 Sep 2015 08:50:48 -0700 Subject: [Neuroimaging] Site Contributions In-Reply-To: References: Message-ID: This looks great. It's also cool to see "ask a question" integrated into the website, though we'll still have to hash things out for that. Rather than open a github issue (which few may see), I'll make a suggestion here: can we have the menu expanded by default when accessing via a full-size screen? It feels odd to me to have to expand the menu in order to navigate anywhere, by default. To me, navigation options should be obvious, unless pressed for space.
On Sun, Aug 30, 2015 at 1:14 PM, vanessa sochat wrote: > Hi Everyone, > > I've finished work on the "contribute" page with my thinking for how to > contribute various content: > > http://vsoch.github.io/nipy-jekyll/contribute.html > > and as an example, put up a quick blog post: > > http://neuroimaging.tumblr.com/ > > The old "example" blog posts are hidden (private) and can be viewed if you > are a contributor. I'd like to add anyone and everyone interested to > contribute as "official" contributors via tumblr - if you are interested > please send me your email address. I've already invited some of you for > which I know the email. > > Best, > > Vanessa > > > -- > Vanessa Villamia Sochat > Stanford University > (603) 321-0676 > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Wed Sep 2 18:31:03 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 2 Sep 2015 17:31:03 +0100 Subject: [Neuroimaging] Should we prefer nii.gz or nii? In-Reply-To: References: Message-ID: Hi, On Tue, Sep 1, 2015 at 1:47 AM, Ben Cipollini wrote: > Would it be hard to add some benchmarking info and recommendations to that > webpage? I'm trying to learn Sphinx (it hasn't been very easy!), so if > people think this is a good idea and can roughly sketch out what they want > and how to push it into a Sphinx build, I'd be glad to try. > > Also, are there ways to make dataobj clearer across the nipy documentation? > I haven't seen it used in any of the example code I've come across (e.g. > nilearn, nipy, etc)... Sorry, I'm afraid dataobj is pretty new, that's why we haven't been using it much so far. It also needs a good iterator wrapped round it. 
Brendan and I had some discussion of that in the original PR: https://github.com/nipy/nibabel/pull/211 I've opened an issue for that, because we should nail that down soon: https://github.com/nipy/nibabel/issues/344 Cheers, Matthew From jcohen at polymtl.ca Wed Sep 2 18:43:32 2015 From: jcohen at polymtl.ca (Julien Cohen-Adad) Date: Wed, 2 Sep 2015 12:43:32 -0400 Subject: [Neuroimaging] dipy installation In-Reply-To: References: Message-ID: Hi Matthew, fantastic! thank you for your quick reply and valuable help. julien On Wed, Sep 2, 2015 at 1:42 AM, Matthew Brett wrote: > Hi, > > On Tue, Sep 1, 2015 at 3:23 AM, Julien Cohen-Adad > wrote: > > Hi, > > > > we are trying to find a quick way for installing dipy, for quick testing > on > > Travis. Currently, when using Pip, it takes about 3 minutes to run the > > setup.py. > > > > We tried with easy_install, but the installation failed: > > https://travis-ci.org/neuropoly/spinalcordtoolbox/builds/78146296 > > > > would you have some suggestion? > > I just built some wheels for dipy, that work on the travis setup. > They are here: > > http://travis-wheels.scikit-image.org/ > > This should be very quick to install. Examples of installing from the > wheels in this directory in dipy's own `.travis.yml` file, but let me > know if it isn't clear what to do from there. > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vsochat at stanford.edu Wed Sep 2 18:51:15 2015 From: vsochat at stanford.edu (vanessa sochat) Date: Wed, 2 Sep 2015 09:51:15 -0700 Subject: [Neuroimaging] Site Contributions In-Reply-To: References: Message-ID: That's a good suggestion, and I'll add as an issue so I can find it easily when the next round of changes comes around. 
Generally, these "hamburger" menu things are useless functionality wise, and only serve to aesthetically "get rid of the navigation" to have a clean looking site. In our case, it is probably just annoying to have the extra clicks. On Wed, Sep 2, 2015 at 8:50 AM, Ben Cipollini wrote: > This looks great. It's also cool to see "ask a question" integrated into > the website, though we'll still have to hash things out for that. > > Rather than open a github issue (which few may see), I'll make a > suggestion here: can we have the menu expanded by default when accessing > via a full-size screen? > > It feels odd to me to have to expand the menu in order to navigate > anywhere, by default. To me, navigation options should be obvious, unless > pressed for space. > > > > On Sun, Aug 30, 2015 at 1:14 PM, vanessa sochat > wrote: > >> Hi Everyone, >> >> I've finished work on the "contribute" page with my thinking for how to >> contribute various content: >> >> http://vsoch.github.io/nipy-jekyll/contribute.html >> >> and as an example, put up a quick blog post: >> >> http://neuroimaging.tumblr.com/ >> >> The old "example" blog posts are hidden (private) and can be viewed if >> you are a contributor. I'd like to add anyone and everyone interested to >> contribute as "official" contributors via tumblr - if you are interested >> please send me your email address. I've already invited some of you for >> which I know the email. 
>> >> Best, >> >> Vanessa >> >> >> -- >> Vanessa Villamia Sochat >> Stanford University >> (603) 321-0676 >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -- Vanessa Villamia Sochat Stanford University (603) 321-0676 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcipolli at ucsd.edu Wed Sep 2 18:57:24 2015 From: bcipolli at ucsd.edu (Ben Cipollini) Date: Wed, 2 Sep 2015 09:57:24 -0700 Subject: [Neuroimaging] Site Contributions In-Reply-To: References: Message-ID: I think they're good for mobile sites. I think they're generally expanded by default on full screens, and compacted on mobile. So I suggest to keep it, but have a regular menu that's showing on full screens and hidden on mobile. On Wed, Sep 2, 2015 at 9:51 AM, vanessa sochat wrote: > That's a good suggestion, and I'll add as an issue so I can find it easily > when the next round of changes comes around. Generally, these "hamburger" > menu things are useless functionality wise, and only serve to aesthetically > "get rid of the navigation" to have a clean looking site. In our case, it > is probably just annoying to have the extra clicks. > > On Wed, Sep 2, 2015 at 8:50 AM, Ben Cipollini wrote: > >> This looks great. It's also cool to see "ask a question" integrated into >> the website, though we'll still have to hash things out for that. >> >> Rather than open a github issue (which few may see), I'll make a >> suggestion here: can we have the menu expanded by default when accessing >> via a full-size screen? >> >> It feels odd to me to have to expand the menu in order to navigate >> anywhere, by default. To me, navigation options should be obvious, unless >> pressed for space. 
>> >> >> >> On Sun, Aug 30, 2015 at 1:14 PM, vanessa sochat >> wrote: >> >>> Hi Everyone, >>> >>> I've finished work on the "contribute" page with my thinking for how to >>> contribute various content: >>> >>> http://vsoch.github.io/nipy-jekyll/contribute.html >>> >>> and as an example, put up a quick blog post: >>> >>> http://neuroimaging.tumblr.com/ >>> >>> The old "example" blog posts are hidden (private) and can be viewed if >>> you are a contributor. I'd like to add anyone and everyone interested to >>> contribute as "official" contributors via tumblr - if you are interested >>> please send me your email address. I've already invited some of you for >>> which I know the email. >>> >>> Best, >>> >>> Vanessa >>> >>> >>> -- >>> Vanessa Villamia Sochat >>> Stanford University >>> (603) 321-0676 >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > > -- > Vanessa Villamia Sochat > Stanford University > (603) 321-0676 > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Wed Sep 2 20:31:33 2015 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 2 Sep 2015 11:31:33 -0700 Subject: [Neuroimaging] Postdoc in neuroimaging and computation (post on behalf of Franco Pestilli) Message-ID: (with apologies for cross-posting) We are looking for a talented individual to work on a large-scale neuroimaging project funded by the Indiana CTSI . 
The project aims at applying cutting-edge neuroimaging and connectomics methods (Pestilli et al., Nature Methods 2014; Mišić et al., Neuron 2015) on the latest generation of data from the Indiana Alzheimer Disease Center . The project is a collaboration between groups at Indiana University Bloomington (Franco Pestilli and Olaf Sporns), Indiana University School of Medicine, Indianapolis (Andrew Saykin, Li Shen, and Yu-Chien Wu) and Purdue University, Lafayette (Joaquín Goñi). The team has strong expertise in aging and Alzheimer's disease, imaging genetics, network science, magnetic resonance imaging measurements and computational modeling. The ideal candidate will have a PhD in Computer Science, Engineering, Neuroscience, Informatics or Cognitive Science. Strong programming skills (e.g., in Python, MATLAB, or C/C++) and previous background in neuroimaging, computational modeling, or machine learning will be highly valued. The project will involve developing and publishing software products as well as contributing to ongoing software projects such as: https://francopestilli.github.io/life and https://sites.google.com/site/bctnet/. Preference will be given to candidates demonstrating successful productivity by means of scientific articles and software publishing. Interested candidates should send CV, statement of research interests, and names of three references to Dr. Franco Pestilli franpest at indiana.edu. Review of the applications will start immediately and continue until the position is filled. Dr. Pestilli and Professor Sporns will be available at SfN in Chicago for informal meetings with potential candidates. Indiana University is an equal employment and affirmative action employer and a provider of ADA services. All qualified applicants will receive consideration for employment without regard to age, ethnicity, color, race, religion, sex, sexual orientation or identity, national origin, disability status or protected veteran status.
Best regards, Franco Franco Pestilli, PhD Assistant Professor Psychology, Neuroscience and Cognitive Science Indiana Network Science Institute, Indiana University, Bloomington, IN 47405 francopestilli.com | franpest at indiana.edu Phone: +1 (812) 856 9967 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcipolli at ucsd.edu Wed Sep 2 22:41:54 2015 From: bcipolli at ucsd.edu (Ben Cipollini) Date: Wed, 2 Sep 2015 13:41:54 -0700 Subject: [Neuroimaging] Should we prefer nii.gz or nii? In-Reply-To: References: Message-ID: Ah, that's really helpful--thanks! Really interesting discussion there, glad I asked! Ben On Wed, Sep 2, 2015 at 9:31 AM, Matthew Brett wrote: > Hi, > > On Tue, Sep 1, 2015 at 1:47 AM, Ben Cipollini wrote: > > Would it be hard to add some benchmarking info and recommendations to > that > > webpage? I'm trying to learn Sphinx (it hasn't been very easy!), so if > > people think this is a good idea and can roughly sketch out what they > want > > and how to push it into a Sphinx build, I'd be glad to try. > > > > Also, are there ways to make dataobj clearer across the nipy > documentation? > > I haven't seen it used in any of the example code I've come across (e.g. > > nilearn, nipy, etc)... > > Sorry, I'm afraid dataobj is pretty new, that's why we haven't been > using it much so far. > > It also needs a good iterator wrapped round it. Brendan and I had > some discussion of that in the original PR: > > https://github.com/nipy/nibabel/pull/211 > > I've opened an issue for that, because we should nail that down soon: > > https://github.com/nipy/nibabel/issues/344 > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arokem at gmail.com Thu Sep 3 02:13:06 2015 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 2 Sep 2015 17:13:06 -0700 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) Message-ID: Hi everyone, Jason and I are working on a port of his AFQ system ( https://github.com/jyeatman/afq) into dipy. We've started sketching out some notebooks on how that might work here: https://github.com/arokem/AFQ-notebooks The main thrust of this is in this one: https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb The first step in this process is to take a standard ROI of some part of the brain (say, corpus callosum, which is where we are starting) and warp it into the subject's individual brain through a non-linear registration between the individual brain and the template brain on which the ROI was defined (in this case MNI152). Registration works phenomenally (see cell 17), but because this is a non-linear registration, we find ourselves with some holes in the ROI after the transformation (see cell 27 for a sum-intensity projection). We are trying to use scipy.ndimage.binary_fill_holes to, well, fill these holes, but that doesn't seem to be working for us (cell 35 still has that hole...). Any ideas about what might be going wrong? Are we using fill_holes incorrectly? Any other tricks to do flood-filling in python? Should we be using skimage? Thanks! Ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Thu Sep 3 02:28:31 2015 From: satra at mit.edu (Satrajit Ghosh) Date: Wed, 2 Sep 2015 20:28:31 -0400 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: hi ariel, can you do nearest neighbor interpolation in `mapping.inverse_transform`? if your original ROI doesn't have holes and you are doing a diffeomorphic mapping, your target shouldn't have holes either.
for a comparison you could run antsRegistration and antsApplyTransforms, with nearest neighbor interpolation. cheers, satra On Wed, Sep 2, 2015 at 8:13 PM, Ariel Rokem wrote: > Hi everyone, > > Jason and I are working on a port of his AFQ system ( > https://github.com/jyeatman/afq) into dipy. We've started sketching out > some notebooks on how that might work here: > > https://github.com/arokem/AFQ-notebooks > > The main thrust of this is in this one: > > > https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb > > The first step in this process is to take a standard ROI of some part of > the brain (say, corpus callosum, which is where we are starting) and warp > it into the subject's individual brain through a non-linear registration > between the individual brain and the template brain on which the ROI was > defined (in this case MNI152). Registration works phenomenally (see cell > 17), but because this is a non-linear registration, we find ourselves with > some holes in the ROI after the transformation (see cell 27 for a > sum-intensity projects). We are trying to use > scipy.ndimage.binary_fill_holes to, well, fill these holes, but that > doesn't seem to be working for us (cell 35 still has that hole...). > > Any ideas about what might be going wrong? Are we using fill_holes > incorrectly? Any other tricks to do flood-filling in python? Should we be > using skimage? > > Thanks! > > Ariel > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jomaroceguedag at gmail.com Thu Sep 3 03:16:14 2015 From: jomaroceguedag at gmail.com (Jesus-Omar Ocegueda-Gonzalez) Date: Wed, 2 Sep 2015 20:16:14 -0500 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?)
In-Reply-To: References: Message-ID: Hello guys!, I have been working on this issue for some days now (this is very interesting Ariel!, thanks for sharing your findings). Satra is totally right that **in theory** the transformations should preserve the topology. Unfortunately, the transformations are only **approximately** diffeomorphic. I am totally sure that this issue should be there in the original version of ants too (dipy's implementation is the same algorithm), although maybe the new version (antsRegistration) may have some improvements that I'm not aware of. Having said that, you can make the transforms closer to diffeomorphic by reducing the `step_length` parameter (in millimeters) from `SymmetricDiffeomorphicRegistration`, which by default is 0.25 mm. You may try something about 0.15 mm. The objective is to avoid making very "aggressive" iterations, so another way to achieve this is by increasing the smoothing parameter from the CCMetric, the parameter is `sigma_diff`, which by default is 2.0, you may try something bout 3.0 (I would first try reducing the step size, though). I would like to try some other ideas, by any chance can you share the data (MNI_T2)? Thank you very much! -Omar. On Wed, Sep 2, 2015 at 7:28 PM, Satrajit Ghosh wrote: > hi ariel, > > can you do nearest neighbor interpolation in `mapping.inverse_transform`? > if your original ROI doesn't have holes and you are doing a diffeomorphic > mapping, your target shouldn't have holes either. for a comparison you > could run antsRegister and antsApplyTransforms, with nearest neighbor > interpolation. > > cheers, > > satra > > On Wed, Sep 2, 2015 at 8:13 PM, Ariel Rokem wrote: > >> Hi everyone, >> >> Jason and I are working on a port of his AFQ system ( >> https://github.com/jyeatman/afq) into dipy. 
We've started sketching out >> some notebooks on how that might work here: >> >> https://github.com/arokem/AFQ-notebooks >> >> The main thrust of this is in this one: >> >> >> https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb >> >> The first step in this process is to take a standard ROI of some part of >> the brain (say, corpus callosum, which is where we are starting) and warp >> it into the subject's individual brain through a non-linear registration >> between the individual brain and the template brain on which the ROI was >> defined (in this case MNI152). Registration works phenomenally (see cell >> 17), but because this is a non-linear registration, we find ourselves with >> some holes in the ROI after the transformation (see cell 27 for a >> sum-intensity projects). We are trying to use >> scipy.ndimage.binary_fill_holes to, well, fill these holes, but that >> doesn't seem to be working for us (cell 35 still has that hole...). >> >> Any ideas about what might be going wrong? Are we using fill_holes >> incorrectly? Any other tricks to do flood-filling in python? Should we be >> using skimage? >> >> Thanks! >> >> Ariel >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -- "Cada quien es due?o de lo que calla y esclavo de lo que dice" -Proverbio chino. "We all are owners of what we keep silent and slaves of what we say" -Chinese proverb. http://www.cimat.mx/~omar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arokem at gmail.com Thu Sep 3 03:29:36 2015 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 2 Sep 2015 18:29:36 -0700 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: Hi Omar, Excellent - thanks so much for taking a look! I know that you are very busy these days, and so your attention on this is highly appreciated! I will try experimenting more with this, with different input parameters, as you suggested. If you also want to take a look, since #680 and #681 were merged into dipy, you can now run: import dipy.data as dpd MNI_T2 = dpd.read_mni_template() To get the template data. Thanks again, Ariel On Wed, Sep 2, 2015 at 6:16 PM, Jesus-Omar Ocegueda-Gonzalez < jomaroceguedag at gmail.com> wrote: > Hello guys!, > I have been working on this issue for some days now (this is very > interesting Ariel!, thanks for sharing your findings). Satra is totally > right that **in theory** the transformations should preserve the topology. > Unfortunately, the transformations are only **approximately** > diffeomorphic. I am totally sure that this issue should be there in the > original version of ants too (dipy's implementation is the same algorithm), > although maybe the new version (antsRegistration) may have some > improvements that I'm not aware of. > > Having said that, you can make the transforms closer to diffeomorphic by > reducing the `step_length` parameter (in millimeters) from > `SymmetricDiffeomorphicRegistration`, which by default is 0.25 mm. You may > try something about 0.15 mm. The objective is to avoid making very > "aggressive" iterations, so another way to achieve this is by increasing > the smoothing parameter from the CCMetric, the parameter is `sigma_diff`, > which by default is 2.0, you may try something bout 3.0 (I would first try > reducing the step size, though). > > I would like to try some other ideas, by any chance can you share the data > (MNI_T2)? 
> Thank you very much! > -Omar. > > > > On Wed, Sep 2, 2015 at 7:28 PM, Satrajit Ghosh wrote: > >> hi ariel, >> >> can you do nearest neighbor interpolation in `mapping.inverse_transform`? >> if your original ROI doesn't have holes and you are doing a diffeomorphic >> mapping, your target shouldn't have holes either. for a comparison you >> could run antsRegister and antsApplyTransforms, with nearest neighbor >> interpolation. >> >> cheers, >> >> satra >> >> On Wed, Sep 2, 2015 at 8:13 PM, Ariel Rokem wrote: >> >>> Hi everyone, >>> >>> Jason and I are working on a port of his AFQ system ( >>> https://github.com/jyeatman/afq) into dipy. We've started sketching out >>> some notebooks on how that might work here: >>> >>> https://github.com/arokem/AFQ-notebooks >>> >>> The main thrust of this is in this one: >>> >>> >>> https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb >>> >>> The first step in this process is to take a standard ROI of some part of >>> the brain (say, corpus callosum, which is where we are starting) and warp >>> it into the subject's individual brain through a non-linear registration >>> between the individual brain and the template brain on which the ROI was >>> defined (in this case MNI152). Registration works phenomenally (see cell >>> 17), but because this is a non-linear registration, we find ourselves with >>> some holes in the ROI after the transformation (see cell 27 for a >>> sum-intensity projects). We are trying to use >>> scipy.ndimage.binary_fill_holes to, well, fill these holes, but that >>> doesn't seem to be working for us (cell 35 still has that hole...). >>> >>> Any ideas about what might be going wrong? Are we using fill_holes >>> incorrectly? Any other tricks to do flood-filling in python? Should we be >>> using skimage? >>> >>> Thanks! 
>>> >>> Ariel >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > > -- > "Cada quien es due?o de lo que calla y esclavo de lo que dice" > -Proverbio chino. > "We all are owners of what we keep silent and slaves of what we say" > -Chinese proverb. > > http://www.cimat.mx/~omar > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jomaroceguedag at gmail.com Thu Sep 3 04:19:16 2015 From: jomaroceguedag at gmail.com (Jesus-Omar Ocegueda-Gonzalez) Date: Wed, 2 Sep 2015 21:19:16 -0500 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: Thanks Ariel, and don't worry, this is very related to the work I'm doing now, so this is actually very useful. I almost reproduced your experiment, by any chance can you share: LOCC_ni, ROCC_ni and midsag_ni? On Wed, Sep 2, 2015 at 8:29 PM, Ariel Rokem wrote: > Hi Omar, > > Excellent - thanks so much for taking a look! I know that you are very > busy these days, and so your attention on this is highly appreciated! I > will try experimenting more with this, with different input parameters, as > you suggested. > > If you also want to take a look, since #680 and #681 were merged into > dipy, you can now run: > > import dipy.data as dpd > MNI_T2 = dpd.read_mni_template() > > To get the template data. 
> > Thanks again, > > Ariel > > On Wed, Sep 2, 2015 at 6:16 PM, Jesus-Omar Ocegueda-Gonzalez < > jomaroceguedag at gmail.com> wrote: > >> Hello guys!, >> I have been working on this issue for some days now (this is very >> interesting Ariel!, thanks for sharing your findings). Satra is totally >> right that **in theory** the transformations should preserve the topology. >> Unfortunately, the transformations are only **approximately** >> diffeomorphic. I am totally sure that this issue should be there in the >> original version of ants too (dipy's implementation is the same algorithm), >> although maybe the new version (antsRegistration) may have some >> improvements that I'm not aware of. >> >> Having said that, you can make the transforms closer to diffeomorphic by >> reducing the `step_length` parameter (in millimeters) from >> `SymmetricDiffeomorphicRegistration`, which by default is 0.25 mm. You may >> try something about 0.15 mm. The objective is to avoid making very >> "aggressive" iterations, so another way to achieve this is by increasing >> the smoothing parameter from the CCMetric, the parameter is `sigma_diff`, >> which by default is 2.0, you may try something bout 3.0 (I would first try >> reducing the step size, though). >> >> I would like to try some other ideas, by any chance can you share the >> data (MNI_T2)? >> Thank you very much! >> -Omar. >> >> >> >> On Wed, Sep 2, 2015 at 7:28 PM, Satrajit Ghosh wrote: >> >>> hi ariel, >>> >>> can you do nearest neighbor interpolation in >>> `mapping.inverse_transform`? if your original ROI doesn't have holes and >>> you are doing a diffeomorphic mapping, your target shouldn't have holes >>> either. for a comparison you could run antsRegister and >>> antsApplyTransforms, with nearest neighbor interpolation. 
>>> >>> cheers, >>> >>> satra >>> >>> On Wed, Sep 2, 2015 at 8:13 PM, Ariel Rokem wrote: >>> >>>> Hi everyone, >>>> >>>> Jason and I are working on a port of his AFQ system ( >>>> https://github.com/jyeatman/afq) into dipy. We've started sketching >>>> out some notebooks on how that might work here: >>>> >>>> https://github.com/arokem/AFQ-notebooks >>>> >>>> The main thrust of this is in this one: >>>> >>>> >>>> https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb >>>> >>>> The first step in this process is to take a standard ROI of some part >>>> of the brain (say, corpus callosum, which is where we are starting) and >>>> warp it into the subject's individual brain through a non-linear >>>> registration between the individual brain and the template brain on which >>>> the ROI was defined (in this case MNI152). Registration works phenomenally >>>> (see cell 17), but because this is a non-linear registration, we find >>>> ourselves with some holes in the ROI after the transformation (see cell 27 >>>> for a sum-intensity projects). We are trying to use >>>> scipy.ndimage.binary_fill_holes to, well, fill these holes, but that >>>> doesn't seem to be working for us (cell 35 still has that hole...). >>>> >>>> Any ideas about what might be going wrong? Are we using fill_holes >>>> incorrectly? Any other tricks to do flood-filling in python? Should we be >>>> using skimage? >>>> >>>> Thanks! >>>> >>>> Ariel >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> >> -- >> "Cada quien es due?o de lo que calla y esclavo de lo que dice" >> -Proverbio chino. 
>> "We all are owners of what we keep silent and slaves of what we say" >> -Chinese proverb. >> >> http://www.cimat.mx/~omar >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -- "Cada quien es due?o de lo que calla y esclavo de lo que dice" -Proverbio chino. "We all are owners of what we keep silent and slaves of what we say" -Chinese proverb. http://www.cimat.mx/~omar -------------- next part -------------- An HTML attachment was scrubbed... URL: From jomaroceguedag at gmail.com Thu Sep 3 04:39:24 2015 From: jomaroceguedag at gmail.com (Jesus-Omar Ocegueda-Gonzalez) Date: Wed, 2 Sep 2015 21:39:24 -0500 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: Actually, Ariel, nearest neighbor interpolation is a very unstable operation. If you interpolate at x or x+epsilon you may get different results for a very small epsilon, and discarding one single voxel may lead to a rejection of a large number of streamlines (I'm thinking about the boundary of the ROI too!, not only the "hole" ). I think it would be a more precise selection if you warped the streamlines to the template and select them there (now I see that we need that extension to the diffeomorphic map asap!). On Wed, Sep 2, 2015 at 9:19 PM, Jesus-Omar Ocegueda-Gonzalez < jomaroceguedag at gmail.com> wrote: > Thanks Ariel, and don't worry, this is very related to the work I'm doing > now, so this is actually very useful. I almost reproduced your experiment, > by any chance can you share: LOCC_ni, ROCC_ni and midsag_ni? > > On Wed, Sep 2, 2015 at 8:29 PM, Ariel Rokem wrote: > >> Hi Omar, >> >> Excellent - thanks so much for taking a look! 
I know that you are very >> busy these days, and so your attention on this is highly appreciated! I >> will try experimenting more with this, with different input parameters, as >> you suggested. >> >> If you also want to take a look, since #680 and #681 were merged into >> dipy, you can now run: >> >> import dipy.data as dpd >> MNI_T2 = dpd.read_mni_template() >> >> To get the template data. >> >> Thanks again, >> >> Ariel >> >> On Wed, Sep 2, 2015 at 6:16 PM, Jesus-Omar Ocegueda-Gonzalez < >> jomaroceguedag at gmail.com> wrote: >> >>> Hello guys!, >>> I have been working on this issue for some days now (this is very >>> interesting Ariel!, thanks for sharing your findings). Satra is totally >>> right that **in theory** the transformations should preserve the topology. >>> Unfortunately, the transformations are only **approximately** >>> diffeomorphic. I am totally sure that this issue should be there in the >>> original version of ants too (dipy's implementation is the same algorithm), >>> although maybe the new version (antsRegistration) may have some >>> improvements that I'm not aware of. >>> >>> Having said that, you can make the transforms closer to diffeomorphic by >>> reducing the `step_length` parameter (in millimeters) from >>> `SymmetricDiffeomorphicRegistration`, which by default is 0.25 mm. You may >>> try something about 0.15 mm. The objective is to avoid making very >>> "aggressive" iterations, so another way to achieve this is by increasing >>> the smoothing parameter from the CCMetric, the parameter is `sigma_diff`, >>> which by default is 2.0, you may try something bout 3.0 (I would first try >>> reducing the step size, though). >>> >>> I would like to try some other ideas, by any chance can you share the >>> data (MNI_T2)? >>> Thank you very much! >>> -Omar. >>> >>> >>> >>> On Wed, Sep 2, 2015 at 7:28 PM, Satrajit Ghosh wrote: >>> >>>> hi ariel, >>>> >>>> can you do nearest neighbor interpolation in >>>> `mapping.inverse_transform`? 
if your original ROI doesn't have holes and >>>> you are doing a diffeomorphic mapping, your target shouldn't have holes >>>> either. for a comparison you could run antsRegister and >>>> antsApplyTransforms, with nearest neighbor interpolation. >>>> >>>> cheers, >>>> >>>> satra >>>> >>>> On Wed, Sep 2, 2015 at 8:13 PM, Ariel Rokem wrote: >>>> >>>>> Hi everyone, >>>>> >>>>> Jason and I are working on a port of his AFQ system ( >>>>> https://github.com/jyeatman/afq) into dipy. We've started sketching >>>>> out some notebooks on how that might work here: >>>>> >>>>> https://github.com/arokem/AFQ-notebooks >>>>> >>>>> The main thrust of this is in this one: >>>>> >>>>> >>>>> https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb >>>>> >>>>> The first step in this process is to take a standard ROI of some part >>>>> of the brain (say, corpus callosum, which is where we are starting) and >>>>> warp it into the subject's individual brain through a non-linear >>>>> registration between the individual brain and the template brain on which >>>>> the ROI was defined (in this case MNI152). Registration works phenomenally >>>>> (see cell 17), but because this is a non-linear registration, we find >>>>> ourselves with some holes in the ROI after the transformation (see cell 27 >>>>> for a sum-intensity projects). We are trying to use >>>>> scipy.ndimage.binary_fill_holes to, well, fill these holes, but that >>>>> doesn't seem to be working for us (cell 35 still has that hole...). >>>>> >>>>> Any ideas about what might be going wrong? Are we using fill_holes >>>>> incorrectly? Any other tricks to do flood-filling in python? Should we be >>>>> using skimage? >>>>> >>>>> Thanks! 
>>>>> >>>>> Ariel >>>>> >>>>> _______________________________________________ >>>>> Neuroimaging mailing list >>>>> Neuroimaging at python.org >>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> >>> -- >>> "Cada quien es due?o de lo que calla y esclavo de lo que dice" >>> -Proverbio chino. >>> "We all are owners of what we keep silent and slaves of what we say" >>> -Chinese proverb. >>> >>> http://www.cimat.mx/~omar >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > > -- > "Cada quien es due?o de lo que calla y esclavo de lo que dice" > -Proverbio chino. > "We all are owners of what we keep silent and slaves of what we say" > -Chinese proverb. > > http://www.cimat.mx/~omar > -- "Cada quien es due?o de lo que calla y esclavo de lo que dice" -Proverbio chino. "We all are owners of what we keep silent and slaves of what we say" -Chinese proverb. http://www.cimat.mx/~omar -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Thu Sep 3 06:50:42 2015 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 2 Sep 2015 21:50:42 -0700 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: Hi Omar, The other ROIs are here: https://github.com/jyeatman/AFQ/tree/master/templates. I will think about the rest of your response tomorrow! 
Cheers, Ariel On Wed, Sep 2, 2015 at 7:39 PM, Jesus-Omar Ocegueda-Gonzalez < jomaroceguedag at gmail.com> wrote: > Actually, Ariel, nearest neighbor interpolation is a very unstable > operation. If you interpolate at x or x+epsilon you may get different > results for a very small epsilon, and discarding one single voxel may lead > to a rejection of a large number of streamlines (I'm thinking about the > boundary of the ROI too!, not only the "hole" ). I think it would be a more > precise selection if you warped the streamlines to the template and select > them there (now I see that we need that extension to the diffeomorphic map > asap!). > > On Wed, Sep 2, 2015 at 9:19 PM, Jesus-Omar Ocegueda-Gonzalez < > jomaroceguedag at gmail.com> wrote: > >> Thanks Ariel, and don't worry, this is very related to the work I'm doing >> now, so this is actually very useful. I almost reproduced your experiment, >> by any chance can you share: LOCC_ni, ROCC_ni and midsag_ni? >> >> On Wed, Sep 2, 2015 at 8:29 PM, Ariel Rokem wrote: >> >>> Hi Omar, >>> >>> Excellent - thanks so much for taking a look! I know that you are very >>> busy these days, and so your attention on this is highly appreciated! I >>> will try experimenting more with this, with different input parameters, as >>> you suggested. >>> >>> If you also want to take a look, since #680 and #681 were merged into >>> dipy, you can now run: >>> >>> import dipy.data as dpd >>> MNI_T2 = dpd.read_mni_template() >>> >>> To get the template data. >>> >>> Thanks again, >>> >>> Ariel >>> >>> On Wed, Sep 2, 2015 at 6:16 PM, Jesus-Omar Ocegueda-Gonzalez < >>> jomaroceguedag at gmail.com> wrote: >>> >>>> Hello guys!, >>>> I have been working on this issue for some days now (this is very >>>> interesting Ariel!, thanks for sharing your findings). Satra is totally >>>> right that **in theory** the transformations should preserve the topology. >>>> Unfortunately, the transformations are only **approximately** >>>> diffeomorphic. 
I am totally sure that this issue should be there in the >>>> original version of ants too (dipy's implementation is the same algorithm), >>>> although maybe the new version (antsRegistration) may have some >>>> improvements that I'm not aware of. >>>> >>>> Having said that, you can make the transforms closer to diffeomorphic >>>> by reducing the `step_length` parameter (in millimeters) from >>>> `SymmetricDiffeomorphicRegistration`, which by default is 0.25 mm. You may >>>> try something about 0.15 mm. The objective is to avoid making very >>>> "aggressive" iterations, so another way to achieve this is by increasing >>>> the smoothing parameter from the CCMetric, the parameter is `sigma_diff`, >>>> which by default is 2.0, you may try something bout 3.0 (I would first try >>>> reducing the step size, though). >>>> >>>> I would like to try some other ideas, by any chance can you share the >>>> data (MNI_T2)? >>>> Thank you very much! >>>> -Omar. >>>> >>>> >>>> >>>> On Wed, Sep 2, 2015 at 7:28 PM, Satrajit Ghosh wrote: >>>> >>>>> hi ariel, >>>>> >>>>> can you do nearest neighbor interpolation in >>>>> `mapping.inverse_transform`? if your original ROI doesn't have holes and >>>>> you are doing a diffeomorphic mapping, your target shouldn't have holes >>>>> either. for a comparison you could run antsRegister and >>>>> antsApplyTransforms, with nearest neighbor interpolation. >>>>> >>>>> cheers, >>>>> >>>>> satra >>>>> >>>>> On Wed, Sep 2, 2015 at 8:13 PM, Ariel Rokem wrote: >>>>> >>>>>> Hi everyone, >>>>>> >>>>>> Jason and I are working on a port of his AFQ system ( >>>>>> https://github.com/jyeatman/afq) into dipy. 
We've started sketching >>>>>> out some notebooks on how that might work here: >>>>>> >>>>>> https://github.com/arokem/AFQ-notebooks >>>>>> >>>>>> The main thrust of this is in this one: >>>>>> >>>>>> >>>>>> https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb >>>>>> >>>>>> The first step in this process is to take a standard ROI of some part >>>>>> of the brain (say, corpus callosum, which is where we are starting) and >>>>>> warp it into the subject's individual brain through a non-linear >>>>>> registration between the individual brain and the template brain on which >>>>>> the ROI was defined (in this case MNI152). Registration works phenomenally >>>>>> (see cell 17), but because this is a non-linear registration, we find >>>>>> ourselves with some holes in the ROI after the transformation (see cell 27 >>>>>> for a sum-intensity projects). We are trying to use >>>>>> scipy.ndimage.binary_fill_holes to, well, fill these holes, but that >>>>>> doesn't seem to be working for us (cell 35 still has that hole...). >>>>>> >>>>>> Any ideas about what might be going wrong? Are we using fill_holes >>>>>> incorrectly? Any other tricks to do flood-filling in python? Should we be >>>>>> using skimage? >>>>>> >>>>>> Thanks! >>>>>> >>>>>> Ariel >>>>>> >>>>>> _______________________________________________ >>>>>> Neuroimaging mailing list >>>>>> Neuroimaging at python.org >>>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Neuroimaging mailing list >>>>> Neuroimaging at python.org >>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>> >>>> >>>> -- >>>> "Cada quien es due?o de lo que calla y esclavo de lo que dice" >>>> -Proverbio chino. >>>> "We all are owners of what we keep silent and slaves of what we say" >>>> -Chinese proverb. 
>>>> >>>> http://www.cimat.mx/~omar >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> >> -- >> "Cada quien es due?o de lo que calla y esclavo de lo que dice" >> -Proverbio chino. >> "We all are owners of what we keep silent and slaves of what we say" >> -Chinese proverb. >> >> http://www.cimat.mx/~omar >> > > > > -- > "Cada quien es due?o de lo que calla y esclavo de lo que dice" > -Proverbio chino. > "We all are owners of what we keep silent and slaves of what we say" > -Chinese proverb. > > http://www.cimat.mx/~omar > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From c.langen at erasmusmc.nl Thu Sep 3 11:22:27 2015 From: c.langen at erasmusmc.nl (C.D. 
Langen) Date: Thu, 3 Sep 2015 09:22:27 +0000 Subject: [Neuroimaging] nibabel.trackvis.read error Message-ID: <55E81150.7000903@erasmusmc.nl> Greetings, When I try to run the following line of code: streams, hdr = nib.trackvis.read(os.path.join(subjDir, 'dti.trk'), points_space='voxel') I get the following error, but only for a small subset of subjects: File "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", line 223, in read streamlines = list(streamlines) File "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", line 202, in track_gen buffer = pts_str) TypeError: buffer is too small for requested array Someone else had a similar error (http://mail.scipy.org/pipermail/nipy-devel/2012-March/007272.html) which they resolved by using nibabel from github. I tried this, but got the same error. All subjects' trackvis files were produced in exactly the same way using Trackvis, so I am not sure why only a few subjects fail while others succeed. Any ideas? Thank you in advance for your help in resolving this issue. Regards, Carolyn Langen From jomaroceguedag at gmail.com Thu Sep 3 17:24:25 2015 From: jomaroceguedag at gmail.com (Jesus-Omar Ocegueda-Gonzalez) Date: Thu, 3 Sep 2015 10:24:25 -0500 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: Hi Ariel, I checked the transformation and everything seems to be correct. The hole is an interpolation artifact. The problem is that the ROI is too thin (only 2 voxels), if you visualize the warped voxels as points in 3D you can see that there is no actual "hole", but it is generated by removing two voxels from the boundary of two different warped slices (a valid result from a diffeomorphic map). 
To illustrate this, I dilated the mask along the x-axis by one voxel (now the thickness of the mask is 3 voxels) and warped the dilated mask with the same transform, after doing this, I get this ROI: [image: Inline image 1] Anyway, the discussion about the transforms being only **approximately** diffeomorphic is still valid but it is not a problem in this particular case. On Wed, Sep 2, 2015 at 11:50 PM, Ariel Rokem wrote: > Hi Omar, > > The other ROIs are here: > https://github.com/jyeatman/AFQ/tree/master/templates. I will think about > the rest of your response tomorrow! > > Cheers, > > Ariel > > On Wed, Sep 2, 2015 at 7:39 PM, Jesus-Omar Ocegueda-Gonzalez < > jomaroceguedag at gmail.com> wrote: > >> Actually, Ariel, nearest neighbor interpolation is a very unstable >> operation. If you interpolate at x or x+epsilon you may get different >> results for a very small epsilon, and discarding one single voxel may lead >> to a rejection of a large number of streamlines (I'm thinking about the >> boundary of the ROI too!, not only the "hole" ). I think it would be a more >> precise selection if you warped the streamlines to the template and select >> them there (now I see that we need that extension to the diffeomorphic map >> asap!). >> >> On Wed, Sep 2, 2015 at 9:19 PM, Jesus-Omar Ocegueda-Gonzalez < >> jomaroceguedag at gmail.com> wrote: >> >>> Thanks Ariel, and don't worry, this is very related to the work I'm >>> doing now, so this is actually very useful. I almost reproduced your >>> experiment, by any chance can you share: LOCC_ni, ROCC_ni and midsag_ni? >>> >>> On Wed, Sep 2, 2015 at 8:29 PM, Ariel Rokem wrote: >>> >>>> Hi Omar, >>>> >>>> Excellent - thanks so much for taking a look! I know that you are very >>>> busy these days, and so your attention on this is highly appreciated! I >>>> will try experimenting more with this, with different input parameters, as >>>> you suggested. 
>>>> >>>> If you also want to take a look, since #680 and #681 were merged into >>>> dipy, you can now run: >>>> >>>> import dipy.data as dpd >>>> MNI_T2 = dpd.read_mni_template() >>>> >>>> To get the template data. >>>> >>>> Thanks again, >>>> >>>> Ariel >>>> >>>> On Wed, Sep 2, 2015 at 6:16 PM, Jesus-Omar Ocegueda-Gonzalez < >>>> jomaroceguedag at gmail.com> wrote: >>>> >>>>> Hello guys!, >>>>> I have been working on this issue for some days now (this is very >>>>> interesting Ariel!, thanks for sharing your findings). Satra is totally >>>>> right that **in theory** the transformations should preserve the topology. >>>>> Unfortunately, the transformations are only **approximately** >>>>> diffeomorphic. I am totally sure that this issue should be there in the >>>>> original version of ants too (dipy's implementation is the same algorithm), >>>>> although maybe the new version (antsRegistration) may have some >>>>> improvements that I'm not aware of. >>>>> >>>>> Having said that, you can make the transforms closer to diffeomorphic >>>>> by reducing the `step_length` parameter (in millimeters) from >>>>> `SymmetricDiffeomorphicRegistration`, which by default is 0.25 mm. You may >>>>> try something about 0.15 mm. The objective is to avoid making very >>>>> "aggressive" iterations, so another way to achieve this is by increasing >>>>> the smoothing parameter from the CCMetric, the parameter is `sigma_diff`, >>>>> which by default is 2.0, you may try something bout 3.0 (I would first try >>>>> reducing the step size, though). >>>>> >>>>> I would like to try some other ideas, by any chance can you share the >>>>> data (MNI_T2)? >>>>> Thank you very much! >>>>> -Omar. >>>>> >>>>> >>>>> >>>>> On Wed, Sep 2, 2015 at 7:28 PM, Satrajit Ghosh wrote: >>>>> >>>>>> hi ariel, >>>>>> >>>>>> can you do nearest neighbor interpolation in >>>>>> `mapping.inverse_transform`? 
if your original ROI doesn't have holes and >>>>>> you are doing a diffeomorphic mapping, your target shouldn't have holes >>>>>> either. for a comparison you could run antsRegistration and >>>>>> antsApplyTransforms, with nearest neighbor interpolation. >>>>>> >>>>>> cheers, >>>>>> >>>>>> satra >>>>>> >>>>>> On Wed, Sep 2, 2015 at 8:13 PM, Ariel Rokem wrote: >>>>>> >>>>>>> Hi everyone, >>>>>>> >>>>>>> Jason and I are working on a port of his AFQ system ( >>>>>>> https://github.com/jyeatman/afq) into dipy. We've started sketching >>>>>>> out some notebooks on how that might work here: >>>>>>> >>>>>>> https://github.com/arokem/AFQ-notebooks >>>>>>> >>>>>>> The main thrust of this is in this one: >>>>>>> >>>>>>> >>>>>>> https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb >>>>>>> >>>>>>> The first step in this process is to take a standard ROI of some >>>>>>> part of the brain (say, corpus callosum, which is where we are starting) >>>>>>> and warp it into the subject's individual brain through a non-linear >>>>>>> registration between the individual brain and the template brain on which >>>>>>> the ROI was defined (in this case MNI152). Registration works phenomenally >>>>>>> (see cell 17), but because this is a non-linear registration, we find >>>>>>> ourselves with some holes in the ROI after the transformation (see cell 27 >>>>>>> for a sum-intensity projection). We are trying to use >>>>>>> scipy.ndimage.binary_fill_holes to, well, fill these holes, but that >>>>>>> doesn't seem to be working for us (cell 35 still has that hole...). >>>>>>> >>>>>>> Any ideas about what might be going wrong? Are we using fill_holes >>>>>>> incorrectly? Any other tricks to do flood-filling in python? Should we be >>>>>>> using skimage? >>>>>>> >>>>>>> Thanks!
>>>>>>> >>>>>>> Ariel >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Neuroimaging mailing list >>>>>>> Neuroimaging at python.org >>>>>>> https://mail.python.org/mailman/listinfo/neuroimaging -- "Cada quien es dueño de lo que calla y esclavo de lo que dice" -Proverbio chino. "We all are owners of what we keep silent and slaves of what we say" -Chinese proverb. http://www.cimat.mx/~omar -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 4623 bytes Desc: not available URL: From jomaroceguedag at gmail.com Thu Sep 3 17:26:40 2015 From: jomaroceguedag at gmail.com (Jesus-Omar Ocegueda-Gonzalez) Date: Thu, 3 Sep 2015 10:26:40 -0500 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: Oh! here is the code: https://github.com/omarocegueda/notebooks/blob/master/ariel_tests.py On Thu, Sep 3, 2015 at 10:24 AM, Jesus-Omar Ocegueda-Gonzalez < jomaroceguedag at gmail.com> wrote: > Hi Ariel, > I checked the transformation and everything seems to be correct. The hole > is an interpolation artifact. The problem is that the ROI is too thin (only > 2 voxels); if you visualize the warped voxels as points in 3D you can see > that there is no actual "hole", but it is generated by removing two voxels > from the boundary of two different warped slices (a valid result from a > diffeomorphic map). To illustrate this, I dilated the mask along the x-axis > by one voxel (now the thickness of the mask is 3 voxels) and warped the > dilated mask with the same transform; after doing this, I get this ROI: > [image: Inline image 1] > Anyway, the discussion about the transforms being only **approximately** > diffeomorphic is still valid but it is not a problem in this particular > case.
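Ariel asked above whether there are other tricks for flood-filling in Python. As a dependency-free sanity check (a sketch only, not the scipy implementation), the snippet below mimics what `scipy.ndimage.binary_fill_holes` does: flood the background from the array border and fill whatever background remains. It also shows why the warped ROI's "hole" may survive filling: a tunnel that reaches the border is, by this definition, not a hole at all.

```python
from collections import deque

def fill_holes(mask):
    """Fill zero voxels that cannot reach the array border (6-connectivity).

    Background connected to the border is "outside"; every other voxel
    is set to 1.  `mask` is a nested list-of-lists-of-lists of 0/1.
    """
    zd, yd, xd = len(mask), len(mask[0]), len(mask[0][0])
    outside, q = set(), deque()
    # Seed the flood fill with every zero voxel on the border.
    for z in range(zd):
        for y in range(yd):
            for x in range(xd):
                border = z in (0, zd - 1) or y in (0, yd - 1) or x in (0, xd - 1)
                if border and not mask[z][y][x]:
                    outside.add((z, y, x))
                    q.append((z, y, x))
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (0 <= n[0] < zd and 0 <= n[1] < yd and 0 <= n[2] < xd
                    and not mask[n[0]][n[1]][n[2]] and n not in outside):
                outside.add(n)
                q.append(n)
    return [[[0 if (not mask[z][y][x] and (z, y, x) in outside) else 1
              for x in range(xd)] for y in range(yd)] for z in range(zd)]

# A 3x3x3 shell with an enclosed cavity at its center: the cavity is filled.
cavity = [[[1 if 1 <= z <= 3 and 1 <= y <= 3 and 1 <= x <= 3
            and (z, y, x) != (2, 2, 2) else 0
            for x in range(5)] for y in range(5)] for z in range(5)]
print(fill_holes(cavity)[2][2][2])  # 1: enclosed cavity gets filled

# A block with a channel running all the way through along z: the channel
# touches the border, so it is "outside" and is NOT filled.
tunnel = [[[1 if 1 <= y <= 3 and 1 <= x <= 3 and (y, x) != (2, 2) else 0
            for x in range(5)] for y in range(5)] for z in range(5)]
print(fill_holes(tunnel)[2][2][2])  # 0: through-tunnel is left open
```

If the notebook's remaining "hole" is really a through-tunnel of this kind, `binary_fill_holes` working as documented would leave it untouched.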
From matthew.brett at gmail.com Thu Sep 3 19:31:36 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 3 Sep 2015 18:31:36 +0100 Subject: [Neuroimaging] nibabel.trackvis.read error In-Reply-To: <55E81150.7000903@erasmusmc.nl> References: <55E81150.7000903@erasmusmc.nl> Message-ID: Hi, On Thu, Sep 3, 2015 at 10:22 AM, C.D. Langen wrote: > Greetings, > > When I try to run the following line of code: > > streams, hdr = nib.trackvis.read(os.path.join(subjDir, 'dti.trk'), > points_space='voxel') > > I get the following error, but only for a small subset of subjects: > > File > "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", > line 223, in read > streamlines = list(streamlines) > File > "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", > line 202, in track_gen > buffer = pts_str) > TypeError: buffer is too small for requested array > > > Someone else had a similar error > (http://mail.scipy.org/pipermail/nipy-devel/2012-March/007272.html) > which they resolved by using nibabel from github. I tried this, but got > the same error. > > All subjects' trackvis files were produced in exactly the same way using > Trackvis, so I am not sure why only a few subjects fail while others > succeed. Any ideas? > > Thank you in advance for your help in resolving this issue. Thanks for the report - would you mind putting the file online somewhere so we can have a look? Best, Matthew From arokem at gmail.com Thu Sep 3 22:42:35 2015 From: arokem at gmail.com (Ariel Rokem) Date: Thu, 3 Sep 2015 13:42:35 -0700 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: This is great - I think that ultimately the solution may very well be to expand the ROI, as you propose.
But I am still not sure that I follow your reasoning: are you saying that it only seems like a hole, but there is actually no hole in the resulting deformed ROI? Note that I am summing across that dimension in displaying the mask in cell 27 of the notebook I shared. As I understand it, the only way there could be a 0-valued voxel in the middle of the ROI is if there is a topological hole in the ROI. Which leads me back to my original question: if I have an object represented as a binary mask in a 3D array and I wonder whether it's topologically a torus or a sphere, how do I go about calculating that? Furthermore - why does ndimage.fill_holes not seem to fill that hole? (maybe there's no hole? Is that what you meant?). Thanks again! Ariel On Thu, Sep 3, 2015 at 8:24 AM, Jesus-Omar Ocegueda-Gonzalez < jomaroceguedag at gmail.com> wrote: > Hi Ariel, > I checked the transformation and everything seems to be correct. The hole > is an interpolation artifact. The problem is that the ROI is too thin (only > 2 voxels); if you visualize the warped voxels as points in 3D you can see > that there is no actual "hole", but it is generated by removing two voxels > from the boundary of two different warped slices (a valid result from a > diffeomorphic map). To illustrate this, I dilated the mask along the x-axis > by one voxel (now the thickness of the mask is 3 voxels) and warped the > dilated mask with the same transform; after doing this, I get this ROI: > [image: Inline image 1] > Anyway, the discussion about the transforms being only **approximately** > diffeomorphic is still valid but it is not a problem in this particular > case. _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging
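One quantitative way to approach Ariel's torus-vs-sphere question is the Euler characteristic of the voxel complex, χ = V − E + F − C (vertices, edges, faces, cells). This is a brute-force sketch, not a library routine: a solid ball-like blob gives χ = 1, a solid torus (a through-tunnel, which `fill_holes` will not touch) gives χ = 0, and a hollow shell enclosing a cavity (which `fill_holes` would fill) gives χ = 2.

```python
def euler_characteristic(voxels):
    """Euler characteristic V - E + F - C of a set of unit voxels.

    `voxels` is a set of integer (z, y, x) cells.  Each cell contributes
    its 8 corner vertices, 12 edges, and 6 faces; shared elements are
    deduplicated by using their absolute lattice coordinates as keys.
    """
    V, E, F = set(), set(), set()
    for (z, y, x) in voxels:
        corners = [(z + dz, y + dy, x + dx)
                   for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
        V.update(corners)
        # Edges: pairs of corners that differ by 1 in exactly one coordinate.
        for a in corners:
            for b in corners:
                if a < b and sum(abs(p - q) for p, q in zip(a, b)) == 1:
                    E.add((a, b))
        # Faces: the 4 corners lying on each of the 6 bounding planes.
        for axis in range(3):
            for side in (0, 1):
                F.add(frozenset(c for c in corners
                                if c[axis] == (z, y, x)[axis] + side))
    return len(V) - len(E) + len(F) - len(voxels)

ball = {(0, 0, 0)}                                         # single voxel
ring = {(0, y, x) for y in range(3) for x in range(3)} - {(0, 1, 1)}
shell = {(z, y, x) for z in range(3) for y in range(3)
         for x in range(3)} - {(1, 1, 1)}                  # hollow 3x3x3 cube
print(euler_characteristic(ball))   # 1: topologically a solid ball
print(euler_characteristic(ring))   # 0: topologically a solid torus
print(euler_characteristic(shell))  # 2: shell enclosing a cavity
```

So one diagnostic for the warped CC ROI would be: χ = 0 means the defect is a tunnel open to the outside (consistent with `binary_fill_holes` doing nothing), while χ = 2 means there is a genuine enclosed cavity.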
Name: image.png Type: image/png Size: 4623 bytes Desc: not available URL: From bobd at stanford.edu Thu Sep 3 23:07:22 2015 From: bobd at stanford.edu (Bob Dougherty) Date: Thu, 3 Sep 2015 14:07:22 -0700 Subject: [Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?) In-Reply-To: References: Message-ID: <55E8B68A.1020208@stanford.edu> A 'hole' in 3d is a voxel fully surrounded by non-zero voxels. If the projection to 2d shows a 'hole', then it's not a hole in the 3-d sense, but rather a worm-hole that connects to the outside of the object (in this case, straight through along the projection axis). So 3d hole-filling won't fix it. You could try dilate-erode (i.e., image closing), but that will remove some detail. I can see how such things would be created by a non-linear deformation, especially with a very thin ROI. E.g., imagine that the deformation field piches together your CC ROI in that region, pushing the left and right bounds of the ROI very close together. Then when you interpolate, some voxels in the pinched region might disappear. I think avoiding ROIs with very thin structures would be wise. And maybe a little image closing as well. Also, are you using trilinear interpolation followed by a threshold to get back to a binary image? On 09/03/2015 01:42 PM, Ariel Rokem wrote: > This is great - I think that ultimately the solution very well be to > expand the ROI, as you propose. > > But I am not still not sure that I follow your reasoning: are you > saying that what it only seems like a hole, but there is actually no > hole in the resulting deformed ROI? Note that I am summing across that > dimension in displaying the mask in cell 27 of the notebook I shared. > As I understand it, the only way there could be a 0-valued voxel in > the middle of the ROI is if there is a topological hole in the ROI. 
> > Which leads me back to my original question: if I have an object > represented as a binary mask in a 3D array and I wonder whether it's > topologically a torus or a sphere, how do I go about calculating that? > Furthermore - why does ndimage.fill_holes not seem to fill that hole? > (maybe there's no hole? Is that what you meant?). > > Thanks again! > > Ariel > > > On Thu, Sep 3, 2015 at 8:24 AM, Jesus-Omar Ocegueda-Gonzalez > > wrote: > > Hi Ariel, > I checked the transformation and everything seems to be correct. > The hole is an interpolation artifact. The problem is that the ROI > is too thin (only 2 voxels), if you visualize the warped voxels as > points in 3D you can see that there is no actual "hole", but it is > generated by removing two voxels from the boundary of two > different warped slices (a valid result from a diffeomorphic map). > To illustrate this, I dilated the mask along the x-axis by one > voxel (now the thickness of the mask is 3 voxels) and warped the > dilated mask with the same transform, after doing this, I get this > ROI: > Inline image 1 > Anyway, the discussion about the transforms being only > **approximately** diffeomorphic is still valid but it is not a > problem in this particular case. > > > On Wed, Sep 2, 2015 at 11:50 PM, Ariel Rokem > wrote: > > Hi Omar, > > The other ROIs are here: > https://github.com/jyeatman/AFQ/tree/master/templates. I will > think about the rest of your response tomorrow! > > Cheers, > > Ariel > > On Wed, Sep 2, 2015 at 7:39 PM, Jesus-Omar Ocegueda-Gonzalez > > > wrote: > > Actually, Ariel, nearest neighbor interpolation is a very > unstable operation. If you interpolate at x or x+epsilon > you may get different results for a very small epsilon, > and discarding one single voxel may lead to a rejection of > a large number of streamlines (I'm thinking about the > boundary of the ROI too!, not only the "hole" ). 
I think > it would be a more precise selection if you warped the > streamlines to the template and select them there (now I > see that we need that extension to the diffeomorphic map > asap!). > > On Wed, Sep 2, 2015 at 9:19 PM, Jesus-Omar > Ocegueda-Gonzalez > wrote: > > Thanks Ariel, and don't worry, this is very related to > the work I'm doing now, so this is actually very > useful. I almost reproduced your experiment, by any > chance can you share: LOCC_ni, ROCC_ni and midsag_ni? > > On Wed, Sep 2, 2015 at 8:29 PM, Ariel Rokem > > wrote: > > Hi Omar, > > Excellent - thanks so much for taking a look! I > know that you are very busy these days, and so > your attention on this is highly appreciated! I > will try experimenting more with this, with > different input parameters, as you suggested. > > If you also want to take a look, since #680 and > #681 were merged into dipy, you can now run: > > import dipy.data as dpd > MNI_T2 = dpd.read_mni_template() > > To get the template data. > > Thanks again, > > Ariel > > On Wed, Sep 2, 2015 at 6:16 PM, Jesus-Omar > Ocegueda-Gonzalez > wrote: > > Hello guys!, > I have been working on this issue for some > days now (this is very interesting Ariel!, > thanks for sharing your findings). Satra is > totally right that **in theory** the > transformations should preserve the topology. > Unfortunately, the transformations are only > **approximately** diffeomorphic. I am totally > sure that this issue should be there in the > original version of ants too (dipy's > implementation is the same algorithm), > although maybe the new version > (antsRegistration) may have some improvements > that I'm not aware of. > > Having said that, you can make the transforms > closer to diffeomorphic by reducing the > `step_length` parameter (in millimeters) from > `SymmetricDiffeomorphicRegistration`, which by > default is 0.25 mm. You may try something > about 0.15 mm. 
The objective is to avoid > making very "aggressive" iterations, so > another way to achieve this is by increasing > the smoothing parameter from the CCMetric, the > parameter is `sigma_diff`, which by default is > 2.0, you may try something bout 3.0 (I would > first try reducing the step size, though). > > I would like to try some other ideas, by any > chance can you share the data (MNI_T2)? > Thank you very much! > -Omar. > > > > On Wed, Sep 2, 2015 at 7:28 PM, Satrajit Ghosh > > wrote: > > hi ariel, > > can you do nearest neighbor interpolation > in `mapping.inverse_transform`? if your > original ROI doesn't have holes and you > are doing a diffeomorphic mapping, your > target shouldn't have holes either. for a > comparison you could run antsRegister and > antsApplyTransforms, with nearest neighbor > interpolation. > > cheers, > > satra > > On Wed, Sep 2, 2015 at 8:13 PM, Ariel > Rokem > wrote: > > Hi everyone, > > Jason and I are working on a port of > his AFQ system > (https://github.com/jyeatman/afq) into > dipy. We've started sketching out some > notebooks on how that might work here: > > https://github.com/arokem/AFQ-notebooks > > The main thrust of this is in this one: > > https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb > > The first step in this process is to > take a standard ROI of some part of > the brain (say, corpus callosum, which > is where we are starting) and warp it > into the subject's individual brain > through a non-linear registration > between the individual brain and the > template brain on which the ROI was > defined (in this case MNI152). > Registration works phenomenally (see > cell 17), but because this is a > non-linear registration, we find > ourselves with some holes in the ROI > after the transformation (see cell 27 > for a sum-intensity projects). 
We are > trying to use > scipy.ndimage.binary_fill_holes to, > well, fill these holes, but that > doesn't seem to be working for us > (cell 35 still has that hole...). > > Any ideas about what might be going > wrong? Are we using fill_holes > incorrectly? Any other tricks to do > flood-filling in python? Should we be > using skimage? > > Thanks! > > Ariel > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > > https://mail.python.org/mailman/listinfo/neuroimaging > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > > https://mail.python.org/mailman/listinfo/neuroimaging > > > > > -- > "Cada quien es dueño de lo que calla y esclavo > de lo que dice" > -Proverbio chino. > "We all are owners of what we keep silent and > slaves of what we say" > -Chinese proverb. > > http://www.cimat.mx/~omar > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > > https://mail.python.org/mailman/listinfo/neuroimaging > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > > https://mail.python.org/mailman/listinfo/neuroimaging > > > > > -- > "Cada quien es dueño de lo que calla y esclavo de lo > que dice" > -Proverbio chino. > "We all are owners of what we keep silent and slaves > of what we say" > -Chinese proverb. > > http://www.cimat.mx/~omar > > > > > -- > "Cada quien es dueño de lo que calla y esclavo de lo que dice" > -Proverbio chino. > "We all are owners of what we keep silent and slaves of > what we say" > -Chinese proverb. 
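The fill_holes behavior above has a common explanation: `scipy.ndimage.binary_fill_holes` only fills cavities that are completely enclosed by foreground, so a gap that is connected to the background outside the ROI (including through the third axis of a 3D volume) is not a "hole" to it and is left untouched. A minimal self-contained sketch with hypothetical toy data (not the notebook's actual ROI):

```python
# Toy illustration: binary_fill_holes fills only fully enclosed cavities.
import numpy as np
from scipy.ndimage import binary_fill_holes

# A 5x5 square ROI with a single enclosed hole: this gets filled.
roi = np.zeros((7, 7), dtype=bool)
roi[1:6, 1:6] = True
roi[3, 3] = False
filled = binary_fill_holes(roi)

# The same ROI with a notch reaching the background outside the square:
# this is connected to the border background, so it is NOT filled.
open_roi = np.zeros((7, 7), dtype=bool)
open_roi[1:6, 1:6] = True
open_roi[3, 3:] = False
open_filled = binary_fill_holes(open_roi)

assert filled[3, 3]           # enclosed hole was filled
assert not open_filled[3, 4]  # border-connected gap stays empty
```

If the warped ROI's holes open out along the slice direction, filling slice-by-slice in 2D (or binary closing first) may behave differently from a single 3D fill.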
> > http://www.cimat.mx/~omar > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > > > > -- > "Cada quien es dueño de lo que calla y esclavo de lo que dice" > -Proverbio chino. > "We all are owners of what we keep silent and slaves of what we say" > -Chinese proverb. > > http://www.cimat.mx/~omar > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Robert F. Dougherty, PhD Research Director Stanford Center for Cognitive and Neurobiological Imaging 70 Jordan Hall * Stanford CA 94305 * 650-725-0051 http://www.stanford.edu/~bobd -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 4623 bytes Desc: not available URL: From c.langen at erasmusmc.nl Fri Sep 4 13:27:02 2015 From: c.langen at erasmusmc.nl (C.D. Langen) Date: Fri, 4 Sep 2015 11:27:02 +0000 Subject: [Neuroimaging] nibabel.trackvis.read error In-Reply-To: References: <55E81150.7000903@erasmusmc.nl> Message-ID: <55E98001.7070902@erasmusmc.nl> Hi Matthew, Thank you for your quick reply. Below are links to two datasets, one that failed to be read by nib.trackvis, and one that succeeded. 
Both can be viewed in Trackvis: https://dl.dropboxusercontent.com/u/57089115/fail.trk https://dl.dropboxusercontent.com/u/57089115/succeed.trk Best, Carolyn On 03-09-15 19:31, Matthew Brett wrote: > Hi, > > On Thu, Sep 3, 2015 at 10:22 AM, C.D. Langen wrote: >> Greetings, >> >> When I try to run the following line of code: >> >> streams, hdr = nib.trackvis.read(os.path.join(subjDir, 'dti.trk'), >> points_space='voxel') >> >> I get the following error, but only for a small subset of subjects: >> >> File >> "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", >> line 223, in read >> streamlines = list(streamlines) >> File >> "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", >> line 202, in track_gen >> buffer = pts_str) >> TypeError: buffer is too small for requested array >> >> >> Someone else had a similar error >> (http://mail.scipy.org/pipermail/nipy-devel/2012-March/007272.html) >> which they resolved by using nibabel from github. I tried this, but got >> the same error. >> >> All subjects' trackvis files were produced in exactly the same way using >> Trackvis, so I am not sure why only a few subjects fail while others >> succeed. Any ideas? >> >> Thank you in advance for your help in resolving this issue. > Thanks for the report - would you mind put the file online somewhere > so we can have a look? 
> > Best, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging From arfanakis at iit.edu Fri Sep 4 17:51:25 2015 From: arfanakis at iit.edu (Konstantinos Arfanakis) Date: Fri, 4 Sep 2015 10:51:25 -0500 Subject: [Neuroimaging] Tractography normalization In-Reply-To: References: <540B96B0-AFEE-41E4-A655-0B48594F4DAF@iit.edu> Message-ID: Hi JB, As you mentioned, one clause of the license for the IIT Human Brain Atlas says that "You may not alter, transform, adapt or build upon the information, images or data". However, the purpose of this clause is solely for protection of the work (suggested by the IIT lawyers who prepared the license). It is not intended to pose restrictions to the community, as we hope that our colleagues will use this resource as a basis for further development. Therefore, we urge whoever is interested for example in "building upon" the IIT Human Brain Atlas to contact us and request a license that gives permission for their particular project. We have already done this with folks at Harvard as well as a company in California. The process was very fast. This can be done not only for faculty or companies, but also for students at any level, postdocs, and whoever else is interested. This is not a burden to us. We want to ensure that our publicly accessible resource is useful for further advancing neuroimaging technologies. Regards, Konstantinos -- Konstantinos Arfanakis, Ph.D. Professor Director, MRI Research Department of Biomedical Engineering Medical Imaging Research Center Illinois Institute of Technology Leader, Imaging and Bioengineering Studies Rush Alzheimer's Disease Center Rush University Medical Center www.iit.edu/~mri www.twitter.com/MRIatIIT office: (312) 567-3864 fax: (312) 567-3225 > On Aug 12, 2015, at 6:59 PM, JB Poline wrote: > > Hi Konstantinos, > > Say I want to derive a new atlas from IIT. 
Can I redistribute my > derived atlas freely ? > > cheers > JB > > On Tue, Aug 4, 2015 at 11:23 PM, Konstantinos Arfanakis > wrote: >> Hi JB, >> >> It would help me answer your question if you could give me a very general >> description of what you are planning to generate and how would that be used. >> >> Regards, >> Konstantinos >> >> >> >> On Aug 5, 2015, at 4:54 AM, Eleftherios Garyfallidis >> wrote: >> >> Hi JB, >> >> I am not familiar with the restrictions of the license. I think that >> Konstantinos is the best person to answer this question. I am cc'ing him. >> >> You may want to continue this discussion in a different thread as it is a >> bit off topic. >> >> Cheers, >> Eleftherios >> >> On Tue, Aug 4, 2015 at 9:42 PM, JB Poline wrote: >>> >>> Hi, >>> >>> I was curious about the IIT license: does anyone understand >>> >>> " >>> (2) You may not alter, transform, adapt or build upon the information, >>> images or data; >>> " >>> >>> so : "build upon the information" is a bit vague : I guess you cannot >>> create an atlas or anything from it, you have to use it as it is ? That >>> seems bad - but may be I missed something ? >>> >>> cheers >>> >>> JB >>> >>> >>> >>> On Tue, Aug 4, 2015 at 6:25 PM, Eleftherios Garyfallidis >>> wrote: >>>> >>>> Hello, >>>> >>>> On Tue, Aug 4, 2015 at 7:21 PM, Jorge Rudas wrote: >>>>> >>>>> Thanks for your answer Eleftherios >>>>> >>>>> One questions more... >>>>> >>>> Be happy to ask as many questions as you need until everything is clear. >>>> I am sure you will need >>>> feedback from us to perform such an analysis. That is because although we >>>> are currently working on making >>>> easy workflows, right now you will need write your own scripts combining >>>> different DIPY tutorials of the >>>> development version. >>>> >>>> Of course I am more than happy to help you with this. >>>> >>>>> >>>>> When you say "then apply the deformation fields to the tractographies", >>>>> what exactly does this mean ? 
>>>>> >>>> You will generate streamlines and FA maps in the native space of every >>>> subject. Then you can for example register >>>> the FAs to an FA template. After you have performed these registrations >>>> you will also have saved the deformation fields >>>> which were applied to the FAs so that they can be registered to the FA >>>> template. Because the tractographies were >>>> in the same space (native) as the FAs the same deformation fields can be >>>> used to warp them to the FA template space >>>> and in that way your tractographies will also be normalized. >>>> >>>> Cheers, >>>> Eleftherios >>>> >>>> p.s. As an FA template I recommend using the IIT atlas. >>>> >>>>> >>>>> Regards >>>>> >>>>> >>>>> Jorge Rudas >>>>> >>>>> >>>>> 2015-08-04 15:55 GMT-05:00 Eleftherios Garyfallidis >>>>> : >>>>>> >>>>>> Hi Jorge, >>>>>> >>>>>> This is a very active research area in DIPY. There are currently two >>>>>> ways: >>>>>> >>>>>> a) You can register FA images together using our image registration >>>>>> functions and then apply the deformation fields to the tractographies. >>>>>> >>>>>> b) Segment bundles from the tractographies (manually or automatically) >>>>>> and register them directly using the SLR. Paper here >>>>>> >>>>>> http://www.ncbi.nlm.nih.gov/pubmed/25987367 >>>>>> >>>>>> Cheers, >>>>>> Eleftherios >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Aug 4, 2015 at 4:14 PM, Jorge Rudas >>>>>> wrote: >>>>>>> >>>>>>> Hi everyone >>>>>>> >>>>>>> Any suggestions for the spatial normalization of tractographies? I want >>>>>>> to compare tractographies at population level. 
>>>>>>> >>>>>>> regards, >>>>>>> >>>>>>> Jorge Rudas >>>>>>> National University of Colombia >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Neuroimaging mailing list >>>>>>> Neuroimaging at python.org >>>>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Neuroimaging mailing list >>>>>> Neuroimaging at python.org >>>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Neuroimaging mailing list >>>>> Neuroimaging at python.org >>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>> >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Fri Sep 4 20:28:09 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 4 Sep 2015 19:28:09 +0100 Subject: [Neuroimaging] Tractography normalization In-Reply-To: References: <540B96B0-AFEE-41E4-A655-0B48594F4DAF@iit.edu> Message-ID: Hi, On Fri, Sep 4, 2015 at 4:51 PM, Konstantinos Arfanakis wrote: > Hi JB, > > As you mentioned, one clause of the license for the IIT Human Brain Atlas > says that "You may not alter, transform, adapt or build upon the > information, images or data". However, the purpose of this clause is solely > for protection of the work (suggested by the IIT lawyers who prepared the > license). That's very unfortunate, as I think you will find your work will be considerably less used as a result. What protection were you hoping for? 
Best, Matthew From matthew.brett at gmail.com Sat Sep 5 00:18:21 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 4 Sep 2015 23:18:21 +0100 Subject: [Neuroimaging] nibabel.trackvis.read error In-Reply-To: <55E98001.7070902@erasmusmc.nl> References: <55E81150.7000903@erasmusmc.nl> <55E98001.7070902@erasmusmc.nl> Message-ID: Hi, On Fri, Sep 4, 2015 at 12:27 PM, C.D. Langen wrote: > Hi Matthew, > > Thank you for your quick reply. Below are links to two datasets, one > that failed to be read by nib.trackvis, and one that succeeded. Both can > be viewed in Trackvis: > > https://dl.dropboxusercontent.com/u/57089115/fail.trk > https://dl.dropboxusercontent.com/u/57089115/succeed.trk > > Best, > Carolyn > > On 03-09-15 19:31, Matthew Brett wrote: >> Hi, >> >> On Thu, Sep 3, 2015 at 10:22 AM, C.D. Langen wrote: >>> Greetings, >>> >>> When I try to run the following line of code: >>> >>> streams, hdr = nib.trackvis.read(os.path.join(subjDir, 'dti.trk'), >>> points_space='voxel') >>> >>> I get the following error, but only for a small subset of subjects: >>> >>> File >>> "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", >>> line 223, in read >>> streamlines = list(streamlines) >>> File >>> "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", >>> line 202, in track_gen >>> buffer = pts_str) >>> TypeError: buffer is too small for requested array >>> >>> >>> Someone else had a similar error >>> (http://mail.scipy.org/pipermail/nipy-devel/2012-March/007272.html) >>> which they resolved by using nibabel from github. I tried this, but got >>> the same error. >>> >>> All subjects' trackvis files were produced in exactly the same way using >>> Trackvis, so I am not sure why only a few subjects fail while others >>> succeed. Any ideas? >>> >>> Thank you in advance for your help in resolving this issue. >> Thanks for the report - would you mind put the file online somewhere >> so we can have a look? 
What seems to be happening is that the last track in the file is truncated. It says that it is 120 points long (n_pts field), but there is only data in the file for 77 points. I tried reading the file with this MATLAB toolbox: https://github.com/johncolby/along-tract-stats >> [header, tracks] = trk_read('fail.trk'); >> tracks(end) ans = nPoints: 120 matrix: [77x3 single] >> tracks(end-1) ans = nPoints: 91 matrix: [91x3 single] Note that the last track has 120 'nPoints' but only 77 points. The previous track has 91 'nPoints' and 91 points, which is what I would expect. So I think the file is malformed and trackvis is being more generous than nibabel. I think nibabel should have a mode where it passes through this kind of thing. In the meantime, if you want to read all but the last shortened track, you could do something like this: import nibabel as nib track_gen, hdr = nib.trackvis.read('fail.trk', as_generator=True) tracks = [] while True: try: track = next(track_gen) except (StopIteration, TypeError): break tracks.append(track) Cheers, Matthew From matthew.brett at gmail.com Sat Sep 5 01:06:24 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 5 Sep 2015 00:06:24 +0100 Subject: [Neuroimaging] iteraxis API - we need feedback Message-ID: Hi, Over at nibabel gh-344 [1], we found ourselves discussing how to write an iterator that will allow you to efficiently iterate over slices from the image array. We'd love some feedback on where we got to. As some of you may know, images now have a `dataobj` attribute, that can contain one of two things: * an array proxy (if you loaded the image from a file); * a numpy array (if you created the image with data from an array); The array proxy object has some fancy slicing syntax that means that something like ``arr.dataobj[..., 0]`` will only read the data for the first slice on the last axis. This can be a lot more efficient than loading all the data at once with `get_data` [2]. 
We're currently thinking of a good iterator syntax, something like this: for vol in img.iteraxis(3): # iterate over 4th axis # do something with vol where `iteraxis` would use `dataobj` slicing under the hood. The questions are: * should this be a method on the image (`img.iteraxis`), the dataobj (`img.dataobj.iteraxis`) or should it be a standalone function that knows about arrays and array proxies? (`nibabel.iteraxis`); * how should the iterator optimize speed or memory? Should this be configurable? For example, if you are iterating over the first axis of a Nifti, then it will probably be most efficient to read all the data into memory and return the slices from the numpy array. This will be very expensive in memory. If a file is compressed, it may be most efficient to uncompress the file and use the uncompressed version with `dataobj` file slicing - but this will involve a temporary file that may be very large. Options are: * find some heuristic to choose joint optimization for memory and speed; * always optimize for memory; * always optimize for speed, saving memory where possible; * have a tuning kwarg selecting between these options. The upside of image.iteraxis would be to embed knowledge we've gained on these objects and simplify the interface for users. The downside is it's more work for us and the right choice is system-dependent. To address this, Ben C proposed a benchmark method, which outputs which optimize method is best for the given image on the current system. Any thoughts? Use-cases? 
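For concreteness, the loop described above can be sketched against a plain numpy array. The function name and the `take`-based body are purely illustrative (not nibabel's actual API); a real implementation would slice `img.dataobj` lazily rather than hold the whole array in memory:

```python
# Illustrative-only sketch of an iteraxis-style iterator on a numpy array.
import numpy as np

def iteraxis(arr, axis):
    """Yield successive slices of ``arr`` along ``axis``."""
    for i in range(arr.shape[axis]):
        yield arr.take(i, axis=axis)

data = np.arange(24).reshape(2, 3, 4)
vols = list(iteraxis(data, 2))        # iterate over the last axis
assert len(vols) == 4
assert vols[0].shape == (2, 3)        # the iterated axis is dropped
assert np.array_equal(vols[1], data[..., 1])
```

The memory/speed trade-off discussed above lives entirely inside the loop body: a proxy-backed version would replace `arr.take` with a per-slice read from the file.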
Cheers, Matthew [1] https://github.com/nipy/nibabel/issues/344 [2] http://nipy.org/nibabel/images_and_memory.html#saving-time-and-memory From mwaskom at stanford.edu Sat Sep 5 01:48:27 2015 From: mwaskom at stanford.edu (Michael Waskom) Date: Fri, 4 Sep 2015 16:48:27 -0700 Subject: [Neuroimaging] iteraxis API - we need feedback In-Reply-To: References: Message-ID: Hi Matthew, On Fri, Sep 4, 2015 at 4:06 PM, Matthew Brett wrote: > Hi, > > Over at nibabel gh-344 [1], we found ourselves discussing how to write > an iterator that will allow you to efficiently iterate over slices > from the image array. We'd love some feedback on where we got to. > > As some of you may know, images now have a `dataobj` attribute, that > can contain one of two things: > > * an array proxy (if you loaded the image from a file); > * a numpy array (if you created the image with data from an array); > > The array proxy object has some fancy slicing syntax that means that > something like ``arr.dataobj[..., 0]`` will only read the data for the > first slice on the last axis. This can be a lot more efficient that > loading all the data at once with `get_data` [2]. > > We're currently thinking of a good iterator syntax, something like this: > > for vol in img.iteraxis(3): # iterate over 4th axis > # do something with vol > Cool! Is it possible to also accept "x", "y", "z", "y" as the axis? > where `iteraxis` would use `databobj` slicing under the hood. > > The questions are: > > * should this be a method on the image (`img.iteraxis`), the dataobj > (`img.dataobj.iteraxis`) or should it be a standalone function that > knows about arrays and array proxies? (`nibabel.iteraxis`); > * how should the iterator optimize speed or memory? Should this be > configurable? For example, if you are iterating over the first axis > of a Nifti, then it will probably be most efficient to read all the > data into memory and return the slices from the numpy array. This > will be very expensive in memory. 
If a file is compressed, it may be > most efficient to uncompress the file and use the uncompressed version > with `dataobj` file slicing - but this will involve a temporary file > that may be very large. Options are: > > * find some heuristic to chose joint optimization for memory and speed; > * always optimize for memory; > * always optimize for speed, saving memory where possible; > * have a tuning kwarg selecting between these options. > I think I would lean towards optimizing memory. You can always just wait longer if things are running slow, but if your RAM fills up, you're stuck. I do like the idea of a tuning parameter, though. > The upside of image.iteraxis would be to embed knowledge we've gained > on these objects and simplify the interface for users. The downside is > it's more work for us and the right choice is system-dependent. To > address this, Ben C proposed a benchmark method, which outputs which > optimize method is best for the given image on the current system. > > Any thoughts? Use-cases? > > Cheers, > > Matthew > > > [1] https://github.com/nipy/nibabel/issues/344 > [2] http://nipy.org/nibabel/images_and_memory.html#saving-time-and-memory > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwaskom at stanford.edu Sat Sep 5 01:50:39 2015 From: mwaskom at stanford.edu (Michael Waskom) Date: Fri, 4 Sep 2015 16:50:39 -0700 Subject: [Neuroimaging] iteraxis API - we need feedback In-Reply-To: References: Message-ID: On Fri, Sep 4, 2015 at 4:48 PM, Michael Waskom wrote: > Cool! Is it possible to also accept "x", "y", "z", "y" as the axis? > Oops, make that "x", "y", "z", "t" -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Sat Sep 5 02:57:50 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 5 Sep 2015 01:57:50 +0100 Subject: [Neuroimaging] iteraxis API - we need feedback In-Reply-To: References: Message-ID: On Sat, Sep 5, 2015 at 12:50 AM, Michael Waskom wrote: > > On Fri, Sep 4, 2015 at 4:48 PM, Michael Waskom wrote: >> >> Cool! Is it possible to also accept "x", "y", "z", "y" as the axis? > > > Oops, make that "x", "y", "z", "t" Yes, soon, that should be possible. I am also thinking of adding an 'axes' attribute with labels for the axes. In that case you could also do things like `img.iteraxis('slice')` or `img.iteraxis('time')`. Cheers, Matthew From satra at mit.edu Sat Sep 5 04:22:31 2015 From: satra at mit.edu (Satrajit Ghosh) Date: Fri, 4 Sep 2015 22:22:31 -0400 Subject: [Neuroimaging] iteraxis API - we need feedback In-Reply-To: References: Message-ID: hi matthew, > for vol in img.iteraxis(3): # iterate over 4th axis > # do something with vol > > where `iteraxis` would use `databobj` slicing under the hood. > > The questions are: > > * should this be a method on the image (`img.iteraxis`), the dataobj > (`img.dataobj.iteraxis`) or should it be a standalone function that > knows about arrays and array proxies? (`nibabel.iteraxis`); > img.iteraxis seems like a good place. > * how should the iterator optimize speed or memory? Should this be > configurable? For example, if you are iterating over the first axis > of a Nifti, then it will probably be most efficient to read all the > data into memory and return the slices from the numpy array. This > will be very expensive in memory. If a file is compressed, it may be > most efficient to uncompress the file and use the uncompressed version > with `dataobj` file slicing - but this will involve a temporary file > that may be very large. 
Options are: > > * find some heuristic to chose joint optimization for memory and speed; > * always optimize for memory; > * always optimize for speed, saving memory where possible; > * have a tuning kwarg selecting between these options. > i don't know if there is a common heuristic - it really depends on the data characteristics as well as the system configuration. > The upside of image.iteraxis would be to embed knowledge we've gained > on these objects and simplify the interface for users. could you please clarify what you mean by "these objects"? Any thoughts? Use-cases? > thoughts/questions: - would iteraxis be for volume only or support surface and streamline formats? - recommend testing these with hcp data. they are closer to resolution and size of what most datasets will look like in 5 years. - stay away from labels for axes or dimensions - this would be dependent on phase encoding direction (for epi images) as well as placement of object in the scanner. i think nibabel should not have to figure that out. if during construction the user labels these axes, then nibabel could use that information. - [forget i'm saying this, but this is a general solution to the optimization problem] one could just change the format and store nii as an hdf5 dataset and you get both memory and speed optimization! cheers, satra -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sat Sep 5 05:02:53 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 5 Sep 2015 04:02:53 +0100 Subject: [Neuroimaging] iteraxis API - we need feedback In-Reply-To: References: Message-ID: Hi, On Sat, Sep 5, 2015 at 3:22 AM, Satrajit Ghosh wrote: > hi matthew, > >> >> for vol in img.iteraxis(3): # iterate over 4th axis >> # do something with vol >> >> where `iteraxis` would use `databobj` slicing under the hood. 
>> >> The questions are: >> >> * should this be a method on the image (`img.iteraxis`), the dataobj >> (`img.dataobj.iteraxis`) or should it be a standalone function that >> knows about arrays and array proxies? (`nibabel.iteraxis`); > > > img.iteraxis seems like a good place. > >> >> * how should the iterator optimize speed or memory? Should this be >> configurable? For example, if you are iterating over the first axis >> of a Nifti, then it will probably be most efficient to read all the >> data into memory and return the slices from the numpy array. This >> will be very expensive in memory. If a file is compressed, it may be >> most efficient to uncompress the file and use the uncompressed version >> with `dataobj` file slicing - but this will involve a temporary file >> that may be very large. Options are: >> >> * find some heuristic to chose joint optimization for memory and >> speed; >> * always optimize for memory; >> * always optimize for speed, saving memory where possible; >> * have a tuning kwarg selecting between these options. > > > i don't know if there is a common heuristic - it really depends on the data > characteristics as well as the system configuration. > >> >> The upside of image.iteraxis would be to embed knowledge we've gained >> on these objects and simplify the interface for users. > > > could you please clarify what you mean by "these objects"? Sorry - I wasn't being clear - I mean knowledge of the dataobj objects. >> Any thoughts? Use-cases? > > > thoughts/questions: > > - would iteraxis be for volume only or support surface and streamline > formats? The obvious case is axes of arrays. I guess, when we've worked those out, we can see if an 'axis' makes sense for something like streamlines. > - recommend testing these with hcp data. they are closer to resolution and > size of what most datasets will look like in 5 years. 
> - stay away from labels for axes or dimensions - this would be dependent on > phase encoding direction (for epi images) as well as placement of object in > the scanner. i think nibabel should not have to figure that out. if during > construction the user labels these axes, then nibabel could use that > information. We can work out the meaning of some axes - such as time - and the Nifti format +/- the json extension can give us more information if stored. Guessing would certainly be a bad idea. > - [forget i'm saying this, but this is a general solution to the > optimization problem] one could just change the format and store nii as an > hdf5 dataset and you get both memory and speed optimization! Sure - one day I suppose no-one will be using Nifti format, and on that day - er - wait - I've forgotten what you were saying :) Cheers, Matthew From njs at pobox.com Sat Sep 5 05:30:46 2015 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 4 Sep 2015 20:30:46 -0700 Subject: [Neuroimaging] iteraxis API - we need feedback In-Reply-To: References: Message-ID: On Sep 4, 2015 5:58 PM, "Matthew Brett" wrote: > > On Sat, Sep 5, 2015 at 12:50 AM, Michael Waskom wrote: > > > > On Fri, Sep 4, 2015 at 4:48 PM, Michael Waskom wrote: > >> > >> Cool! Is it possible to also accept "x", "y", "z", "y" as the axis? > > > > > > Oops, make that "x", "y", "z", "t" > > Yes, soon, that should be possible. I am also thinking of adding an > 'axes' attribute with labels for the axes. In that case you could > also do things like `img.iteraxis('slice')` or `img.iteraxis('time')`. If you're moving in this direction then should also check out xray -- it's like pandas but for multidimensional data. I don't know if it'd be useful for you directly, but at the least it's be nice to use similar spellings when it makes sense! -n -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michaelnotter at hotmail.com Thu Sep 3 15:25:37 2015 From: michaelnotter at hotmail.com (Michael Notter) Date: Thu, 3 Sep 2015 15:25:37 +0200 Subject: [Neuroimaging] Nipy: How can I install Nipy v0.4.0 on ubuntu with pip? Message-ID: Hello everyone, If I use the command `pip install nipy` to install nipy on my Ubuntu 14.04.3 system, it only installs version 0.3.0 and not the newest version 0.4.0. According to http://nipy.org/nipy/users/installation.html, I should use `sudo apt-get install python-nipy` to install nipy. But this would lead to the download of 40 new packages and a disk space increase of 706 MB. This seems a bit extreme, as I already have the dependencies Numpy, Scipy, Sympy, Nibabel and Matplotlib. The new packages are the following: dvipng fonts-cabin fonts-comfortaa fonts-freefont-otf fonts-gfs-artemisia fonts-gfs-complutum fonts-gfs-didot fonts-gfs-neohellenic fonts-gfs-olga fonts-gfs-solomos fonts-inconsolata fonts-junicode fonts-lato fonts-linuxlibertine fonts-lobster fonts-lobstertwo fonts-oflb-asana-math fonts-sil-gentium fonts-sil-gentium-basic fonts-stix libwxbase2.8-0 libwxgtk-media2.8-0 libwxgtk2.8-0 mayavi2 python-apptools python-envisage python-nipy python-nipy-lib python-pyface python-sympy python-traits python-traitsui python-vtk python-wxgtk2.8 python-wxversion tcl-vtk texlive-fonts-extra texlive-fonts-extra-doc ttf-adf-accanthis ttf-adf-gillius This list might be so long because I'm using anaconda instead of enthought for my python distribution. Is there a reason why I can't install the newest version of nipy with pip on Ubuntu? Best, Michael PS: I just saw that the installation via github and the command `python setup.py install` works (with a lot of "package missing" messages), but only if I first remove any older nipy version from my python/site-packages directory. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Sat Sep 5 07:12:12 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 5 Sep 2015 06:12:12 +0100 Subject: [Neuroimaging] nibabel.trackvis.read error In-Reply-To: References: <55E81150.7000903@erasmusmc.nl> <55E98001.7070902@erasmusmc.nl> Message-ID: Hi, On Fri, Sep 4, 2015 at 11:18 PM, Matthew Brett wrote: > Hi, > > On Fri, Sep 4, 2015 at 12:27 PM, C.D. Langen wrote: >> Hi Matthew, >> >> Thank you for your quick reply. Below are links to two datasets, one >> that failed to be read by nib.trackvis, and one that succeeded. Both can >> be viewed in Trackvis: >> >> https://dl.dropboxusercontent.com/u/57089115/fail.trk >> https://dl.dropboxusercontent.com/u/57089115/succeed.trk >> >> Best, >> Carolyn >> >> On 03-09-15 19:31, Matthew Brett wrote: >>> Hi, >>> >>> On Thu, Sep 3, 2015 at 10:22 AM, C.D. Langen wrote: >>>> Greetings, >>>> >>>> When I try to run the following line of code: >>>> >>>> streams, hdr = nib.trackvis.read(os.path.join(subjDir, 'dti.trk'), >>>> points_space='voxel') >>>> >>>> I get the following error, but only for a small subset of subjects: >>>> >>>> File >>>> "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", >>>> line 223, in read >>>> streamlines = list(streamlines) >>>> File >>>> "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", >>>> line 202, in track_gen >>>> buffer = pts_str) >>>> TypeError: buffer is too small for requested array >>>> >>>> >>>> Someone else had a similar error >>>> (http://mail.scipy.org/pipermail/nipy-devel/2012-March/007272.html) >>>> which they resolved by using nibabel from github. I tried this, but got >>>> the same error. >>>> >>>> All subjects' trackvis files were produced in exactly the same way using >>>> Trackvis, so I am not sure why only a few subjects fail while others >>>> succeed. Any ideas? >>>> >>>> Thank you in advance for your help in resolving this issue. 
>>> Thanks for the report - would you mind put the file online somewhere >>> so we can have a look? > > What seems to be happening is that the last track in the file is > truncated. It says that it is 120 points long (n_pts field), but > there is only data in the file for 77 points. > > I tried reading the file with this MATLAB toolbox : > https://github.com/johncolby/along-tract-stats > >>> [header, tracks] = trk_read('fail.trk'); >>> tracks(end) > > ans = > > nPoints: 120 > matrix: [77x3 single] > >>> tracks(end-1) > > ans = > > nPoints: 91 > matrix: [91x3 single] > > Note that the last track has 120 'nPoints' but only 77 points. The > previous track has 91 'nPoints' and 91 points, which is what I would > expect. So I think the file is mal-formed and trackvis is being more > generous than nibabel. I think nibabel should have a mode where it > passes through this kind of thing. In the meantime, if you want to > read all but the last shortened track, you could do something like > this: > > import nibabel as nib > > track_gen, hdr = nib.trackvis.read('fail.trk', as_generator=True) > > tracks = [] > while True: > try: > track = next(track_gen) > except (StopIteration, TypeError): > break > tracks.append(track) I put up a potential fix here : https://github.com/nipy/nibabel/pull/346 If merged, this will allow the following: tracks, hdr = nib.trackvis.read('fail.trk', errors='lenient') Comments welcome, Cheers, Matthew From bertrand.thirion at inria.fr Sat Sep 5 22:17:34 2015 From: bertrand.thirion at inria.fr (bthirion) Date: Sat, 5 Sep 2015 22:17:34 +0200 Subject: [Neuroimaging] iteraxis API - we need feedback In-Reply-To: References: Message-ID: <55EB4DDE.40203@inria.fr> On 05/09/2015 04:22, Satrajit Ghosh wrote: > > The questions are: > > * should this be a method on the image (`img.iteraxis`), the dataobj > (`img.dataobj.iteraxis`) or should it be a standalone function that > knows about arrays and array proxies? 
(`nibabel.iteraxis`); > > > img.iteraxis seems like a good place. +1 Best, Bertrand -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbpoline at gmail.com Mon Sep 7 04:37:08 2015 From: jbpoline at gmail.com (JB Poline) Date: Sun, 6 Sep 2015 19:37:08 -0700 Subject: [Neuroimaging] iteraxis API - we need feedback In-Reply-To: <55EB4DDE.40203@inria.fr> References: <55EB4DDE.40203@inria.fr> Message-ID: img.iteraxis also looks the most intuitive to me cheers JB On Sat, Sep 5, 2015 at 1:17 PM, bthirion wrote: > On 05/09/2015 04:22, Satrajit Ghosh wrote: > > >> The questions are: >> >> * should this be a method on the image (`img.iteraxis`), the dataobj >> (`img.dataobj.iteraxis`) or should it be a standalone function that >> knows about arrays and array proxies? (`nibabel.iteraxis`); > > > img.iteraxis seems like a good place. > > +1 > > Best, > > Bertrand > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > From mrbago at gmail.com Sat Sep 12 03:00:10 2015 From: mrbago at gmail.com (Bago) Date: Fri, 11 Sep 2015 18:00:10 -0700 Subject: [Neuroimaging] Has anyone tried using dipy's piesno function? Message-ID: Hi all, I've been playing around with dipy's piesno function and wanted to know if anyone else had much experience with it. I've found that most of our data has very low intensities in the background and that piesno tends to grossly underestimate the noise in these data sets. I've tried estimating the noise a few other ways including the estimate_noise function from the same module. I know that piesno was not developed for diffusion data, but wanted to know if anyone is regularly using it for diffusion data and how well it's working for them. Thanks, Bago -------------- next part -------------- An HTML attachment was scrubbed... 
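One baseline for the kind of comparison Bago describes — a hedged, model-free sketch, not PIESNO itself — is a robust scaled-MAD estimate over voxels known to be background; it assumes roughly Gaussian noise in the masked region, which the low-intensity, scanner-filtered backgrounds he mentions can violate:

```python
import numpy as np

def background_sigma(data, bg_mask):
    """Estimate Gaussian noise sigma from background voxels via the scaled MAD.

    1.4826 rescales the median absolute deviation to a standard deviation;
    this is a simple baseline to compare against PIESNO-style estimates.
    """
    bg = np.asarray(data)[bg_mask]
    return 1.4826 * np.median(np.abs(bg - np.median(bg)))

# Synthetic check: a pure-noise "background" volume with sigma = 5.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 5.0, size=(32, 32, 8))
mask = np.ones(vol.shape, dtype=bool)
print(background_sigma(vol, mask))  # should land close to 5.0
```

If this baseline and PIESNO disagree badly on real data, that points at the background assumption (filtering, non-stationarity) rather than at either estimator's arithmetic.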
URL: From stjeansam at gmail.com Sat Sep 12 12:23:41 2015 From: stjeansam at gmail.com (Samuel St-Jean) Date: Sat, 12 Sep 2015 06:23:41 -0400 Subject: [Neuroimaging] Has anyone tried using dipy's piesno function? In-Reply-To: References: Message-ID: It's the opposite, piesno is made for diffusion data and estimate_sigma is not. Both will give weird result if the scanner changes the intensity of the background. And finally, I am totally biased toward piesno since the math behind it makes sense, provided you have stationnary background on each slice. Scanners don't always respect that one for esthetic purposes apparently. On Sep 12, 2015 3:00 AM, "Bago" wrote: > Hi all, > > I've been playing around with dipy's piesno function and wanted to know > if anyone else had much experience with it. I've found that most of our > data has very low intensities in the background and that piesno tends to > grossly underestimate the noise in these data sets. I've tried estimating > the noise a few other ways including the estimate_noise function from the > same module. > > I know that piesno was not developed for diffusion data, but wanted to > know if anyone is regularly using it for diffusion data and how well it's > working for them. > > Thanks, > Bago > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njvack at wisc.edu Mon Sep 14 21:06:36 2015 From: njvack at wisc.edu (Nate Vack) Date: Mon, 14 Sep 2015 19:06:36 +0000 Subject: [Neuroimaging] Electrophysiology software in python? Message-ID: Hi all, Sorry if this is OT, but maybe it isn't? Anyhow: I'm looking around to see if there's a decent Python package for the scoring, processing, and analysis of electrophysiology (EDA, EMG, respiration, etc) data in python. 
There's PyMNE, but it looks like that's fairly specifically geared towards EEG/EMG -- is there anything that's more oriented in the "small number of channels that aren't necessarily EEG-like" direction? There's a part of my brain that feels like it's seen something like this before, but my Google-fu is proving too weak. Thanks, -Nate -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwaskom at stanford.edu Mon Sep 14 21:10:54 2015 From: mwaskom at stanford.edu (Michael Waskom) Date: Mon, 14 Sep 2015 12:10:54 -0700 Subject: [Neuroimaging] Electrophysiology software in python? In-Reply-To: References: Message-ID: Hi Nate: I think nitime might have much of what you are looking for? http://nipy.org/nitime/ Best, Michael On Mon, Sep 14, 2015 at 12:06 PM, Nate Vack wrote: > Hi all, > > Sorry if this is OT, but maybe it isn't? Anyhow: I'm looking around to see > if there's a decent Python package for the scoring, processing, and > analysis of electrophysiology (EDA, EMG, respiration, etc) data in python. > There's PyMNE, but it looks like that's fairly specifically geared towards > EEG/EMG -- is there anything that's more oriented in the "small number of > channels that aren't necessarily EEG-like" direction? > > There's a part of my brain that feels like it's seen something like this > before, but my Google-fu is proving too weak. > > Thanks, > -Nate > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at onerussian.com Mon Sep 14 21:44:29 2015 From: lists at onerussian.com (Yaroslav Halchenko) Date: Mon, 14 Sep 2015 15:44:29 -0400 Subject: [Neuroimaging] Electrophysiology software in python? 
In-Reply-To: References: Message-ID: <20150914194429.GV10728@onerussian.com> have a look at listings withing http://www.onerussian.com/tmp/eppy-handout.pdf in particular stimfit and spykeviewer On Mon, 14 Sep 2015, Nate Vack wrote: > Hi all, > Sorry if this is OT, but maybe it isn't? Anyhow: I'm looking around to see if > there's a decent Python package for the scoring, processing, and analysis of > electrophysiology (EDA, EMG, respiration, etc) data in python. There's PyMNE, > but it looks like that's fairly specifically geared towards EEG/EMG -- is there > anything that's more oriented in the "small number of channels that aren't > necessarily EEG-like" direction? > There's a part of my brain that feels like it's seen something like this > before, but my Google-fu is proving too weak. > Thanks, > -Nate > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From denis.engemann at gmail.com Mon Sep 14 22:30:01 2015 From: denis.engemann at gmail.com (Denis-Alexander Engemann) Date: Mon, 14 Sep 2015 22:30:01 +0200 Subject: [Neuroimaging] Electrophysiology software in python? In-Reply-To: References: Message-ID: Hi Nate, please take a deeper look into MNE, it's very flexible and people are even using it for ECOG. See the blog posts by Chris Holdgraf, for example. http://chrisholdgraf.com/using-mne-with-custom-or-non-standardized-data-formats/ Best, Denis On Mon, Sep 14, 2015 at 9:10 PM, Michael Waskom wrote: > Hi Nate: > > I think nitime might have much of what you are looking for? 
> http://nipy.org/nitime/ > > Best, > Michael > > On Mon, Sep 14, 2015 at 12:06 PM, Nate Vack wrote: > >> Hi all, >> >> Sorry if this is OT, but maybe it isn't? Anyhow: I'm looking around to >> see if there's a decent Python package for the scoring, processing, and >> analysis of electrophysiology (EDA, EMG, respiration, etc) data in python. >> There's PyMNE, but it looks like that's fairly specifically geared towards >> EEG/EMG -- is there anything that's more oriented in the "small number of >> channels that aren't necessarily EEG-like" direction? >> >> There's a part of my brain that feels like it's seen something like this >> before, but my Google-fu is proving too weak. >> >> Thanks, >> -Nate >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njvack at wisc.edu Mon Sep 14 21:48:40 2015 From: njvack at wisc.edu (Nate Vack) Date: Mon, 14 Sep 2015 19:48:40 +0000 Subject: [Neuroimaging] Electrophysiology software in python? In-Reply-To: References: Message-ID: It's pretty close -- at least, it has the numerical methods we're looking for. The big thing we're missing, though, is a GUI for QC and scoring. Thanks! -n On Mon, Sep 14, 2015 at 2:18 PM Michael Waskom wrote: > Hi Nate: > > I think nitime might have much of what you are looking for? > http://nipy.org/nitime/ > > Best, > Michael > > On Mon, Sep 14, 2015 at 12:06 PM, Nate Vack wrote: > >> Hi all, >> >> Sorry if this is OT, but maybe it isn't? Anyhow: I'm looking around to >> see if there's a decent Python package for the scoring, processing, and >> analysis of electrophysiology (EDA, EMG, respiration, etc) data in python. 
>> There's PyMNE, but it looks like that's fairly specifically geared towards >> EEG/EMG -- is there anything that's more oriented in the "small number of >> channels that aren't necessarily EEG-like" direction? >> >> There's a part of my brain that feels like it's seen something like this >> before, but my Google-fu is proving too weak. >> >> Thanks, >> -Nate >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandre.gramfort at telecom-paristech.fr Tue Sep 15 09:20:24 2015 From: alexandre.gramfort at telecom-paristech.fr (Alexandre Gramfort) Date: Tue, 15 Sep 2015 09:20:24 +0200 Subject: [Neuroimaging] Electrophysiology software in python? In-Reply-To: References: Message-ID: hi, wrt GUIs if you have a Raw object in mne-python you can just do: raw.plot() you'll have a interactive data browser that you can use for QC and scoring. You cannot however currently manually annotate the data. HTH Alex On Mon, Sep 14, 2015 at 9:48 PM, Nate Vack wrote: > It's pretty close -- at least, it has the numerical methods we're looking > for. The big thing we're missing, though, is a GUI for QC and scoring. > > Thanks! > -n > > On Mon, Sep 14, 2015 at 2:18 PM Michael Waskom wrote: >> >> Hi Nate: >> >> I think nitime might have much of what you are looking for? >> http://nipy.org/nitime/ >> >> Best, >> Michael >> >> On Mon, Sep 14, 2015 at 12:06 PM, Nate Vack wrote: >>> >>> Hi all, >>> >>> Sorry if this is OT, but maybe it isn't? Anyhow: I'm looking around to >>> see if there's a decent Python package for the scoring, processing, and >>> analysis of electrophysiology (EDA, EMG, respiration, etc) data in python. 
>>> There's PyMNE, but it looks like that's fairly specifically geared towards >>> EEG/EMG -- is there anything that's more oriented in the "small number of >>> channels that aren't necessarily EEG-like" direction? >>> >>> There's a part of my brain that feels like it's seen something like this >>> before, but my Google-fu is proving too weak. >>> >>> Thanks, >>> -Nate >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > From krzysztof.gorgolewski at gmail.com Wed Sep 16 01:23:24 2015 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Tue, 15 Sep 2015 16:23:24 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released Message-ID: Dear all, I'm very proud to announce the next release of Nipype - 0.11. This release brings many improvements in support of FSL, SPM (especially SPM12), AFNI, ANTs, and MRTrix (especially MRTrix3). Among new features we introduced support for JSON , more efficient execution on SLURM clusters , and a new tool - nipype_cmd - that allows you to run any Nipype interfaces on the command line. The last feature will be especially useful when trying to run Python or SPM based algorithms from command line or in Bash scripts. Full list of changes is available here . Nipype can be installed from PyPi by typing: pip install -U nipype or easy_install -U nipype It should be also available in NeuroDebian soon. On behalf of all of the contributors , Chris Gorgolewski -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bertrand.thirion at inria.fr Thu Sep 17 16:20:02 2015 From: bertrand.thirion at inria.fr (bthirion) Date: Thu, 17 Sep 2015 16:20:02 +0200 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: Message-ID: <55FACC12.9060409@inria.fr> Great, congratulations ! Bertrand On 16/09/2015 01:23, Chris Filo Gorgolewski wrote: > Dear all, > I'm very proud to announce the next release of Nipype - 0.11. This > release brings many improvements in support of FSL, SPM (especially > SPM12), AFNI, ANTs, and MRTrix (especially MRTrix3). Among new > features we introduced support for JSON > , more efficient execution > on SLURM clusters , and a > new tool - nipype_cmd - > that allows you to run any Nipype interfaces on the command line. The > last feature will be especially useful when trying to run Python or > SPM based algorithms from command line or in Bash scripts. Full list > of changes is available here . > > Nipype can be installed from PyPi by typing: > > pip install -U nipype > > or > > easy_install -U nipype > > It should be also available in NeuroDebian soon. > > On behalf of all of the contributors > , > Chris Gorgolewski > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Sep 17 18:40:58 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 17 Sep 2015 09:40:58 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: <55FACC12.9060409@inria.fr> References: <55FACC12.9060409@inria.fr> Message-ID: Hi guys, On Thu, Sep 17, 2015 at 7:20 AM, bthirion wrote: > Great, congratulations ! Congratulations too. It might be worth checking the buildbots - some of them were offline until recently, but they are all red at the moment... 
Cheers, Matthew From krzysztof.gorgolewski at gmail.com Thu Sep 17 18:54:35 2015 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Thu, 17 Sep 2015 09:54:35 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: Do you mean CircleCI? They had some issues with their infrastructure, but the tests are rerunning now. Best, Chris On Thu, Sep 17, 2015 at 9:40 AM, Matthew Brett wrote: > Hi guys, > > On Thu, Sep 17, 2015 at 7:20 AM, bthirion > wrote: > > Great, congratulations ! > > Congratulations too. It might be worth checking the buildbots - some > of them were offline until recently, but they are all red at the > moment... > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Sep 17 18:56:25 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 17 Sep 2015 09:56:25 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: Hi, On Thu, Sep 17, 2015 at 9:54 AM, Chris Filo Gorgolewski wrote: > Do you mean CircleCI? They had some issues with their infrastructure, but > the tests are rerunning now. Sorry - no - I meant the buildbots : http://nipy.bic.berkeley.edu/builders See you, Matthew From vsochat at stanford.edu Thu Sep 17 18:58:59 2015 From: vsochat at stanford.edu (vanessa sochat) Date: Thu, 17 Sep 2015 09:58:59 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: What is a buildbot? On Thu, Sep 17, 2015 at 9:56 AM, Matthew Brett wrote: > Hi, > > On Thu, Sep 17, 2015 at 9:54 AM, Chris Filo Gorgolewski > wrote: > > Do you mean CircleCI? 
They had some issues with their infrastructure, but > > the tests are rerunning now. > > Sorry - no - I meant the buildbots : http://nipy.bic.berkeley.edu/builders > > See you, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -- Vanessa Villamia Sochat Stanford University (603) 321-0676 -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Thu Sep 17 19:07:33 2015 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Thu, 17 Sep 2015 10:07:33 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: Thanks for the pointer. Weird new errors that neither Travis nor Circle found! BTW you can turn off the windows and python 2.6 builds since we don't support them. Best, Chris On Thu, Sep 17, 2015 at 9:56 AM, Matthew Brett wrote: > Hi, > > On Thu, Sep 17, 2015 at 9:54 AM, Chris Filo Gorgolewski > wrote: > > Do you mean CircleCI? They had some issues with their infrastructure, but > > the tests are rerunning now. > > Sorry - no - I meant the buildbots : http://nipy.bic.berkeley.edu/builders > > See you, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Sep 17 19:11:07 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 17 Sep 2015 10:11:07 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: Hi, On Thu, Sep 17, 2015 at 10:07 AM, Chris Filo Gorgolewski wrote: > Thanks for the pointer. Weird new errors that neither Travis nor Circle > found! 
> > BTW you can turn off the windows and python 2.6 builds since we don't > support them. I thought you were planning to keep windows going as far as possible : https://github.com/nipy/nipype/issues/1105 ? Certainly it would make nipype much more useful for teaching. Cheers, Matthew From arokem at gmail.com Thu Sep 17 19:12:51 2015 From: arokem at gmail.com (Ariel Rokem) Date: Thu, 17 Sep 2015 10:12:51 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: Hi Vanessa, On Thu, Sep 17, 2015 at 9:58 AM, vanessa sochat wrote: > What is a buildbot? > > It's infrastructure for downloading the software, installing it, and testing it on a variety of platforms. IIUC, It depends on this http://buildbot.net/ The nipy nibotmi repo (https://github.com/nipy/nibotmi) is the configuration for this thing. Cheers, Ariel > On Thu, Sep 17, 2015 at 9:56 AM, Matthew Brett > wrote: > >> Hi, >> >> On Thu, Sep 17, 2015 at 9:54 AM, Chris Filo Gorgolewski >> wrote: >> > Do you mean CircleCI? They had some issues with their infrastructure, >> but >> > the tests are rerunning now. >> >> Sorry - no - I meant the buildbots : >> http://nipy.bic.berkeley.edu/builders >> >> See you, >> >> Matthew >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > > > -- > Vanessa Villamia Sochat > Stanford University > (603) 321-0676 > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arokem at gmail.com Thu Sep 17 19:13:19 2015 From: arokem at gmail.com (Ariel Rokem) Date: Thu, 17 Sep 2015 10:13:19 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: Also: Congratulations and hooray!! On Thu, Sep 17, 2015 at 10:11 AM, Matthew Brett wrote: > Hi, > > On Thu, Sep 17, 2015 at 10:07 AM, Chris Filo Gorgolewski > wrote: > > Thanks for the pointer. Weird new errors that neither Travis nor Circle > > found! > > > > BTW you can turn off the windows and python 2.6 builds since we don't > > support them. > > I thought you were planning to keep windows going as far as possible : > https://github.com/nipy/nipype/issues/1105 ? > > Certainly it would make nipype much more useful for teaching. > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Sep 17 19:13:11 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 17 Sep 2015 10:13:11 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: Hi, On Thu, Sep 17, 2015 at 9:58 AM, vanessa sochat wrote: > What is a buildbot? http://buildbot.net/ - "The Continuous Integration Framework" In general a buildbot farm is a set of real computers set up to build and test your code when you do a commit e.g. to github. The advantage is that it can test on a large variety of configurations, including 32 / 64 bit windows, OSX from 10.6, different linuxes, PPC / big-endian and so on. We've had a lot of use from the buildbots ironing out subtle compilation and testing errors on different platforms. 
Cheers, Matthew From vsochat at stanford.edu Thu Sep 17 19:20:55 2015 From: vsochat at stanford.edu (vanessa sochat) Date: Thu, 17 Sep 2015 10:20:55 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: Ah, very cool! It sounds like it is in the family of continuous integration. I don't set up different OS / environments to test on in any of my repos, but I remember seeing that the old travis version of nilearn did . This makes a lot of sense to ensure that users with different flavors of computers get working functionality. Thanks for the details! On Thu, Sep 17, 2015 at 10:12 AM, Ariel Rokem wrote: > Hi Vanessa, > > On Thu, Sep 17, 2015 at 9:58 AM, vanessa sochat > wrote: > >> What is a buildbot? >> >> It's infrastructure for downloading the software, installing it, and > testing it on a variety of platforms. IIUC, It depends on this > http://buildbot.net/ > > The nipy nibotmi repo (https://github.com/nipy/nibotmi) is the > configuration for this thing. > > Cheers, > > Ariel > > > >> On Thu, Sep 17, 2015 at 9:56 AM, Matthew Brett >> wrote: >> >>> Hi, >>> >>> On Thu, Sep 17, 2015 at 9:54 AM, Chris Filo Gorgolewski >>> wrote: >>> > Do you mean CircleCI? They had some issues with their infrastructure, >>> but >>> > the tests are rerunning now. 
>>> >>> Sorry - no - I meant the buildbots : >>> http://nipy.bic.berkeley.edu/builders >>> >>> See you, >>> >>> Matthew >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >> >> >> >> -- >> Vanessa Villamia Sochat >> Stanford University >> (603) 321-0676 >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -- Vanessa Villamia Sochat Stanford University (603) 321-0676 -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Sep 17 19:41:55 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 17 Sep 2015 10:41:55 -0700 Subject: [Neuroimaging] Nipype 0.11 was just released In-Reply-To: References: <55FACC12.9060409@inria.fr> Message-ID: On Thu, Sep 17, 2015 at 10:20 AM, vanessa sochat wrote: > Ah, very cool! It sounds like it is in the family of continuous integration. > I don't set up different OS / environments to test on in any of my repos, > but I remember seeing that the old travis version of nilearn did. This makes > a lot of sense to ensure that users with different flavors of computers get > working functionality. Thanks for the details! Yes, we've found a lot of cross-platform stuff in dipy in particular, trying to get binary code compiling and running on Windows / OSX. But even with nibabel, it has been very useful to catch subtle differences in Python / numpy configuration, such as different float precisions. It also allows us to test a somewhat random selection of numpy and scipy versions, which often catches changes in those libraries. 
Other examples: * Python itself : https://www.python.org/dev/buildbot * Webkit : https://build.webkit.org/ * Mozilla : https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound * Google Chromium : http://build.chromium.org/p/chromium/waterfall (beautiful!) * GNOME : http://build.gnome.org (from http://trac.buildbot.net/wiki/SuccessStories) Cheers, Matthew From katiesurrence at gmail.com Fri Sep 18 20:13:24 2015 From: katiesurrence at gmail.com (Katie Surrence) Date: Fri, 18 Sep 2015 14:13:24 -0400 Subject: [Neuroimaging] FmriRealign4D: how to make it write files? Message-ID: Dear all, I asked this question on neurostars, but haven't received a reply, so I thought I'd try here. When I run a program containing this code snippet: ______________________ for sub in subjects.keys(): for run in subjects[sub]: infile = os.path.join(root, sub, 'func', run, '4D.nii') realigner = FmriRealign4d() realigner.inputs.in_file = [infile] realigner.inputs.tr = 2.2 realigner.inputs.slice_order = range(0, 34, 2) + range(1, 35, 2) realigner.inputs.time_interp = True outfile = os.path.join(root, sub, 'func', run, 'ra4D.nii') f = open(outfile, 'w') f.close() realigner._out_file = [outfile] parfile = os.path.join(root, sub, 'func', run, 'motpar.txt') p = open(parfile, 'w') p.close() realigner._par_file = [parfile] res = realigner.run() ________________ I get a bunch of output to the console to indicate realignment/slice timing is happening, but the only changes on disk are that my files are created. They remain at zero bytes. (Specifying the _out_file and _par_file attributes and making empty versions of the files were things I tried to coax it into writing files. My original version did neither of these things and had the same problem.) I appreciate any help you can provide. Best, Katie -- Katie Surrence, M.S. Research Coordinator Social Cognition Laboratory New York State Psychiatric Institute -------------- next part -------------- An HTML attachment was scrubbed... 
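On the `FmriRealign4d` question above: nipype interfaces pick their own output names, write them into the current working directory, and report them on `res.outputs` — assigning private attributes like `realigner._out_file`, or pre-creating empty files, has no effect, which is why the placeholders stay at zero bytes. A sketch of the usual pattern, with the import path and the `out_file`/`par_file` output names assumed from nipype of this era:

```python
def interleaved_order(n_slices):
    """Even-indexed slices first, then odd -- the order used in the script above."""
    return list(range(0, n_slices, 2)) + list(range(1, n_slices, 2))

def realign_run(infile, tr=2.2, n_slices=34):
    """Run FmriRealign4d and return the paths nipype reports for its outputs.

    Hypothetical sketch: the import path and output names are assumed, and
    the interface writes its files into the current working directory.
    """
    from nipype.interfaces.nipy import FmriRealign4d  # assumed import path

    realigner = FmriRealign4d()
    realigner.inputs.in_file = [infile]
    realigner.inputs.tr = tr
    realigner.inputs.slice_order = interleaved_order(n_slices)
    realigner.inputs.time_interp = True
    res = realigner.run()
    return res.outputs.out_file, res.outputs.par_file

print(interleaved_order(6))  # [0, 2, 4, 1, 3, 5]
```

Collecting the paths from `res.outputs` per run, instead of pre-naming files, also makes it easy to move or rename the results afterwards with `shutil.move`.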
URL: From matthew.brett at gmail.com Sat Sep 19 19:55:57 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 19 Sep 2015 10:55:57 -0700 Subject: [Neuroimaging] nibabel.trackvis.read error In-Reply-To: References: <55E81150.7000903@erasmusmc.nl> <55E98001.7070902@erasmusmc.nl> Message-ID: On Fri, Sep 4, 2015 at 10:12 PM, Matthew Brett wrote: > Hi, > > On Fri, Sep 4, 2015 at 11:18 PM, Matthew Brett wrote: >> Hi, >> >> On Fri, Sep 4, 2015 at 12:27 PM, C.D. Langen wrote: >>> Hi Matthew, >>> >>> Thank you for your quick reply. Below are links to two datasets, one >>> that failed to be read by nib.trackvis, and one that succeeded. Both can >>> be viewed in Trackvis: >>> >>> https://dl.dropboxusercontent.com/u/57089115/fail.trk >>> https://dl.dropboxusercontent.com/u/57089115/succeed.trk >>> >>> Best, >>> Carolyn >>> >>> On 03-09-15 19:31, Matthew Brett wrote: >>>> Hi, >>>> >>>> On Thu, Sep 3, 2015 at 10:22 AM, C.D. Langen wrote: >>>>> Greetings, >>>>> >>>>> When I try to run the following line of code: >>>>> >>>>> streams, hdr = nib.trackvis.read(os.path.join(subjDir, 'dti.trk'), >>>>> points_space='voxel') >>>>> >>>>> I get the following error, but only for a small subset of subjects: >>>>> >>>>> File >>>>> "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", >>>>> line 223, in read >>>>> streamlines = list(streamlines) >>>>> File >>>>> "/cm/shared/apps/python/2.7.6/lib/python2.7/site-packages/nibabel/trackvis.py", >>>>> line 202, in track_gen >>>>> buffer = pts_str) >>>>> TypeError: buffer is too small for requested array >>>>> >>>>> >>>>> Someone else had a similar error >>>>> (http://mail.scipy.org/pipermail/nipy-devel/2012-March/007272.html) >>>>> which they resolved by using nibabel from github. I tried this, but got >>>>> the same error. >>>>> >>>>> All subjects' trackvis files were produced in exactly the same way using >>>>> Trackvis, so I am not sure why only a few subjects fail while others >>>>> succeed. Any ideas? 
>>>>>
>>>>> Thank you in advance for your help in resolving this issue.
>>>> Thanks for the report - would you mind putting the file online somewhere
>>>> so we can have a look?
>>
>> What seems to be happening is that the last track in the file is
>> truncated. It says that it is 120 points long (n_pts field), but
>> there is only data in the file for 77 points.
>>
>> I tried reading the file with this MATLAB toolbox :
>> https://github.com/johncolby/along-tract-stats
>>
>>>> [header, tracks] = trk_read('fail.trk');
>>>> tracks(end)
>>
>> ans =
>>
>>     nPoints: 120
>>      matrix: [77x3 single]
>>
>>>> tracks(end-1)
>>
>> ans =
>>
>>     nPoints: 91
>>      matrix: [91x3 single]
>>
>> Note that the last track has 120 'nPoints' but only 77 points. The
>> previous track has 91 'nPoints' and 91 points, which is what I would
>> expect. So I think the file is mal-formed and trackvis is being more
>> generous than nibabel. I think nibabel should have a mode where it
>> passes through this kind of thing. In the meantime, if you want to
>> read all but the last shortened track, you could do something like
>> this:
>>
>>     import nibabel as nib
>>
>>     track_gen, hdr = nib.trackvis.read('fail.trk', as_generator=True)
>>
>>     tracks = []
>>     while True:
>>         try:
>>             track = next(track_gen)
>>         except (StopIteration, TypeError):
>>             break
>>         tracks.append(track)
>
> I put up a potential fix here : https://github.com/nipy/nibabel/pull/346
>
> If merged, this will allow the following:
>
>     tracks, hdr = nib.trackvis.read('fail.trk', errors='lenient')

It's merged now, so I think the current github version of nibabel will
read your file correctly,

Best,

Matthew

From matthew.brett at gmail.com Fri Sep 25 02:08:39 2015
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 24 Sep 2015 17:08:39 -0700
Subject: [Neuroimaging] Upcoming nipy release
In-Reply-To:
References: <20150828135930.GR19455@onerussian.com>
Message-ID:

Hi,

On Sat, Aug 29, 2015 at 1:59 AM, Matthew Brett wrote:
> Hi,
>
> On Fri, Aug 28, 2015
at 2:59 PM, Yaroslav Halchenko wrote:
>>
>> On Fri, 28 Aug 2015, Matthew Brett wrote:
>>> I'm just finishing up some fixes to get the tests passing for nipy.
>>> If any of you have a few minutes, I would be very grateful for reviews
>>> on the current PRs:
>>
>>> https://github.com/nipy/nipy/pull/348
>>> https://github.com/nipy/nipy/pull/346
>>> https://github.com/nipy/nipy/pull/341
>>
>>> Once we have all the tests passing on travis, then we are in the home
>>> straight for a release.
>>
>> AWESOME!
>>
>> FWIW, I don't remember if I have shared before...
>> For DataLad I have set up buildbot to
>>
>> 1. forward build reports back to travis
>> 2. monitor for PRs and run tests on them as well
>>
>> it might be a well worth addition to nibotmi, although I don't know when
>> myself I could jump to implement that, so decided just to share for now.
>>
>> Demo -- any datalad PR on github (with less than a 100 or so commits, some bug
>> causes my ad-hoc setup to not pick up those PRs for testing), e.g.:
>> https://github.com/datalad/datalad/pull/101 (scroll to the bottom ;) some are
>> failing atm)
>>
>> Our setup is https://github.com/datalad/buildbot which is in large part based on
>> https://github.com/ethereum/ethereum-buildbot and apparently work on supporting
>> pull_requests in stock buildbot since then was accepted!
>> https://github.com/buildbot/buildbot/pull/1632
>> so I might look into redoing it using stock features
>
> That would be really good - if you do have time to do it. It would be
> ideal to get the buildbots integrated into the github interface...

After thinking about it for a little bit, I was worried about a)
overwhelming the buildbot machines, and b) allowing any PR to execute
code on the buildbot machines.

I wonder whether we could use Homu instead?
https://github.com/barosl/homu

I think this would mean that approved reviewers could mark the PR with
a comment like '@homu-user r+' to the PR, and this could trigger
builds both on travis-ci and the buildbots, and the PR would only get
merged if all the tests pass. I think.

That seems like a good mix - an approved reviewer has to OK it before
it is tested on the buildbots and travis before merging.

What do you think?

Matthew

Links:

http://homu.io/
https://www.reddit.com/r/rust/comments/39sogp/homu_a_gatekeeper_for_your_commits/
http://graydon.livejournal.com/186550.html
http://homu.io/q/numpy/numpy

From lists at onerussian.com Fri Sep 25 02:30:33 2015
From: lists at onerussian.com (Yaroslav Halchenko)
Date: Thu, 24 Sep 2015 20:30:33 -0400
Subject: [Neuroimaging] Upcoming nipy release
In-Reply-To:
References: <20150828135930.GR19455@onerussian.com>
Message-ID: <20150925003033.GW30459@onerussian.com>

On Thu, 24 Sep 2015, Matthew Brett wrote:
> >> Our setup is https://github.com/datalad/buildbot which is in large based on
> >> https://github.com/ethereum/ethereum-buildbot and apparently work on supporting
> >> pull_requests in stock buildbot since then was accepted!
> >> https://github.com/buildbot/buildbot/pull/1632
> >> so I might look into redoing it using stock features

> > That would be really good - if you do have time to do it. It would be
> > ideal to get the buildbots integrated into the github interface...

> After thinking about it for a little bit, I was worried about a)
> overwhelming the buildbot machines, and b) allowing any PR to execute
> code on the buildbot machines.

our setup restricts automagic testing of PRs only for a limited set of
github logins. For the rest of PRs I (or other developers with enough
rights) would need to add a label 'buildbot' to the PR to trigger
buildbot considering it (so similar to homu's approvals)

> I wonder whether we could use Homu
> instead?
> https://github.com/barosl/homu I thought I saw all of the possible solutions when I was setting it up a while back, may be Homu too... there seemed to be quite a bit of new development in it since then. > I think this would mean that approved reviewers could mark the PR with > a comment like '@homu-user r+' to the PR, and this could trigger > builds both on travis-ci, and the buildbots, and the PR would only get > merged if the all the tests pass. I think. > That seems like a good mix - an approved reviewer has to OK it before > it is tested on the buildbots and travis before merging. > What do you think? yeah -- sounds good, but on the other hand requires additional interaction (unless there are automatic "approvals" for testing for some logins). it is for sure probably better than the ad-hoc solution I adopted, but it might still be better to re-review what is the built-in support within buildbot atm. It might be easier to provide similar logic if not present "natively" -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From jbpoline at gmail.com Fri Sep 25 08:39:54 2015 From: jbpoline at gmail.com (JB Poline) Date: Fri, 25 Sep 2015 08:39:54 +0200 Subject: [Neuroimaging] Upcoming nipy release In-Reply-To: <20150925003033.GW30459@onerussian.com> References: <20150828135930.GR19455@onerussian.com> <20150925003033.GW30459@onerussian.com> Message-ID: Sounds very reasonable to me! JB On Fri, Sep 25, 2015 at 2:30 AM, Yaroslav Halchenko wrote: > > On Thu, 24 Sep 2015, Matthew Brett wrote: >> >> Our setup is https://github.com/datalad/buildbot which is in large based on >> >> https://github.com/ethereum/ethereum-buildbot and apparently work on supporting >> >> pull_requests in stock buildbot since then was accepted! 
>> >> https://github.com/buildbot/buildbot/pull/1632 >> >> so I might look into redoing it using stock features > >> > That would be really good - if you do have time to do it. It would be >> > ideal to get the buildbots integrated into the github interface... > >> After thinking about it for a little bit, I was worried about a) >> overwhelming the buildbot machines, and b) allowing any PR to execute >> code on the buildbot machines. > > our setup restricts automagic testing of PRs only for a limited set of > github logins. For the rest of PRs I (or other developers with enough > rights) would need to add a label 'buildbot' to the PR to trigger > buildbot considering it (so similar to homu's approvals) > >> I wonder whether we could use Homu >> instead? >> https://github.com/barosl/homu > > I thought I saw all of the possible solutions when I was setting it up a > while back, may be Homu too... there seemed to be quite a bit of new > development in it since then. > >> I think this would mean that approved reviewers could mark the PR with >> a comment like '@homu-user r+' to the PR, and this could trigger >> builds both on travis-ci, and the buildbots, and the PR would only get >> merged if the all the tests pass. I think. > >> That seems like a good mix - an approved reviewer has to OK it before >> it is tested on the buildbots and travis before merging. > >> What do you think? > > yeah -- sounds good, but on the other hand requires additional > interaction (unless there are automatic "approvals" for testing for some > logins). it is for sure probably better than the ad-hoc solution I > adopted, but it might still be better to re-review what is the built-in > support within buildbot atm. It might be easier to provide similar > logic if not present "natively" > > -- > Yaroslav O. 
Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging From matthew.brett at gmail.com Sat Sep 26 03:27:04 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 25 Sep 2015 18:27:04 -0700 Subject: [Neuroimaging] Upcoming nipy release In-Reply-To: References: <20150828135930.GR19455@onerussian.com> <20150925003033.GW30459@onerussian.com> Message-ID: Hi, I think we're nearly there now. I've reviewed all the PRs but one - I think all the current nipy PRs should go into the release when ready. The one I'm working on at the moment, that doesn't need input from anyone yet, is https://github.com/nipy/nipy/pull/301 by Jonathan. Could y'all have a look at the remaining PRs over the next couple of days, clear up the remaining comments? Thanks a lot, Matthew From gadluru at gmail.com Fri Sep 25 00:02:57 2015 From: gadluru at gmail.com (Ganesh Adluru) Date: Thu, 24 Sep 2015 16:02:57 -0600 Subject: [Neuroimaging] Tractography for DSI data Message-ID: Hello all, I was wondering if dipy supports any type of tractography for DSI data, Thanks Ganesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From garyfallidis at gmail.com Sat Sep 26 20:36:58 2015 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Sat, 26 Sep 2015 14:36:58 -0400 Subject: [Neuroimaging] Tractography for DSI data In-Reply-To: References: Message-ID: Hi Ganesh, I hope I understood your question. Let us know otherwise. Yes! Any type of tractography implemented in Dipy will work with DSI data. 
Of course if you have DSI data you may also want to use a DSI
reconstruction model and then feed that to peaks_from_model and to the
tracking algorithm. These tutorials are your friends here:

http://nipy.org/dipy/examples_built/reconst_dsi.html#example-reconst-dsi
http://nipy.org/dipy/examples_built/tracking_quick_start.html#example-tracking-quick-start

And then when you get the main idea move into more advanced tracking here:

http://nipy.org/dipy/examples_built/introduction_to_basic_tracking.html#example-introduction-to-basic-tracking

You may also want to have a look at deconvolved DSI using the
Canales-Rodriguez method. You can use this method to create sharper
ODFs and therefore improve tracking.

http://nipy.org/dipy/examples_built/reconst_dsid.html#example-reconst-dsid

Alternatively you can also use the SHORE model with DSI data.

http://nipy.org/dipy/examples_built/reconst_shore.html#example-reconst-shore

Was this information helpful to you? Please give us some feedback if
you start using these methods implemented in Dipy.

Best regards,
Eleftherios

On Thu, Sep 24, 2015 at 6:02 PM, Ganesh Adluru wrote:
> Hello all,
>
> I was wondering if dipy supports any type of tractography for DSI data,
>
> Thanks
> Ganesh
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthew.brett at gmail.com Tue Sep 29 02:13:40 2015
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 28 Sep 2015 17:13:40 -0700
Subject: [Neuroimaging] compute_mask - algorithm changed?
Message-ID:

Hi guys,

One of our post-docs here was just asking about `nipy.labs.compute_mask`.
The docstring for that function has:

    Compute and write the mask of an image based on the grey level

    This is based on an heuristic proposed by T.Nichols:
    find the least dense point of the histogram, between fractions
    m and M of the total image histogram.

but I think the actual algorithm is:

* Sort values;
* Remove m from start and M from end of the sorted vector;
* Find the value to value difference;
* Use the value corresponding to the position of the largest
  difference as a threshold.

Is that right? Did the algorithm change at some point?

Cheers,

Matthew

From matthew.brett at gmail.com Tue Sep 29 02:18:51 2015
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 28 Sep 2015 17:18:51 -0700
Subject: [Neuroimaging] compute_mask - algorithm changed?
In-Reply-To:
References:
Message-ID:

Hi,

Fyi, I had to tweak the values to get a reasonable answer on some
data, but generally works well

cheers
JB

On Tue, Sep 29, 2015 at 2:18 AM, Matthew Brett wrote:
> On Mon, Sep 28, 2015 at 5:13 PM, Matthew Brett wrote:
>> Hi guys,
>>
>> One of our post-docs here was just asking about `nipy.labs.compute_mask`.
>>
>> The docstring for that function has:
>>
>>     Compute and write the mask of an image based on the grey level
>>     This is based on an heuristic proposed by T.Nichols:
>>     find the least dense point of the histogram, between fractions
>>     m and M of the total image histogram.
>>
>> but I think the actual algorithm is:
>>
>> * Sort values;
>> * Remove m from start and M from end of the sorted vector;
>> * Find the value to value difference;
>> * Use the value corresponding to the position of the largest
>>   difference as a threshold.
>>
>> Is that right? Did the algorithm change at some point?
>
> Ah - sorry - with a moment's reflection, I see that the large
> difference between the sorted values also represents a local point of
> low histogram density...
>
> Cheers2,
>
> Matthew
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging

From bertrand.thirion at inria.fr Tue Sep 29 09:19:27 2015
From: bertrand.thirion at inria.fr (bthirion)
Date: Tue, 29 Sep 2015 09:19:27 +0200
Subject: [Neuroimaging] compute_mask - algorithm changed?
In-Reply-To:
References:
Message-ID: <560A3B7F.6080301@inria.fr>

Hi,

No, we're still using the Nichols "anti mode" heuristic:

    delta = sorted_input[limiteinf + 1:limitesup + 1] \
        - sorted_input[limiteinf:limitesup]
    ia = delta.argmax()
    threshold = 0.5 * (sorted_input[ia + limiteinf]
                       + sorted_input[ia + limiteinf + 1])

But let me reiterate that this code is *dead*, and that nilearn should
be used for this kind of task.
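[The "anti mode" heuristic discussed in this thread is easy to sketch as a self-contained function. This is an illustrative reimplementation of the steps Matthew lists (sort, trim fractions m and M, threshold at the biggest gap between consecutive sorted values), not the nipy or nilearn code itself; the function name and the default `m`/`M` fractions are assumptions for the demo:]

```python
import numpy as np

def anti_mode_threshold(values, m=0.2, M=0.9):
    # Sort the intensities, keep only the [m, M] fraction of the sorted
    # vector, then place the threshold in the middle of the largest gap
    # between consecutive values -- the "least dense" histogram point.
    s = np.sort(np.ravel(values))
    lo, hi = int(m * s.size), int(M * s.size)
    delta = s[lo + 1:hi + 1] - s[lo:hi]
    ia = int(delta.argmax())
    return 0.5 * (s[ia + lo] + s[ia + lo + 1])

# Bimodal toy data: "background" spread over 0-1, "foreground" over 99-100.
vals = np.concatenate([np.linspace(0, 1, 50), np.linspace(99, 100, 50)])
t = anti_mode_threshold(vals)
print(t)  # → 50.0, the midpoint of the empty gap between the modes
```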
Bertrand On 29/09/2015 08:52, JB Poline wrote: > Hi, > > Fyi, I had to tweak the values to get a reasonable answer on some > data, but generally works well > cheers > JB > > On Tue, Sep 29, 2015 at 2:18 AM, Matthew Brett wrote: >> On Mon, Sep 28, 2015 at 5:13 PM, Matthew Brett wrote: >>> Hi guys, >>> >>> One of our post-docs here was just asking about `nipy.labs.compute_mask`. >>> >>> The docstring for that function has: >>> >>> Compute and write the mask of an image based on the grey level >>> This is based on an heuristic proposed by T.Nichols: >>> find the least dense point of the histogram, between fractions >>> m and M of the total image histogram. >>> >>> but I think the actual algorithm is: >>> >>> * Sort values; >>> * Remove m from start and M from end of the sorted vector; >>> * Find the value to value difference; >>> * Use the value corresponding to the position of the largest >>> difference as a threshold. >>> >>> Is that right? Did the algorithm change at some point? >> Ah - sorry - with a moment's reflection, I see that the large >> difference between the sorted values also represents a local point of >> low histogram density... 
>> >> Cheers2, >> >> Matthew >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging From dimitri.papadopoulos at cea.fr Wed Sep 30 15:35:28 2015 From: dimitri.papadopoulos at cea.fr (Dimitri Papadopoulos Orfanos) Date: Wed, 30 Sep 2015 15:35:28 +0200 Subject: [Neuroimaging] [nibabel] reading corrupt *.nii.gz files Message-ID: <560BE520.9030205@cea.fr> Dear all, The following code emits an exception: import nibabel NIFTI_FILE = 'FLAIR.nii.gz' img = nibabel.load(NIFTI_FILE) import nibabel try: data = img.get_data() except zlib.error as err: print(err) The outptut is: Error -3 while decompressing: invalid code lengths set The reason is that NIfTI file FLAIR.nii.gz is corrupted: $ gunzip gzip: FLAIR.nii.gz: invalid compressed data--format violated $ While in many such situations throwing an exception is the right thing to do, there are cases where I would like to override the error and read whatever data is available. For example it could be useful to be able to display the image the same way FSLView does (see http://www.pictureshack.us/images/10675_FLAIR.png). Also please note that PyNIfTI does read the corrupted file without raising an error. We have hundreds such corrupted FLAIR files, probably due to a bug in older versions of dcm2nii. So here are my questions: * Shouldn't ninabel catch the zlib exception and raise its own (more user-friendly) exception? * Is there a way to avoid the exception and read whatever data is available in the corrupted file? * If there is currently no way to avoid the exception, would it be acceptable to add such an option to nibabel? 
Best,
Dimitri

From matthew.brett at gmail.com Wed Sep 30 21:29:58 2015
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 30 Sep 2015 12:29:58 -0700
Subject: [Neuroimaging] [nibabel] reading corrupt *.nii.gz files
In-Reply-To: <560BE520.9030205@cea.fr>
References: <560BE520.9030205@cea.fr>
Message-ID:

Hi Dimitri,

On Wed, Sep 30, 2015 at 6:35 AM, Dimitri Papadopoulos Orfanos wrote:
> Dear all,
>
> The following code emits an exception:
>
>     import nibabel
>
>     NIFTI_FILE = 'FLAIR.nii.gz'
>     img = nibabel.load(NIFTI_FILE)
>     try:
>         data = img.get_data()
>     except zlib.error as err:
>         print(err)
>
> The output is:
>
>     Error -3 while decompressing: invalid code lengths set
>
> The reason is that NIfTI file FLAIR.nii.gz is corrupted:
>
>     $ gunzip
>     gzip: FLAIR.nii.gz: invalid compressed data--format violated
>     $
>
> While in many such situations throwing an exception is the right thing
> to do, there are cases where I would like to override the error and read
> whatever data is available. For example it could be useful to be able to
> display the image the same way FSLView does (see
> http://www.pictureshack.us/images/10675_FLAIR.png). Also please note
> that PyNIfTI does read the corrupted file without raising an error.
>
> We have hundreds of such corrupted FLAIR files, probably due to a bug in
> older versions of dcm2nii.
>
> So here are my questions:
>
> * Shouldn't nibabel catch the zlib exception and raise its own (more
>   user-friendly) exception?

Would the friendly error be much different from the unfriendly one
though? I guess it would just be something like 'Error reading
compressed image data : Error -3 while decompressing: invalid code
lengths set' ?

> * Is there a way to avoid the exception and read whatever data is
>   available in the corrupted file?

Not at the moment.

> * If there is currently no way to avoid the exception, would it be
>   acceptable to add such an option to nibabel?

Yes, sure.
I don't think it should be the default, but as an extra flag to load, it would be fine. Have a look for the implementation of the ``mmap`` option for the general idea: https://github.com/nipy/nibabel/pull/268 See you, Matthew
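[The lenient read Dimitri asks about can be prototyped independently of nibabel: feed the compressed bytes to a zlib decompressor in chunks and keep whatever comes out before the stream breaks. This is only a sketch of the idea, not the option Matthew suggests adding to `load` — the chunk size and function name are arbitrary choices for the demo:]

```python
import gzip
import zlib

def decompress_lenient(raw, chunk=256):
    """Decompress as much of a possibly corrupt gzip stream as possible."""
    # wbits = 16 + MAX_WBITS tells zlib to expect a gzip wrapper.
    d = zlib.decompressobj(16 + zlib.MAX_WBITS)
    out = []
    try:
        for i in range(0, len(raw), chunk):
            out.append(d.decompress(raw[i:i + chunk]))
    except zlib.error:
        pass  # corrupt from here on; keep what was already recovered
    return b''.join(out)

# Demo: a gzip stream with its tail cut off still yields most of the data.
payload = bytes(range(256)) * 64             # 16 KiB of assorted bytes
truncated = gzip.compress(payload)[:-16]     # chop the trailer and a bit more
recovered = decompress_lenient(truncated)
assert payload.startswith(recovered) and len(recovered) > 0
```

Note that simple truncation does not even raise `zlib.error` with a `decompressobj` (the object just waits for more input), so the recovered prefix comes back silently; the `except` branch matters for mid-stream corruption like the "invalid code lengths set" case above.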