From gael.varoquaux at normalesup.org Mon May 4 10:46:19 2020
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 4 May 2020 16:46:19 +0200
Subject: [Neuroimaging] Announcing the dev nilearn days: software and science
Message-ID: <20200504144619.yfb2czt3xyd7np5w@phare.normalesup.org>

Dear friends and colleagues,

Nilearn is a Python tool for multivariate statistics on brain images. It has recently gained more statistical models: the next release will also provide tools for building models of task response.
https://nilearn.github.io

On May 19-22, we are organizing the nilearn dev days, to discuss advanced topics and shape what's next.
https://nilearn.github.io/dev-days-2020/

Our goal is to grow a stronger technical community for statistics on brain images in Python.

Schedule:

* Software development day: 19 May
  Presentations and discussions will teach the development and open-science principles that underlie nilearn. Their goal is to empower new developers and maintainers.

* Scientific day: 22 May
  Invited talks will illustrate exciting neuroscience that can be done with statistical techniques such as those in nilearn. To inspire and sketch the future of nilearn, the talks are not restricted to studies performed using nilearn.

The event will be fully open. Details on video-conferencing will be released soon.
Software development: 19 May
=============================

EDT   | CET   | Speaker              | Presentation Title
------|-------|----------------------|-------------------
09:00 | 15:00 | Gaël Varoquaux       | Introduction and overview
09:15 | 15:15 | Julia Huntenburg     | Adding new features to Nilearn
10:00 | 16:00 | Break                | Break
10:15 | 16:15 | Bertrand Thirion     | Documentation-driven development
11:00 | 17:00 | Gaël Varoquaux       | Open source software: how to live long and go far
11:45 | 17:45 | Break                | Break
13:00 | 19:00 | Elizabeth DuPre      | GitHub for project maintainers
13:45 | 19:45 | Chris Markiewicz     | Testing numerical code in Python
14:30 | 20:30 | Break                | Break
14:45 | 20:45 | Jérôme Dockès        | Continuous integration for academic software
15:30 | 21:30 | Alejandro de la Vega | The Python ecosystem for neuroimaging

EDT = Eastern Daylight Time: New York, Boston, Philadelphia, Montreal...
CET = Central European Time: Paris, Rome, Berlin, Madrid...

Scientific day: 22 May
=======================

EDT   | CET   | Speaker           | Presentation Title
------|-------|-------------------|-------------------
09:00 | 15:00 | Nilearn Team      | Introduction
09:10 | 15:10 | Sylvia Villeneuve | Predicting Functional Brain Aging in Preclinical Alzheimer's Disease
09:50 | 15:50 | Carsen Stringer   | Rastermap: A Visualization Tool for High-dimensional Neural Data
10:30 | 16:30 | Break             | Break
10:50 | 16:50 | Eva Dyer          | Comparing High-Dimensional Neural Recordings Across Time, Space, and Behavior
11:30 | 17:30 | Aki Nikolaidis    | Bagging Improves Reproducibility of Functional Parcellation of the Human Brain
12:15 | 18:15 | Break             | Break
13:30 | 19:30 | Jo Etzel          | Pattern Similarity Analyses of Frontoparietal Task Coding: Individual Variation and Genetic Influences
14:10 | 20:10 | TBA               | To Be Announced
14:50 | 20:50 | Nilearn Team      | Wrap-up

See you all online soon, and stay safe.
Best,
Gaël

--
Gael Varoquaux
Research Director, INRIA
Visiting professor, McGill
http://gael-varoquaux.info
http://twitter.com/GaelVaroquaux

From nicolas.pannetier at gmail.com Wed May 6 21:36:35 2020
From: nicolas.pannetier at gmail.com (Nicolas Pannetier)
Date: Wed, 6 May 2020 18:36:35 -0700
Subject: [Neuroimaging] Open positions at Darmiyan
Message-ID:

Dear colleagues,

Darmiyan is developing a diagnostic software platform for the early detection, monitoring, and stratification of Alzheimer's disease, and is currently hiring a few talented people to grow the team. Specifically, we are looking for a Director of Medical Software Engineering, a Senior Medical Imaging Scientist, and a Senior Medical Machine Learning Practitioner. The company is based in San Francisco, compensation is competitive, and we are open to hiring people remotely. More details on the positions are below. If you are interested, or know someone who could be, please reach out to kaveh at darmiyan.com. Many thanks!

*Senior Medical Imaging Scientist*
You are proficient in Python and have excellent skills in brain MRI image processing, with a minimum of 5 years of industry experience. Formal education in MR physics or computational neuroscience is a plus. You will be responsible for maintaining and expanding Darmiyan's image processing pipeline, which will include advanced coding and proper quality control (QC), as well as ensuring robustness and proper maintenance of detailed documentation. You will be responsible for maintaining an organized database of brain image files (original and processed images, intermediate files, and color maps) and for generating detailed analytics such as QC metrics, processing times, and unexpected anomalies. You will extract and summarize the final numerical outputs of the image processing pipeline into standardized tabular data and will hand them off, together with organized processed images, to the machine learning and AI expert(s) for further analysis.
*Senior Medical Machine Learning Practitioner*
You are proficient in Python and biostatistics and will be responsible for Darmiyan's core machine learning and deep learning pipelines. Your work will be strictly guided by the FDA's regulatory guidelines for machine learning and AI. You will merge the outputs of the image processing pipeline with the relevant metadata, perform the necessary QC, generate QC metrics and feedback, and build and test appropriate learning algorithms and blind tests in compliance with the regulatory guidelines. You will be responsible for reporting detailed output of these learning models with analytics and data visualization.

*Director of Medical Software Engineering*
You are a seasoned software engineer and avid coder with long industry experience in medical software development and first-hand experience with the regulatory frameworks surrounding digital healthcare technologies and software as a medical device (SaMD), such as HIPAA, GDPR, and FDA guidelines. You are proficient in Python, Docker, and cloud computation, as well as UI/UX and back-end development. You will be responsible for the development and deployment of Darmiyan's product, which will require integration of the core image processing pipeline with the relevant machine learning/AI components and metadata processing modules, within Darmiyan's established regulatory compliance framework. You will work closely with Darmiyan's CTO on seamless and agile integration of newly developed MRI feature extraction scripts into the image processing pipeline, for which you will also develop the necessary APIs and GUIs. You will also be responsible for the maintenance of Darmiyan's cloud resources and software development cycle.

Nicolas
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From samfmri at gmail.com Fri May 15 21:36:25 2020
From: samfmri at gmail.com (Sam W)
Date: Sat, 16 May 2020 03:36:25 +0200
Subject: [Neuroimaging] extract center of mass with nibabel/nilearn/numpy
Message-ID:

Hello,
I have an atlas with voxel numbers between 1 and 40 that correspond to different ROIs. I would like to extract the center of mass of each ROI; is there a way to do this with nibabel/nilearn/numpy?
Best regards,
Sam
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From grandrigo at gmail.com Fri May 15 12:03:33 2020
From: grandrigo at gmail.com (Rodrigo Dennis Perea)
Date: Fri, 15 May 2020 12:03:33 -0400
Subject: [Neuroimaging] nibabel.openers() --> compression level argument information
Message-ID:

Hi Chris,
I am wondering where I can find more information about the "default_compressionlevel" argument documented on the following page:

https://nipy.org/nibabel/reference/nibabel.openers.html

If I change the compression level, I assume it is always lossless, but there is a trade-off between compression time and file size, correct?

Thanks in advance,
Rodrigo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthew.brett at gmail.com Sat May 16 05:01:21 2020
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 16 May 2020 10:01:21 +0100
Subject: [Neuroimaging] nibabel.openers() --> compression level argument information
In-Reply-To:
References:
Message-ID:

Hi,

On Sat, May 16, 2020 at 9:54 AM Rodrigo Dennis Perea wrote:
>
> Hi Chris,
> I am wondering where I can find more information about the "default_compressionlevel" argument documented on the following page:
>
> https://nipy.org/nibabel/reference/nibabel.openers.html
>
> If I change the compression level, I assume it is always lossless, but there is a trade-off between compression time and file size, correct?
>
> Thanks in advance,
> Rodrigo

Right - there's a bit more information at https://docs.python.org/3/library/gzip.html#gzip.GzipFile.
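To get a concrete feel for the trade-off, here is a small sketch using only Python's standard-library gzip module (the same codec behind .nii.gz files; the payload below is synthetic and highly compressible, so real images will show different ratios):

```python
import gzip
import time

# Synthetic, highly compressible payload standing in for image bytes.
data = b"neuroimaging " * 200_000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = gzip.compress(data, compresslevel=level)
    elapsed = time.perf_counter() - start
    # Every level is lossless: decompressing restores the exact bytes.
    assert gzip.decompress(compressed) == data
    print(f"level {level}: {len(compressed):>8} bytes, {elapsed:.3f} s")
```

Higher levels never lose information; they only spend more CPU time searching for a smaller encoding, which is why a low default compression level can make sense when write speed matters more than file size.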
There's some discussion of gzip compression levels here: https://stackoverflow.com/questions/28452429/does-gzip-compression-level-have-any-impact-on-decompression Cheers, Matthew From markiewicz at stanford.edu Sat May 16 08:30:31 2020 From: markiewicz at stanford.edu (Christopher Markiewicz) Date: Sat, 16 May 2020 12:30:31 +0000 Subject: [Neuroimaging] extract center of mass with nibabel/nilearn/numpy In-Reply-To: References: Message-ID: <4aea92c2-3b41-44d0-8317-314d1707bad9@email.android.com> Assuming that you're using standard center of mass, and are content with them sometimes lying outside the region if it is nonconvex, then it should be pretty simple: Retrieve the indices for a region with numpy.where and average in each dimension. That will give you the index (though possibly non integral). If you need it in world coordinates, nibabel's apply_affine function can be passed the atlas affine and COM index. Best, Chris On May 15, 2020 21:36, Sam W wrote: Hello, I have an atlas with voxel numbers between 1 and 40 that correspond to different ROIs. I would like to extract the center of mass of each ROI, is there a way to do this with nibabel/nilearn/numpy? Best regards, Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sat May 16 09:11:31 2020 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 16 May 2020 15:11:31 +0200 Subject: [Neuroimaging] extract center of mass with nibabel/nilearn/numpy In-Reply-To: <4aea92c2-3b41-44d0-8317-314d1707bad9@email.android.com> References: <4aea92c2-3b41-44d0-8317-314d1707bad9@email.android.com> Message-ID: <20200516131131.3ijijs2tub3i7y7o@phare.normalesup.org> https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.measurements.center_of_mass.html is a useful function to do such a thing. 
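Spelled out with plain numpy, Chris's recipe looks like the following minimal sketch — the toy atlas and identity affine are stand-ins for what nibabel would give you via `img.get_fdata()` and `img.affine`:

```python
import numpy as np

def roi_center_of_mass(label_img, label, affine):
    """Mean voxel index of one ROI, mapped through the affine to world space."""
    idx = np.argwhere(label_img == label)   # (n_voxels, 3) voxel indices
    com_vox = idx.mean(axis=0)              # center of mass, possibly non-integral
    # Same as nibabel.affines.apply_affine(affine, com_vox) for a single point:
    return affine[:3, :3] @ com_vox + affine[:3, 3]

# Toy stand-in for an atlas with labels 1..40 (here just labels 1 and 2).
atlas = np.zeros((4, 4, 4), dtype=int)
atlas[0, 0, 0] = atlas[2, 0, 0] = 1   # nonconvex ROI: COM falls between voxels
atlas[1:3, 1:3, 1] = 2
affine = np.eye(4)                    # in practice: img.affine

for label in np.unique(atlas[atlas > 0]):
    print(label, roi_center_of_mass(atlas, label, affine))
```

`scipy.ndimage.center_of_mass` computes the same quantity for all labels in one call, e.g. `center_of_mass(label_img > 0, label_img, range(1, 41))`, leaving only the affine step to do yourself.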
G On Sat, May 16, 2020 at 12:30:31PM +0000, Christopher Markiewicz wrote: > Assuming that you're using standard center of mass, and are content with them > sometimes lying outside the region if it is nonconvex, then it should be pretty > simple: > Retrieve the indices for a region with numpy.where and average in each > dimension. That will give you the index (though possibly non integral). If you > need it in world coordinates, nibabel's apply_affine function can be passed the > atlas affine and COM index. > Best, > Chris > On May 15, 2020 21:36, Sam W wrote: > Hello, > I have an atlas with voxel numbers between 1 and 40 that correspond to > different ROIs. I would like to extract the center of mass of each ROI, is > there a way to do this with nibabel/nilearn/numpy? > Best regards, > Sam > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Gael Varoquaux Research Director, INRIA Visiting professor, McGill http://gael-varoquaux.info http://twitter.com/GaelVaroquaux From lewis.dunne00 at gmail.com Wed May 20 04:49:34 2020 From: lewis.dunne00 at gmail.com (lewis.dunne00 at gmail.com) Date: Wed, 20 May 2020 09:49:34 +0100 Subject: [Neuroimaging] ER-design task classification with nilearn Message-ID: <5ec4ef21.1c69fb81.ebe04.8c5b@mx.google.com> An HTML attachment was scrubbed... URL: From bertrand.thirion at inria.fr Wed May 20 16:41:32 2020 From: bertrand.thirion at inria.fr (bthirion) Date: Wed, 20 May 2020 22:41:32 +0200 Subject: [Neuroimaging] ER-design task classification with nilearn In-Reply-To: <5ec4ef21.1c69fb81.ebe04.8c5b@mx.google.com> References: <5ec4ef21.1c69fb81.ebe04.8c5b@mx.google.com> Message-ID: Hi Lewis, This kind of question would better go to Neurostars. IIUC, you have put in the dataframe both the target variable (the class), often denoted "y", and the voxel data ("X"). So you don't need to apply a Masker any more. 
The only thing is to extract X and y from your DataFrame and give that to scikit-learn, together with a correct cross-validation scheme. Does that make sense?

Best,
Bertrand

On 20/05/2020 10:49, lewis.dunne00 at gmail.com wrote:
>
> Hello. I'm trying to follow the tutorial on Nilearn for decoding, only
> using my own data, which is event-related. I used SPM to create unique
> beta images for each trial in my dataset, as recommended. But since this
> is a different approach from the Haxby dataset, I'm a little unclear on
> how to follow along now. I wonder if I could get some feedback on what I
> have done so far.
>
> I created a dataframe with the trial event labels and paired each trial
> with its respective beta image. I did this by reading in each beta file
> and masking it with a previously made 6mm ROI (not quite sure how to
> make an anatomical mask yet), which left me with a 1x120 array, where
> 120 is the number of voxels for that trial event within the mask. I
> don't know if this is correct... Since I have 82 events in this task, I
> created an 82x120 matrix containing all of these masked beta images as
> rows and voxel data as columns, and concatenated this onto my dataframe
> containing trial-by-trial task info. My plan was to then use these 120
> voxels as features and my experimental condition column (it's just
> binary data) as the labels.
>
> The problem is I get to the NiftiMasker and receive this warning:
>
> `UserWarning: Standardization of 3D signal has been requested but
> would lead to zero values. Skipping.
> warnings.warn('Standardization of 3D signal has been requested but '`
> (not a typo)
>
> I think this has to do with the fact that these are beta images for
> events rather than an entire block, so there is no time dimension? But
> I'm not sure, because I am quite unfamiliar with this. For example, the
> tutorial says that the variable "fmri_masked" is an array with shape
> `time x voxels`. The array that I get is of course 1 x 120, which makes
> sense given the event-related design, but the tutorial doesn't mention
> anything about this.
>
> I think I've misunderstood something. What should I really be doing
> differently in the case of an event-related design with trial-wise
> betas? Feedback would be greatly appreciated!
>
> Best
>
> L
>
> Sent from Mail for Windows 10
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rolandomasisobando at gmail.com Wed May 20 21:42:01 2020
From: rolandomasisobando at gmail.com (Rolando Masis-Obando)
Date: Wed, 20 May 2020 21:42:01 -0400
Subject: [Neuroimaging] [PySurfer] How to use different surface meshes in PySurfer?
In-Reply-To: References: Message-ID: > > I'm trying to figure out a way to use alternative *fsaverage6 *surface > meshes in PySurfer. In other words, how can I use mesh that doesn't come by > default in freesurfer. I have some fsaverage6 brains at different > inflations in SUMA, but I would like to use them in PySurfer. Is there an > easy way to do this? > If you name the meshes `fsaverage/surf/lh.whatever` and `rh.whatever`, you might be able to use `surf='whatever'` in PySurfer. Not sure if we sanity check the names, or just check to see if the files exist, but it's worth a shot. > I'm also not sure that the SUMA meshes are compatible with those of > PySurfer (they are giftis). If there is a way, what would you recommend? > They need to preserve the vertex ordering, otherwise plotting will not work. I would first check to see if there are the same number of vertices, if they are, then try plotting some data on the builtin meshes, then on yours. Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at incf.org Fri May 22 12:25:49 2020 From: helena at incf.org (Helena Ledmyr) Date: Fri, 22 May 2020 18:25:49 +0200 Subject: [Neuroimaging] Community review of two new neuroscience standards now open Message-ID: INCF has two new community standards up for public review, and we are now soliciting input from the user community. The standards are the Neuroscience Information Exchange (NIX) and the Neuroimaging Data Model (NIDM)-Results. More information is available here: https://www.incf.org/comment-on-these-sbps *The mission of the INCF is to develop, evaluate, and endorse standards and best practices that embrace the principles of Open, FAIR, and Citable neuroscience. 
INCF also provides training on how standards and best practices facilitate reproducibility and enable the publishing of the entirety of research output, including data and code.*

All the best,
Helena

-----------------------------
Helena Ledmyr, PhD
*Director*
*Development and Communications*
International Neuroinformatics Coordinating Facility Secretariat
Karolinska Institutet. Nobels väg 15A, SE-171 77 Stockholm. Sweden
Email: helena.ledmyr at incf.org
Phone: +46 8 524 870 35
incf.org
neuroinformatics.incf.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rolandomasisobando at gmail.com Sun May 24 19:17:21 2020
From: rolandomasisobando at gmail.com (Rolando Masis-Obando)
Date: Sun, 24 May 2020 19:17:21 -0400
Subject: [Neuroimaging] [PySurfer] How to use different surface meshes in PySurfer?
In-Reply-To:
References:
Message-ID:

Hey Eric,

Thanks for responding! So, unfortunately, that doesn't work. It might be because the file that works in SUMA is a gifti. Not sure though.

[image: image.png]

Do you have any other suggestions? Is there a way to create surface brains at different inflations that is freesurfer-compatible?

Thanks again!!

Best,
Rolando

On Thu, May 21, 2020 at 12:21 PM Eric Larson wrote:

>> I'm trying to figure out a way to use alternative *fsaverage6 *surface
>> meshes in PySurfer. In other words, how can I use a mesh that doesn't
>> come by default in freesurfer. I have some fsaverage6 brains at
>> different inflations in SUMA, but I would like to use them in PySurfer.
>> Is there an easy way to do this?
>
> If you name the meshes `fsaverage/surf/lh.whatever` and `rh.whatever`, you
> might be able to use `surf='whatever'` in PySurfer. Not sure if we sanity
> check the names, or just check to see if the files exist, but it's worth a
> shot.
>
>> I'm also not sure that the SUMA meshes are compatible with those of
>> PySurfer (they are giftis). If there is a way, what would you recommend?
>> > > They need to preserve the vertex ordering, otherwise plotting will not > work. I would first check to see if there are the same number of vertices, > if they are, then try plotting some data on the builtin meshes, then on > yours. > > Eric > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 69882 bytes Desc: not available URL: From alexandre.gramfort at inria.fr Mon May 25 02:55:45 2020 From: alexandre.gramfort at inria.fr (Alexandre Gramfort) Date: Mon, 25 May 2020 08:55:45 +0200 Subject: [Neuroimaging] [PySurfer] How to use different surface meshes in PySurfer? In-Reply-To: References: Message-ID: > > see https://surfer.nmr.mgh.harvard.edu/fswiki/mris_inflate > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rolandomasisobando at gmail.com Mon May 25 14:11:51 2020 From: rolandomasisobando at gmail.com (Rolando Masis-Obando) Date: Mon, 25 May 2020 14:11:51 -0400 Subject: [Neuroimaging] [PySurfer] How to use different surface meshes in PySurfer? In-Reply-To: References: Message-ID: Hi Alexandre and Eric, Thank you so much to you both! For future researchers interested in using different inflations, these are the steps I took: *0. My OS:* Catalina 10.15.4 *1. install freesurfer:* download freesurfer (I used the pkg installer for freesurfer 7.1.0): https://surfer.nmr.mgh.harvard.edu/fswiki/rel7downloads follow installation instructions: https://surfer.nmr.mgh.harvard.edu/fswiki/MacOsInstall#SetupandConfiguration *2. in terminal:* export FREESURFER_HOME=/Applications/freesurfer/7.1.0 export SUBJECTS_DIR=$FREESURFER_HOME/subjects source $FREESURFER_HOME/SetUpFreeSurfer.sh *3. 
fix permissions in terminal (*otherwise mris_inflate won't be able to read or write from other freesurfer directories*):* sudo chmod -R a+w $FREESURFER_HOME/subjects/ *4. navigate to surface folder in terminal:* cd /Applications/freesurfer/7.1.0/subjects/fsaverage6/surf *5. Run mris_inflate function on both hemispheres *(in my case, I'm going to be using iterations n=5) mris_inflate -n 5 lh.smoothwm lh.5_inflated mris_inflate -n 5 rh.smoothwm rh.5_inflated *6. call new inflated brain with pysurfer using python (for me, inside a jupyter notebook)* subject_id = 'fsaverage6' hemi = 'split' surf = '5_inflated' brain = Brain(subject_id, hemi, surf, cortex='low_contrast',views=['lat', 'med'],background='gray') *7. Celebrate!* Thanks again for your help. Best, Rolando On Mon, May 25, 2020 at 2:56 AM Alexandre Gramfort < alexandre.gramfort at inria.fr> wrote: > see https://surfer.nmr.mgh.harvard.edu/fswiki/mris_inflate >> >> _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From samfmri at gmail.com Thu May 28 12:42:22 2020 From: samfmri at gmail.com (Sam W) Date: Thu, 28 May 2020 18:42:22 +0200 Subject: [Neuroimaging] covariance estimator in nilearn Message-ID: Hello! I see that ConnectivityMeasure() uses the LedoitWolf shrinkage by default. I've been reading about shrinkage but it seems it's mostly explained in the context of ridge regression, when there is more than one coefficient in the model. If I'm simply interested in the correlation between two time series, why would shrinkage still be important? Wouldn't the correlation coefficient between the two time series (np.corrcoef(TS1,TS2)) provide the best estimation of the relationship between them? 
Also is it true that correlations with shrinkage estimator like LedoitWolf will always be weaker than using the Maximum Likelihood Estimator? Thank you! Best regards, Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From bertrand.thirion at inria.fr Thu May 28 13:22:39 2020 From: bertrand.thirion at inria.fr (bthirion) Date: Thu, 28 May 2020 19:22:39 +0200 Subject: [Neuroimaging] covariance estimator in nilearn In-Reply-To: References: Message-ID: <7b493bbc-346d-5e4b-e05c-327d374bd8ad@inria.fr> Hi, Please post this type of question on Neurostars. LW is meant to improve covariance estimation (in the least-squares sense, see the paper of Ledoit and Wolf), so for many tasks you want to achieve, it is a rather good idea to use it. Indeed this weakens the correlations values (downward bias), but IMHO these values alone do not make sense: what matters are correlations differences across subjects, conditions etc. HTH, Bertrand On 28/05/2020 18:42, Sam W wrote: > Hello! > I see that ConnectivityMeasure() uses the LedoitWolf shrinkage by > default. I've been reading about shrinkage but it seems it's mostly > explained in the context of ridge regression, when there is more than > one coefficient in the model. > If I'm simply interested in the correlation between two time series, > why would shrinkage still be important? Wouldn't the correlation > coefficient between the two time series (np.corrcoef(TS1,TS2)) provide > the best estimation of the relationship between them? > Also is it true that correlations with shrinkage estimator like > LedoitWolf will always be weaker than using the Maximum Likelihood > Estimator? > Thank you! > Best regards, > Sam > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From samfmri at gmail.com Fri May 29 00:01:18 2020 From: samfmri at gmail.com (Sam W) Date: Fri, 29 May 2020 06:01:18 +0200 Subject: [Neuroimaging] covariance estimator in nilearn In-Reply-To: <7b493bbc-346d-5e4b-e05c-327d374bd8ad@inria.fr> References: <7b493bbc-346d-5e4b-e05c-327d374bd8ad@inria.fr> Message-ID: Hi Bertrand, Thank you for your reply. >LW is meant to improve covariance estimation (in the least-squares >sense, see the paper of Ledoit and Wolf), so for many tasks you want to >achieve, it is a rather good idea to use it. I understand that shrinkage is a good idea for calculating things like partial correlations with many ROIs. My question was rather what advantage does shrinkage bring when you compute the (pearson) correlation between only 2 time series. Is shrinkage still relevant in that case? Best regards, Sam On Thu, May 28, 2020 at 7:23 PM bthirion wrote: > Hi, > > Please post this type of question on Neurostars. > > LW is meant to improve covariance estimation (in the least-squares sense, > see the paper of Ledoit and Wolf), so for many tasks you want to achieve, > it is a rather good idea to use it. > Indeed this weakens the correlations values (downward bias), but IMHO > these values alone do not make sense: what matters are correlations > differences across subjects, conditions etc. > HTH, > Bertrand > > > On 28/05/2020 18:42, Sam W wrote: > > Hello! > I see that ConnectivityMeasure() uses the LedoitWolf shrinkage by default. > I've been reading about shrinkage but it seems it's mostly explained in the > context of ridge regression, when there is more than one coefficient in the > model. > If I'm simply interested in the correlation between two time series, why > would shrinkage still be important? Wouldn't the correlation coefficient > between the two time series (np.corrcoef(TS1,TS2)) provide the best > estimation of the relationship between them? 
> Also is it true that correlations with a shrinkage estimator like LedoitWolf
> will always be weaker than using the Maximum Likelihood Estimator?
> Thank you!
> Best regards,
> Sam
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gael.varoquaux at normalesup.org Fri May 29 11:04:44 2020
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 29 May 2020 17:04:44 +0200
Subject: [Neuroimaging] covariance estimator in nilearn
In-Reply-To:
References: <7b493bbc-346d-5e4b-e05c-327d374bd8ad@inria.fr>
Message-ID: <20200529150444.i6kvb7pvxgov6hzo@phare.normalesup.org>

For lay-person references on shrinkage:

* Stein shrinkage: https://pdfs.semanticscholar.org/26c0/98a24a8e8039219dca341a74d7ddb2419cb6.pdf
* Covariance shrinkage: https://jpm.pm-research.com/content/30/4/110

These are different settings from covariance for fMRI; however, the message is the same: shrunk estimates are better estimates to use for an analysis or to make a decision.

Gaël

On Fri, May 29, 2020 at 06:01:18AM +0200, Sam W wrote:
> Hi Bertrand,
> Thank you for your reply.
>
> > LW is meant to improve covariance estimation (in the least-squares
> > sense, see the paper of Ledoit and Wolf), so for many tasks you want to
> > achieve, it is a rather good idea to use it.
>
> I understand that shrinkage is a good idea for calculating things like partial
> correlations with many ROIs. My question was rather what advantage does
> shrinkage bring when you compute the (pearson) correlation between only 2 time
> series. Is shrinkage still relevant in that case?
> Best regards, > Sam > On Thu, May 28, 2020 at 7:23 PM bthirion wrote: > Hi, > Please post this type of question on Neurostars. > LW is meant to improve covariance estimation (in the least-squares sense, > see the paper of Ledoit and Wolf), so for many tasks you want to achieve, > it is a rather good idea to use it. > Indeed this weakens the correlations values (downward bias), but IMHO these > values alone do not make sense: what matters are correlations differences > across subjects, conditions etc. > HTH, > Bertrand > On 28/05/2020 18:42, Sam W wrote: > Hello! > I see that ConnectivityMeasure() uses the LedoitWolf shrinkage by > default. I've been reading about shrinkage but it seems it's mostly > explained in the context of ridge regression, when there is more than > one coefficient in the model. > If I'm simply interested in the correlation between two time series, > why would shrinkage still be important? Wouldn't the correlation > coefficient between the two time series (np.corrcoef(TS1,TS2)) provide > the best estimation of the relationship between them? > Also is it true that correlations with shrinkage estimator like > LedoitWolf will always be weaker than using the Maximum Likelihood > Estimator? > Thank you! > Best regards, > Sam > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Gael Varoquaux Research Director, INRIA Visiting professor, McGill http://gael-varoquaux.info http://twitter.com/GaelVaroquaux
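As a coda to this thread, the downward bias that Bertrand and Gaël describe can be seen in a few lines of numpy. The sketch below uses a fixed, hand-picked shrinkage weight purely for illustration — the whole point of the Ledoit-Wolf estimator is to choose that weight from the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated time series (n_samples x 2), like two ROI signals.
n = 200
x = rng.standard_normal(n)
y = 0.8 * x + 0.6 * rng.standard_normal(n)
X = np.column_stack([x, y])

S = np.cov(X, rowvar=False)    # empirical (maximum-likelihood-style) covariance

# Shrink toward a scaled identity; alpha is hand-picked here, whereas
# Ledoit-Wolf derives an optimal alpha from the data itself.
alpha = 0.2
mu = np.trace(S) / S.shape[0]  # average variance
S_shrunk = (1 - alpha) * S + alpha * mu * np.eye(S.shape[0])

def to_corr(C):
    """Normalize a covariance matrix into a correlation matrix."""
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)

print("empirical r:", to_corr(S)[0, 1])
print("shrunk    r:", to_corr(S_shrunk)[0, 1])  # always closer to zero
```

The off-diagonal term is scaled by (1 - alpha) while the variances gain a positive alpha * mu, so the shrunk correlation is strictly smaller in magnitude — exactly the downward bias Sam asked about, traded for a lower-variance estimate.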