From satra at mit.edu Wed Dec 6 18:11:45 2017
From: satra at mit.edu (Satrajit Ghosh)
Date: Wed, 6 Dec 2017 18:11:45 -0500
Subject: [Neuroimaging] Code Rodeo in Austin Jan 15 - 19
Message-ID:

The NeuroStuff consortium (which is just as made-up as it sounds) is
delighted to announce Code Rodeo 2018 (an actual event), to be held
January 15 - 19 in Austin:

Details and registration link:
https://www.eventbrite.com/e/code-rodeo-2018-tickets-40111781418

We very much welcome other Nipy projects to join us.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From suprajasankari at gmail.com Thu Dec 7 07:42:37 2017
From: suprajasankari at gmail.com (Supraja Jayakumar)
Date: Thu, 7 Dec 2017 21:42:37 +0900
Subject: [Neuroimaging] Segmentation of .nii file using WatershedSkullStrip()
Message-ID:

Hi.

I am trying to use the above API for segmenting .nii.gz (should I extract
it first?) files from the ABIDE datasets into white and grey matter. After
calling:

skullstrip.cmdline
'mri_watershed -T1 transforms/talairach_with_skull.lta T1.mgz
brainmask.auto.mgz'

The full code is here:

>
> from nipype.interfaces.freesurfer import WatershedSkullStrip
> skullstrip = WatershedSkullStrip()
> skullstrip.inputs.in_file =
> '/home/venkat/connectomes/datasets/abide_dparsf/ABIDE_pcp/dparsf/nofilt_noglobal/NYU_0050952_func_preproc.nii.gz'
> skullstrip.inputs.t1 = True
> skullstrip.inputs.transform = "transforms/talairach_with_skull.lta"
> skullstrip.inputs.out_file = "brainmask.auto.mgz"
> skullstrip.cmdline
> 'mri_watershed -T1 transforms/talairach_with_skull.lta T1.mgz
> brainmask.auto.mgz'
> # skullstrip.help()

I do not see any segmented images generated in the given out_file
location. I suppose I am not doing this the right way. Please point out
what's wrong in the above code.

Thanks
S.Venkat

--
U
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satra at mit.edu Sat Dec 9 16:23:54 2017
From: satra at mit.edu (Satrajit Ghosh)
Date: Sat, 9 Dec 2017 16:23:54 -0500
Subject: [Neuroimaging] Segmentation of .nii file using WatershedSkullStrip()
In-Reply-To:
References:
Message-ID:

hi,

in order to actually execute the command you will need to call
skullstrip.run(). you may need to set some other parameters to watershed to
get labeled output. you can also consider other routines like FAST from FSL
and AntsCorticalThickness from ANTS.
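for illustration, a minimal sketch of that (it assumes FreeSurfer and FSL
are installed; the file names are placeholders, not a tested recipe):

from nipype.interfaces.freesurfer import WatershedSkullStrip
from nipype.interfaces.fsl import FAST

skullstrip = WatershedSkullStrip()
skullstrip.inputs.in_file = 'T1.nii.gz'           # anatomical image (placeholder)
skullstrip.inputs.t1 = True
skullstrip.inputs.out_file = 'brainmask.auto.mgz'
res = skullstrip.run()                            # actually executes mri_watershed
print(res.outputs)                                # where the outputs were written

# grey/white/CSF segmentation with FSL FAST instead
fast = FAST(in_files='T1_brain.nii.gz', number_classes=3)
fast_res = fast.run()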
cheers,

satra

On Thu, Dec 7, 2017 at 7:42 AM, Supraja Jayakumar wrote:

> Hi.
>
> I am trying to use the above API for segmenting .nii.gz (should I extract
> it first?) files from the ABIDE datasets into white and grey matter. After
> calling:
>
> skullstrip.cmdline
> 'mri_watershed -T1 transforms/talairach_with_skull.lta T1.mgz
> brainmask.auto.mgz'
>
> The full code is here:
>
>>
>> from nipype.interfaces.freesurfer import WatershedSkullStrip
>> skullstrip = WatershedSkullStrip()
>> skullstrip.inputs.in_file =
>> '/home/venkat/connectomes/datasets/abide_dparsf/ABIDE_pcp/dparsf/nofilt_noglobal/NYU_0050952_func_preproc.nii.gz'
>> skullstrip.inputs.t1 = True
>> skullstrip.inputs.transform = "transforms/talairach_with_skull.lta"
>> skullstrip.inputs.out_file = "brainmask.auto.mgz"
>> skullstrip.cmdline
>> 'mri_watershed -T1 transforms/talairach_with_skull.lta T1.mgz
>> brainmask.auto.mgz'
>> # skullstrip.help()
>
> I do not see any segmented images generated in the given out_file
> location. I suppose I am not doing this the right way. Please point out
> what's wrong in the above code.
>
> Thanks
> S.Venkat
>
> --
> U
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From suprajasankari at gmail.com Tue Dec 12 02:56:32 2017
From: suprajasankari at gmail.com (Supraja Jayakumar)
Date: Tue, 12 Dec 2017 16:56:32 +0900
Subject: [Neuroimaging] Segmentation of .nii file using WatershedSkullStrip()
In-Reply-To:
References:
Message-ID:

Satra

Thanks for the input. I am using the FSL FAST routine, although I am
wondering if it's possible to process multiple anatomical images at once.
I think processing one image takes 10-20 minutes. Is this usually done on
the cloud?

Thanks

On Sun, Dec 10, 2017 at 6:23 AM, Satrajit Ghosh wrote:

> hi,
>
> in order to actually execute the command you will need to call
> skullstrip.run(). you may need to set some other parameters to watershed to
> get labeled output. you can also consider other routines like FAST from FSL
> and AntsCorticalThickness from ANTS.
>
> cheers,
>
> satra
>
> On Thu, Dec 7, 2017 at 7:42 AM, Supraja Jayakumar <
> suprajasankari at gmail.com> wrote:
>
>> Hi.
>>
>> I am trying to use the above API for segmenting .nii.gz (should I extract
>> it first?) files from the ABIDE datasets into white and grey matter. After
>> calling:
>>
>> skullstrip.cmdline
>> 'mri_watershed -T1 transforms/talairach_with_skull.lta T1.mgz
>> brainmask.auto.mgz'
>>
>> The full code is here:
>>
>>>
>>> from nipype.interfaces.freesurfer import WatershedSkullStrip
>>> skullstrip = WatershedSkullStrip()
>>> skullstrip.inputs.in_file =
>>> '/home/venkat/connectomes/datasets/abide_dparsf/ABIDE_pcp/dparsf/nofilt_noglobal/NYU_0050952_func_preproc.nii.gz'
>>> skullstrip.inputs.t1 = True
>>> skullstrip.inputs.transform = "transforms/talairach_with_skull.lta"
>>> skullstrip.inputs.out_file = "brainmask.auto.mgz"
>>> skullstrip.cmdline
>>> 'mri_watershed -T1 transforms/talairach_with_skull.lta T1.mgz
>>> brainmask.auto.mgz'
>>> # skullstrip.help()
>>
>> I do not see any segmented images generated in the given out_file
>> location. I suppose I am not doing this the right way. Please point out
>> what's wrong in the above code.
>>
>> Thanks
>> S.Venkat
>>
>> --
>> U
>>
>> _______________________________________________
>> Neuroimaging mailing list
>> Neuroimaging at python.org
>> https://mail.python.org/mailman/listinfo/neuroimaging
>>
>>
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
>
--
U
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
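For reference, FAST can be run over several structural images in one go by
wrapping it in a MapNode (a minimal sketch; the file names are placeholders
and FSL is assumed to be installed):

import nipype.pipeline.engine as pe
from nipype.interfaces.fsl import FAST

fast = pe.MapNode(FAST(number_classes=3),   # GM/WM/CSF segmentation
                  iterfield=['in_files'],
                  name='fast')
fast.inputs.in_files = ['sub01_T1.nii.gz', 'sub02_T1.nii.gz']
res = fast.run()   # runs FAST once per image; inside a workflow the runs can
                   # be parallelized locally or on a cluster via execution plugins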
From ibmalone at gmail.com Tue Dec 12 06:28:04 2017
From: ibmalone at gmail.com (Ian Malone)
Date: Tue, 12 Dec 2017 11:28:04 +0000
Subject: [Neuroimaging] nipype node, execution and original inputs difference?
Message-ID:

Hi,

I'm trying to debug a nipype problem with the inputs to a function
node and wondered if anyone could help.

The function is meant to merge two diffusion datasets together, so it
takes lists of b-vector and b-value files and the images themselves. If
there's only one set of inputs it just passes them through. So far so
good. Under every circumstance up to now it's worked as expected, but
I've run into a case where, with two inputs, it behaves as if there's
only one set.

The report.rst file for the node tells me that's exactly what's happening:

Original Inputs
---------------

* in_bvals : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmiABCD.bval',
'/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.bval']
* in_bvecs : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmiABCD.bvec',
'/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.bvec']
* in_dwis : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmiABCD.nii.gz',
'/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.nii.gz']


Execution Inputs
----------------

* in_bvals : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.bval']
* in_bvecs : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.bvec']
* in_dwis : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.nii.gz']


It appears the node inputs are set correctly, but are getting stripped
or truncated before execution. The same pipeline behaving normally
doesn't do this:


Original Inputs
---------------

* in_bvals : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bval',
'/var/lib/midas/data/lha1946/images/nii/original/3024.bval']
* in_bvecs : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bvec',
'/var/lib/midas/data/lha1946/images/nii/original/3024.bvec']
* in_dwis : ['/var/lib/midas/data/lha1946/images/nii/original/3033.nii.gz',
'/var/lib/midas/data/lha1946/images/nii/original/3024.nii.gz']


Execution Inputs
----------------

* in_bvals : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bval',
'/var/lib/midas/data/lha1946/images/nii/original/3024.bval']
* in_bvecs : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bvec',
'/var/lib/midas/data/lha1946/images/nii/original/3024.bvec']
* in_dwis : ['/var/lib/midas/data/lha1946/images/nii/original/3033.nii.gz',
'/var/lib/midas/data/lha1946/images/nii/original/3024.nii.gz']


I don't know what can cause the execution inputs to differ from the
initial inputs. The node and its input connections are just defined
like this:

merge_initial_dwis = pe.Node(interface=niu.Function(
    input_names=['in_dwis', 'in_bvals', 'in_bvecs'],
    output_names=['out_dwis', 'out_bvals', 'out_bvecs', 'out_orig_file',
                  'out_orig_ind'],
    function=merge_dwi_function), name='merge_initial_dwis')

merge_initial_dwis.inputs.in_dwis = [os.path.abspath(f) for f in args.dwis]
merge_initial_dwis.inputs.in_bvals = [os.path.abspath(f) for f in args.bvals]
merge_initial_dwis.inputs.in_bvecs = [os.path.abspath(f) for f in args.bvecs]

Any suggestions how I could try to find the cause?

--
imalone

From satra at mit.edu Tue Dec 12 07:50:46 2017
From: satra at mit.edu (Satrajit Ghosh)
Date: Tue, 12 Dec 2017 07:50:46 -0500
Subject: [Neuroimaging] nipype node, execution and original inputs difference?
In-Reply-To:
References:
Message-ID:

hi ian,

that's very surprising. one scenario would be if this node was set as a
MapNode, but it doesn't look like it.

a way to debug this would be to:

import os
from nipype.utils.filemanip import loadpkl

node_wd = '/path/to/node/wd'
node = loadpkl(os.path.join(node_wd, '_node.pklz'))
print(node.inputs)
node.base_dir = '/somepath'  # otherwise it will overwrite node_wd
result = node.run()

there is also an _inputs.pklz in node_wd which shows the inputs to the node.

feel free to open an issue on github where this can be tracked.
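the same trick works for the saved inputs (a small sketch, reusing the
imports and the placeholder node_wd path above):

saved_inputs = loadpkl(os.path.join(node_wd, '_inputs.pklz'))
print(saved_inputs)  # compare with node.inputs and with report.rst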
cheers, satra On Tue, Dec 12, 2017 at 6:28 AM, Ian Malone wrote: > Hi, > > I'm trying to debug a nipype problem with the inputs to a function > node and wondered if anyone could help. > > The function is meant to merge two diffusion datasets together, so > takes lists of b vector and value files and the images themselves. If > there's only one set of inputs it just passes them through. So far so > good. Under every circumstance up to now it's worked as expected, but > I've run in a case where with two inputs it behaves as if there's only > one set. > > The report.rst file for the node tells me that's exactly what's happening: > > Original Inputs > --------------- > > * in_bvals : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmiABCD.bval', > '/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmi.bval'] > * in_bvecs : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmiABCD.bvec', > '/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmi.bvec'] > * in_dwis : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmiABCD.nii.gz', > '/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmi.nii.gz'] > > > Execution Inputs > ---------------- > > * in_bvals : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmi.bval'] > * in_bvecs : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmi.bvec'] > * in_dwis : ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ > ep2ddiffFREE681p2FAD25mmi.nii.gz'] > > > It appears the node inputs are set correctly, but are getting stripped > or truncated before execution. The same pipeline behaving normally > doesn't do this: > > > Original Inputs > --------------- > > * in_bvals : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bval', > '/var/lib/midas/data/lha1946/images/nii/original/3024.bval'] > * in_bvecs : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bvec', > '/var/lib/midas/data/lha1946/images/nii/original/3024.bvec'] > * in_dwis : ['/var/lib/midas/data/lha1946/images/nii/original/3033.nii. > gz', > '/var/lib/midas/data/lha1946/images/nii/original/3024.nii.gz'] > > > Execution Inputs > ---------------- > > * in_bvals : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bval', > '/var/lib/midas/data/lha1946/images/nii/original/3024.bval'] > * in_bvecs : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bvec', > '/var/lib/midas/data/lha1946/images/nii/original/3024.bvec'] > * in_dwis : ['/var/lib/midas/data/lha1946/images/nii/original/3033.nii. > gz', > '/var/lib/midas/data/lha1946/images/nii/original/3024.nii.gz'] > > > I don't know what can cause the execution inputs to differ from the > initial inputs. The node and its input connections are just defined > like this: > > merge_initial_dwis = pe.Node(interface=niu.Function( > input_names=['in_dwis', 'in_bvals', 'in_bvecs'], > output_names=['out_dwis', 'out_bvals', 'out_bvecs', 'out_orig_file', > 'out_orig_ind'], > function=merge_dwi_function), name='merge_initial_dwis') > > merge_initial_dwis.inputs.in_dwis = [os.path.abspath(f) for f in > args.dwis] > merge_initial_dwis.inputs.in_bvals = [os.path.abspath(f) for f in > args.bvals] > merge_initial_dwis.inputs.in_bvecs = [os.path.abspath(f) for f in > args.bvecs] > > Any suggestions how I could try to find the cause? 
> > > -- > imalone > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibmalone at gmail.com Tue Dec 12 08:26:00 2017 From: ibmalone at gmail.com (Ian Malone) Date: Tue, 12 Dec 2017 13:26:00 +0000 Subject: [Neuroimaging] nipype node, execution and original inputs difference? In-Reply-To: References: Message-ID: Thanks, that's a really helpful step to know. It's not a nipype problem then, I was able to track it down to an error in the wrapped function from outside nipype. (On an error condition it threw away one of the data sets, popping them off the input list in the process, which I suppose is how they came to disappear from the execution inputs list. The original author decided a printed error message would be sufficient warning for this circumstance.) Thanks again for your help, Ian On 12 December 2017 at 12:50, Satrajit Ghosh wrote: > hi ian, > > that's very surprising. one scenario would be if this node was set as a > MapNode, but it doesn't look like it. > > a way to debug this would be to: > > import os > from nipype.utils.filemanip import loadpkl > > node_wd = '/path/to/node/wd' > node = loadpkl(os.path.join(node_wd, '_node.pklz') > print(node.inputs) > node.base_dir = '/somepath' # otherwise it will overwrite node_wd > result = node.run() > > there is also an _inputs.pklz in node_wd which shows the inputs to the node. > > feel free to open an issue on github where this can be tracked. > > cheers, > > satra > > On Tue, Dec 12, 2017 at 6:28 AM, Ian Malone wrote: >> >> Hi, >> >> I'm trying to debug a nipype problem with the inputs to a function >> node and wondered if anyone could help. >> >> The function is meant to merge two diffusion datasets together, so >> takes lists of b vector and value files and the images themselves. If >> there's only one set of inputs it just passes them through. So far so >> good. Under every circumstance up to now it's worked as expected, but >> I've run in a case where with two inputs it behaves as if there's only >> one set. >> >> The report.rst file for the node tells me that's exactly what's happening: >> >> Original Inputs >> --------------- >> >> * in_bvals : >> ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmiABCD.bval', >> >> '/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.bval'] >> * in_bvecs : >> ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmiABCD.bvec', >> >> '/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.bvec'] >> * in_dwis : >> ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmiABCD.nii.gz', >> >> '/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.nii.gz'] >> >> >> Execution Inputs >> ---------------- >> >> * in_bvals : >> ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.bval'] >> * in_bvecs : >> ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.bvec'] >> * in_dwis : >> ['/var/drc/research/tess/dtipipeline/01-0335-08-03-MR00/ep2ddiffFREE681p2FAD25mmi.nii.gz'] >> >> >> It appears the node inputs are set correctly, but are getting stripped >> or truncated before execution. 
The same pipeline behaving normally >> doesn't do this: >> >> >> Original Inputs >> --------------- >> >> * in_bvals : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bval', >> '/var/lib/midas/data/lha1946/images/nii/original/3024.bval'] >> * in_bvecs : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bvec', >> '/var/lib/midas/data/lha1946/images/nii/original/3024.bvec'] >> * in_dwis : >> ['/var/lib/midas/data/lha1946/images/nii/original/3033.nii.gz', >> '/var/lib/midas/data/lha1946/images/nii/original/3024.nii.gz'] >> >> >> Execution Inputs >> ---------------- >> >> * in_bvals : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bval', >> '/var/lib/midas/data/lha1946/images/nii/original/3024.bval'] >> * in_bvecs : ['/var/lib/midas/data/lha1946/images/nii/original/3033.bvec', >> '/var/lib/midas/data/lha1946/images/nii/original/3024.bvec'] >> * in_dwis : >> ['/var/lib/midas/data/lha1946/images/nii/original/3033.nii.gz', >> '/var/lib/midas/data/lha1946/images/nii/original/3024.nii.gz'] >> >> >> I don't know what can cause the execution inputs to differ from the >> initial inputs. The node and its input connections are just defined >> like this: >> >> merge_initial_dwis = pe.Node(interface=niu.Function( >> input_names=['in_dwis', 'in_bvals', 'in_bvecs'], >> output_names=['out_dwis', 'out_bvals', 'out_bvecs', 'out_orig_file', >> 'out_orig_ind'], >> function=merge_dwi_function), name='merge_initial_dwis') >> >> merge_initial_dwis.inputs.in_dwis = [os.path.abspath(f) for f in >> args.dwis] >> merge_initial_dwis.inputs.in_bvals = [os.path.abspath(f) for f in >> args.bvals] >> merge_initial_dwis.inputs.in_bvecs = [os.path.abspath(f) for f in >> args.bvecs] >> >> Any suggestions how I could try to find the cause? >> >> >> -- >> imalone >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -- imalone http://ibmalone.blogspot.co.uk From spiropan at gmail.com Sun Dec 17 02:34:33 2017 From: spiropan at gmail.com (Spiro Pantazatos) Date: Sun, 17 Dec 2017 02:34:33 -0500 Subject: [Neuroimaging] Fwd: Position in computational neuroimaging/neuroinformatics at New York State Psychiatric Institute/Columbia University, Dept of Psychiatry (New York, NY) In-Reply-To: References: Message-ID: FYI *Description *The Molecular Imaging and Neuropathology Division at New York State Psychiatric Institute is seeking a data scientist under the supervision and mentorship of Dr. Spiro Pantazatos. The data scientist will have the opportunity to work among diverse investigators and data analysts across multiple disciplines within the Division as well as at Columbia Medical Center and Columbia University (e.g., in neuroscience, psychology, psychiatry, statistical genetics, bioinformatics), gain training from and train others (faculty, postdoctoral fellows, graduate students) in conventional and new analysis techniques, attend training and educational workshops at the University and elsewhere in the New York City, and co-author publications. *Duties and Responsibilities *The position will contribute toward the development and implementation of computational approaches in the field of neuroscience, including Open Neuroscience Initiatives and mega-analyses of imaging and genomics datasets (i.e. 
resting-state and structural imaging data, Allen Brain Atlas etc.) with a
goal of characterizing genomic and behavioral factors related to brain
structure and functional network architecture.

*Minimum Qualifications*
Master's degree in computer science, neuroscience, psychology, data
science, biomedical informatics, statistics/biostatistics or related
fields and/or a record of substantive experience in neuroimaging
methodology (e.g. fMRI task design and image analysis) or computational
genomics (e.g. transcriptomics, statistical genetics), or both. Good
communication (verbal and written), analytical, and organizational skills
are required. Applicants should be proficient (or have prior exposure to
and the ability to become proficient) in several or more of the following:
Python, NiPy, bash scripting, Docker, Unix, Matlab, R, Javascript, Flask,
MySQL, XNAT, pipeline development, big data processing, cloud and cluster
computing. Applicants should send a CV, a brief statement describing the
applicant's qualifications for the position and relevant experience, and
have at least 3 references.

*Preferred Qualifications*
Prior experience or an interest in web and/or GUI development for
neuroscience applications is a plus.

For more information and to apply, see:
https://nyspi.applicantpro.com/jobs/686328-136864.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jeff.macinnes at gmail.com Wed Dec 20 04:43:57 2017
From: jeff.macinnes at gmail.com (Jeff MacInnes)
Date: Wed, 20 Dec 2017 01:43:57 -0800
Subject: [Neuroimaging] trouble building an affine from DICOM tags
Message-ID:

Hi all,

I'm trying to construct an affine matrix using info from the DICOM tags in
a set of anatomical slices, but not having much luck.

I'm following along with the documentation at
http://nipy.org/nibabel/dicom/dicom_orientation.html#dicom-affines-again
(or at least think that I am), but the affine matrix I produce doesn't
seem to reflect how the slices were actually collected.

Specifically, I have 52 slices, collected sagittally, and by plotting
individual slices, it looks like the row-index axis increases toward
Inferior, the column-index axis increases toward Posterior, and the
slice-axis increases toward the patient Left. However, when I build the
affine, and submit it to nibabel.aff2axcodes() I get ('I', 'A', 'R'). I
keep going in circles trying to determine why that's off.

I put all of my steps in a notebook at
https://github.com/jeffmacinnes/affineTests/blob/master/DICOM_affineTest.ipynb
-- any insights would be greatly appreciated.

More generally, I'm wondering how to use the affine matrix to reorder the
voxel data before I save it as a Nifti file. Ideally, I'd like to be able
to reorder and reorient the data so that the axes correspond to RAS
(instead of IPL like the current data, but really I'm curious to know how
to do this in a flexible manner for data of any orientation). However, if
the voxels are reordered to RAS, does it require setting a new affine
before saving?

Many thanks,
Jeff
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
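For reference, a minimal sketch of the usual nibabel pattern for reordering
voxels to RAS while keeping the affine consistent (the file names are
placeholders, and this assumes the affine built from the DICOM tags is
itself correct):

import nibabel as nib
from nibabel.orientations import (io_orientation, apply_orientation,
                                  inv_ornt_aff, aff2axcodes)

img = nib.load('anat_from_dicom.nii.gz')   # or nib.Nifti1Image(vol, affine)
print(aff2axcodes(img.affine))             # current axis codes, e.g. ('I', 'P', 'L')

ornt = io_orientation(img.affine)          # how the array axes map onto RAS
data_ras = apply_orientation(img.get_fdata(), ornt)
affine_ras = img.affine.dot(inv_ornt_aff(ornt, img.shape))  # matching new affine
nib.save(nib.Nifti1Image(data_ras, affine_ras), 'anat_RAS.nii.gz')

# nib.as_closest_canonical(img) performs the same reorientation in one call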
From krzysztof.gorgolewski at gmail.com Wed Dec 20 17:57:17 2017
From: krzysztof.gorgolewski at gmail.com (Chris Gorgolewski)
Date: Wed, 20 Dec 2017 14:57:17 -0800
Subject: [Neuroimaging] Postdoctoral position available in the Poldrack Lab
Message-ID:

Apologies for cross-posting.

The Poldracklab at Stanford is looking to hire a postdoc with expertise in
fMRI data analysis, starting as soon as possible. Our lab is a dynamic and
collaborative group that includes basic cognitive neuroscience researchers
focused on the study of decision making and executive function alongside
methodological researchers focused on reproducible data analysis and data
sharing. Stanford University is located in sunny northern California, deep
in the heart of Silicon Valley and a short train ride to San Francisco.

The postdoc will be able to contribute to a number of ongoing research
projects in our lab and collaborating labs by analyzing fMRI data from
those studies. In addition, the postdoc will have the opportunity to
develop their own research program within the domain of scientific or
methodological problems currently being investigated within the lab
(https://poldracklab.stanford.edu/projects).

The successful candidate should:

- Have a PhD in a relevant field of study and demonstrated experience with
the processing and statistical analysis of fMRI data.
- Have strong and demonstrated computer programming skills, including
Python. Experience with Nipype is strongly preferred. Candidates will be
asked to submit a link to their open source code repository for review.
- Have strong skills in statistics and machine learning.
- Have solid experience with computational data analysis, including strong
UNIX/Linux skills and experience using high performance computing
clusters. Experience with cloud computing platforms is also preferred.
- Be dedicated to open source software development and data sharing.

The Poldracklab is strongly committed to diversity in science, and we
particularly welcome applications from members of traditionally
underrepresented groups.

Eligible candidates will be asked to complete a data analysis problem using
a specified dataset and present a report of their results as well as share
their analysis code, in order to demonstrate their expertise with data
analysis.

Interested candidates should email the following to russpold at stanford.edu:

- A cover letter describing your background and research interests
- CV (including a link to a code repository, e.g. Github, Bitbucket)
- Names and email addresses of two references
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hendr522 at umn.edu Wed Dec 20 19:03:58 2017
From: hendr522 at umn.edu (Timothy Hendrickson)
Date: Wed, 20 Dec 2017 18:03:58 -0600
Subject: [Neuroimaging] Fwd: Reduce Control for Data Input Interfaces
In-Reply-To:
References:
Message-ID:

Timothy Hendrickson
Department of Psychiatry
University of Minnesota
Bioinformatics and Computational Biology M.S. Candidate
Office: 612-624-6441
Mobile: 507-259-3434 (texts okay)

---------- Forwarded message ----------
From: Timothy Hendrickson
Date: Wed, Dec 20, 2017 at 5:17 PM
Subject: Reduce Control for Data Input Interfaces
To: nipy-devel at neuroimaging.scipy.org

Hello,

I have some data that I am attempting to run through the HCP pipeline.
Since my data does not conform to the BIDS data directory format, I have
to run a modified version of the hcpre python package. The inputs required
by the HCP pipeline shell scripts are individual strings such as the base
directory and subject number. Obviously I have had a hell of a time trying
to find a data input interface which allows this functionality, since
every one that I have looked at (SelectFiles, DataGrabber) automatically
builds the entire path to the subject's directory. I suppose my inquiries
are two-fold.
1) Is there an elegant way to split up output from the input interfaces so I can have just the base directory or just the subject number? 2) Is there a way in which I can verify that a particular subject (or file) exists within a base directory, but drop the base directory string leaving only the file name or subject number? Thanks! -Tim Timothy Hendrickson Department of Psychiatry University of Minnesota Bioinformatics and Computational Biology M.S. Candidate Office: 612-624-6441 <(612)%20624-6441> Mobile: 507-259-3434 <(507)%20259-3434> (texts okay) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryuvaraj at ntu.edu.sg Thu Dec 21 02:22:24 2017 From: ryuvaraj at ntu.edu.sg (Yuvaraj Rajamanickam (Dr)) Date: Thu, 21 Dec 2017 07:22:24 +0000 Subject: [Neuroimaging] Call for papers & tutorials: PRNI (Pattern Recognition in Neuroimaging) 2018 Message-ID: <3E9B0165C01BA047A1AFFBA5B9161C4121C48B06@EXCHMBOX34.staff.main.ntu.edu.sg> ******* please accept our apologies for cross-posting ******* -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- FIRST CALL FOR PAPERS AND TUTORIALS PRNI 2018: 8th International Workshop on Pattern Recognition in Neuroimaging to be held 12-14 June 2018 at the National University of Singapore, Singapore www.prni.org - @PRNIworkshop - www.facebook.com/PRNIworkshop/ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The 8th International Workshop on Pattern Recognition in Neuroimaging (PRNI) will be held at the Centre for Life Sciences Auditorium, National University of Singapore, Singapore on June 12-14, 2018.Pattern recognition techniques are an important tool for neuroimaging data analysis. These techniques are helping to elucidate normal and abnormal brain function, cognition and perception, anatomical and functional brain architecture, biomarkers for diagnosis and personalized medicine, and as a scientific tool to decipher neural mechanisms underlying human cognition. The International Workshop on Pattern Recognition in Neuroimaging (PRNI) aims to: (1) foster dialogue between developers and users of cutting-edge analysis techniques in order to find matches between analysis techniques and neuroscientific questions; (2) showcase recent methodological advances in pattern recognition algorithms for neuroimaging analysis; and (3) identify challenging neuroscientific questions in need of new analysis approaches. Authors should prepare full papers with a maximum length of 4 pages (two column IEEE style) for double-blind review. The manuscript submission deadline is 04 April 2018, 11:59 pm SGT. Accepted manuscripts will be assigned either to an oral or poster sessions; all accepted manuscripts will be included in the workshop proceedings. Similarly to previous years, in addition to full length papers PRNI will also accept short abstracts (500 words excluding the title, abstract, tables, figure and data legends, and references) for poster presentation. Finally, this year PRNI has an open call for tutorial proposals. A tutorial can take a form of 2h, 4h or whole day event aimed at demonstrating a computational technique, software tool, or specific concept. Tutorial proposals featuring hands on demonstrations and promoting diversity (e.g. 
gender, background, institution) will be preferred. PRNI will cover conference registration fees for up to two tutors per accepted program. The submission deadline is also 04 April 2018, 11:59 pm SGT. Please see www.prni.org and follow @PRNIworkshop and www.facebook.com/PRNIworkshop/ for news and details. ________________________________ CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents. Towards a sustainable earth: Print only when necessary. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PRNI 2018 1st Call for Papers and Tutorials.pdf Type: application/pdf Size: 231707 bytes Desc: PRNI 2018 1st Call for Papers and Tutorials.pdf URL: From luke.bloy at gmail.com Fri Dec 22 18:51:58 2017 From: luke.bloy at gmail.com (Luke Bloy) Date: Fri, 22 Dec 2017 23:51:58 +0000 Subject: [Neuroimaging] Basic Nipype question Message-ID: Hi, I'm just getting starting with nipype and was wondering if individual nodes checked if they had already been successful run. Basically will a node reexecute if its outputs already exist? Thanks, Luke -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.samoylovich.pellman at gmail.com Mon Dec 25 12:57:38 2017 From: john.samoylovich.pellman at gmail.com (John Pellman) Date: Mon, 25 Dec 2017 12:57:38 -0500 Subject: [Neuroimaging] Basic Nipype question In-Reply-To: References: Message-ID: In general yes nodes do check if they have been successfully run, but the accuracy of the checks depend upon how you've configured your nipype installation / the nipype config you're using. There are two methods that a node uses to determine if it has already been run- hashing (slower but more accurate) and timestamp (faster but much less accurate; easily confused if your pipeline's sink directory is copied from one disk volume to another without preserving timestamps). See: http://nipype.readthedocs.io/en/latest/users/config_file.html#execution (under 'hash_method') http://nipype.readthedocs.io/en/latest/users/debug.html (referenced in point 7) This isn't my line of work anymore so others in this group might be able to answer your questions more effectively in the future (I just saw this in my inbox and recognized it as a question I could answer based off previous experience). You might also want to check out https://neurostars.org/ as a venue for getting nipype-related questions answered. On Fri, Dec 22, 2017 at 6:51 PM, Luke Bloy wrote: > Hi, > > I'm just getting starting with nipype and was wondering if individual > nodes checked if they had already been successful run. > > Basically will a node reexecute if its outputs already exist? > > Thanks, > Luke > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pellman.john at gmail.com Tue Dec 26 23:35:37 2017 From: pellman.john at gmail.com (John Pellman) Date: Tue, 26 Dec 2017 23:35:37 -0500 Subject: [Neuroimaging] Basic Nipype question In-Reply-To: References: Message-ID: In general yes nodes do check if they have been successfully run, but the accuracy of the checks depend upon how you've configured your nipype installation / the nipype config you're using. There are two methods that a node uses to determine if it has already been run- hashing (slower but more accurate) and timestamp (faster but much less accurate; easily confused if your pipeline's sink directory is copied from one disk volume to another without preserving timestamps). See: http://nipype.readthedocs.io/en/latest/users/config_file.html#execution (under 'hash_method') http://nipype.readthedocs.io/en/latest/users/debug.html (referenced in point 7) This isn't my line of work anymore so others in this group might be able to answer your questions more effectively in the future (I just saw this in my inbox and recognized it as a question I could answer based off previous experience). You might also want to check out https://neurostars.org/ as a venue for getting nipype-related questions answered. On Dec 22, 2017 6:52 PM, "Luke Bloy" wrote: > Hi, > > I'm just getting starting with nipype and was wondering if individual > nodes checked if they had already been successful run. > > Basically will a node reexecute if its outputs already exist? > > Thanks, > Luke > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pellman.john at gmail.com Tue Dec 26 23:37:10 2017 From: pellman.john at gmail.com (John Pellman) Date: Tue, 26 Dec 2017 23:37:10 -0500 Subject: [Neuroimaging] Basic Nipype question In-Reply-To: References: Message-ID: Apologies for the duplicate message- I didn't notice that my initial e-mail for through and decided to resend with the account that belongs to this list. On Dec 26, 2017 11:35 PM, "John Pellman" wrote: > > In general yes nodes do check if they have been successfully run, but the > accuracy of the checks depend upon how you've configured your nipype > installation / the nipype config you're using. There are two methods that > a node uses to determine if it has already been run- hashing (slower but > more accurate) and timestamp (faster but much less accurate; easily > confused if your pipeline's sink directory is copied from one disk volume > to another without preserving timestamps). > > See: > http://nipype.readthedocs.io/en/latest/users/config_file.html#execution (under > 'hash_method') > http://nipype.readthedocs.io/en/latest/users/debug.html (referenced in > point 7) > > This isn't my line of work anymore so others in this group might be able > to answer your questions more effectively in the future (I just saw this in > my inbox and recognized it as a question I could answer based off previous > experience). You might also want to check out https://neurostars.org/ as > a venue for getting nipype-related questions answered. > > On Dec 22, 2017 6:52 PM, "Luke Bloy" wrote: > >> Hi, >> >> I'm just getting starting with nipype and was wondering if individual >> nodes checked if they had already been successful run. >> >> Basically will a node reexecute if its outputs already exist? 
>> >> Thanks, >> Luke >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajevia at gmail.com Thu Dec 28 23:11:21 2017 From: ajevia at gmail.com (Arnold Evia) Date: Thu, 28 Dec 2017 22:11:21 -0600 Subject: [Neuroimaging] [Dipy] Warnings when running free water model Message-ID: Hello, I am applying the free water dti model to my data, and receive the following warnings: /Users/arnoldevia/anaconda3/lib/python3.6/site-packages/scipy/optimize/minpack.py:427: RuntimeWarning: Number of calls to function has reached maxfev = 1800. warnings.warn(errors[info][0], RuntimeWarning) /Users/arnoldevia/personal_programs/dipy/dipy/reconst/fwdti.py:454: RuntimeWarning: overflow encountered in exp y = (1-f) * np.exp(np.dot(design_matrix, tensor[:7])) + \ The code I am using is essentially the same as the code used in the free water model documentation. What do these warnings mean, and what should I do to address them? Please let me know if additional information is needed. Thanks in advance! -Arnold Evia -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Fri Dec 29 23:11:13 2017 From: arokem at gmail.com (Ariel Rokem) Date: Fri, 29 Dec 2017 20:11:13 -0800 Subject: [Neuroimaging] [Dipy] Warnings when running free water model In-Reply-To: References: Message-ID: Hi Arnold, Thanks for your email. On Thu, Dec 28, 2017 at 8:11 PM, Arnold Evia wrote: > Hello, > > I am applying the free water dti model to my data, and receive the > following warnings: > > /Users/arnoldevia/anaconda3/lib/python3.6/site-packages/scipy/optimize/minpack.py:427: > RuntimeWarning: Number of calls to function has reached maxfev = 1800. > warnings.warn(errors[info][0], RuntimeWarning) > This warning means that when fitting the model to the data, the software encountered a voxel where the data looks very different from what is expected for this model, and the optimization required to find the model parameters terminated before it could find good values for the parameters, because it is set to stop after trying that many times. This is not usually something to worry about -- unless it happens with every single one of your voxels, and will arise in boundaries of the brain with other tissue (e.g., skull or dura), where the signal looks nothing like what you would expect for the model. I would look for extreme outliers in the model parameter values and examine where they occur (like you might look for negative eigenvalues, if you were to fit the DTI model). /Users/arnoldevia/personal_programs/dipy/dipy/reconst/fwdti.py:454: > RuntimeWarning: overflow encountered in exp > y = (1-f) * np.exp(np.dot(design_matrix, tensor[:7])) + \ > > I believe that this one occurs under similar conditions -- when the model parameters become very extreme, and the exponent goes to very high values, far beyond what you might expect in the physiological case. Again, I would look for extreme model parameter values in the results, and try to see whether they occur at tissue boundaries, or in non-brain/non-CSF tissue. Hope that helps! Ariel The code I am using is essentially the same as the code used in the free > water model documentation. 
> What do these warnings mean, and what should I do to address them? > Please let me know if additional information is needed. > > Thanks in advance! > -Arnold Evia > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
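For reference, a minimal sketch of the kind of outlier check suggested
above, using DIPY's fwdti module (the file names and thresholds are
placeholders, not recommendations, and a multi-shell acquisition is
assumed):

import numpy as np
import nibabel as nib
import dipy.reconst.fwdti as fwdti
from dipy.core.gradients import gradient_table
from dipy.io import read_bvals_bvecs

data = nib.load('dwi.nii.gz').get_fdata()
bvals, bvecs = read_bvals_bvecs('dwi.bval', 'dwi.bvec')
gtab = gradient_table(bvals, bvecs)

fwdtimodel = fwdti.FreeWaterTensorModel(gtab)
fwdtifit = fwdtimodel.fit(data)   # in practice pass a brain mask to fit()

f = fwdtifit.f     # free-water fraction, expected roughly in [0, 1]
md = fwdtifit.md   # mean diffusivity of the tissue compartment
suspect = (f > 0.95) | (md > 3e-3) | np.isnan(md)
print(suspect.sum(), 'voxels with extreme parameter values')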