[Neuroimaging] Analyzing the topology of ROIs and flood-filling in python (skimage?)

Jesus-Omar Ocegueda-Gonzalez jomaroceguedag at gmail.com
Thu Sep 3 04:39:24 CEST 2015


Actually, Ariel, nearest-neighbor interpolation is a very unstable
operation: if you interpolate at x or at x + epsilon you may get different
results even for a very small epsilon, and discarding a single voxel may
lead to the rejection of a large number of streamlines (I'm thinking about
the boundary of the ROI too, not only the "hole"). I think the selection
would be more precise if you warped the streamlines to the template and
selected them there (now I see that we need that extension to the
diffeomorphic map asap!).
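
To illustrate what I mean by unstable, here is a toy example with scipy
(nothing to do with the actual streamline selection, just the
interpolation itself):

    import numpy as np
    from scipy.ndimage import map_coordinates

    # toy 1D "ROI": voxels 0-4 are background, voxels 5-9 are inside
    roi = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=float)

    x_left = 4.5 - 1e-6    # just left of the boundary between voxels 4 and 5
    x_right = 4.5 + 1e-6   # just right of it

    # order=0 is nearest-neighbor interpolation
    v_left = map_coordinates(roi, [[x_left]], order=0)[0]
    v_right = map_coordinates(roi, [[x_right]], order=0)[0]
    print(v_left, v_right)  # 0.0 vs 1.0: a tiny shift flips the label

The same thing happens in 3D at the warped ROI boundary, which is why a
single displaced voxel can change which streamlines get selected.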

On Wed, Sep 2, 2015 at 9:19 PM, Jesus-Omar Ocegueda-Gonzalez <
jomaroceguedag at gmail.com> wrote:

> Thanks, Ariel, and don't worry, this is very related to the work I'm doing
> now, so this is actually very useful. I have almost reproduced your
> experiment; by any chance, can you share LOCC_ni, ROCC_ni and midsag_ni?
>
> On Wed, Sep 2, 2015 at 8:29 PM, Ariel Rokem <arokem at gmail.com> wrote:
>
>> Hi Omar,
>>
>> Excellent - thanks so much for taking a look! I know that you are very
>> busy these days, and so your attention on this is highly appreciated! I
>> will experiment more with this, using different input parameters, as
>> you suggested.
>>
>> If you also want to take a look, since #680 and #681 were merged into
>> dipy, you can now run:
>>
>>     import dipy.data as dpd
>>     MNI_T2 = dpd.read_mni_template()
>>
>> To get the template data.
>>
>> Thanks again,
>>
>> Ariel
>>
>> On Wed, Sep 2, 2015 at 6:16 PM, Jesus-Omar Ocegueda-Gonzalez <
>> jomaroceguedag at gmail.com> wrote:
>>
>>> Hello guys!
>>> I have been working on this issue for some days now (this is very
>>> interesting, Ariel! Thanks for sharing your findings). Satra is totally
>>> right that **in theory** the transformations should preserve the topology.
>>> Unfortunately, the transformations are only **approximately**
>>> diffeomorphic. I am quite sure this issue is present in the original
>>> version of ANTS too (dipy's implementation is the same algorithm),
>>> although the new version (antsRegistration) may have improvements that
>>> I'm not aware of.
>>>
>>> Having said that, you can make the transforms closer to diffeomorphic by
>>> reducing the `step_length` parameter (in millimeters) of
>>> `SymmetricDiffeomorphicRegistration`, which by default is 0.25 mm; you
>>> may try something around 0.15 mm. The objective is to avoid very
>>> "aggressive" iterations, so another way to achieve this is to increase
>>> the smoothing parameter of `CCMetric`, `sigma_diff`, which by default is
>>> 2.0; you may try something around 3.0 (I would first try reducing the
>>> step size, though).
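>>>
>>> For example, something along these lines (just a sketch; I'm assuming the
>>> static and moving images, their affines, and the pre-alignment matrix are
>>> already defined as in your notebook, so those names are placeholders):
>>>
>>>     from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
>>>     from dipy.align.metrics import CCMetric
>>>
>>>     # smoother update fields than the default sigma_diff=2.0
>>>     metric = CCMetric(3, sigma_diff=3.0)
>>>     # smaller steps than the default step_length=0.25 mm; level_iters is
>>>     # just an example schedule, keep whatever you used before
>>>     sdr = SymmetricDiffeomorphicRegistration(metric,
>>>                                              level_iters=[10, 10, 5],
>>>                                              step_length=0.15)
>>>     mapping = sdr.optimize(static, moving, static_affine, moving_affine,
>>>                            prealign)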
>>>
>>> I would like to try some other ideas; by any chance, can you share the
>>> data (MNI_T2)?
>>> Thank you very much!
>>> -Omar.
>>>
>>>
>>>
>>> On Wed, Sep 2, 2015 at 7:28 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>>
>>>> hi ariel,
>>>>
>>>> can you do nearest-neighbor interpolation in
>>>> `mapping.inverse_transform`? if your original ROI doesn't have holes and
>>>> you are doing a diffeomorphic mapping, your target shouldn't have holes
>>>> either. for a comparison you could run antsRegistration and
>>>> antsApplyTransforms with nearest-neighbor interpolation.
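>>>>
>>>> something like this might work (a sketch; i'm assuming `mapping` is the
>>>> DiffeomorphicMap returned by the registration with the template as the
>>>> static image, `roi_data` is the binary ROI on the template grid, and i
>>>> may be misremembering the exact method name -- i think it's
>>>> `transform_inverse`):
>>>>
>>>>     import numpy as np
>>>>
>>>>     # warp the template-space ROI into the subject's space with
>>>>     # nearest-neighbor interpolation, so the output stays binary
>>>>     warped = mapping.transform_inverse(roi_data.astype(np.float64),
>>>>                                        interpolation='nearest')
>>>>     warped_roi = warped > 0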
>>>>
>>>> cheers,
>>>>
>>>> satra
>>>>
>>>> On Wed, Sep 2, 2015 at 8:13 PM, Ariel Rokem <arokem at gmail.com> wrote:
>>>>
>>>>> Hi everyone,
>>>>>
>>>>> Jason and I are working on a port of his AFQ system (
>>>>> https://github.com/jyeatman/afq) into dipy. We've started sketching
>>>>> out some notebooks on how that might work here:
>>>>>
>>>>> https://github.com/arokem/AFQ-notebooks
>>>>>
>>>>> The main thrust of this is in this one:
>>>>>
>>>>>
>>>>> https://github.com/arokem/AFQ-notebooks/blob/master/AFQ-registration-callosum.ipynb
>>>>>
>>>>> The first step in this process is to take a standard ROI of some part
>>>>> of the brain (say, corpus callosum, which is where we are starting) and
>>>>> warp it into the subject's individual brain through a non-linear
>>>>> registration between the individual brain and the template brain on which
>>>>> the ROI was defined (in this case MNI152). Registration works phenomenally
>>>>> (see cell 17), but because this is a non-linear registration, we find
>>>>> ourselves with some holes in the ROI after the transformation (see cell 27
>>>>> for a sum-intensity projection). We are trying to use
>>>>> scipy.ndimage.binary_fill_holes to, well, fill these holes, but that
>>>>> doesn't seem to be working for us (cell 35 still has that hole...).
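>>>>>
>>>>> In case it helps to see it, this is essentially the call (a sketch with
>>>>> placeholder names; `warped_roi` stands for the boolean ROI after the
>>>>> warp, not the actual variable in the notebook):
>>>>>
>>>>>     import numpy as np
>>>>>     from scipy.ndimage import binary_fill_holes
>>>>>
>>>>>     # fill holes in the full 3D volume
>>>>>     filled = binary_fill_holes(warped_roi)
>>>>>
>>>>>     # a "hole" that is open along some axis (a tunnel) is connected to
>>>>>     # the background and will not be filled in 3D; filling
>>>>>     # slice-by-slice behaves differently and may be what we want:
>>>>>     filled_slices = np.zeros_like(warped_roi)
>>>>>     for i in range(warped_roi.shape[0]):
>>>>>         filled_slices[i] = binary_fill_holes(warped_roi[i])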
>>>>>
>>>>> Any ideas about what might be going wrong? Are we using fill_holes
>>>>> incorrectly? Any other tricks to do flood-filling in python? Should we be
>>>>> using skimage?
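>>>>>
>>>>> Would a morphological closing be a reasonable alternative here? It
>>>>> fills small gaps and cracks without requiring the hole to be fully
>>>>> enclosed. A sketch (again with `warped_roi` as a placeholder):
>>>>>
>>>>>     from scipy.ndimage import binary_closing, generate_binary_structure
>>>>>
>>>>>     # dilate then erode with a 3D structuring element; small holes and
>>>>>     # thin cracks get closed while the overall shape is preserved
>>>>>     struct = generate_binary_structure(3, 2)
>>>>>     closed = binary_closing(warped_roi, structure=struct, iterations=2)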
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Ariel
>>>>>
>>>>
>>>
>>>
>>
>
>



-- 
"Cada quien es dueño de lo que calla y esclavo de lo que dice"
-Proverbio chino.
"We all are owners of what we keep silent and slaves of what we say"
-Chinese proverb.

http://www.cimat.mx/~omar