[Neuroimaging] nlmeans in HCP data

Ariel Rokem arokem at gmail.com
Sat Feb 6 13:57:35 EST 2016


Thanks for your answer:

On Sat, Feb 6, 2016 at 3:03 AM, Samuel St-Jean <stjeansam at gmail.com> wrote:

> For starters, if you have motion between the b0 volumes, or in a few of
> them, you might have problems and induce a larger variance because of
> that, but I guess if it works, why not. As for a single-voxel estimate, it
> might be unstable due to the small number of samples, but taking a moving
> neighborhood could help. Actually, they use it for estimating motion and
> pulsation artefacts, if I recall correctly [1].
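>
> A sketch of that kind of neighborhood pooling (assuming b0s is the 4D
> stack of b=0 volumes; the names and the 3x3x3 window are just
> illustrative):
>
>     import numpy as np
>     from scipy.ndimage import uniform_filter
>
>     # Average the per-voxel variance over a small moving neighborhood
>     # to stabilize single-voxel estimates:
>     var_local = uniform_filter(np.var(b0s, axis=-1, ddof=1), size=3)
>     sigma_local = np.sqrt(var_local)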
>
>
I think that one practical thing would be to create the 3D map of the b0
noise, including the possibility of correcting for the small number of
samples (see the Matlab code I referred to). I think that it would be up to the
user to determine whether this map is useful, to smooth it spatially, or to
take one number (e.g. the median) out of it, and whether to ignore certain
parts of this image that are particularly susceptible to the motion issues
(e.g. edges of the brain, interface between white matter and ventricles). I
can go ahead and make a PR with that, and we can continue the discussion
there, but it might take me a few days to get that up.
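
To make that concrete, here is a minimal sketch of what I have in mind (not
a port of the vistasoft code; b0s and mask are placeholder names, and the
bias correction is the standard c4(n) factor for the sample standard
deviation of normally distributed samples):

    import numpy as np
    from scipy.special import gammaln

    def b0_noise_map(b0s):
        # b0s: 4D array (x, y, z, number of b=0 volumes).
        # Returns a 3D map of the bias-corrected standard deviation.
        n = b0s.shape[-1]
        sigma = np.std(b0s, axis=-1, ddof=1)
        # E[s] = c4(n) * sigma for normal samples, so dividing by c4(n)
        # removes the small-sample bias:
        c4 = np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2.0) -
                                             gammaln((n - 1) / 2.0))
        return sigma / c4

    # Keep the per-voxel map, or collapse it to one number, e.g.:
    # sigma_map = b0_noise_map(b0s)
    # sigma_global = np.median(sigma_map[mask])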


> As for evaluating, whether the signal is predicted or not is one of the
> aspects you can look at, in my opinion, but with all the local model
> fitting and tractography happening afterward, looking at a squared-error
> value is not very informative, especially if it is averaged over the whole
> volume. Since a large error in a crossing voxel could be much worse than
> small errors in single-fiber voxels, it depends on what you want to get at
> the end of the day. It can be useful for judging an optimization scheme,
> but beyond that I don't feel like it reflects properties of the end goal.
>
> [1] https://www.ncbi.nlm.nih.gov/pubmed/21469191
>
> Le 2016-02-06 03:44, Ariel Rokem a écrit :
>
> Thanks for the answer. I actually hadn't read the GSoC thread before
> sending this question - just read that too.
>
> This might be a naive question: what do you think about estimating the
> noise in each voxel from the variance across the b0 images?
>
> When we noticed that the GE scanner at Stanford was masking out the
> background, we switched the implementation of RESTORE in vistasoft to use
> the variance between multiple b0 images as an estimate of the noise,
> including a correction for bias due to the small sample size:
>
>
> https://github.com/vistalab/vistasoft/blob/master/mrDiffusion/utils/dtiComputeImageNoise.m#L58
>
> In this case, we take a median to have one number for the entire volume,
> but we could also just keep the variance in each voxel. Do you see any
> obvious problems with that?
>
> From my point of view, it is rather straightforward to quantitatively
> evaluate whether a denoising method is improving your analysis. Either your
> model of the diffusion data fits the data better (in the cross-validation
> sense) following denoising, or it doesn't, in which case the method's
> probably no good.
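>
> Something along these lines, for example, using dipy's k-fold
> cross-validation with a tensor model (just a sketch; gtab, data,
> data_denoised and mask stand in for your gradient table, raw data,
> denoised data and brain mask):
>
>     import numpy as np
>     import dipy.reconst.dti as dti
>     from dipy.reconst.cross_validation import kfold_xval
>
>     model = dti.TensorModel(gtab)
>     # Predict each left-out fold from a model fit to the rest:
>     pred_raw = kfold_xval(model, data, 2)
>     pred_den = kfold_xval(model, data_denoised, 2)
>
>     def xval_r2(signal, prediction, mask):
>         s, p = signal[mask].ravel(), prediction[mask].ravel()
>         return np.corrcoef(s, p)[0, 1] ** 2
>
>     # If denoising helps, prediction accuracy should go up:
>     print(xval_r2(data, pred_raw, mask))
>     print(xval_r2(data_denoised, pred_den, mask))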
>
>
> On Fri, Feb 5, 2016 at 8:13 AM, Samuel St-Jean <stjeansam at gmail.com>
> wrote:
>
>> To partly answer the question, you should pick N=1, as the HCP data use a
>> SENSE1 reconstruction and thus always give a Rician distribution [1].
>> As for using estimate_sigma, it tends to overblur things for higher
>> b-values / spatially varying noise (it has a hard time on our Philips 3T
>> data, for example: edges are overblurred and the center is untouched).
>>
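>> In dipy terms, that would be something like the following (a sketch; the
>> file name is a placeholder, and estimate_sigma returns one estimate per
>> volume):
>>
>>     import nibabel as nib
>>     from dipy.denoise.noise_estimate import estimate_sigma
>>     from dipy.denoise.nlmeans import nlmeans
>>
>>     data = nib.load('hcp_dwi.nii.gz').get_data()
>>     # N=1 because the SENSE1 reconstruction gives Rician-distributed
>>     # data:
>>     sigma = estimate_sigma(data, N=1)
>>     denoised = nlmeans(data, sigma=sigma, rician=True)
>>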
>> Regarding these shortcomings, I linked to some ideas for addressing some
>> of these caveats in the GSoC discussion thread, though.
>>
>> [1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657588/
>>
>> 2016-02-05 0:58 GMT+01:00 Ariel Rokem <arokem at gmail.com>:
>>
>>> Hi everyone,
>>>
>>> does anyone use the Dipy nlmeans with HCP diffusion data? Is that a good
>>> idea? What do you use to estimate the sigma input? If you use
>>> dipy.denoise.noise_estimate.estimate_sigma, how do you set the `N` keyword
>>> argument for these data? Since the data have gone through some heavy
>>> preprocessing, I am not sure whether assuming 32 (the number of coil
>>> channels in these machines, if I understand correctly) is reasonable.
>>>
>>> Thanks!
>>>
>>> Ariel
>>>