[Neuroimaging] nlmeans in HCP data

Samuel St-Jean stjeansam at gmail.com
Fri Feb 5 11:13:47 EST 2016


To partly answer the question: you should pick N=1, since the HCP data uses
a SENSE1 reconstruction and therefore always yields a Rician noise
distribution [1]. As for estimate_sigma, it tends to overblur data at higher
b-values or when the noise is spatially varying (it has a hard time on our
Philips 3T data, for example: edges are overblurred while the center is
left untouched).

Regarding these shortcomings, I linked to some ideas for addressing these
caveats in the GSoC discussion thread.
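To make the N=1 point concrete, here is a minimal NumPy sketch (an illustration, not Dipy's actual implementation) of why the noise model matters. In a single-coil/SENSE1 magnitude image the signal-free background is Rayleigh-distributed (the zero-signal case of Rician), so its expected magnitude is sigma * sqrt(pi / 2); a background-based estimator must divide by that factor to recover sigma, and assuming the wrong N (i.e. a noncentral chi distribution with more channels) gives a biased sigma and hence over- or under-smoothing in nlmeans:

```python
import numpy as np

# Simulate signal-free background voxels of a SENSE1 / single-coil
# magnitude image: the magnitude of complex Gaussian noise is
# Rayleigh-distributed (Rician with zero underlying signal).
rng = np.random.default_rng(42)
sigma_true = 5.0
n_voxels = 200_000
noise = (rng.normal(0, sigma_true, n_voxels)
         + 1j * rng.normal(0, sigma_true, n_voxels))
background = np.abs(noise)

# For a Rayleigh distribution, E[magnitude] = sigma * sqrt(pi / 2),
# so sigma is recovered by dividing the background mean by that factor.
sigma_est = background.mean() / np.sqrt(np.pi / 2)
print(sigma_est)  # close to sigma_true = 5.0
```

Note this only works when the background is truly signal-free and the noise is spatially stationary, which is exactly the assumption that breaks down on the accelerated/spatially-varying-noise data mentioned above.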

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657588/

2016-02-05 0:58 GMT+01:00 Ariel Rokem <arokem at gmail.com>:

> Hi everyone,
>
> does anyone use the Dipy nlmeans with HCP diffusion data? Is that a good
> idea? What do you use to estimate the sigma input? If you use
> dipy.denoise.noise_estimate.estimate_sigma, how do you set the `n` keyword
> argument for these data? Since the preprocessed data has gone through some
> heavy preprocessing, I am not sure whether assuming that 32 (the number of
> channels in these machines, if I understand correctly) is a good number is
> reasonable.
>
> Thanks!
>
> Ariel
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
>
>