Hi Adam, more responses inline below. =)
On Thu, Nov 21, 2013 at 5:04 AM, Adam Hughes wrote:
I tried this, but to share a link it asks for the emails of the recipients. Have you used Dropbox to host a publicly accessible link? If so, I will certainly start doing this, thanks.
The web interface is a bit wonky that way, but it's certainly possible; the option is called something like "Copy share link". You then get a URL with a random token embedded in it, which anyone with the link can open. On OS X, you can just right-click > Share Dropbox Link to have the link copied to your clipboard. That's how I got this: https://www.dropbox.com/s/ic04w1d0x98gnop/suspension%20bridge.jpg =) (You can add "?dl=1" to the end to go straight to the file.)

Yes, that is the goal. We had done a similar process in ImageJ, but did the thresholding manually. I will read into adaptive thresholding a bit more. We had hoped that some of these corrections, such as histogram equalization, would make the automatic threshold more likely to give correct results.
Adaptive thresholding is automatic, but it adjusts the threshold individually for each pixel based on the surrounding pixels. This should help with images like your warped_f2_b1, where the background is not uniform and no single threshold may work for your entire image.

Hmm, I see. I will still try it out, but thanks for the heads-up. I'll feel better now if it doesn't work well.
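In case it helps, here's a minimal sketch of the idea behind adaptive thresholding, using only NumPy and SciPy (scikit-image has a more polished implementation); the block size and offset are just illustrative values you'd tune for your own images:

```python
import numpy as np
from scipy import ndimage as ndi

def adaptive_threshold(image, block_size=15, offset=0.1):
    """Mean-based adaptive threshold: each pixel is compared to the
    mean of its (block_size x block_size) neighborhood plus an offset,
    so a slowly varying background does not swamp small bright objects."""
    local_mean = ndi.uniform_filter(image.astype(float), size=block_size)
    return image > local_mean + offset

# Synthetic test image: bright particles on a background that ramps
# from dark to light, so no single global threshold would work.
rows, cols = 100, 100
image = np.linspace(0.0, 0.6, cols)[np.newaxis, :] * np.ones((rows, 1))
image[20:25, 10:15] += 0.3   # particle in the dark region
image[70:75, 85:90] += 0.3   # particle in the bright region
binary = adaptive_threshold(image)
```

Both particles end up above their local threshold even though the particle in the dark region is dimmer, in absolute terms, than the bare background on the bright side.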
Great strategy. =D

We do have some a priori knowledge, actually. What I've been doing already is putting a lower limit on particle size, with anything under it being noise. After doing particle counts and binning the data, we fit it with a Gaussian, and optionally scale the data so that the Gaussian is centered around the mean particle diameter (which we believe we know to be about 3 nm based on TEM imaging and indirect spectroscopic techniques). Based on the size distribution, we try to further bin the data into small (dimers/trimers) and large aggregates. For all the particles that are large enough to be considered an aggregate, we *assume* that they fill a half-sphere volume, and then we infer the true particle count due to these aggregates. It's pretty ad hoc, but we certainly apply some knowledge of the expected particle size distributions. I realize watershedding won't split up huge clumps, but maybe it could assist with the dimers and trimers? In any case, even if it doesn't significantly enhance our results, it would still be helpful to explore that option and I'll try it out.
That's the strategy I would suggest, but my point is that in some images, such as your last one, you have more of a carpet of particles, and no single particle will separate, so you will need knowledge from different images. If you have that, no problem! =)

If you want to publish the result of your explorations as an IPython notebook, we won't stop you. =D

Juan.