Well, I guess that for a slight performance improvement you could write your own streamlined histogrammer.
But to better grasp your situation, it would help to know how the counts and bounds are used later on. I'm just wondering whether this kind of massive histogramming could be avoided entirely.
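For what it's worth, one way such a streamlined histogrammer could look, assuming fixed uniform bins over a known value range (the function name `fast_hist` and its parameters are my own, not anything from NumPy): map each value to a bin index directly and count with `np.bincount`, skipping the generality of `np.histogram`.

```python
import numpy as np

def fast_hist(values, nbins, lo, hi):
    """Fixed-bin histogram: nbins uniform bins spanning [lo, hi].

    Values outside the range are clipped into the edge bins, so the
    caller is expected to pass data that already lies in [lo, hi].
    """
    # map each value to its bin index, then count occurrences
    idx = ((values - lo) * (nbins / (hi - lo))).astype(np.intp)
    np.clip(idx, 0, nbins - 1, out=idx)
    return np.bincount(idx, minlength=nbins)

# quick check against np.histogram on integer-valued data
rng = np.random.default_rng(1)
data = rng.integers(0, 10, 10_000).astype(float)
counts = fast_hist(data, 10, 0.0, 10.0)
reference = np.histogram(data, bins=10, range=(0.0, 10.0))[0]
print(np.array_equal(counts, reference))
```

The saving comes from avoiding the per-call bin-edge computation and bin search; whether it's worth it depends on how many rows you histogram.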
Indeed. Here's what I do. My images come from a CCD, so the zero level in the image is not the true zero level but the true zero plus the background noise of each pixel. By computing the histogram, I plan to detect the most common value per row. Once I have the most common value, I can derive the interval where most of the values lie (the index of the largest occurrence is easily obtained by sorting the counts, and I take the slice [index_max_count, index_max_count+1] of the second array returned by the histogram, i.e. the bin edges). Then I take the mean of the values in that interval and assume it is the bias level for the row. I apply the same procedure to the columns as a sanity check. And I know this procedure will not work if a row/column contains a lot of signal and very little bias. I'll fix that afterwards ;-)

Éric.
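In case it clarifies the discussion, here is a minimal sketch of that per-row procedure (the function name, the bin count, and the synthetic test frame are my own assumptions; I use `np.argmax` on the counts rather than sorting, which gives the same index):

```python
import numpy as np

def estimate_bias_per_row(image, nbins=100):
    """For each row: histogram the pixels, find the most populated bin,
    and return the mean of the values falling in that bin's interval
    as the estimated bias level of the row."""
    biases = np.empty(image.shape[0])
    for i, row in enumerate(image):
        counts, edges = np.histogram(row, bins=nbins)
        k = np.argmax(counts)             # index of the most common bin
        lo, hi = edges[k], edges[k + 1]   # interval holding most values
        in_bin = row[(row >= lo) & (row <= hi)]
        biases[i] = in_bin.mean()         # assumed bias for this row
    return biases

# synthetic CCD-like frame: constant bias of 100 plus Gaussian read noise
rng = np.random.default_rng(0)
frame = 100.0 + rng.normal(0.0, 2.0, size=(4, 1000))
print(estimate_bias_per_row(frame))
```

The column version is the same call on `image.T`. As noted above, a row dominated by signal would pull the mode bin away from the bias, so this sketch shares that limitation.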
Regards, eat
Un clavier azerty en vaut deux (an AZERTY keyboard is worth two)
----------------------------------------------------------
Éric Depagne
eric@depagne.org