Hello. I'm developing a video extensometer based on identifying the center of mass of a circular white mark on a black rubber specimen. In order to deal with the data in real time I have to be fast (over 100 fps). So I first identify the Zones Of Interest (ZOI) using this example: http://scikit-image.org/docs/dev/auto_examples/plot_label.html Then I compute the center of mass of each ZOI.

As I have a fast camera, the ZOI doesn't change much between two successive images. So if I extend the bounding box of my current ZOI, I can be pretty confident that the circular mark in the next picture will still be inside the extended ZOI, and I then recompute an updated extended ZOI for the next image. So this is the big picture.

You will find below the function I use to do it:

```
import numpy as np
from skimage.measure import regionprops

def barycenter(image_, minx_, miny_, maxx_, maxy_, thresh_, border_):
    bw_ = image_[minx_:maxx_+1, miny_:maxy_+1] > thresh_
    [Y, X] = np.meshgrid(range(miny_, maxy_+1), range(minx_, maxx_+1))
    region = regionprops(bw_)
    minx, miny, maxx, maxy = region[0].bbox
    Px_ = (X*bw_).sum().astype(float) / bw_.sum()
    Py_ = (Y*bw_).sum().astype(float) / bw_.sum()
    minx_ = X[minx, miny] - border_
    miny_ = Y[minx, miny] - border_
    maxx_ = X[maxx, maxy] + border_
    maxy_ = Y[maxx, maxy] + border_
    return Px_, Py_, minx_, miny_, maxx_, maxy_
```

As you can see I don't use region[0].centroid; I compute the moments myself. If I time this function on a 141x108 ZOI I get 504 µs.

If I time this function:

```
def barycenter2(image_, minx_, miny_, maxx_, maxy_, thresh_, border_):
    bw_ = image_[minx_:maxx_+1, miny_:maxy_+1] > thresh_
    [Y, X] = np.meshgrid(range(miny_, maxy_+1), range(minx_, maxx_+1))
    region = regionprops(bw_)
    Px_, Py_ = region[0].centroid
    Px_ += minx_
    Py_ += miny_
    minx, miny, maxx, maxy = region[0].bbox
    minx_ = X[minx, miny] - border_
    miny_ = Y[minx, miny] - border_
    maxx_ = X[maxx, maxy] + border_
    maxy_ = Y[maxx, maxy] + border_
    return Px_, Py_, minx_, miny_, maxx_, maxy_
```

I get 10 ms per loop!

What is really strange is that if I time `%timeit region[0].centroid` I get 58.6 ns per loop!

So I don't really understand why this time explodes when I use it in a function. If someone has some insight it would be very helpful. Even if I can use my first function, it's a pity to have to fall back on less optimized code.

Best regards.
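To make the frame-to-frame idea concrete, here is a minimal sketch of the per-frame loop described above, driving the `barycenter` function from the message. `grab_frame` and `track_mark` are hypothetical names, not code from the thread, and the clamping step is only there to keep the sketch self-contained:

```
def track_mark(grab_frame, minx_, miny_, maxx_, maxy_, thresh_, border_, n_frames):
    """Follow one circular mark by re-centering its extended ZOI every frame."""
    centers = []
    for _ in range(n_frames):
        frame = grab_frame()  # hypothetical camera call returning a 2D grayscale array
        Px_, Py_, minx_, miny_, maxx_, maxy_ = barycenter(
            frame, minx_, miny_, maxx_, maxy_, thresh_, border_)
        # Clamp the extended ZOI so the next slice stays inside the image.
        minx_, miny_ = max(minx_, 0), max(miny_, 0)
        maxx_ = min(maxx_, frame.shape[0] - 1)
        maxy_ = min(maxy_, frame.shape[1] - 1)
        centers.append((Px_, Py_))
    return centers
```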
Hi Jeff,

Firstly, what's with all the trailing underscores? Makes my brain hurt. =)

Second, this is *somewhat* of a known issue. See: https://github.com/scikit-image/scikit-image/issues/1092 (including the notebook linked from that issue). As you can see, PR 1096 <https://github.com/scikit-image/scikit-image/pull/1096> made some improvements, but I suspect not enough to solve your problem. Are you on master or on 0.10?

Additionally, regionprops works through a "cached-property" pattern, which means that each value is computed once and then stored for later retrieval. So your second region[0] call is probably hitting the cached value, hence the massive speedup!

As to the specific problem of why your calculation is so much faster, my guess right now is that it's because of Python function call overhead: while you are computing everything directly, have a look at the regionprops code <https://github.com/scikit-image/scikit-image/blob/master/skimage/measure/_regionprops.py>: first, you have to go through the cached-property pattern (1 call), check whether the cache is active (2 calls), check whether it's been computed before (3 calls), decide to compute it (4 calls), compute the bbox (another trip through cached-property), then compute the "local" centroid (relative to the current bbox), within that compute the moments (another cached-property), and *finally* compute the actual centroid.

We're not doing any computations differently, but that is a *heck* of a lot of overhead for such a simple computation. I'd never followed this full path before, so thanks for pointing it out! A PR to improve this situation would be most welcome! (Bonus points for improving 3D support in the process.)

Probably not quite the quick fix you were hoping for, but I hope this helps nonetheless!

Juan.
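To illustrate the cached-property pattern Juan describes, here is a minimal sketch -- not the actual regionprops code, just a toy version showing why a repeated access is almost free while the first one pays the full price:

```
import numpy as np

class _cached_property:
    """Toy cached property: compute once, then serve from a per-instance cache."""
    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        if self.name not in obj._cache:           # first access: compute and store
            obj._cache[self.name] = self.func(obj)
        return obj._cache[self.name]               # later accesses: dictionary lookup

class Region:
    def __init__(self, mask):
        self.mask = mask
        self._cache = {}

    @_cached_property
    def centroid(self):
        # the expensive computation happens only once per Region instance
        rows, cols = np.nonzero(self.mask)
        return rows.mean(), cols.mean()
```

With this pattern, the first `region.centroid` access does the real work and every later access is just a dictionary lookup -- which is why a bare `%timeit region[0].centroid` on an already-touched region reports nanoseconds, while each fresh regionprops object inside `barycenter2` pays the full cost on every frame.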
Hi Jeff

On 2014-11-14 17:27:42, jeff witz <witzjean@gmail.com> wrote:
In order to deal with the data in real time I have to be fast (over 100 fps). So I first identify the Zones Of Interest (ZOI) using this example: http://scikit-image.org/docs/dev/auto_examples/plot_label.html
I'm afraid that for a 100 fps application you'll currently have to look at OpenCV. We'd love to reach those kinds of execution times, but it's not easily achievable with our current stack. That said, we are working on improving the regionprops calculations.

Regards
Stéfan
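In the direction Stéfan suggests, a minimal sketch of the same ZOI centroid computation done with cv2.moments; the function name and the row/column bookkeeping are assumptions, not code from the thread:

```
import cv2
import numpy as np

def barycenter_opencv_sketch(image, minx, miny, maxx, maxy, thresh, border):
    """Centroid and extended bounding box of one white mark, via cv2.moments."""
    zoi = image[minx:maxx + 1, miny:maxy + 1]
    bw = (zoi > thresh).astype(np.uint8)       # cv2 wants uint8, not bool
    m = cv2.moments(bw, binaryImage=True)      # zeroth and first order moments
    px = m['m01'] / m['m00'] + minx            # row coordinate of the centroid
    py = m['m10'] / m['m00'] + miny            # column coordinate
    rows, cols = np.nonzero(bw)                # tight bbox of the mark
    return (px, py,
            minx + rows.min() - border, miny + cols.min() - border,
            minx + rows.max() + border, miny + cols.max() + border)
```

Note that OpenCV's m10/m01 are sums over x (column) and y (row) respectively, hence the swap relative to numpy's (row, col) indexing used elsewhere in the thread.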
Hello,

With the process I explained before it already runs at at least 70 FPS, and we still have room for a lot of improvement! For example, we will compute each mark in a separate process.

In the end we don't use regionprops for the real-time part, because regionprops can find several zones in a ZOI and we want the chosen zone to be unique. We use numpy.where to find where the white pixels are, and numpy.min() and numpy.max() to find the bounding box, instead of bbox from regionprops. bbox is a little faster than our numpy version but can find several zones. We also noticed that a median filter on each ZOI increases stability.

Once we get something clean I will send an example. We already use cv2, as we have implemented the camera grabber class in OpenCV (if someone needs a complete Ximea OpenCV class, mail me), so I could test and compare the speeds.

Regards
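A minimal sketch of the numpy-only variant Jeff describes (np.where for the white pixels, min/max for the bounding box), with the median filter he mentions applied to the ZOI first; the function name, kernel size and 8-bit image assumption are illustrative choices, not code from his script:

```
import cv2
import numpy as np

def barycenter_numpy_sketch(image, minx, miny, maxx, maxy, thresh, border,
                            median_ksize=3):
    """Single-mark centroid + extended bbox using numpy only, with an optional
    median filter on the ZOI (assumes an 8-bit grayscale frame)."""
    zoi = image[minx:maxx + 1, miny:maxy + 1]
    if median_ksize:
        zoi = cv2.medianBlur(np.ascontiguousarray(zoi), median_ksize)
    bw = zoi > thresh
    rows, cols = np.where(bw)          # assumes the mark is present in the ZOI
    px = rows.mean() + minx            # first moment / zeroth moment, row
    py = cols.mean() + miny            # same, column
    # Tight bbox of the mark, extended by `border` for the next frame.
    return (px, py,
            minx + rows.min() - border, miny + cols.min() - border,
            minx + rows.max() + border, miny + cols.max() + border)
```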
@jeff Look at the skimage.measure.moments_* functions. Using those should be a lot faster than using numpy to compute the moments.
Hi,

Thank you all for your answers. In the end I use OpenCV to perform the real-time computation and still use skimage for the initialisation in my real code.

I've attached a file that allows one to compare the computations we need to make. There is a basic numpy method, the OpenCV one and the skimage regionprops one. I don't find it relevant to include a naive pure-Python implementation.

Please note that the values computed for visualization are for illustration only and don't represent a real experiment.

I hope it can help to benchmark future improvements of regionprops.

Best regards
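Since UnitTestExtenso.py itself is attached rather than quoted, here is a hypothetical, minimal timing harness in the same spirit: it builds a synthetic frame with a single white disc and times any barycenter-style callable on a ZOI around it. The function names in the usage comments are placeholders, not the names in Jeff's script, and the argument order follows the earlier `barycenter` functions:

```
import timeit
import numpy as np

def make_frame(shape=(480, 640), center=(240, 320), radius=30):
    """Synthetic 8-bit frame: one white disc on a black background."""
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    disc = (rr - center[0]) ** 2 + (cc - center[1]) ** 2 < radius ** 2
    return (disc * 255).astype(np.uint8)

def bench(func, name, n=200):
    """Time one barycenter-style callable on a ZOI containing the disc."""
    frame = make_frame()
    args = (frame, 180, 260, 300, 380, 128, 5)   # minx, miny, maxx, maxy, thresh, border
    dt = timeit.timeit(lambda: func(*args), number=n) / n
    print("%-22s %8.1f us/call" % (name, dt * 1e6))

# Usage (placeholder names -- substitute the actual implementations):
# bench(barycenter_numpy, "numpy")
# bench(barycenter_opencv, "OpenCV")
# bench(barycenter_skimage, "skimage regionprops")
```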
Hi Jeff

On 2014-11-26 16:28:24, jeff witz <witzjean@gmail.com> wrote:
Please note that the values computed for visualization are for illustration only and don't represent a real experiment.
Thank you very much for the script--it will come in handy. When I run it on my system, I see the following lovely error message:

```
Traceback (most recent call last):
  File "/tmp/UnitTestExtenso.py", line 192, in <module>
    barycenter_opencv(image,int(minx[0]),int(miny[0]),int(maxx[0]),int(maxy[0]),thresh,border,True)
  File "/tmp/UnitTestExtenso.py", line 110, in barycenter_opencv
    miny_, minx_, h, w= cv2.boundingRect(bw)
cv2.error: /build/buildd/opencv-2.4.9+dfsg/modules/imgproc/src/contours.cpp:1895: error: (-215) points.checkVector(2) >= 0 && (points.depth() == CV_32F || points.depth() == CV_32S) in function boundingRect
```

Stéfan
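The assertion suggests that in OpenCV 2.4 cv2.boundingRect expects a point set (CV_32S or CV_32F), not a boolean mask (newer OpenCV releases apparently accept a grayscale image here directly). A sketch of two possible workarounds, assuming `bw` is the thresholded ZOI from the script; the exact fix for UnitTestExtenso.py is untested here:

```
import cv2
import numpy as np

# bw stands in for the boolean mask of the ZOI produced in the script.
bw = np.zeros((120, 120), dtype=bool)
bw[40:80, 50:90] = True

# Option 1: hand cv2.boundingRect an int32 point array instead of the mask.
points = cv2.findNonZero(bw.astype(np.uint8))   # Nx1x2 int32 array of (x, y) points
x, y, w, h = cv2.boundingRect(points)           # x = min column, y = min row

# Option 2: skip OpenCV for this step and use numpy directly.
rows, cols = np.nonzero(bw)
y2, x2 = rows.min(), cols.min()
h2, w2 = rows.max() - y2 + 1, cols.max() - x2 + 1
```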
Hi Jeff,

I mentioned this before, but since you do not care about individual regions, there is no reason to burden yourself with the overhead of the regionprops function. Try this modified function:

```
def barycenter_skimage2(image, minx, miny, maxx, maxy, thresh, border, White_Mark):
    """skimage computation of the barycenter (first moment of the image)
    on the ZOI, using measure.moments"""
    bw = image[minx:maxx+1, miny:maxy+1] > thresh
    if White_Mark == False:
        bw = 1 - bw
    Onex, Oney = np.where(bw == 1)
    minx_ = Onex.min()
    maxx_ = Onex.max()
    miny_ = Oney.min()
    maxy_ = Oney.max()
    M = measure.moments(bw.astype(np.double), order=1)
    Px = M[0, 1] / M[0, 0]
    Py = M[1, 0] / M[0, 0]
    Px += minx
    Py += miny
    # Determination of the new bounding box using global coordinates and the margin
    minx = minx - border + minx_
    miny = miny - border + miny_
    maxx = minx + border + maxx_
    maxy = miny + border + maxy_
    return Px, Py, minx, miny, maxx, maxy
```

We could add a templated version of the moments functions for uint8 to make this a fairer comparison. The regionprops function would also gain from this, since the `double` version is only used for the weighted moments.

Best,
Johannes
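A quick way to sanity-check the function above on a synthetic frame; the disc position, ZOI bounds and threshold below are made-up values for illustration only:

```
import numpy as np
from skimage import measure   # barycenter_skimage2 above relies on np and measure

# Synthetic 8-bit frame: one white disc of radius 30 centred at (row=240, col=320).
rr, cc = np.mgrid[0:480, 0:640]
frame = (((rr - 240) ** 2 + (cc - 320) ** 2 < 30 ** 2) * 255).astype(np.uint8)

# ZOI chosen to contain the disc with some margin.
Px, Py, minx, miny, maxx, maxy = barycenter_skimage2(
    frame, 200, 280, 280, 360, thresh=128, border=10, White_Mark=True)
print(Px, Py)   # by symmetry this should land very close to (240.0, 320.0)
```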
@Jeff, See https://github.com/scikit-image/scikit-image/pull/1239
participants (4)
- jeff witz
- Johannes Schoenberger
- Juan Nunez-Iglesias
- Stefan van der Walt