@Martin, thanks for the ping. I don’t know about other devs, but I’m certainly easier to reach here. =) I added a comment to SO. Having said that, I think Stéfan is more experienced with RANSAC than I am. (My experience ends at having attended Stéfan’s tutorial on the topic. =P) But can you confirm that the fundamental matrix itself is also varying between runs of skimage?
Generally, I’m concerned about whether the parameters are really the same. I couldn’t find an API reference for cv2, so I couldn’t check for differences. Can you point me to how you set up the cv2 RANSAC parameters?
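For the first point, something like this rough sketch could help (untested on my end; it reuses the kp1/kp2 match arrays and the RANSAC parameters from your script, and `fundamental_matrix_spread` is just a made-up helper name). Since F is only defined up to scale, each estimate is normalised before comparing:

import numpy as np
from skimage.measure import ransac
from skimage.transform import FundamentalMatrixTransform

def fundamental_matrix_spread(kp1, kp2, n_iter=10):
    """Fit F with several seeds and report how much its entries vary.

    Each estimate is scaled to unit Frobenius norm and given a fixed
    sign before comparing, because F is only defined up to scale.
    """
    mats = []
    for seed in range(n_iter):
        model, _ = ransac((kp1, kp2), FundamentalMatrixTransform,
                          min_samples=8, residual_threshold=3,
                          max_trials=5000, stop_probability=0.99,
                          random_state=seed)
        F = model.params / np.linalg.norm(model.params)
        if F.flat[np.argmax(np.abs(F))] < 0:   # resolve the +/- sign ambiguity
            F = -F
        mats.append(F)
    # element-wise variance across seeds, summed into one number
    return np.sum(np.var(np.stack(mats), axis=0))

Calling, say, print(fundamental_matrix_spread(kp1, kp2)) after your matching step should give a value near zero if the model itself is stable across seeds.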
Thanks,
Juan.
Hello,
I'm having trouble achieving robust performance with
`skimage.measure.ransac` when estimating the fundamental matrix for a
pair of images.
I'm seeing highly varying results across random seeds when compared to
OpenCV's `findFundamentalMat`.
I'm running both skimage's and OpenCV's RANSAC on the same sets of
matched keypoints and with (what I'm assuming are) equivalent parameters.
I'm using the same image pair as the OpenCV-Python tutorials
(https://github.com/abidrahmank/OpenCV2-Python-Tutorials/tree/master/data).
Here's my demonstration script:
import cv2
import numpy as np

from skimage import io
from skimage.measure import ransac
from skimage.feature import ORB, match_descriptors
from skimage.transform import FundamentalMatrixTransform

orb = ORB(n_keypoints=500)

img1 = io.imread('images/right.jpg', as_grey=True)
orb.detect_and_extract(img1)
kp1 = orb.keypoints
desc1 = orb.descriptors

img2 = io.imread('images/left.jpg', as_grey=True)
orb.detect_and_extract(img2)
kp2 = orb.keypoints
desc2 = orb.descriptors

matches = match_descriptors(desc1, desc2, metric='hamming',
                            cross_check=True)

kp1 = kp1[matches[:, 0]]
kp2 = kp2[matches[:, 1]]

n_iter = 10
skimage_inliers = np.empty((n_iter, len(matches)))
opencv_inliers = skimage_inliers.copy()

for i in range(n_iter):
    fmat, inliers = ransac((kp1, kp2), FundamentalMatrixTransform,
                           min_samples=8, residual_threshold=3,
                           max_trials=5000, stop_probability=0.99,
                           random_state=i)
    skimage_inliers[i, :] = inliers

    cv2.setRNGSeed(i)
    fmat, inliers = cv2.findFundamentalMat(kp1, kp2,
                                           method=cv2.FM_RANSAC,
                                           param1=3, param2=0.99)
    opencv_inliers[i, :] = (inliers.ravel() == 1)

skimage_sum_of_vars = np.sum(np.var(skimage_inliers, axis=0))
opencv_sum_of_vars = np.sum(np.var(opencv_inliers, axis=0))

print(f'Scikit-Image sum of inlier variances: {skimage_sum_of_vars:>8.3f}')
print(f'OpenCV sum of inlier variances: {opencv_sum_of_vars:>8.3f}')
And the output:
Scikit-Image sum of inlier variances: 13.240
OpenCV sum of inlier variances: 0.000
I use the sum of variances of the inlier masks obtained with different
random seeds as my metric of robustness.
I would expect this number to be very close to zero, because a truly
robust RANSAC should converge to the same model independently of its
random initialization.
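To illustrate what the metric measures, here is a toy version with made-up inlier masks (three seeds by five matches, not from the real run): any match whose inlier/outlier label flips between seeds contributes non-zero variance.

import numpy as np

# Hypothetical inlier masks, one row per seed, one column per match.
stable   = np.array([[1, 1, 0, 1, 0],
                     [1, 1, 0, 1, 0],
                     [1, 1, 0, 1, 0]], dtype=float)
unstable = np.array([[1, 1, 0, 1, 0],
                     [1, 0, 1, 1, 0],
                     [0, 1, 0, 1, 1]], dtype=float)

print(np.sum(np.var(stable, axis=0)))    # 0.0 -> identical inliers every run
print(np.sum(np.var(unstable, axis=0)))  # > 0 -> membership flips between seeds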
How can I make skimage's `ransac` behave as robustly as OpenCV's?
Any other tips on this subject would be greatly appreciated.
Best regards,
Martin
(I originally posted this question on stackoverflow, but I'm not getting
much traction there, so I figured I'd try the mailing list.)
https://stackoverflow.com/questions/49342469/robust-epipolar-geometry-estimation-with-scikit-images-ransac