
I am using the ORB algorithm of OpenCV 2.4.9 with Python to compare images. The ORB algorithm does not return a similarity score as a percentage. Is there any way to get one?

My code to compare images using ORB is as follows:

import cv2

img1 = cv2.imread("img11.jpg", 0)
img2 = cv2.imread("img2.jpg", 0)

# Initiate ORB detector (OpenCV 2.4.x API)
orb = cv2.ORB()

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# create BFMatcher object (Hamming norm for ORB's binary descriptors)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

# find the 2 nearest matches for each descriptor
matches = bf.knnMatch(des1, trainDescriptors=des2, k=2)

# apply the ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])

if len(good) > 20:
    print "similar image"

I did find a solution on Stack Overflow that does this for the SIFT algorithm using MATLAB, but is there any external library that can be easily used with Python and OpenCV to do the same?

Window sky
user93
  • I've just stumbled upon the same problem. The strange thing is that the [docs](https://docs.opencv.org/3.4.0/db/d95/classcv_1_1ORB.html#adc371099dc902a9674bd98936e79739c) say that `ORB_create` writes the score of features to `KeyPoint::score`, which is not true. Could you resolve the problem? – Georgy May 03 '18 at 16:00
  • Does https://stackoverflow.com/a/51728654/1021819 help? – jtlz2 Sep 19 '19 at 11:48

2 Answers


I don't think keypoint matching lends itself to a percentage score, regardless of whether you use ORB or SIFT.

I think the OP was referring to this post, which gives hints on how to arrive at a score for each match. The score is the sum of the squared distances of the two items in the match pair, i.e.

m.distance**2 + n.distance**2

where `m` and `n` are from the code the OP posted. However, this score bears no resemblance to a percentage, and I'm not sure you're going to find one.

The magic number 0.75 in the OP's code is known in some places as the Lowe ratio, first proposed in [D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision 60(2), 91-110, 2004]. It's as good a figure of merit as any, but it needs to be adjusted according to the keypoint detection algorithm (e.g. ORB, SIFT, etc.). To determine whether you've found a good match, it's common to tweak the Lowe ratio and then count the number of good matches. The Homography tutorial (for OpenCV 2.4 or 3.4.1) is a good example of this.
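For concreteness, here is a minimal sketch (my own, not from the linked post) that computes that per-pair score and sweeps the ratio threshold, assuming `matches` comes from `bf.knnMatch(des1, des2, k=2)` as in the question:

# Sketch: per-pair scores (as in the linked post) and a Lowe-ratio sweep.
# Assumes `matches` was produced by bf.knnMatch(des1, des2, k=2).
scores = [m.distance**2 + n.distance**2 for m, n in matches]

for ratio in (0.70, 0.75, 0.80, 0.85, 0.89):
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    print('ratio %.2f -> %d good matches' % (ratio, len(good)))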

I'm using OpenCV 3.4, and ORB does return matches, just not as many as SIFT. Using the tutorial images "box.png" and "box_in_scene.png", I get 79 "good" matches with SIFT and only 7(!) "good" matches with ORB.

However, if I crank up the magic number 0.75 to 0.89 for ORB, I get 79 "good" matches.

Full code using Python 3.4.4 and OpenCV 3.4. Syntax and operation should be very similar for OpenCV 2.4.9:

# This time, we will use BFMatcher.knnMatch() to get k best matches. 
# In this example, we will take k=2 so that we can apply ratio test 
# explained by D.Lowe in his paper. 

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img1 = cv.imread('box.png',0)          # queryImage
img2 = cv.imread('box_in_scene.png',0) # trainImage

method = 'ORB'  # 'SIFT'
lowe_ratio = 0.89

if method == 'ORB':
    finder = cv.ORB_create()
elif method == 'SIFT':
    finder = cv.xfeatures2d.SIFT_create()

# find the keypoints and descriptors with the chosen method
kp1, des1 = finder.detectAndCompute(img1,None)
kp2, des2 = finder.detectAndCompute(img2,None)

# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)

# Apply ratio test
good = []

for m,n in matches:
    if m.distance < lowe_ratio*n.distance:
        good.append([m])

msg1 = 'using %s with lowe_ratio %.2f' % (method, lowe_ratio)
msg2 = 'there are %d good matches' % (len(good))

img3 = cv.drawMatchesKnn(img1,kp1,img2,kp2,good, None, flags=2)

font = cv.FONT_HERSHEY_SIMPLEX
cv.putText(img3,msg1,(10, 250), font, 0.5,(255,255,255),1,cv.LINE_AA)
cv.putText(img3,msg2,(10, 270), font, 0.5,(255,255,255),1,cv.LINE_AA)
fname = 'output_%s_%.2f.png' % (method, lowe_ratio)
cv.imwrite(fname, img3)

plt.imshow(img3),plt.show()
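To reproduce the SIFT-vs-ORB comparison described above without editing the script by hand, one could loop over the combinations. This is a sketch of mine, reusing `img1`, `img2`, and the `cv` import from the code above:

# Sketch: count "good" matches for each method/ratio combination
# mentioned in the text, reusing img1/img2 loaded above.
for method, lowe_ratio in (('SIFT', 0.75), ('ORB', 0.75), ('ORB', 0.89)):
    finder = (cv.xfeatures2d.SIFT_create() if method == 'SIFT'
              else cv.ORB_create())
    kp1, des1 = finder.detectAndCompute(img1, None)
    kp2, des2 = finder.detectAndCompute(img2, None)
    matches = cv.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < lowe_ratio * n.distance]
    print('%s @ %.2f: %d good matches' % (method, lowe_ratio, len(good)))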

Using these images for input:

[input images: box.png and box_in_scene.png]

I get these results:

[output images: matches drawn with drawMatchesKnn for SIFT and for ORB, annotated with the method, ratio, and good-match count]

However, it's worth noting that ORB gives many more bogus matches that are off the Bastoncini box.

bfris
  • Thanks for your input, but I'm not sure if this answers the question. The problem, as I understand, was in getting **scores** of matches, probably in order to range them later from best matches to worse ones (That's what I wanted to do). And your answer explains how to increase quantity of matches, which is not the same. – Georgy May 03 '18 at 20:48
  • @Georgy. Point well taken. I went down the rabbit hole of ORB vs SIFT but didn't address the original question of percentage. Answer is now updated. – bfris May 03 '18 at 23:36
  • Ok! So, distance and the score are the same! Finally I could sort the matches with `matches = sorted(matches, key=operator.attrgetter('distance'))`. (I was using `.match` instead of `.knnMatch`.) Linking a related post for people like me: [What does the distance attribute in DMatches mean?](https://stackoverflow.com/questions/16996800/what-does-the-distance-attribute-in-dmatches-mean) – Georgy May 04 '18 at 09:56
  • @Georgy or `matches = sorted(matches, key=lambda x: x.distance)` [removes dependence on `operator`] – jtlz2 Jul 10 '18 at 07:58
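As a minimal sketch of the sorting approach from the comments above (assuming `bf`, `des1`, and `des2` from the answer's code, and using `bf.match` for single best matches rather than `knnMatch`):

# Sketch: rank single matches from best to worst by descriptor distance.
# Assumes bf = cv.BFMatcher() and des1/des2 as computed above.
single_matches = bf.match(des1, des2)
single_matches = sorted(single_matches, key=lambda x: x.distance)
print('best match distance: %.1f' % single_matches[0].distance)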

One answer, though probably not the best one, could be:

(...)
matches = bf.knnMatch(des1,des2, k=2)

# Apply ratio test
good = []

for m,n in matches:
    if m.distance < lowe_ratio*n.distance:
        good.append([m])

# pic1.description and pic2.description hold each image's descriptors
# (des1 and des2 in the code above); under Python 2, wrap one length
# in float() to avoid integer division
dist = 1 - len(good) / max(len(pic1.description), len(pic2.description))

Then you get a percentage-like value, which works reasonably well for ranking pictures. If you change `max` to `min`, pictures that don't have enough descriptors become "attractors" that match everything too easily.
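A self-contained version of that idea, as a sketch (reusing `des1`, `des2`, and `good` from the question's code; the float cast is mine, for Python 2 compatibility):

# Sketch: turn the good-match count into a percentage-style similarity.
similarity = len(good) / float(max(len(des1), len(des2)))  # in [0, 1]
print('similarity: %.1f%%' % (100 * similarity))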

ZettaCircl