
I am using Python 3.5 and OpenCV to stitch images. The images are 1920×1080 in dimensions. So far I have tried SIFT and ORB: only SIFT shows any signs of stitching, while with ORB I get either just the train image or a very distorted result.

For feature matching, I used this tutorial: Homography OpenCV

The actual image stitching is implemented following the book OpenCV with Python by Example. It seems to be the standard way of stitching images with Python and OpenCV, as seen across various posts here and online.

This is the code to stitch the images:

import cv2
import numpy as np


def warpImages(train_img, query_img, H):

  # This function warps the query image into the train image's frame
  t_row, t_col = train_img.shape[:2]
  q_row, q_col = query_img.shape[:2]

  # Corners of the train image and of the query image
  list_pts1 = np.float32([[0, 0], [0, t_row], [t_col, t_row], [t_col, 0]]).reshape(-1, 1, 2)
  temp_points = np.float32([[0, 0], [0, q_row], [q_col, q_row], [q_col, 0]]).reshape(-1, 1, 2)

  # Project the query image corners with the homography
  list_pts2 = cv2.perspectiveTransform(temp_points, H)

  list_pts = np.concatenate((list_pts1, list_pts2), axis = 0)

  # Bounding box of both images in the output canvas
  [x_min, y_min] = np.int32(list_pts.min(axis = 0).ravel() - 0.5)
  [x_max, y_max] = np.int32(list_pts.max(axis = 0).ravel() + 0.5)

  print(x_max, x_min)
  print(y_max, y_min)

  # Translation so that the whole panorama has non-negative coordinates
  translation_dist = [-x_min, -y_min]
  H_translation = np.array([[1, 0, translation_dist[0]], [0, 1, translation_dist[1]], [0, 0, 1]])

  output_img = cv2.warpPerspective(query_img, H_translation.dot(H), (x_max - x_min, y_max - y_min))

  # Paste the train image at its translated position
  output_img[translation_dist[1]:t_row + translation_dist[1], translation_dist[0]:translation_dist[0] + t_col] = train_img

  return output_img

This is the code to find the features and the homography matrix:

def getHomoGraphMatrix(q_rgb, t_rgb, MIN_MATCH_COUNT):

  # Detect key points and descriptors with SIFT
  sift = cv2.xfeatures2d.SIFT_create()
  #orb = cv2.ORB_create()

  kp1, des1 = sift.detectAndCompute(q_rgb, None)  # query image
  kp2, des2 = sift.detectAndCompute(t_rgb, None)  # train image

  #FLANN_IDX_KDTREE = 0
  #idx_params = dict(algorithm = FLANN_IDX_KDTREE, trees = 5)
  #search_params = dict(checks = 50)

  #flann = cv2.FlannBasedMatcher(idx_params, search_params)

  bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck = False)

  # Match descriptors.
  bf_matches = bf.match(des1, des2)

  # Sort them in the order of their distance.
  #bf_matches = sorted(bf_matches, key = lambda x: x.distance)

  #matches = flann.knnMatch(des1, des2, k = 2)
  matches = bf.knnMatch(des1, des2, k = 2)

  #print(len(matches))

  # Store all good matches passing Lowe's ratio test
  good = []

  for m, n in matches:
    #print(m.distance, n.distance, 0.7 * n.distance)
    if m.distance < 0.8 * n.distance:
      good.append(m)

  print(len(good))

  if len(good) > MIN_MATCH_COUNT:

    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Homography that maps query image points onto the train image
    H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()

    h, w = q_rgb.shape[0], q_rgb.shape[1]

    # Outline of the query image projected into the train image (for debugging)
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, H)
    t_rgb_t = cv2.polylines(t_rgb.copy(), [np.int32(dst)], True, 255, 3, cv2.LINE_AA)

    return (good, H, mask)

  else:
    print('Not enough matches were found: ', len(good))
    matchesMask = None
    # No projected outline is available with too few matches
    t_rgb_t = t_rgb.copy()
    #return None, None, None

  # Only reached when there were not enough matches; draw them for debugging
  draw_params = dict(matchColor = (0, 255, 0),
                     singlePointColor = None,
                     matchesMask = matchesMask,
                     flags = 2)

  img3 = cv2.drawMatches(q_rgb.copy(), kp1, t_rgb_t, kp2, good, None, **draw_params)
  cv2.imwrite('res.jpg', img3)
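For reference, this is roughly how I call the two functions; the file names and the MIN_MATCH_COUNT value here are placeholders rather than my exact setup:

import cv2
import numpy as np

# Placeholder file names; the query image gets warped onto the train image.
query_img = cv2.imread('query.jpg')
train_img = cv2.imread('train.jpg')

result = getHomoGraphMatrix(query_img, train_img, MIN_MATCH_COUNT = 10)
if result is not None:
    good, H, mask = result
    stitched = warpImages(train_img, query_img, H)
    cv2.imwrite('stitched.jpg', stitched)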

As seen in the code above, I have commented out the ORB and FLANN matcher since the result image is not stitched with them. SIFT gives me a less accurate result and also takes time to process, which is understandable.
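One thing I am not sure about, and have not changed in the code above: ORB produces binary descriptors, so my understanding is that the brute-force matcher would need cv2.NORM_HAMMING rather than cv2.NORM_L2 when switching to ORB. A minimal sketch of what I mean (I have not verified that this fixes the distorted ORB result):

# Sketch of the ORB path. Assumption: NORM_HAMMING is the right norm for
# ORB's binary descriptors, whereas NORM_L2 is meant for SIFT/SURF floats.
orb = cv2.ORB_create(nfeatures = 2000)

kp1, des1 = orb.detectAndCompute(q_rgb, None)  # query image
kp2, des2 = orb.detectAndCompute(t_rgb, None)  # train image

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = False)
matches = bf.knnMatch(des1, des2, k = 2)       # then the same ratio test as above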

These are the images used:

Query Image: [Query Image]

Train Image: [Train Image]

Result using SIFT: [SIFT result image]

Result using ORB: [ORB result image]

It can be seen that ORB gives a wrong result, and with SIFT it looks like a repeated pattern. ORB performs faster. I have referred to the following posts: Display full Image using OpenCV (Image Stitching) and Stitching Different sizes, which points out why FLANN is not good for image stitching.

I am new to Image Stitching and would really like some pointers on how to approach and tackle this.

Update

Adding the matched-features plot: [Matched features plot]

Wilbus Fugu
  • Have you drawn the matches to see the correct and incorrect matches? The first one actually doesn't look that far off. – alkasm Nov 27 '17 at 04:42
  • This is my result, https://i.stack.imgur.com/5iNmP.jpg. Not that bad. And I think, maybe you mixed up `train_img` and `query_img` . – Kinght 金 Nov 27 '17 at 05:58
  • @Alexander Reynolds: I am adding the matching features plotted between the two images. I see a lot of matches for the shadowed profile regions of the grass and no match for the camera shadows – Wilbus Fugu Nov 27 '17 at 05:58
  • @Silencer: The `SIFT` result looks similar to mine. But the `ORB` result is so much better than mine. Could you tell me what adjustments were made as I am not able to ferret out much from mine? – Wilbus Fugu Nov 27 '17 at 06:02
  • The repeated shadow at the bottom left is what is throwing me off, and the bottom tip of the grass plot is not stitched, with just the sand and the tracks on the right added – Wilbus Fugu Nov 27 '17 at 06:05
  • I just resize the image to `fx=fy=0.3`, because it's too large for my PC. Other codes are almost the same. Take care of your images order. – Kinght 金 Nov 27 '17 at 06:06
  • @Silencer: Thanks, never thought scaling would improve the accuracy, but how did you apply the Homography matrix to the original image? – Wilbus Fugu Nov 27 '17 at 06:10
  • Improving the accuracy, it maybe just the sub-effect. If you want to use the resized matches to apply for original, then maybe you should transform the `(x,y)` of keypoints from `resized` to `original`, `(x/fx, y/fy)`. But I just resize them for testing, not for applying... – Kinght 金 Nov 27 '17 at 06:31
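Following the resizing suggestion in the comments, this is my rough, untested understanding of how matches found on the resized images would be mapped back to the original resolution before estimating the homography (the scale factors fx/fy and the file names are assumptions, not something already in my code):

import cv2
import numpy as np

fx = fy = 0.3  # assumed resize factors, as mentioned in the comments

query_img = cv2.imread('query.jpg')  # placeholder file names
train_img = cv2.imread('train.jpg')

q_small = cv2.resize(query_img, None, fx = fx, fy = fy)
t_small = cv2.resize(train_img, None, fx = fx, fy = fy)

sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(q_small, None)
kp2, des2 = sift.detectAndCompute(t_small, None)

bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck = False)
matches = bf.knnMatch(des1, des2, k = 2)
good = [m for m, n in matches if m.distance < 0.8 * n.distance]

# Map the matched keypoint coordinates back to the original resolution
# before estimating the homography: (x, y) -> (x / fx, y / fy).
src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2) / np.float32([fx, fy])
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2) / np.float32([fx, fy])

H_full, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
stitched = warpImages(train_img, query_img, H_full)
cv2.imwrite('stitched_fullres.jpg', stitched)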
