I have stitched two images together using OpenCV functions and C++. Now the final image contains a large black region.

The final image should be a rectangle containing the effective part. My image is the following:

[image: stitched result with a large black region]

How can I remove the black section?

BenMorel
deepak_k
  • You could probably use cvFindContours to get the extents of your image content and then crop to that... – LiMuBei May 17 '12 at 09:02
  • While adding images to the question, output the result using `imwrite` and add that. It is better than a screenshot, because those who answer your question need to remove the unwanted regions from the image. – Abid Rahman K May 17 '12 at 09:07
  • Your workspace is kind of tidy. – wengseng May 17 '12 at 09:30
  • How exactly would you like to remove the black part? Is it okay if the upper-left corner of the new image stays where it is and the lower-right corner is at the lowest part of the stitched image? Would you like to cut it so that it contains a minimal amount of black while still containing all the image parts, or vice versa, so that it cuts away a minimal amount of image while removing all the black? Please elaborate :) – penelope May 17 '12 at 09:58

3 Answers

mevatron's answer is one approach: it minimises the amount of black region while retaining the full image.

Another option is removing the black region completely. You lose some parts of the image, but the result is a neat, rectangular image. Below is the Python code.

Here, you find three main corners of the image as below:

[image: stitched result with the three main corners marked]

I have marked those values: (1,x2), (x1,1), (x3,y3). This is based on the assumption that your image starts at (1,1).

Code:

The first steps are the same as mevatron's: blur the image to remove noise, threshold it, then find contours.

import cv2
import numpy as np

img = cv2.imread('office.jpg')
img = cv2.resize(img, (800, 400))

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 3)   # remove noise

# anything brighter than 1 becomes white; the black border stays black
ret, thresh = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

Now find the biggest contour, which is your image. This guards against noise, in case there is any (most probably there won't be). Alternatively, you can use mevatron's method.

max_area = -1
best_cnt = None

for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > max_area:
        max_area = area
        best_cnt = cnt

Now approximate the contour to remove unnecessary points from it, while preserving all the corner points.

approx = cv2.approxPolyDP(best_cnt,0.01*cv2.arcLength(best_cnt,True),True)

Now we find the corners.

First, we find (x3,y3). It is the farthest point, so x3*y3 will be very large. So we take the product of each point's coordinates and select the point with the maximum product.

far = approx[np.product(approx,2).argmax()][0]

Next, (1,x2). It is the point whose first element is 1 and whose second element is maximum.

ymax = approx[approx[:,:,0]==1].max()

Next, (x1,1). It is the point whose second element is 1 and whose first element is maximum.

xmax = approx[approx[:,:,1]==1].max()

Now we take the minimum of (far.x, xmax) and of (far.y, ymax):

x = min(far[0],xmax)
y = min(far[1],ymax)

If you draw a rectangle with (1,1) and (x,y), you get result as below:

[image: rectangle from (1,1) to (x,y) drawn on the stitched image]

So you crop the image to that rectangular area:

img2 = img[:y,:x].copy()

Below is the result:

[image: cropped rectangular result]

As you can see, the problem is that you lose some parts of the stitched image.

Abid Rahman K
  • Since I am more acquainted with C++, I tried to convert the (Python) code to C++ but was not able to succeed. If you could please share the same code in C++, I would be highly thankful. – deepak_k May 27 '12 at 14:25
  • I am sorry, I am not good at C++. But if you tell me which step of my method you have doubts about, I can explain it in a much simpler way. – Abid Rahman K May 28 '12 at 19:19
  • I have done all the steps up to approxPolyDP, but I don't know how to calculate the product of coordinate points in C, and I think you have also used pseudo code. Can you tell me how I can calculate the product of coordinate points in C, or give me a link where somebody has used your type of method in C? For reference, my image after applying approxPolyDP is [http://i.imgur.com/Qfoyt.jpg?1]. Hope you also got the same output image after applying approxPolyDP. – deepak_k May 30 '12 at 10:49
  • My code is working code in the OpenCV Python API. Actually, I get a list of coordinates like `approx = [[a,b],[c,d],[e,f],[g,h],[i,j]]`. I take each pair, multiply its values, and store the result in another array, like `for i in approx: k[0]=i[0]*i[1]`. Then I take the argument of the maximum value in the resulting array. I have done this in a single step with the help of NumPy, a Python library: `far = approx[np.product(approx,2).argmax()][0]` – Abid Rahman K May 30 '12 at 13:48
  • Okay, but I think in C we cannot take coordinates like this. Can you tell me some other method? I have seen on SO that the Hough transform is used to get lines, but I think that will increase the complexity unnecessarily. Do you know any other method? – deepak_k May 30 '12 at 14:12
  • I think a CvSeq object is returned by the approxPolyDP function. You can extract the values from the CvSeq using the `h_next, h_prev` attributes. You can find them in the documentation: http://opencv.itseez.com/modules/core/doc/dynamic_structures.html?highlight=cvseq#cvseq . Also try the `cvGetSeqElem` function: http://opencv.itseez.com/modules/core/doc/dynamic_structures.html?highlight=cvseq#getseqelem – Abid Rahman K May 30 '12 at 14:23
  • Sir, h_next and h_prev are used to point to the next contours in the contour tree/list, and in my image I have just one contour. Anyway, thanks for your help; your reply really helped me a lot. Thank you. – deepak_k May 30 '12 at 17:14
  • @AbidRahmanK Have you worked in Matlab? I am having trouble finding functions corresponding to `cv2.approxPolyDP()` and `cv2.arcLength()`. Your approach is exactly what I need, but I am getting stuck at the functions. Please have a look at my problem here: http://stackoverflow.com/questions/35170367/how-to-find-black-patch-in-an-image-in-matlab – Nancy Feb 03 '16 at 11:08

You can do this with threshold, findContours, and boundingRect.

Here is a quick script doing this with the Python interface:

stitched = cv2.imread('stitched.jpg', 0)
(_, mask) = cv2.threshold(stitched, 1.0, 255.0, cv2.THRESH_BINARY)

# findContours destroys its input
temp = mask.copy()
(contours, _) = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# sort contours by largest first (if there are more than one)
contours = sorted(contours, key=lambda contour: len(contour), reverse=True)

# boundingRect returns (x, y, width, height)
(x, y, w, h) = cv2.boundingRect(contours[0])

# use the bounding rect to crop the original 'stitched' image
cropped = stitched[y:y + h, x:x + w]

Ends up looking like this:

[image: cropped stitched result]

NOTE: Sorting may not be necessary with raw imagery, but using the compressed image caused some compression artifacts to show up at a low threshold, which is why I post-processed with sorting.

Hope that helps!

mevatron

You can use active contours (balloons/snakes) for selecting the black region accurately. A demonstration can be found here. Active contours are available in OpenCV, check cvSnakeImage.

Hakan Serce