I'm working on an Android background subtraction project, with a moving camera. I'm trying to use feature matching, findHomography and warpPerspective to find overlapping pixels between two frames. However, the output I get is slightly incorrect. I'm quite new to image processing, so I'm not familiar with all the terminology. I have 2 main issues:

1) The result of warpPerspective is overly distorted - e.g. the image is skewed, objects in the image are flipped, squished, etc. How do I solve this?

2) I sometimes get an 'OpenCV Error: Assertion failed' error, which crashes my app. The error points to warpPerspective. Notes: the dimensions of image1 (previous frame) and image2 (current frame) are the same, and I convert the images to gray before detecting features (currently from RGB). I was sometimes getting a similar 'OpenCV assertion failed' error with findHomography, but I learned it needs at least 4 points, so adding an if statement solved that one. I'm not sure how to solve the error with warpPerspective.

The error I get:

02-24 15:30:49.554: E/cv::error()(4589): OpenCV Error: Assertion failed (type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U)) 
    in void cv::batchDistance(cv::InputArray, cv::InputArray, cv::OutputArray, int, cv::OutputArray, int, int, cv::InputArray, int, bool), 
    file /home/reports/ci/slave_desktop/50-SDK/opencv/modules/core/src/stat.cpp, line 2473

My code:

void stitchFrames(){

    //convert frames to grayscale
    image1 = prevFrame.clone();
    image2 = currFrame.clone();

    if(colourSpace==1){ //convert from RGB to gray
        cv::cvtColor(image1, image1Gray,CV_RGB2GRAY);
        cv::cvtColor(image2, image2Gray,CV_RGB2GRAY);
    }
    else if(colourSpace==2){ //convert from HSV to gray
        cv::cvtColor(image1, image1Gray,CV_HSV2RGB);
        cv::cvtColor(image1Gray,image1Gray,CV_RGB2GRAY);
        cv::cvtColor(image2, image2Gray,CV_HSV2RGB);
        cv::cvtColor(image2Gray,image2Gray,CV_RGB2GRAY);
    }

    else if(colourSpace==3){ //no need for conversion
        image1Gray = image1;
        image2Gray = image2;
    }

    //----FEATURE DETECTION----

    //key points
    std::vector<KeyPoint> keypoints1, keypoints2;

    int minHessian;

    cv::FastFeatureDetector detector;

    detector.detect(image1Gray,keypoints1); //prevFrame
    detector.detect(image2Gray,keypoints2); //currFrame

    //(debug) inspect one keypoint - note this assumes at least 5 keypoints were detected
    KeyPoint kp = keypoints2[4];
    Point2f p = kp.pt;
    float i = p.y;

    //---FEATURE EXTRACTION----

    //extracted descriptors
    cv::Mat descriptors1,descriptors2;

    OrbDescriptorExtractor extractor;
    extractor.compute(image1,keypoints1,descriptors1); //prevFrame
    extractor.compute(image2,keypoints2,descriptors2); //currFrame

    //----FEATURE MATCHING----

    //BruteForceMatcher

    BFMatcher matcher;

    std::vector< cv::DMatch > matches; //result of matching descriptors
    std::vector< cv::DMatch > goodMatches; //result of sifting matches to get only 'good' matches

    matcher.match(descriptors1,descriptors2,matches);

    //----HOMOGRAPY - WARP-PERSPECTIVE - PERSPECTIVE-TRANSFORM----

    double maxDist = 0.0; //keep track of max distance from the matches
    double minDist = 80.0; //keep track of min distance from the matches

    //calculate max & min distances between keypoints
    for(int i=0; i<descriptors1.rows;i++){
        DMatch match = matches[i];

        float dist = match.distance;
        if (dist<minDist) minDist = dist;
        if(dist>maxDist) maxDist=dist;
    }

    //get only the good matches
    for( int i = 0; i < descriptors1.rows; i++ ){
        DMatch match = matches[i];
        if(match.distance< 500){
            goodMatches.push_back(match);
        }
    }

    std::vector< Point2f > obj;
    std::vector< Point2f > scene;

    //get the keypoints from the good matches
    for( int i = 0; i < goodMatches.size(); i++ ){

        //--keypoints from image1
        DMatch match1 = goodMatches[i];
        int qI1 = match1.trainIdx;
        KeyPoint kp1 = keypoints2[qI1];
        Point2f point1 = kp1.pt;
        obj.push_back(point1);

        //--keypoints from image2
        DMatch match2 = goodMatches[i];
        int qI2 = match2.queryIdx;
        KeyPoint kp2 = keypoints1[qI2];
        Point2f point2 = kp2.pt;
        scene.push_back(point2);

    }

    //calculate the homography matrix
    if(goodMatches.size() >=4){
        Mat H = findHomography(obj,scene, CV_RANSAC);

        warpPerspective(image2,warpResult,H,Size(image1.cols,image1.rows));
    }
}

2 Answers

Concerning your first question, I think the distortion you are mentioning is due to the fact that:

  • you estimate the homography H mapping coordinates in image1 to coordinates in image2: when you do Mat H = findHomography(obj,scene, CV_RANSAC);, obj holds point coordinates in image1 and scene holds point coordinates in image2.

  • you then use H in warpPerspective as if it mapped coordinates in image2 to coordinates in image1, since you expect it to transform image2 into warpResult, which I am guessing should be stitched onto image1.

Hence, you should estimate the homography H as follows: Mat H = findHomography(scene, obj, CV_RANSAC);.
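
To make that concrete, here is a minimal sketch using the variable names from your code (warpResult and the frame size are taken from your snippet). Alternatively, you could keep findHomography(obj, scene) and pass cv::WARP_INVERSE_MAP in the flags of warpPerspective, which tells it the matrix maps destination coordinates to source coordinates.

//H now maps coordinates in image2 (scene points) to coordinates in image1 (obj points)
Mat H = findHomography(scene, obj, CV_RANSAC);

//warp the current frame into the previous frame's coordinate system
warpPerspective(image2, warpResult, H, Size(image1.cols, image1.rows));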

Concerning your second question, I think it is raised by this instruction:

matcher.match(descriptors1,descriptors2,matches);

The error says that the expression

(type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U))

was found to be false, whereas it should be true for the function to work. A similar problem was solved here: before calling the match function, you need to manually check if the following is true:

(descriptors1.type()==descriptors2.type() && descriptors1.cols==descriptors2.cols)
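
As a minimal sketch of that guard inside your stitchFrames function (same variable names as in your code):

//only attempt matching when both descriptor matrices are non-empty and
//compatible in type and width; otherwise skip this frame pair
if(!descriptors1.empty() && !descriptors2.empty()
        && descriptors1.type()==descriptors2.type()
        && descriptors1.cols==descriptors2.cols){
    matcher.match(descriptors1,descriptors2,matches);
}
else{
    return; //e.g. too few keypoints were detected in one of the frames
}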
  • Thanks for your response. Your suggestion for the 2nd question works - I don't see the error occurring anymore. However, the problem in the 1st question is still there, even after following your suggestion `Mat H = findHomography(scene, obj, CV_RANSAC);` – user3019612 Feb 24 '14 at 22:39

Regarding (1), my guess is that the homography you estimated is based on bad matches.

First, I would start by using the ORB detector instead of FAST, and then try changing findHomography's ransacReprojThreshold parameter. The default value is 3; from the documentation:

ransacReprojThreshold:

Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only). That is, if:

| dstPoints_i - convertPointsHomogeneous(H * srcPoints_i) | > ransacReprojThreshold

then the point i is considered an outlier. If srcPoints and dstPoints are measured in pixels, it usually makes sense to set this parameter somewhere in the range of 1 to 10.

In other words, assuming the default of 3 pixels: if, after applying the homography to a srcPoint, its distance to the corresponding dstPoint is more than 3 pixels, that pair is considered an outlier (i.e. a bad match) and is discarded when estimating H.
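
For example (the value here is only for illustration; you would tune it for your footage), the threshold can be passed explicitly as the fourth argument:

//third argument is the estimation method, fourth is ransacReprojThreshold in pixels
double ransacReprojThreshold = 3.0;
Mat H = findHomography(scene, obj, CV_RANSAC, ransacReprojThreshold);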

This is just a starting point. It will also help to find a better filter for good matches and a way to validate the resulting homography; you will find several answers on those topics in the links below (see also the sketch after them):

OpenCV Orb not finding matches..

How can you tell if a homography matrix is acceptable or not?
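
As one possible illustration of the kind of homography check discussed in the second link (this helper and its threshold are my own assumption, not a complete criterion), you can reject matrices whose determinant is close to zero or negative, which usually indicates a degenerate or mirrored mapping:

//rough sanity check on the 3x3 CV_64F matrix returned by findHomography:
//a near-zero or negative determinant of the top-left 2x2 block suggests a
//degenerate or flipped mapping, so skip warping for that frame pair
bool isHomographyPlausible(const Mat& H){
    if(H.empty() || H.rows!=3 || H.cols!=3) return false;
    double det = H.at<double>(0,0)*H.at<double>(1,1) - H.at<double>(0,1)*H.at<double>(1,0);
    return det > 0.01; //threshold chosen for illustration only
}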

  • Thanks for your response. I've tried your suggestion, but unfortunately it did not change much. Indeed, I first tried on 2 images, but the warping was not perfect - so unsurprisingly, on a live video feed the output is not as expected. I've tried several algorithm combinations, but I don't think it makes too much of a difference (apart from perhaps, speed). I'm also limited by not having access to SURF/SIFT on OpenCV Android. I will research the topics you have linked. – user3019612 Feb 26 '14 at 00:28
  • But is the warp always wrong? At least some video frames should come out ok. – Rui Marques Feb 26 '14 at 00:33
  • On some video frames, it is fine. But most of the time, it is skewed and the image is generally distorted. I'm trying to use the output of warpPerspective to detect novelties using background subtraction - so the detected novelties are generally distorted when I start moving the camera around slightly. – user3019612 Feb 26 '14 at 00:40
  • That is consistent with the answer I gave you. – Rui Marques Feb 26 '14 at 09:55