
After some processing of a std::vector<std::pair<cv::Point, cv::Point>> tempEndPoints, I get the following first and second members:

first: [139, 113][201, 310][223, 339][297, 437][323, 472][381, 465][408, 413][484, 291][505, 151]
second: [139, 210][201, 692][223, 664][297, 550][323, 523][381, 544][408, 580][484, 699][505, 254]

Now I'm stuck at warpPerspective. Each pair of those "corner" points spans a small image with an individual size (it was earlier created from textureImage). Everything is stored, and this part is fine. But now I want to dewarp all those "small images" between the points I mentioned before.

for (int i = 0; i < tempEndPoints.size() - 1; i++) {    
    //do some stuff...
    cv::Vec3b zero(0, 0, 0);

    cv::Mat_<cv::Vec3b> dewrapped(textureImage.rows, textureImage.cols, zero);

    const cv::Point2f srcPts[] = {
        cv::Point2f(tempEndPoints[i].first),
        cv::Point2f(tempEndPoints[i + 1].first),
        cv::Point2f(tempEndPoints[i + 1].second),
        cv::Point2f(tempEndPoints[i].second) };

    std::cout << "srcPoints: " << tempEndPoints[i].first << ", " << tempEndPoints[i + 1].first
        << ", " << tempEndPoints[i + 1].second << ", " << tempEndPoints[i].second << "\n";

    const cv::Point2f dstPts[] = {
        cv::Point2f(tempEndPoints[i].first),
        cv::Point2f(tempEndPoints[i + 1].first.x, tempEndPoints[i].first.y),
        cv::Point2f(tempEndPoints[i + 1].second.x, tempEndPoints[i].second.y),
        cv::Point2f(tempEndPoints[i].second)};

    std::cout << "dstPoints: " << tempEndPoints[i].first << ", " << tempEndPoints[i + 1].first.x << "-" << tempEndPoints[i].first.y
        << ", " << tempEndPoints[i + 1].second.x << "-" << tempEndPoints[i].second.y << ", " << tempEndPoints[i].second << "\n";

    cv::Mat homography_matrix = cv::getPerspectiveTransform(srcPts, dstPts);

    cv::warpPerspective(textureImage, dewrapped, homography_matrix, textureImage.size());
    cv::imshow("dewrap", dewrapped);
}

I followed this blog, and nothing seems to happen; I just get a black image. The order of the points in srcPts and dstPts is:

1---2
|   |
4---3

Looking at this question, I tried another order, but that also results in a black image. So now I'm using the order from the blog mentioned before. What am I doing wrong in my code?

[EDIT:] The output for srcPts and dstPts is:

srcPts: [20, 54], [123, 83], [123, 188], [20, 65]
dstPts: [20, 54], [123, 54], [123, 65], [20, 65]
srcPts: [123, 83], [181, 362], [181, 718], [123, 188]
dstPts: [123, 83], [181, 83], [181, 188], [123, 188]
srcPts: [181, 362], [278, 412], [278, 569], [181, 718]
dstPts: [181, 362], [278, 362], [278, 718], [181, 718]
srcPts: [278, 412], [290, 428], [290, 557], [278, 569]
dstPts: [278, 412], [290, 412], [290, 569], [278, 569]
srcPts: [290, 428], [321, 469], [321, 525], [290, 557]
dstPts: [290, 428], [321, 428], [321, 557], [290, 557]
srcPts: [321, 469], [373, 476], [373, 534], [321, 525]
dstPts: [321, 469], [373, 469], [373, 525], [321, 525]
srcPts: [373, 476], [387, 447], [387, 552], [373, 534]
dstPts: [373, 476], [387, 476], [387, 534], [373, 534]
srcPts: [387, 447], [407, 415], [407, 579], [387, 552]
dstPts: [387, 447], [407, 447], [407, 552], [387, 552]
srcPts: [407, 415], [421, 392], [421, 597], [407, 579]
dstPts: [407, 415], [421, 415], [421, 579], [407, 579]
srcPts: [421, 392], [531, 90], [531, 212], [421, 597]
dstPts: [421, 392], [531, 392], [531, 597], [421, 597]
  • and yes, I also already debugged it. Can't find any error, because I get the right coordinates, just a black result. :( – Unnamed Aug 14 '17 at 15:07
  • recently i answered similar question. [take a look at it](http://answers.opencv.org/question/170615/getting-picture-contours/). if you post your whole code and test image let me take a look – sturkmen Aug 14 '17 at 15:25
  • @sturkmen but there's nothing about `wrapPerspective`? – Unnamed Aug 14 '17 at 15:28
  • do you see my comment for your question? – Saeed Masoomi Aug 14 '17 at 15:43
  • see https://github.com/sturkmen72/opencv_samples/blob/master/171913.cpp#L200-L201 – sturkmen Aug 14 '17 at 15:47
  • i guess you need to sort points before warping see https://github.com/sturkmen72/opencv_samples/blob/master/171913.cpp#L84-L113 – sturkmen Aug 14 '17 at 15:49
  • @sturkmen why do I need to sort them, if they go through the iteration and are built "step by step"? In the previous procedure I did, it works flawlessly without any sorting. – Unnamed Aug 14 '17 at 15:54
  • `cv::Mat homography_matrix = cv::getPerspectiveTransform(srcPts, dstPts);` check values of srcPts and dstPts – sturkmen Aug 14 '17 at 15:56
  • @sturkmen I already did.. but for more clarification, I added them to my question, please take a look at the edited version. – Unnamed Aug 14 '17 at 16:09
  • @sturkmen and as you can see, `srcPts` and `dstPts` are actually fine: first all the original points and then the transformed ones. – Unnamed Aug 14 '17 at 16:11
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/151905/discussion-between-sturkmen-and-unnamed). – sturkmen Aug 14 '17 at 16:32
  • textureImage.size() ? – sturkmen Aug 14 '17 at 16:34
  • you do have a cv::waitKey(0) after the imshow, don't you? – Micka Aug 14 '17 at 16:37

0 Answers