
I am working on a project for estimating a UAV (quadcopter) location using an optical-flow technique. I currently have code that uses the Farneback algorithm from OpenCV, and it works fine when the camera is always pointing straight down at the ground.

Now I want to add support for the case when the camera is not pointing straight down, i.e. the quadcopter has some pitch / roll / yaw (Euler angles). The quadcopter's Euler angles are known, and I am searching for a method to compute and apply the transformation needed based on the current Euler angles, so that the resulting image will look as if it was taken from above (see image below).

I found methods that calculate the transformation given two sets (source and destination) of 4 corners, via the findHomography or getPerspectiveTransform functions from OpenCV. But I couldn't find any method that can do it knowing only the Euler angles (because I don't know the destination image corners).

So my question is: what method can I use, and how, in order to transform a frame so it looks as if it was taken from above, using only the Euler angles (and the camera height above ground, if necessary)?

In order to demonstrate what I need:

[image: illustration of the desired top-down (bird's-eye) reprojection]

The relevant part of my current code is below:

for(;;)
{
    Mat m, disp, warp;
    vector<Point2f> corners;
    // grab a frame - still distorted
    cap >> origFrame;
    // undistort the frame using the calibration parameters
    cv::undistort(origFrame, undistortFrame, cameraMatrix, distCoeffs, noArray());
    // reduce processing effort by converting the frame to grayscale
    cvtColor(undistortFrame, gray, COLOR_BGR2GRAY);

    if( !prevgray.empty() )
    {
        // calculate flow
        calcOpticalFlowFarneback(prevgray, gray, uflow, 0.5, 3/*def 3 */, 10/* def 15*/, 3, 3, 1.2 /* def 1.2*/, 0);
        uflow.copyTo(flow);

        // get average
        calcAvgOpticalFlow(flow, 16, corners);

        // calculate range of view - 2*tan(fov/2)*distance
        rovX = 2*0.44523*distanceSonar*100;     // 2 * tan(48/2) * dist(cm)
        rovY = 2*0.32492*distanceSonar*100;     // 2 * tan(36/2) * dist(cm)

        // calculate final x, y location
        location[0] += (currLocation.x/WIDTH_RES)*rovX;
        location[1] += (currLocation.y/HEIGHT_RES)*rovY;
    }
    // break conditions
    if(waitKey(1)>=0)
        break;
    if(end_run)
        break;
    std::swap(prevgray, gray);
}  

UPDATE:

After successfully adding the rotation, I still need the image to be centered so that it does not go outside the frame window (as shown below). I guess I need some kind of translation: I want the center of the source image to be at the center of the destination image. How can I add this as well?

The rotation function that works:

void rotateFrame(const Mat &input, Mat &output, Mat &A, double roll, double pitch, double yaw){
    // Rotation matrices around the X (roll), Y (pitch) and Z (yaw) axes
    Mat Rx = (Mat_<double>(3, 3) <<
              1,         0,          0,
              0, cos(roll), -sin(roll),
              0, sin(roll),  cos(roll));
    Mat Ry = (Mat_<double>(3, 3) <<
               cos(pitch), 0, sin(pitch),
               0,          1, 0,
              -sin(pitch), 0, cos(pitch));
    Mat Rz = (Mat_<double>(3, 3) <<
              cos(yaw), -sin(yaw), 0,
              sin(yaw),  cos(yaw), 0,
              0,         0,        1);

    // Combined rotation, then the pure-rotation homography H = A * R * A^-1
    Mat R = Rx*Ry*Rz;
    Mat trans = A*R*A.inv();

    warpPerspective(input, output, trans, input.size());
}

When I run it with rotateFrame(origFrame, processedFrame, cameraMatrix, 0, 0, 0); I get the image as expected:

[image: output with zero rotation, as expected]

But when I run it with 20 degrees of roll, rotateFrame(origFrame, processedFrame, cameraMatrix, 20*(M_PI/180), 0, 0);, the image goes outside of the frame window:

[image: rotated output shifted outside the frame window]

A. Sarid
  • "_FindHomography or getPerspectiveTransform functions from OpenCV. But none that were doing it without knowing the corners positions but with known angles_" Are you sure they don't work with corners positions? Because I do my transformation using four corners – Khalil Khalaf May 09 '16 at 14:23
  • It wasn't clear enough; I've edited my post. I meant that I don't know the destination corners, but I **do** know the Euler angles. – A. Sarid May 09 '16 at 14:30

3 Answers


If you have the calibration intrinsics matrix A (3x3), and there is no translation between the camera poses, all you need to do to find the homography H (3x3) is construct the rotation matrix R (3x3) from the Euler angles and apply the following formula:

H = A * R * A.inv()

Where .inv() is matrix inversion.
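
In OpenCV terms this is just a couple of lines (a minimal sketch, assuming A is the 3x3 CV_64F intrinsics Mat and R is the 3x3 rotation built from the Euler angles, e.g. as in the rotateFrame function in the question):

// Pure-rotation homography: back-project pixels to rays (A.inv()),
// rotate the rays (R), and project them back to pixels (A).
Mat H = A * R * A.inv();
warpPerspective(input, output, H, input.size());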

UPDATED:

If you want to center the image, you just need to add a translation, like this (this finds the warped position of the image center and translates that point back to the center):

|x'|       | 320 / 2 |
|y'| = H * | 240 / 2 |
|w |       | 1       |

dx = x' / w,   dy = y' / w     (divide by the homogeneous coordinate w)

    | 1 0 (320/2 - dx) |
W = | 0 1 (240/2 - dy) | * H
    | 0 0 1            |

W is your final transformation. (320/2, 240/2) is the image center for a 320x240 frame; use your own width/2 and height/2.
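
A minimal sketch of this centering step with OpenCV Mat arithmetic (my addition for illustration; it assumes H is a 3x3 CV_64F homography and w, h are the image width and height):

// Map the image center through H (with homogeneous normalization),
// then prepend a translation that moves it back to the center.
Mat centerHomography(const Mat &H, int w, int h)
{
    Mat c = (Mat_<double>(3, 1) << w / 2.0, h / 2.0, 1.0);
    Mat p = H * c;
    double dx = p.at<double>(0) / p.at<double>(2);
    double dy = p.at<double>(1) / p.at<double>(2);
    Mat T = (Mat_<double>(3, 3) <<
             1, 0, w / 2.0 - dx,
             0, 1, h / 2.0 - dy,
             0, 0, 1);
    return T * H;   // W: the final, centered transformation
}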

Aleksey Petrov
  • Thanks! Can you explain more about how to create the calibration intrinsics matrix? Do I also need to add a translation (and not only rotation)? – A. Sarid May 13 '16 at 12:45
  • @A.Sarid The intrinsics matrix is constructed from the camera intrinsics: focal lengths, skew, and principal point. You need to calibrate your camera to get these values; you can find more information [here](http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html). Regarding the second question: if you want a view of the plane from the same point but from a different angle, you don't need to add a translation. – Aleksey Petrov May 13 '16 at 14:00
  • I've updated my post - please take a look. The image is getting out of the frame, even in small angle. Are you sure I don't need any translation? – A. Sarid May 13 '16 at 15:02
  • Firstly, you have an error in the rotation matrix construction. Please refer to equation (48) [here](http://mathworld.wolfram.com/EulerAngles.html). Secondly, what is your A matrix? How did you get it? – Aleksey Petrov May 13 '16 at 15:21
  • Does it matter? The final `R` is what matters; for example, [here](http://mathworld.wolfram.com/RotationMatrix.html) they use a different convention. Second, I have the intrinsics matrix from a calibration I did for this camera before. – A. Sarid May 13 '16 at 15:37
  • @A.Sarid Carefully look at the signs before sin(). Are you sure your calibration matrix is correct? Have you checked it somehow? – Aleksey Petrov May 13 '16 at 15:48
  • @A.Sarid On the second image you have scaling. The sheet of paper is ~1.5 times bigger. So I think your calibration matrix is incorrect. – Aleksey Petrov May 13 '16 at 16:00
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/111878/discussion-between-a-sarid-and-aleksey-petrov). – A. Sarid May 13 '16 at 16:04
  • I still need the translation, I want the image to be centered. Please see my update above. – A. Sarid May 15 '16 at 10:07
  • I've already answered you in the chat how to translate the image. In your function rotateFrame you should change trans matrix just before warping this way: trans = [1, 0, dx, 0, 1, dy, 0, 0, 1 ] * trans – Aleksey Petrov May 16 '16 at 08:45
  • Yes, thanks, but how can I know the values of dx, dy in order to make my image always centered? – A. Sarid May 16 '16 at 08:48

I came to the conclusion that I had to build the transformation through 4x4 matrices in order to get what I wanted. To find the right homography matrix we need:

  1. The 3D rotation matrix R.
  2. The camera calibration intrinsics matrix A2 (3x4) and its inverse A1 (4x3).
  3. The translation matrix T.

We can compose the 3D rotation matrix R by multiplying the rotation matrices around the X, Y and Z axes:

Mat R = RZ * RY * RX  

In order to apply the transformation to the image and keep it centered, we need to add a translation given by a 4x4 matrix, where dx = 0, dy = 0, dz = 1:

Mat T = (Mat_<double>(4, 4) <<
         1, 0, 0, dx,
         0, 1, 0, dy,
         0, 0, 1, dz,
         0, 0, 0, 1);

Given all these matrices we can compose our homography matrix H:

Mat H = A2 * (T * (R * A1))

With this homography matrix we can then use the warpPerspective function from OpenCV to apply the transformation:

warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);

For completeness, here is the full code:

void rotateImage(const Mat &input, UMat &output, double roll, double pitch, double yaw,
                 double dx, double dy, double dz, double f, double cx, double cy)
{
    // Camera intrinsics matrix A2: projects 3D points back to the 2D image (3x4)
    Mat A2 = (Mat_<double>(3,4) <<
              f, 0, cx, 0,
              0, f, cy, 0,
              0, 0, 1,  0);
    // Inverse intrinsics A1: lifts 2D pixels to 3D points on the z = 0 plane (4x3)
    Mat A1 = (Mat_<double>(4,3) <<
              1/f, 0,   -cx/f,
              0,   1/f, -cy/f,
              0,   0,    0,
              0,   0,    1);
    // Rotation matrices around the X, Y and Z axes
    Mat RX = (Mat_<double>(4, 4) <<
              1, 0,         0,          0,
              0, cos(roll), -sin(roll), 0,
              0, sin(roll), cos(roll),  0,
              0, 0,         0,          1);
    Mat RY = (Mat_<double>(4, 4) <<
              cos(pitch),  0, sin(pitch), 0,
              0,           1, 0,          0,
              -sin(pitch), 0, cos(pitch), 0,
              0,           0, 0,          1);
    Mat RZ = (Mat_<double>(4, 4) <<
              cos(yaw), -sin(yaw), 0, 0,
              sin(yaw),  cos(yaw), 0, 0,
              0,         0,        1, 0,
              0,         0,        0, 1);
    // Translation matrix
    Mat T = (Mat_<double>(4, 4) <<
             1, 0, 0, dx,
             0, 1, 0, dy,
             0, 0, 1, dz,
             0, 0, 0, 1);
    // Compose the rotation matrix from (RX, RY, RZ)
    Mat R = RZ * RY * RX;
    // Final transformation matrix
    Mat H = A2 * (T * (R * A1));
    // Apply the transformation
    warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);
}
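
For example, the function could be called like this (a sketch of my own, assuming a 3x3 CV_64F cameraMatrix from a prior calibration and angles in radians):

// Sketch: pull f, cx, cy from an existing calibration matrix (assuming fx ~= fy)
double f  = cameraMatrix.at<double>(0, 0);
double cx = cameraMatrix.at<double>(0, 2);
double cy = cameraMatrix.at<double>(1, 2);

UMat processedFrame;
// dx = 0, dy = 0, dz = 1, as described above
rotateImage(origFrame, processedFrame, roll, pitch, yaw, 0, 0, 1, f, cx, cy);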

Result: [image: rectified frame, centered in the output window]

A. Sarid
  • I stumbled across your solution doing similar things; did you come up with a way to solve the cropping issue? – DeusExMachina25 Jan 05 '18 at 00:18
  • @DeusExMachina25 what do you mean by cropping issue? This solution above is the most updated solution I've used. Maybe there are other better / simpler ways to do this but this one above did the job as described in the question. – A. Sarid Jan 09 '18 at 08:47
  • I ended up developing a different solution but used yours to learn. I was referring to the fact that the image was placed in a box that wasn't large enough, as pictured in your post. Part of the image gets clipped by the bounding box. – DeusExMachina25 Aug 08 '18 at 20:26

This is how I do it in Eigen, using 4 corners:

// Desired four corners
std::vector<Eigen::Vector2f> Normalized_Reference_Pattern = { Eigen::Vector2f(0, 0), Eigen::Vector2f(0, 2), Eigen::Vector2f(2, 0), Eigen::Vector2f(2, 2) }; 

// Current four points
std::vector<Eigen::Vector2f> CurrentCentroids = { /* Whatever four corners you want, but in the same relative sequence as above */ };

// Transform for current to desired
auto Master_Transform = get_perspective_transform(CurrentCentroids, Normalized_Reference_Pattern);

// ability to use the same transformation for other points (not just the corners) in the image,
// mapping through homogeneous coordinates and normalizing back to 2D
Eigen::Vector2f Master_Transform_Centroid =
    (Master_Transform * Eigen::Vector3f(currentX, currentY, 1.f)).hnormalized();

And here is my black box:

// Requires <opencv2/imgproc.hpp> (getPerspectiveTransform) and <opencv2/core/eigen.hpp> (cv::cv2eigen)
Eigen::Matrix3f get_perspective_transform(const std::vector<Eigen::Vector2f>& points_from, const std::vector<Eigen::Vector2f>& points_to)
{
    cv::Mat transform_cv = cv::getPerspectiveTransform(
        convert::to_cv(points_from),
        convert::to_cv(points_to));

    Eigen::Matrix3f transform_eigen;
    cv::cv2eigen(transform_cv, transform_eigen);
    return transform_eigen;
}
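
The convert::to_cv helper isn't shown in the original; here is a minimal sketch of what it might look like (hypothetical, for illustration only):

namespace convert {
// Hypothetical helper: Eigen 2D points -> cv::Point2f vector for getPerspectiveTransform
inline std::vector<cv::Point2f> to_cv(const std::vector<Eigen::Vector2f>& pts)
{
    std::vector<cv::Point2f> out;
    out.reserve(pts.size());
    for (const auto& p : pts)
        out.emplace_back(p.x(), p.y());
    return out;
}
} // namespace convert
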
Khalil Khalaf
  • Thanks for sharing this! But still, I can't understand how do I know the desired 4 corners if I have only the Euler angles of the quadcopter. – A. Sarid May 09 '16 at 14:54
  • @A. How do you store the image in your system? Can't you just take the corner points `[0,0]`, `[0, WidthSize-1]`, `[Height-1, 0]` and `[Height-1, Width-1]` ? – Khalil Khalaf May 09 '16 at 14:57
  • Yes, but those are my source corners; the destination corners have to form some kind of trapezoid, if I understand it right. – A. Sarid May 09 '16 at 14:59
  • You choose the desired corners, and they could be any. In my example, I chose the new "image" to be 2x2, where `Eigen::Vector2f(0, 0), Eigen::Vector2f(0, 2), Eigen::Vector2f(2, 0), Eigen::Vector2f(2, 2)` are my "destination" corners; then my current ROI's points will be transformed to that. In your case, your ROI is the whole image – Khalil Khalaf May 09 '16 at 15:04
  • Ok. So how does the transformation know the angle of the image? If, for example, my camera was 30 degrees from the ground? I guess there has to be some use of a rotation matrix or something similar that uses the Euler angles. – A. Sarid May 09 '16 at 15:11
  • I don't know about that.. (maybe because I didn't need angles). I just tell my system: _see these four corners_ ... _Good_ .. _Now Transfer them to these "Virtual" Four corners_. Result: 2d "virtual" image, which is flat "angle zero from the camera" – Khalil Khalaf May 09 '16 at 15:16
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/111450/discussion-between-a-sarid-and-firststep). – A. Sarid May 09 '16 at 15:26