
This question is similar to this one, but I can't find what's wrong in mine. I am trying to use OpenCV's cv2.calibrateCamera to find the position of the camera, which in this case is on an airplane, using the known positions of the runway corners:

import cv2
import numpy as np

objectPoints = np.array([[posA, posB, posC, posD]], dtype='float32')  # runway corners in the real world
imagePoints = np.array([[R0, R1, L1, L0]], dtype='float32')           # same corners in the image (pixels)
imageSize = (1152, 864)
retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(objectPoints, imagePoints, imageSize, None, None)

# rotation matrix from the rotation vector of the single view
R_mtx, jac = cv2.Rodrigues(np.array(rvecs).T)

# camera position in world coordinates: C = -R^T * t
cameraPosition = -np.matrix(R_mtx).T * np.matrix(tvecs[0])

cameraPosition

Here [R0, R1, L1, L0] are the corner positions in pixels in the image and [posA, posB, posC, posD] are the positions of the runway corners in the real world. This code gives me:

matrix([[ -4.7495336 ], #x
        [936.21932548], #y
        [-40.56147483]]) #z

However, I am supposed to get something like:

#[x,y,z]
[-148.4259877253941, -1688.345610364497, 86.58536585365854]
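For context, the last two lines of the code compute the camera centre as C = -R^T t. With a single view the same pose is usually recovered with cv2.solvePnP when the intrinsics are already known. A minimal sketch of that route, assuming hypothetical corner coordinates and a guessed intrinsic matrix K (neither is given in the question):

import cv2
import numpy as np

# Hypothetical example values; the real runway corners and pixel
# coordinates from the question would go here instead.
objectPoints = np.array([[0, 0, 0],
                         [0, 45, 0],
                         [3000, 45, 0],
                         [3000, 0, 0]], dtype='float32')   # world coordinates
imagePoints = np.array([[520, 610],
                        [630, 610],
                        [640, 420],
                        [510, 420]], dtype='float32')      # pixels

# Assumed intrinsics: guessed focal length, principal point at the image centre.
f = 1000.0
K = np.array([[f, 0, 1152 / 2],
              [0, f, 864 / 2],
              [0, 0, 1]], dtype='float32')
dist = np.zeros(5)  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(objectPoints, imagePoints, K, dist)

R, _ = cv2.Rodrigues(rvec)
cameraPosition = -R.T @ tvec   # C = -R^T t, camera centre in world coordinates
print(cameraPosition.ravel())

The difference from calibrateCamera is that solvePnP only estimates the pose, while calibrateCamera also has to estimate the intrinsics, which is poorly constrained from a single view of four coplanar points.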
  • Why is it [[R0,R1,L1,L0]]? Should it not be [R0,R1,L1,L0]? This way you will reduce your dimensions by 1. – Dennis Kozevnikoff Jul 08 '20 at 18:12
  • Isn't it supposed to be a list with points from many pictures to calibrate the camera with more precision? Here I am only using 1. – Zeor137 Jul 08 '20 at 18:15
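Regarding the shape question raised in the comments: calibrateCamera expects one array of points per view, so with a single image the outer dimension has length 1 and the extra bracket is needed. A minimal sketch of the shapes it typically accepts (with hypothetical dummy values, just to show the dimensions):

import numpy as np

# The outer list indexes views, the inner arrays index points within a view.
objectPoints = [np.zeros((4, 3), dtype='float32')]  # 4 runway corners in 3D, one view
imagePoints = [np.zeros((4, 2), dtype='float32')]   # the same 4 corners in pixels, one view

print(np.asarray(objectPoints).shape)  # (1, 4, 3)
print(np.asarray(imagePoints).shape)   # (1, 4, 2)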
