I have a flying drone with a downward-facing camera and an IR sensor, and I am working on SLAM for the drone. The drone only flies over a planar surface. To build a map of the drone's environment from the camera feed with OpenCV, I need to remove the camera's perspective from the captured images (introduced by the drone's height, roll, pitch, and yaw) so that distances measured in the images are accurate.
I've looked through OpenCV's methods for this, but they all seem to require a prior camera calibration, done with a separate script, to determine the camera's orientation.
Is there a reliable way to transform points in the image so that their spacing is constant, given that I know the orientation of the drone (and camera)? I have access to the drone's orientation data from its IMU.
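To make the question concrete, below is a rough sketch of the kind of transform I have in mind: warp each frame onto the ground plane with a homography built from the IMU attitude and the IR-measured height. The intrinsic matrix K, the rotation-order/sign conventions, the function name, and the metres-per-pixel scale are all placeholder assumptions on my part, not something I've verified, and the intrinsics would still need to come from a one-off calibration or the sensor datasheet.

```python
import cv2
import numpy as np

def rectify_to_ground_plane(img, K, roll, pitch, yaw, height,
                            metres_per_pixel=0.01, out_size=(800, 800)):
    """Warp `img` so the ground plane appears as a metric top-down view.

    Assumptions (placeholders, to be checked against my setup):
      - K is the 3x3 camera intrinsic matrix.
      - roll/pitch/yaw are in radians; the camera looks straight down at
        zero roll/pitch, and the world frame is the level camera frame
        projected onto the ground (Z pointing down, origin under the camera).
      - height is the camera's distance above the plane in metres (IR sensor).
    """
    # Attitude of the camera relative to level, composed as Rz*Ry*Rx.
    # The exact order and signs depend on the IMU convention and the
    # camera mounting, so this needs verifying against a known scene.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = (Rz @ Ry @ Rx).T                      # world-to-camera rotation
    t = R @ np.array([0.0, 0.0, height])      # world-to-camera translation

    # A ground-plane point (X, Y, 0) in metres projects to
    # K * [r1 r2 t] * (X, Y, 1)^T, so this is the plane-to-image homography.
    H_ground_to_img = K @ np.column_stack((R[:, 0], R[:, 1], t))

    # Map output pixels to ground-plane metres, centred under the camera.
    w, h_px = out_size
    S = np.array([[metres_per_pixel, 0.0, -0.5 * w * metres_per_pixel],
                  [0.0, metres_per_pixel, -0.5 * h_px * metres_per_pixel],
                  [0.0, 0.0, 1.0]])

    # Full mapping: output pixel -> ground metres -> input pixel (dst -> src),
    # so pass it with WARP_INVERSE_MAP.
    M = H_ground_to_img @ S
    return cv2.warpPerspective(img, M, out_size,
                               flags=cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR)
```

Is this homography-based approach the right direction, or is there a more standard OpenCV workflow for rectifying with a known orientation instead of a full calibration?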