
During camera calibration, the usual advice is to use many images (>10) with variations in pose, depth, etc. However, I notice that the fewer images I use, the smaller the reprojection error usually is. For example, with 27 images cv::calibrateCamera returns 0.23, and with just 3 I get 0.11. This may be due to the fact that during calibration we are solving a least squares problem for an overdetermined system.
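
For reference, here is roughly the call in question - a minimal sketch, assuming `objectPoints`/`imagePoints` already hold one chessboard detection per image (hypothetical names, e.g. filled via cv::findChessboardCorners). The value cv::calibrateCamera returns is the overall RMS reprojection error:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

// Assumed to be filled beforehand, e.g. via cv::findChessboardCorners:
// one entry per calibration image.
std::vector<std::vector<cv::Point3f>> objectPoints; // board corners in board coordinates
std::vector<std::vector<cv::Point2f>> imagePoints;  // detected corners in pixels
cv::Size imageSize;                                 // size of the calibration images

double calibrate(cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
{
    std::vector<cv::Mat> rvecs, tvecs; // one board pose per image
    // The return value is the overall RMS reprojection error.
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << std::endl;
    return rms;
}
```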

QUESTIONS:

  1. Do we actually use the reprojection error as an absolute measure of how good a calibration is? For example, if I calibrate with 3 images and get 0.11, and then calibrate with 27 other images and get 0.23, can we really say that "the first calibration is better"?

  2. OpenCV uses the same images both for calibration and for calculating the error. Isn't that some form of overfitting? Wouldn't it be more correct if I actually used two different sets: one to compute the calibration parameters and one to compute the error? In that case, I would use the same (test) set to calculate the error for all my calibration results from different (training) sets. Wouldn't that be more fair? (See the sketch below.)
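
To make question 2 concrete, here is a sketch of the train/test idea I have in mind (hypothetical names; `testObjectPoints`/`testImagePoints` would come from detections on images *not* used for calibration). The intrinsics stay fixed and only each test view's board pose is estimated with cv::solvePnP before reprojecting:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

// cameraMatrix/distCoeffs come from a calibration run; the test
// detections come from images that were NOT used for that run.
double heldOutRms(const std::vector<std::vector<cv::Point3f>>& testObjectPoints,
                  const std::vector<std::vector<cv::Point2f>>& testImagePoints,
                  const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    double sqErrSum = 0;
    size_t nPoints = 0;
    for (size_t i = 0; i < testObjectPoints.size(); ++i) {
        // Intrinsics stay fixed; only the board pose is fitted per view.
        cv::Mat rvec, tvec;
        cv::solvePnP(testObjectPoints[i], testImagePoints[i],
                     cameraMatrix, distCoeffs, rvec, tvec);
        std::vector<cv::Point2f> projected;
        cv::projectPoints(testObjectPoints[i], rvec, tvec,
                          cameraMatrix, distCoeffs, projected);
        // L2 norm over all corners of this view.
        double e = cv::norm(testImagePoints[i], projected, cv::NORM_L2);
        sqErrSum += e * e;
        nPoints += testObjectPoints[i].size();
    }
    return std::sqrt(sqErrSum / nPoints); // RMS error on unseen images
}
```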

Sassa

1 Answer


Sorry if this is too late - only just saw it.

The error is the reprojection error of the fit: find points on an image, calculate the real-world model, recalculate where those points would land on the image given the model, and report the difference. In a way this is a bit circular: you might have a model that is only correct for those few images, which would then report a very good error, while giving it lots of images will produce a much more generally correct model - but with a larger error, just because you are trying to stretch it to fit a much bigger space.
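
To make that concrete, here is a sketch of how the reported number can be reproduced by hand (closely following what cv::calibrateCamera does internally, using its per-view pose outputs; the names assume the same vectors that were passed to the calibration):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

// rvecs/tvecs, cameraMatrix and distCoeffs are the outputs of
// cv::calibrateCamera for these same objectPoints/imagePoints.
double reprojectionRms(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                       const std::vector<std::vector<cv::Point2f>>& imagePoints,
                       const std::vector<cv::Mat>& rvecs,
                       const std::vector<cv::Mat>& tvecs,
                       const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    double sqErrSum = 0;
    size_t nPoints = 0;
    for (size_t i = 0; i < objectPoints.size(); ++i) {
        std::vector<cv::Point2f> projected;
        // Project the model points back into image i using its fitted pose.
        cv::projectPoints(objectPoints[i], rvecs[i], tvecs[i],
                          cameraMatrix, distCoeffs, projected);
        // Difference between detected and reprojected corners.
        double e = cv::norm(imagePoints[i], projected, cv::NORM_L2);
        sqErrSum += e * e;
        nPoints += objectPoints[i].size();
    }
    return std::sqrt(sqErrSum / nPoints); // overall RMS reprojection error
}
```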

There does come a point where adding more images doesn't improve the fit, and may add noise, since points are never detected perfectly. What's important is to provide a wider range of poses - more angles and positions - rather than more of the same data.

Using the same image set to compute the error isn't really a problem, because the fit has a real meaning in terms of actual physical lens parameters - it's not like training/testing a neural net on the same data.

edit: a better calibration routine than OpenCV's (although based on the same concept) is included in 3D-DIC (free but not OSS; register for the site to get the download link) - specifically, see the calibration manual.

Martin Beckett
  • Thanks for responding; it's not too late, it was more of a general question. If I understand correctly from your first paragraph, you are saying that sometimes a camera model happens to be described better by a set of (maybe fewer) images, right? Also, one of my conclusions is that you can use the reprojection error to determine if the calibration is good (err in [0, 1]) or bad (err > 1), but within [0, 1] you cannot really be sure that choosing the smallest will be the best. Would you say that this is correct, or did I get it totally wrong? – Sassa Sep 23 '12 at 17:58
  • Because, if I am not mistaken, when I was getting a 0.11 error (which is very low) with only 3 images and undistorted the image using those parameters, the undistorted result was not very good compared with the one from the 0.23 error with 27 images. – Sassa Sep 23 '12 at 18:00
  • A camera model based on fewer images will be good - but only for that small set of positions and angles. So it's a very good local fit but a bad global fit. As you add more images (up to a point) you get closer to a global fit, but the local fit may be worse. – Martin Beckett Sep 23 '12 at 18:50
  • Thanks, understood. I was overstretching this in my hunt for "the perfect calibration"! – Sassa Sep 23 '12 at 19:20
  • @Bob - yes, it's the old "a man with a watch knows the time, a man with two watches is never certain". PS: see the updated link in the answer. – Martin Beckett Sep 23 '12 at 19:23
  • Is OpenCV's reprojection error in the object coordinate system or in pixel coordinates? – Micka Jun 26 '20 at 11:53