Wrong field of view?

Hello all,

I have noticed that there is something wrong with the field of view in OpenGL. Anyone can reproduce what I have done:

  1. Create flat surface with checkerboard pattern.
  2. Render several (>6) images of this structure using OpenGL with certain field of view (slightly varying tilt between the images).
  3. Use these images for camera calibration (for example, with Caltech’s Matlab Camera Calibration Toolkit, www.vision.caltech.edu/bouguetj/calib_doc/), which gives the focal length in pixels.
  4. Calculate the field of view using tan(FoV_v/2) = (Height/2)/FocalLength, where Height is the number of rows in the images and FoV_v is the vertical FoV, as is typical for OpenGL (see the sketch after this list).

The results differ. Can someone explain this discrepancy?
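To make step 4 concrete, here is a minimal sketch in C. The focal length value is illustrative (roughly what calibration should report for a 480-row image rendered with a 45-degree vertical FOV), not one of my measured numbers:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI     = 3.14159265358979323846;
    const double height = 480.0;  /* image rows */
    const double f_px   = 579.4;  /* calibrated focal length in pixels (illustrative) */

    /* tan(FoV_v/2) = (Height/2)/FocalLength, solved for FoV_v */
    double fov_v = 2.0 * atan((height / 2.0) / f_px);
    printf("recovered vertical FOV = %.2f degrees\n", fov_v * 180.0 / PI);
    /* prints 45.00; compare with the fovy passed to gluPerspective() in step 2 */
    return 0;
}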

I’ve looked through the posts on these forums, and it seems that nobody has tried to close the loop “intrinsics-rendering-calibration”.
Note that I understand that OpenGL simulates a pinhole camera (no lens-related distortions), and that “true” camera acquisition would require ray tracing.

Hope that some guru already knows the answer… fingers crossed…

Yuri

I really have no time to read through the Matlab CCT documentation. Maybe you could state concretely how the CCT result differs from your calculation of the vertical FOV.

How can the focal length be expressed in pixels?

How did you come to this formula?
I’m using the following formula for FOV calculation:
FOV = 2.0 * atan( SensorDimension / ( 2.0 * FocalLengthMin * Zoom ) )

This worked quite well in several augmented reality applications.
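For reference, here is that formula as a small C program. The sensor numbers are assumptions for illustration (a 1/3" sensor is about 3.6 mm tall), not values from the applications above:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI               = 3.14159265358979323846;
    const double sensor_dimension = 3.6; /* mm, vertical side of a 1/3" sensor (illustrative) */
    const double focal_length_min = 4.0; /* mm (illustrative) */
    const double zoom             = 1.0;

    double fov = 2.0 * atan(sensor_dimension / (2.0 * focal_length_min * zoom));
    printf("FOV = %.2f degrees\n", fov * 180.0 / PI); /* about 48.5 with these numbers */
    return 0;
}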

Hi Aleksandar,
Caltech’s toolkit is respectable software (its creator was invited to join Intel to develop similar software for OpenCV) and has been used for ages. (Which means that OpenCV can be used for camera calibration too, and will give the same result.)

Focal length is expressed in pixels because sensor dimensions are expressed in pixels. One may use real units if the sensor is measured in the same units.
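In other words, with square pixels the conversion is just f_px = f_mm * width_px / sensor_width_mm. A tiny sketch with illustrative numbers:

#include <stdio.h>

int main(void)
{
    const double f_mm            = 4.0;   /* lens focal length in mm (illustrative) */
    const double sensor_width_mm = 4.8;   /* width of a 1/3" sensor (illustrative) */
    const double width_px        = 640.0; /* image columns */

    double f_px = f_mm * width_px / sensor_width_mm;
    printf("focal length = %.1f pixels\n", f_px); /* 533.3 */
    return 0;
}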

The formula you use is exactly the one I use; my “FocalLength” is just your “FocalLengthMin * Zoom”.

Assuming square pixels, the ratio of sensor dimensions equals the ratio of the fields of view. For a frame size of 640×480 and a 45-degree vertical field of view, the horizontal field of view is 60 degrees.

I just cannot believe that nobody has ever tried to calibrate a virtual OpenGL camera using rendered images of a standard calibration pattern and compared the result with the input parameters…
I have all the data and am happy to share it.

I didn’t say it is not respectable software, just that I have no time to try it.
On the other hand, I expected you to explain where the problem is. So far you have only noticed that there is an issue.
Please take a look at the following links:
GeoScopeAVE: GIS-based augmented virtual environment
GeoScopeAVS: GIS-augmented video surveillance
GeoScopeAVS 2: GIS-based video surveillance

In all these movies, the “OpenGL camera” is calibrated against real PTZ cameras. The matching is quite acceptable, isn’t it?

Oops, I found where you made a mistake!
You assume that W/HFOV = H/VFOV, where HFOV is the horizontal FOV, VFOV the vertical FOV, H the screen height and W the screen width. That is not correct!
If VFOV is 45 degrees, then HFOV is 57.82 degrees for a 4:3 aspect ratio. Not 60!

The correct formula is the following:
W/(tan(HFOV/2)) = H/(tan(VFOV/2))

Hence:
HFOV = 2 * atan( (W/H) * tan(VFOV/2) )
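A quick numeric check of both versions, assuming a 640×480 frame and VFOV = 45 degrees:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double w = 640.0, h = 480.0;
    const double vfov = 45.0 * PI / 180.0;

    /* Correct: the tangents of the half-angles scale with the aspect
       ratio, not the angles themselves. */
    double hfov = 2.0 * atan((w / h) * tan(vfov / 2.0));
    printf("HFOV = %.2f degrees\n", hfov * 180.0 / PI); /* 57.82 */

    /* Naive linear scaling, (w/h) * 45, would give the 60 degrees
       that your calibration exposed as wrong. */
    return 0;
}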

Hi Aleksandar,
you are absolutely right, and I cannot explain why I didn’t think about it.

I knew there would be a guru to solve the puzzle…