Pincushion Distortion with a Camera

How can I account for a pincushion distortion to properly draw a pan/tilt (or azimuth/elevation) point on my screen?

Let me explain the purpose first. I am trying to predict the location of the Sun in my camera view for the next few minutes/hours; basically I want to show the path the sun will trace across my camera view. Right now I’m first testing whether I can simply draw a point/circle/etc. at the current sun location - if I can do that, then I can add the points for all subsequent predicted locations too.

I am NOT using shaders - I’m still developing with the old fixed-function pipeline, so I can’t take advantage of them. I use gluPerspective() to set up a camera view based on the desired fields-of-view (vertical/horizontal), and I want to overlay a point on the surface of the view (using 2D positioning) that represents where the Sun is calculated to be in that view.

I know my pan/tilt (or azimuth/elevation) calculations are correct, because if I literally pan/tilt the camera to the desired values I see the Sun directly in the middle of the view. But if I leave pan/tilt at 0/0 and instead try to draw a dot/point at the pan/tilt location, it is increasingly off the further I move away from the center, and most off at the corners. Initially I simply took the pan value, divided it by the total horizontal field-of-view, and positioned the point on the screen at that fraction - and did the same for tilt. Unfortunately that isn’t correct, because pan and tilt are really coupled, so I thought there might be some mapping I could use to get the correct x,y placement on the screen, but I am having no luck finding it. Do you have any suggestions?

Here is what my camera sees and where my simple calculations were placing the pan/tilt location (yellow dot near the sun).

[ATTACH=CONFIG]1247[/ATTACH]
Here is an example picture of what the pincushion distortion looks like - and I believe this is what I am experiencing.

[ATTACH=CONFIG]1248[/ATTACH]

I wish I could find a mapping of this distortion but it seems like this mapping involves knowing something about the lens, and I know nothing about the OpenGL/gluPerspective lens that is being modeled. (see link for a short description of the mapping)

I thought maybe there was a way to extract some view matrix from OpenGL that I could multiply my pan/tilt values by, to have them transformed into the camera view, but I’m not experienced enough to know if that even works.

My final alternative solution for this is to simply draw the predicted sun positions in the 3D view (i.e. draw more Suns at each of the future predicted points) and not worry about overlaying the position on the 2D screen…but that doesn’t seem as elegant, and since I can already calculate the current and predicted pan/tilt points, why not use the 2D view to draw those positions? This pincushion distortion has me scratching my head, so any help would be appreciated.

There is no “pincushion distortion” in the sense of what you might get with a wide-angle lens. OpenGL is limited to projective transformations, and a projective transformation always transforms a straight line to another straight line. However, a circle of constant elevation isn’t a straight line (it’s a circle), and its projection won’t be a straight line either. Consider the view from the centre of a cylinder on which lines of constant height are drawn: those lines project to curves, not straight lines.

So you want to know the “screen” coordinates of a point given its azimuth and elevation? This should just be a case of converting spherical to Cartesian coordinates. Assuming that X is East, Y is North, Z is up:


X = distance * cos(elevation) * sin(azimuth)
Y = distance * cos(elevation) * cos(azimuth)
Z = distance * sin(elevation)

Where azimuth is in radians clockwise from North (i.e. North is zero, East is pi/2), and elevation is in radians, zero at the horizon, positive above. The distance can be whatever you want, as it will be cancelled out by the perspective division.

Given X,Y,Z coordinates, you’d set W=0 (so that the result isn’t affected by translation), transform by the current model-view-projection matrix, divide the transformed X and Y by W to get normalised device coordinates, then transform by the viewport transformation to get window coordinates.
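
Roughly, in Java, the whole chain might look like the sketch below (not tested; the class and method names are just illustrative, the matrices are assumed to be the column-major double[16] arrays that glGetDoublev returns, the viewport is {x, y, width, height}, and the model-view matrix is assumed to already orient the East/North/Up axes into eye space):


public final class SunProjection {

    // Project an azimuth/elevation direction to window coordinates,
    // or return null if the direction points behind the camera.
    public static double[] azElToWindow(double azimuthRad, double elevationRad,
                                        double[] modelview, double[] projection,
                                        int[] viewport) {
        // Direction vector with W = 0 so camera translation has no effect.
        double[] dir = {
            Math.cos(elevationRad) * Math.sin(azimuthRad),  // X (East)
            Math.cos(elevationRad) * Math.cos(azimuthRad),  // Y (North)
            Math.sin(elevationRad),                         // Z (Up)
            0.0                                             // W
        };

        double[] eye  = multiply(modelview, dir);
        double[] clip = multiply(projection, eye);

        double w = clip[3];
        if (w <= 0.0) {
            return null;  // behind (or exactly sideways to) the camera
        }

        // Perspective divide -> normalised device coordinates in [-1, 1].
        double ndcX = clip[0] / w;
        double ndcY = clip[1] / w;

        // Viewport transform -> window coordinates (OpenGL origin is lower-left).
        double winX = viewport[0] + (ndcX + 1.0) * 0.5 * viewport[2];
        double winY = viewport[1] + (ndcY + 1.0) * 0.5 * viewport[3];
        return new double[] { winX, winY };
    }

    // Multiply a column-major 4x4 matrix (OpenGL layout) by a 4-component vector.
    private static double[] multiply(double[] m, double[] v) {
        double[] r = new double[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row]      * v[0]
                   + m[4 + row]  * v[1]
                   + m[8 + row]  * v[2]
                   + m[12 + row] * v[3];
        }
        return r;
    }
}

(GLU’s gluProject does essentially the same modelview/projection/viewport chain for a W=1 point, if you’d rather not hand-roll the matrix math.)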

Sorry for the late response - I was sick and then had Jury Duty yesterday, so I am only now going to try your suggestion. I initially thought I could convert from spherical to Cartesian, but that wasn’t giving me the answers I expected - that could be because I didn’t do the second step you outlined, so I’ll try that today/tonight and see what happens. I appreciate the detailed info; hopefully I can follow it all successfully.

My final alternative solution for this is to simply draw the predicted sun positions in the 3D view …
This is what I would do. With Z-buffer on, this approach would show when the sun goes behind the solar array. If you want to show the sun, even when it is behind the solar arrays, draw the arrays, turn Z-buffer off, and render the sun.
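
A minimal fixed-function sketch of that ordering (gl here is the usual JOGL GL object; drawSolarArrays and drawSunMarkers are hypothetical helpers standing in for your existing drawing code):


gl.glEnable(GL.GL_DEPTH_TEST);
drawSolarArrays(gl);            // scene geometry, depth-tested as usual

gl.glDisable(GL.GL_DEPTH_TEST); // let the sun markers show through the arrays
drawSunMarkers(gl);             // current + predicted sun positions in 3D
gl.glEnable(GL.GL_DEPTH_TEST);  // restore for the rest of the frame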

I’m having some trouble with the calculations - I’m basically not getting anything reasonable so I thought you could possibly see where I’m going wrong.

Here is my test case picture - ignore the yellow dot right now because it is using the old calculation method and that is wrong (though it looks close in this case because the sun is near the center).

[ATTACH=CONFIG]1255[/ATTACH]
The calculated position of the Sun in this view is:
azimuth = 22.3979145 degrees
elevation = 6.18959868 degrees
distance = 1.0 (can be anything, as you said)

Using your equations I get the following results for the X, Y, Z - and we’ll add W = 0:


X = distance * cos(elevation) * sin(azimuth)      = 0.3788209
Y = distance * cos(elevation) * cos(azimuth)      = 0.9191833
Z = distance * sin(elevation)                    = 0.1076881
W                                                  = 0

For the Model-View and Projection matrices (and viewport) I used these calls (Java):


DoubleBuffer modelview_matrix = DoubleBuffer.allocate(16);
gl.glGetDoublev(GL.GL_MODELVIEW_MATRIX, modelview_matrix);

DoubleBuffer projection_matrix = DoubleBuffer.allocate(16);
gl.glGetDoublev(GL.GL_PROJECTION_MATRIX, projection_matrix);

IntBuffer viewport = IntBuffer.allocate(4);
gl.glGetIntegerv(GL.GL_VIEWPORT, viewport);   

The MODELVIEW matrix results in just the identity matrix, while the PROJECTION matrix is the following:


modelview_matrix                                     projection_matrix
=============================           =============================
1.000000, 0.000000, 0.000000, 0.000000,         1.401469, 0.000000, 0.000000, 0.000000, 
0.000000, 1.000000, 0.000000, 0.000000,         0.000000, 1.741162, 0.000000, 0.000000, 
0.000000, 0.000000, 1.000000, 0.000000,         0.000000, 0.000000, -1.000000, -1.000000, 
0.000000, 0.000000, 0.000000, 1.000000,         0.000000, 0.000000, -0.000508, 0.000000, 

If I now transform the X,Y,Z,W by the Model-View-Projection matrix (or simply the ‘projection_matrix’, since the other is identity), I get the following transformed values:


   Xt =  0.530906
   Yt =  1.600447
   Zt = -0.107688
   Wt = -0.000055

So if I were to divide Xt and Yt by Wt I get huge values for them: -9704.79 and -29255.7 respectively. Obviously the PROJECTION matrix must be wrong - the values are causing my ‘Wt’ result to be too small to give proper results. I mean, the next step would be to scale these X, Y values relative to the viewport (which happens to have the dimensions 0,0,766,617), but these numbers are too wacky to give the correct result as of right now.

BTW, I was retrieving the PROJECTION matrix after I’ve made the gluPerspective() call…maybe I’m retrieving that matrix at the wrong time in the flow? It is sad that I’ve done some rudimentary OpenGL development for years but have not fully grasped all of the concepts, so I must be doing this wrong.

Anyway, if there is something obvious that I’m doing wrong then let me know, else I’ll need to go create a simplified case of my camera and system to see if I can figure it out there first.

I might have to do that if I can’t figure out this doable math above…I know I’m missing something easy.

I was messing with my camera inputs to see how they affect the PROJECTION matrix.

So, if I understand this correctly, the Projection matrix components are linked to my Field-of-View values and to the clipping plane distances. The first value (m[0][0] = 1.401469) is linked to FOVx, and the next value (m[1][1] = 1.741162) is linked to FOVy. The (m[3][2] = -0.000508) value seems to be linked to my near clip plane distance.
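
If I back the numbers out (assuming gluPerspective puts cot(fovy/2) at m[1][1] and cot(fovy/2)/aspect at m[0][0] - my assumption), I get roughly:


fovy   ≈ 2 * atan(1 / 1.741162) ≈ 59.7 degrees
fovx   ≈ 2 * atan(1 / 1.401469) ≈ 71.0 degrees
aspect = 1.741162 / 1.401469   ≈ 1.242   (close to my 766/617 ≈ 1.241 viewport ratio)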

I guess what is confusing is that I take the PROJECTION matrix (since the ModelView is identity) and multiply the X,Y,Z,W vector by this matrix to get the new Xt,Yt,Zt,Wt values…and because that near clip plane distance value is small and all other values in that bottom matrix row are zero, this value is the only one used to calculate the new Wt value. Hence it is super small, and I don’t see why the clipping plane distance should have any effect on my output. I can understand the FOVx and FOVy values affecting things, because those effectively resize the viewport, but why the clipping plane distance?

Even if I accidentally had the matrix transposed (i.e. had flipped the rows/columns in my matrix), I would only change W to be exactly equal to Zt…is that what I want? Even if I do transpose the matrix, the values still don’t appear to give me meaningful results: I now calculate X and Y to be -4.913 and -14.577.

I’m obviously grasping at straws here trying different permutations and hoping I accidentally hit upon expected results.

So far I understand the first part where you gave the following equations:


X = distance * cos(elevation) * sin(azimuth)
Y = distance * cos(elevation) * cos(azimuth)
Z = distance * sin(elevation) 

These are the results of multiplying the azimuth/elevation rotation matrices with the <0,1,0> vector. Basically this takes the unit vector pointing North (+Y in your convention) and rotates it by the elevation (pitch) and azimuth (yaw) matrices to get the new pointing direction for that vector. So now the goal would be to apply the PROJECTION matrix (and Model-View, though mine is identity) to this new vector to get coordinates that are then transformed by the viewport.
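
Writing that out with the conventions from earlier (azimuth clockwise from North about the Z axis, hence the minus sign, since a positive rotation about Z is counter-clockwise; elevation about the X axis):


Rz(-azimuth) * Rx(elevation) * (0, 1, 0)
    = Rz(-azimuth) * (0, cos(elevation), sin(elevation))
    = (cos(elevation)*sin(azimuth), cos(elevation)*cos(azimuth), sin(elevation))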

So I guess I need to figure out my PROJECTION matrix - that must be the wrong one or I have my axes possibly transposed in my system.

BTW, this is the perspective projection matrix based on the FOV and near/far planes:

[ATTACH=CONFIG]1258[/ATTACH]

Actually, I think that bottom calculation (-Zf*Zn/(Zf-Zn)) is not what I am getting…I am getting twice that result…so it should be (-2*Zf*Zn/(Zf-Zn)), as also shown here:

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter42.html

[ATTACH=CONFIG]1259[/ATTACH]

The projection matrix you printed is transposed. That may just be an issue with your printing routine (OpenGL uses column-major order by default), but if you’re using your own transformation routines, they need to account for it.

Other than that, it indicates a perspective projection with zNear = 2.54e-4 and zFar much larger (it’s not possible to determine the exact value with the number of digits given). The small value of zNear may be an issue for depth buffer resolution, but it shouldn’t affect the transformations.

The main thing is to ensure that you aren’t treating the matrix as being in row-major order.
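
For example (a sketch against the DoubleBuffer you retrieved earlier): element (row, column) of an OpenGL matrix is stored at index column*4 + row, so printing it row by row means swapping the indices relative to storage order:


// projection_matrix is the column-major DoubleBuffer filled by glGetDoublev
for (int row = 0; row < 4; row++) {
    for (int col = 0; col < 4; col++) {
        System.out.printf("%f, ", projection_matrix.get(col * 4 + row));
    }
    System.out.println();
}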

The matrix generated by gluPerspective is given in its reference page.
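
For reference, with f = cot(fovy/2), that matrix is (written out row by row here, though OpenGL stores it column-major):


f/aspect   0   0                           0
0          f   0                           0
0          0   (zFar+zNear)/(zNear-zFar)   (2*zFar*zNear)/(zNear-zFar)
0          0   -1                          0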

Three differences to note here.

  1. The denominators in the gluPerspective version are negative when zFar>zNear. Conventionally, eye space is a right-handed coordinate system with X to the right, Y up, and Z pointing toward the viewer (out of the screen). Clip space and normalised device space are left-handed coordinate systems with increasing Z corresponding to increasing depth. The visible region has negative eye-space Z coordinates. zNear and zFar are positive (i.e. the near and far planes are at Z=-zNear and Z=-zFar).

  2. Your version is missing a factor of 2 from the Z-W entry.

  3. Your version is transposed. Is it using the DirectX convention of multiplying a row vector on the left, rather than the OpenGL convention of multiplying a column vector on the right?

The other point is that the spherical to Cartesian conversion I gave assumes that +Y is north and +Z is up, while eye space has +Z pointing toward the viewpoint. The model-view matrix should probably include a 90-degree rotation about the X axis to account for the fact that an elevation of zero means facing the horizon rather than downward. Alternatively, you can factor this into the spherical to Cartesian conversion (swap Y and Z, negate Z).
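
In code, the “factor it into the conversion” option might look like this (a sketch; x, y, z are the results of the spherical conversion above):


// Map X=East, Y=North, Z=Up onto eye-space axes (X right, Y up, -Z forward):
// swap Y and Z, then negate the new Z.
double eyeX =  x;     // East stays to the right
double eyeY =  z;     // Up becomes eye-space Y
double eyeZ = -y;     // North points into the screen (the -Z direction)
double eyeW =  0.0;   // still a direction; unaffected by translation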

You are spot on as to the zNear/zFar values, and yes, the matrix output is from my print statements - I knew I was potentially wrong on the order; I couldn’t remember whether it was row- or column-major. Also, thank you for all the info - it confirms some of my conclusions. I’ll try to make the adjustments and see if I can calculate things.

I think I almost have it. So far I did a couple of hand-calculation tests, compared the results against some screenshots I took, and I’m matching. I need to do a couple more to make sure it’s not dumb luck, but this looks promising. I’ve not coded it up completely yet because the viewport transformation is going to take a little extra coding - axes are flipped, I’m using different FOV values, etc., so I have to do some if-else checks in my code - but it’s looking good. I’ll let you know how it comes out after a couple more tests, and then tomorrow or this weekend I expect to be able to finish it - I hope. Crossing fingers.

EDIT: Yup, that’s it. I did 3 other hand calculations at different locations away from center and I can match now! Thank you very much. Once I code this all up and clean things up I’ll show some more pics of the result and go over what I did with your help.