Keeping track of camera position

I’m using a “polarview” function similar to the one used in the Red Book, Chapter 3. I’d like to keep track of the camera’s position (x, y, z) as the user moves around, but I’m having a tough time doing so. The standard spherical coordinate formulas aren’t giving me the right answer, and when I tried calling glGetDoublev with GL_MODELVIEW_MATRIX and then working out the x, y, z coords from the matrix, I got strange results there too.
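For reference, the function I’m using looks roughly like this (paraphrased from memory of the Red Book version, so the details may differ slightly):

void polarView(GLdouble distance, GLdouble twist,
               GLdouble elevation, GLdouble azimuth)
{
    // applied to the modelview matrix before drawing the scene
    glTranslated(0.0, 0.0, -distance);      // back away from the origin
    glRotated(-twist, 0.0, 0.0, 1.0);       // roll about the view axis
    glRotated(-elevation, 1.0, 0.0, 0.0);   // tilt up/down
    glRotated(azimuth, 0.0, 0.0, 1.0);      // spin around the pole
}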

Is there an easy way to keep track of the camera when the user moves it?

I just browsed gametutorials.com and found that the site has just added a tutorial about camera tracking. I haven’t read it myself, so I don’t know whether that tutorial will be any help to you.

I did check out gametutorials.com and the new tutorial they have is on moving the camera with the mouse a la Quake. It is interesting but it is for Direct3D. They have several OpenGL tutorials geared towards camera placement and tracking, but none that quite works with the “polarview” function from the red book.

This is such a tricky problem - and it has taught me that I really know so little about geometry!

Can you post some code? I’m working on a similar problem. Maybe we can help each other. I had trouble getting meaningful values out of the MV matrix as well, but I never figured out what I was doing wrong. I scrapped my code and pretty much started from scratch last night…keeping a matrix with each object, including the camera. Each object keeps track of its own transformations/animation. When I get it working I’ll try and post a summary here.

You guys are gonna kill me! Here is what you do to get the camera’s position (NOTE: POSITION, NOT DIRECTION!!!):

double m[16];

glGetDoublev(GL_MODELVIEW_MATRIX,m);

the camera position is simply:

pos_x = m[12];
pos_y = m[13];
pos_z = m[14];

fellas,

why are you trying to reverse engineer the modelview matrix?

Keep track of the camera location/orientation yourself so you don’t NEED to extract the details from the matrix, THEN build the matrix FROM those parameters; not the other way around.
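A rough sketch of what I mean (all of the names are made up, and you’d obviously adapt it to polarview-style parameters if that’s what you’re using):

// the camera parameters live in YOUR code...
struct Camera {
    float x, y, z;          // position in world space
    float heading, pitch;   // orientation in degrees
};

// ...and the modelview matrix is built FROM them each frame
void applyCamera(const Camera &c)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(-c.pitch,   1.0f, 0.0f, 0.0f);
    glRotatef(-c.heading, 0.0f, 1.0f, 0.0f);
    glTranslatef(-c.x, -c.y, -c.z);
}

When the user moves, you update the Camera fields directly, so the “where is the camera?” question never needs to be asked of OpenGL.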

this has been Another Handy Hint from John The Handy Hint Guy.

I tried exactly what Mihail suggested and got wildly different values from one frame to the next; the values were only consistent every other frame. I never could figure out what was going on, so now I’m completely refactoring my code.

John - BTW, what’s wrong with reverse engineering the MV matrix? Isn’t that supposed to work?


Hello,

well… yes, yes it is. “It can ~also~ work” is, I think, a more elegant way of putting it. But why reverse engineer a matrix when you can work with the parameters directly? Wouldn’t it be better to say what you mean (i.e. in terms of camera parameters) and then find a new way to say it (in terms of matrices)?

A naive implementation of reverse engineering is to simply multiply the default camera parameters by the inverse of the OpenGL matrix stack… but then you’d have to compute the inverse. Why do that when you can just automagically KNOW the parameters?
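Just to spell out what that naive approach amounts to: if the modelview holds only rotations and translations (no scaling), the “reverse engineered” eye position is minus the transpose of the rotation part times the translation column, i.e. something like this (untested sketch):

double m[16];
glGetDoublev(GL_MODELVIEW_MATRIX, m);

// translation column of the column-major matrix
double tx = m[12], ty = m[13], tz = m[14];

// eye position in world space = -(R transposed) * t
double eyeX = -(m[0]*tx + m[1]*ty + m[2]*tz);
double eyeY = -(m[4]*tx + m[5]*ty + m[6]*tz);
double eyeZ = -(m[8]*tx + m[9]*ty + m[10]*tz);

which breaks as soon as there’s scaling in the stack, and which is exactly the kind of bookkeeping you avoid by just knowing the parameters in the first place.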

It’s like… keeping track of a word processor’s document by drawing the text to the screen and then reading it back off the screen when you want to edit it :) or something like that, anyway.

cheers
John

John - Let me tell you my problem and perhaps you can suggest a solution. I am writing a simulation of the Jupiter system: the sun at (0, 0, 0), Jupiter orbiting the sun, and the four major moons orbiting Jupiter. What I would like to do is place the camera at any of the four moons and have it orbit along with them (i.e. the view from Io, the view from Europa, etc.).

My first attempt was to use gluLookAt() using the position of the moon as my “eye” and Jupiter’s position as my “look”. I used the modelview matrix values for these positions, each obtained after transforming the object in question. But the values for the positions seemed to toggle back and forth between two wildly different values. I figured I was doing something wrong somewhere else in the code and decided to refactor.

How would you solve the original problem? Maybe there’s a much easier approach that I’m not considering. Thanks very much for your help!

Regarding your Jupiter problem, you will be using something like

glPushMatrix();
glTranslatef(moonX, moonY, moonZ);   // coords of the moon
// draw the moon
glPopMatrix();

This means that you probably already have the coords of the moon and Jupiter in some variables. Use those variables to position and orient the camera.

I’ll try to code this myself and let you know if I meet with any success.

CZEP, since I haven’t yet worked with the polarview system, I can’t offer any help there. My apologies for deviating from your question.
Regards.

Abhijeet.


Abhijeet - Thanks, but I think I’ve got it. I compute the transformation matrix of each object in the system (Jupiter and the moons), and before drawing I use Jupiter’s position as my “look” and the (Jupiter + moon) position as my “eye” and call gluLookAt(). I get the relative positions of the objects from elements 12, 13, and 14 of their respective transformation matrices.
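In case it helps anyone else, the relevant bit of my code now looks roughly like this (variable names simplified):

// jupiterMatrix and moonMatrix are column-major 4x4s built by my objects
// (same layout OpenGL uses), so the position sits in elements 12, 13, 14
double lookX = jupiterMatrix[12];
double lookY = jupiterMatrix[13];
double lookZ = jupiterMatrix[14];

double eyeX = lookX + moonMatrix[12];   // moon matrix is relative to Jupiter
double eyeY = lookY + moonMatrix[13];
double eyeZ = lookZ + moonMatrix[14];

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ,      // camera sits at the moon
          lookX, lookY, lookZ,   // looking back at Jupiter
          0.0, 1.0, 0.0);        // up vector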

Thanks to all who have helped on this thread and in others. I’ve been fiddling with this bug for over a week, but the good thing that came out of it is that I really understand transformations a lot better now.

Starman – it sounds like you’ve already sorted out your camera function, but I thought I’d tell you about this method so you can keep it in mind for future projects.

I can see what effect you’re after and why you’re analysing the modelview matrix to feed into gluLookAt. I agree that in this context it IS a good idea. (My discussion earlier was based on the assumption that you were moving the camera with gluLookAt + translation + rotation and then trying to reverse engineer the movements, rather than precomputing them and passing them through to gluLookAt.)

Anyway, the idea I wanted to tell you about was… arguably a “more elegant” way of getting back appropriate values. This is the way I’d do it if I was setting up the system, anyway.

Suppose you set up your geometry like this:

glLoadIdentity();
renderSun();
glRotatef(earthorbitrotation, 0.0, 1.0, 0.0);
glTranslatef(earthorbitdistance, 0.0, 0.0);
renderEarth();
glRotatef(moonorbitrotation, 0.0, 1.0, 0.0);
glTranslatef(moonorbitdistance, 0.0, 0.0);
renderMoon();

where renderSun() et al. each draw an insanely big sphere around the origin of the local coordinate system. You can see how the transformations stack up so that the moon orbits the earth and the earth orbits the sun.

Now, if I wanted to set up the camera system to be at the moon looking at the sun, then one way (which you’ve sorted out) is to grab the OpenGL transformation matrix and extract the world… uh, universe position of the moon from the fourth column. But think about what you’re REALLY doing: you’re working out where the matrix stack maps the origin of the moon’s local coordinate system, right? So, what you’re effectively finding is

[ m11 m12 m13 m14 ]   [ 0 ]
[ m21 m22 m23 m24 ] * [ 0 ]
[ m31 m32 m33 m34 ]   [ 0 ]
[ m41 m42 m43 m44 ]   [ 1 ]

where mij is the modelview matrix when we’re rendering the moon. The location turns out to be the fourth column, since the first three columns are obliterated by the multiplication by 0. The reason I’m pointing THIS out is that you can then think about DIFFERENT transformations. Suppose you want the view of a person standing at so many degrees latitude and so many degrees longitude on the moon’s surface, looking at the sun… what do you do then? You can’t simply rip the fourth column out of the matrix, because we’re no longer talking about the origin of the moon.

The answer is that we can compute a new point in the moon’s coordinate system, multiply it by the matrix, and plug the answer into gluLookAt. I can’t remember which way longitude/latitude work out, but suppose the first is a rotation about y and the second a rotation about x (for argument’s sake); then you could do this:

pos = M * Ry * Rx * T * [0 0 0 1]'

where M is the moon’s modelview matrix from above, Ry is the rotate-about-y-by-longitude matrix, Rx is the rotate-about-x-by-latitude matrix, T is the translation matrix with the vector <0, 0, sealevel> (to shift the observer to some point at the moon’s sea level, if it had a sea), and [0 0 0 1]' is the origin. Actually, we could just axe the T and replace the 0, 0, 0, 1 with 0, 0, sealevel, 1… but I’m confident you can see that.

Oh, and something else I didn’t mention… all this matrix multiplication is why people like keeping track of their own modelview matrices in software: you can precompute this kind of stuff without using OpenGL and reading it back from the pipe (which can be slow). So… that multiplication above would be written in my program as just

Point3D pos=M*Geometry::RotateY(y)*Geometry::RotateX(x)*Vector3D(0.0, 0.0, z);
where M is built in a similar way but for the moon transform.
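And if you don’t have a matrix class lying around, the same computation with bare column-major arrays (to match OpenGL’s layout) is only a handful of lines. Something like this, anyway (just a sketch, names invented):

#include <cmath>

// out = m * in, where m is a column-major 4x4 (OpenGL layout)
void transform(const double m[16], const double in[4], double out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[row]     * in[0] + m[row + 4]  * in[1] +
                   m[row + 8] * in[2] + m[row + 12] * in[3];
}

// pos = M * Ry(longitude) * Rx(latitude) * [0 0 sealevel 1]'
void observerPosition(const double M[16], double lonDeg, double latDeg,
                      double sealevel, double pos[4])
{
    const double d2r = 3.14159265358979 / 180.0;
    double cy = cos(lonDeg * d2r), sy = sin(lonDeg * d2r);
    double cx = cos(latDeg * d2r), sx = sin(latDeg * d2r);

    // column-major rotations about y and x
    double Ry[16] = { cy, 0, -sy, 0,   0, 1, 0, 0,   sy, 0, cy, 0,   0, 0, 0, 1 };
    double Rx[16] = { 1, 0, 0, 0,   0, cx, sx, 0,   0, -sx, cx, 0,   0, 0, 0, 1 };

    double p0[4] = { 0.0, 0.0, sealevel, 1.0 };   // T * origin
    double p1[4], p2[4];
    transform(Rx, p0, p1);   // Rx * T * origin
    transform(Ry, p1, p2);   // Ry * Rx * T * origin
    transform(M,  p2, pos);  // M * Ry * Rx * T * origin
}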

Does that make sense?

cheers
John

John - I think we’re on the same page. I’m not calling glGetFloatv() anymore to get the current GL modelview matrix - I’m getting the position vector out of Jupiter’s and the moon’s local object matrix, which I build manually inside the objects.

look = Jupiter’s position
eye = Jupiter’s position + moon position

As far as viewing from a point other than the origin of the moon, I plan to implement something similar next. I want the camera to orbit the moon itself. For this, I was going to implement a point (or object) that orbits the moon in the same way the moon orbits Jupiter, and use its position as my camera position.

Isn’t that kind of what you’re proposing? I do get what you’re saying about transforming a point rather than just plucking the values out of the matrix. Yours is definitely the more elegant and intuitive approach.


I’m getting the position vector out of Jupiter’s and the moon’s local object matrix, which I build manually inside the objects.

excellent, excellent. soon world domination will be ours!
:)

<stuff from above> … Isn’t that kind of what you’re proposing?

why yes, yes it is. In that case you’d shift the point higher than sealevel in the same process as above.

I do get what you’re saying about transforming a point rather than just plucking the values out of the matrix. Yours is definitely the more elegant and intuitive approach.

cheers! :)

good luck. may the force be with you!

John

Side question, John: Is there any merit in taking this one step further? i.e., instead of building my own matrices (which entails a few manual matrix multiplications if the object has multiple rotations/translations) and then calling glMultMatrix to put each object in the MV matrix, why not just manually multiply the whole scene and then load the modelview matrix once?

Hello,

No, that would really soak your performance for zero gain. I take it you’re suggesting computing the modelview matrix for the moon, then iterating over all the vertices, multiplying them by the MV matrix and transferring the results to OpenGL? You wouldn’t get any advantage (what would you do with the data you’ve stored?), and you’d lose any benefit from building display lists. You’d also trade efficient matrix multiplication in your graphics hardware for CPU matrix multiplication.

cheers,
John

Oh, and another thing: just because you’re computing your OWN modelview matrix to work out the observer position doesn’t necessarily mean you should throw away OpenGL’s matrix-building operations in favour of glLoadMatrix. You’d really only see performance gains with a very complicated matrix stack.

Furthermore, you probably don’t want to compute the modelview matrix for all your geometry yourself, since you probably won’t use the results; you’re still better off building those matrices with OpenGL rather than computing them yourself and uploading them.
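In other words, you might end up with something along these lines, where the software matrix exists only to answer the camera question (buildMoonMatrix() here stands in for whatever your own matrix code is):

// software copy of the moon's transform, kept ONLY so we can ask
// "where is the observer?" -- rendering still uses OpenGL's stack
double moonMatrix[16];
buildMoonMatrix(moonMatrix);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(moonMatrix[12], moonMatrix[13], moonMatrix[14],   // eye: the moon
          0.0, 0.0, 0.0,                                    // looking at the sun
          0.0, 1.0, 0.0);

// the scene itself is still built with OpenGL's matrix ops, exactly as before
renderSun();
glRotatef(earthorbitrotation, 0.0, 1.0, 0.0);
glTranslatef(earthorbitdistance, 0.0, 0.0);
renderEarth();
glRotatef(moonorbitrotation, 0.0, 1.0, 0.0);
glTranslatef(moonorbitdistance, 0.0, 0.0);
renderMoon();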

cheers
John

Thanks for all the help, John. Your contributions to this forum are valued. Take care.