Get Location of Vector After Rotation?

So, I’ve been working with OpenGL for almost a year now and have managed to avoid one stupid thing this entire time, admittedly on purpose.
I have a vertex with X, Y, and Z values, before any rotation or translation is applied.
I rotate and translate it with calls to glRotate and glTranslate (the translation is not the issue).
Now I need to find out where that vector ends up, for the purpose of collision detection.
I have looked around for literally dozens of hours for the solution to this issue, and only ever found pieces of what I need.

I would be more than happy to provide more information on the specifics of what I am trying to figure out, since I am pretty sure this original message will not make much sense. (it was written at 1:30AM, after a 4 hour googling binge)

I am not really sure if this is the correct way to go about it, but since all these rotations and translations ultimately update the modelview matrix, what you need to do first is obtain the modelview matrix with something like:


float mvMat[16];
glGetFloatv(GL_MODELVIEW_MATRIX, mvMat);

and then just multiply your vector by the modelview matrix to get its current value.
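
For what it's worth, here is a minimal sketch of that suggestion (the helper name transformPoint is mine, not part of any API). Keep in mind that glGetFloatv returns the matrix in column-major order, so elements 0-3 form the first column and elements 12-14 hold the translation, and that the result is the vertex in eye (camera) space.


// Minimal sketch: fetch the modelview matrix and transform a point (x, y, z, 1) by it.
// transformPoint is an illustrative helper, not an OpenGL call.
void transformPoint(const float m[16], const float in[3], float out[3])
{
    // Column-major layout: m[0..3] is column 0, m[12..14] is the translation.
    out[0] = m[0] * in[0] + m[4] * in[1] + m[8]  * in[2] + m[12];
    out[1] = m[1] * in[0] + m[5] * in[1] + m[9]  * in[2] + m[13];
    out[2] = m[2] * in[0] + m[6] * in[1] + m[10] * in[2] + m[14];
}

float mvMat[16], eyePos[3];
glGetFloatv(GL_MODELVIEW_MATRIX, mvMat);
transformPoint(mvMat, vertex, eyePos);   // vertex is your untransformed x, y, z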

Thanks! I knew I had to do something like that, but I thought I was only supposed to use part of the matrix…
I’ll give it a shot in a bit and return with results.

So, I did some more research and came up with this:


float pointX=0, pointY=0, pointZ=0, pointW=0;

pointX = (mesh->mVertices[f].x * modelView[0]) + (mesh->mVertices[f].y * modelView[4]) + (mesh->mVertices[f].z * modelView[8]) + (modelView[12]);
pointY = (mesh->mVertices[f].x * modelView[1]) + (mesh->mVertices[f].y * modelView[5]) + (mesh->mVertices[f].z * modelView[9]) + (modelView[13]);
pointZ = (mesh->mVertices[f].x * modelView[2]) + (mesh->mVertices[f].y * modelView[6]) + (mesh->mVertices[f].z * modelView[10]) + (modelView[14]);
pointW = (mesh->mVertices[f].x * modelView[3]) + (mesh->mVertices[f].y * modelView[7]) + (mesh->mVertices[f].z * modelView[11]) + (modelView[15]);

pointX/=pointW;
pointY/=pointW;
pointZ/=pointW;

This provides strange results…I feel like it’s very close to what I should be doing, but not quite.

You only include pointW in your calculations if you are dealing with 4-component (homogeneous) vectors.
For direction vectors, the w component is 0, which eliminates the last term of your calculation. For position vectors, the w component is 1, so it is used in the calculation (as you have done).
Now, if we are talking about a GL rotation vector X,Y,Z on the CPU, then there is no W component (or assume W=0). The effect is that when it is multiplied by a matrix, the last column of the matrix (the translation part) contributes nothing. If the vector instead has w=1, the last column is used in the multiplication and the translation is applied.

This is why fixed-function lighting uses 0 or 1 to control how a light behaves: the light's position vector (x, y, z, 0|1) is multiplied by the ModelView matrix, so with w=1 the translation is applied and the light is positional, while with w=0 it is treated as a pure direction.
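
As a concrete illustration of the w=0 case (again just a sketch, with a made-up helper name): dropping the translation column is the only change compared with the point transform above.


// Transform a direction (w = 0) by a column-major matrix:
// the translation column m[12..14] drops out entirely.
void transformDirection(const float m[16], const float in[3], float out[3])
{
    out[0] = m[0] * in[0] + m[4] * in[1] + m[8]  * in[2];
    out[1] = m[1] * in[0] + m[5] * in[1] + m[9]  * in[2];
    out[2] = m[2] * in[0] + m[6] * in[1] + m[10] * in[2];
}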

Removing the W component does not help…

There has to be SOME way to get the transformed, scaled and rotated coordinates. I’ve been coding in circles for months!

There has to be SOME way to get the transformed, scaled and rotated coordinates.

In general, the way this normally works is that the physics system decides where things are (collision detection being part of physics) and how they’re oriented. You pass that information along to OpenGL when you render that object. So most people simply have no need to do what you’re talking about.

In any case, you don’t say what space you want these “transformed, scaled and rotated coordinates” in. I’m guessing world-space, which is why simply using GL_MODELVIEW isn’t helping. That matrix transforms to camera space, not world-space.

In that case, what you need to do is stop relying on OpenGL’s matrix functions and do it yourself. You need to build a model-to-world matrix separately from your world-to-camera (which I imagine you build with gluLookAt). Then, when you want to render an object, you push your model-to-world matrix onto the OpenGL stack with glMultMatrix.
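
To show roughly what that looks like, here is a sketch under my own assumptions (a model transform made of a rotation about Y followed by a translation; buildModelMatrix, toWorld and drawObject are made-up names, not library calls):


#include <cmath>
#include <GL/gl.h>

// Column-major model-to-world matrix: translate(tx, ty, tz) * rotateY(angle).
void buildModelMatrix(float m[16], float angleRadians, float tx, float ty, float tz)
{
    const float c = std::cos(angleRadians);
    const float s = std::sin(angleRadians);
    m[0]  = c;    m[1]  = 0.0f; m[2]  = -s;   m[3]  = 0.0f; // column 0
    m[4]  = 0.0f; m[5]  = 1.0f; m[6]  = 0.0f; m[7]  = 0.0f; // column 1
    m[8]  = s;    m[9]  = 0.0f; m[10] = c;    m[11] = 0.0f; // column 2
    m[12] = tx;   m[13] = ty;   m[14] = tz;   m[15] = 1.0f; // column 3 (translation)
}

// Model-space vertex -> world-space position, for the collision tests.
void toWorld(const float m[16], const float v[3], float out[3])
{
    out[0] = m[0] * v[0] + m[4] * v[1] + m[8]  * v[2] + m[12];
    out[1] = m[1] * v[0] + m[5] * v[1] + m[9]  * v[2] + m[13];
    out[2] = m[2] * v[0] + m[6] * v[1] + m[10] * v[2] + m[14];
}

// At render time, the world-to-camera matrix is already on the modelview
// stack (e.g. via gluLookAt); push the model-to-world matrix on top of it.
void drawObject(const float modelToWorld[16])
{
    glPushMatrix();
    glMultMatrixf(modelToWorld);
    // ... issue this object's draw calls here ...
    glPopMatrix();
}

The point of this arrangement is that the same model-to-world matrix feeds both the collision code (via toWorld) and the renderer, so the two can never drift apart.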

I had a feeling I would have to resort to that…thanks for the info.

Got it working! Thanks for the help guys, much appreciated!

That was an impressive piece of mind reading, Lefteris!