dabeav

02-24-2011, 03:16 PM

I have created a "Node"-based system for rendering my objects. Each parent node pushes its translation/rotation matrix, renders itself, and then calls each child node to do the same; after a node and all its children have been rendered, it pops its matrix off the stack and we continue on. No problems here, and everything renders as expected. I am also not getting a stack overflow or anything like that, as I am only about 4 pushes/pops deep so far.

Since all my scene objects start from the origin, I have created a collision detection routine that simply grabs the modelview matrix at each level of the node tree using glGetFloatv(GL_MODELVIEW_MATRIX, modelView), then transforms the point (0,0,0) by that matrix to determine where the object's center is. This is where I seem to be having a problem.

If I have NO object rotations I get the proper world-space coordinates, and everything works fine. However, as soon as I add rotations to an object (simply using glRotatef()), I start getting odd results when I transform my origin point. Most of the time it returns the exact position I would get if there were no rotations involved, but if I add more rotations further up the tree I start getting a point that falls on a line at roughly 45 degrees through the origin.

Yet, like I said, if I render an object with the same matrix it gets placed correctly. Now, I am not a matrix master, but as far as I understand it, any given object has only one modelview matrix. There may be many matrices on the stack, but they are all multiplied together to form the "current" matrix, so transforming my origin by that current matrix should position my point exactly where the object is rendered. Is that correct? Any ideas on what I am doing wrong?
