Ok, one more time (Space Transforms)

Say I have a hierarchy in my engine where nodes (TransformGroups) can have multiple children (TransformGroups and Shapes) and a single parent (again, a TransformGroup).
Every node has a private Transform3D member (a transformation matrix encapsulating rotations, translations, and scalings).
Now the question is: how do I transform a 3D point from world space (the top node) down to a node's object space using the relationship I described?
What I used to do was retrieve the inverse transform of a node, multiply my world-space position by it, then seek out the transform of the parent node (if it exists) and do the same thing until I reached the top node.
This worked great when I moved my objects around or even scaled them; however, the implementation turned out to be flawed as soon as I introduced rotations into the scene.

// Walks from the root down to this node, applying each node's inverse
// transform -- note the recursion happens before the multiply, so the
// root's inverse is applied first.
void TransformGroup::transformToObjectSpace(Tuple3f *result)
{
  if(parent)
    parent->transformToObjectSpace(result);

  if(!transform.isInverseIdentity())
    *result *= transform.getInverseMatrix4f();
}

// Walks from the root down to this node, applying each node's forward
// transform; this is the traversal I use when drawing, and it works.
void TransformGroup::transformToWorldSpace(Tuple3f *vPTR)
{
  if(parent)
    parent->transformToWorldSpace(vPTR);

  *vPTR *= transform.getMatrix4f();
}

Why are you doing things this way?

Just multiply the matrices on the modelview as you descend from parent to child and draw the child.

The multiplication produces the concatenated matrix that will place the vertices correctly in eye space.

Note that, intuitively, the viewing transform is the inverse of a model transform: to place the eye where a model is, the eye matrix for that orientation and location has to cancel the model matrix, producing the identity.
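To make that concrete, here is a tiny numeric sketch (hypothetical `model`/`view` helpers, not calls from any actual API): the model transform is "rotate about Z by a, then translate by t", and the view transform is its inverse, so applying view after model returns every point unchanged — the two concatenate to the identity.

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

// Model transform: p' = R(a)*p + t  (rotate about Z, then translate).
V3 model(V3 p, float a, V3 t) {
    float c = std::cos(a), s = std::sin(a);
    return { c*p.x - s*p.y + t.x, s*p.x + c*p.y + t.y, p.z + t.z };
}

// Viewing transform for an eye with the same placement: the inverse,
// p = R(-a) * (p' - t). Note the ops undo in reverse order.
V3 view(V3 p, float a, V3 t) {
    V3 q { p.x - t.x, p.y - t.y, p.z - t.z };
    float c = std::cos(a), s = std::sin(a);
    return { c*q.x + s*q.y, -s*q.x + c*q.y, q.z };
}
```

In particular, the eye's own world position `t` maps to the origin under `view`, which is exactly the "cancel to identity" intuition.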

[b]Why are you doing things this way?

Just multiply the matrices on the modelview as you descend from parent to child and draw the child.[/b]

I already do that to draw my nodes, but my point is: instead of transforming, say, 10,000 vertices to world space, it's a lot quicker to transform a single point (say, the light position) to object space and go from there.

[b]The multiplication produces the concatenated matrix that will place the vertices correctly in eye space.[/b]

Already knew that.

[b]Note that, intuitively, the viewing transform is the inverse of a model transform: to place the eye where a model is, the eye matrix for that orientation and location has to cancel the model matrix, producing the identity.[/b]

Could you elaborate a little on this one?

It might help to look at the order of your matrices carefully. Recall that the inverse of a product is the reverse product of the inverses: (A·B)^-1 = B^-1·A^-1 (for non-singular matrices). Your world-space traversal applies the root's matrix first and the node's own matrix last, so the object-space traversal has to apply the node's own inverse first and the root's inverse last — your code applies them in the same order as the forward pass. Pure translations happen to commute with each other, which is why the bug stayed hidden until rotations entered the scene.
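To illustrate, here is a self-contained sketch of the reversed traversal (hypothetical `Vec3`/`Node` types standing in for the poster's engine classes, with each node's transform taken as "rotate about Z, then translate"). `toWorld` mirrors the original `transformToWorldSpace`; `toObject` applies the node's own inverse *before* recursing to the parent, which makes the round trip exact even with rotations:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

struct Node {
    Node* parent = nullptr;
    float angle  = 0.0f;        // rotation about Z, in radians
    Vec3  t      {0, 0, 0};     // translation

    // Forward transform of one node: p' = R(angle)*p + t.
    Vec3 applyForward(Vec3 p) const {
        float c = std::cos(angle), s = std::sin(angle);
        return { c*p.x - s*p.y + t.x, s*p.x + c*p.y + t.y, p.z + t.z };
    }

    // Exact inverse: p = R(-angle) * (p' - t).
    Vec3 applyInverse(Vec3 p) const {
        Vec3 q { p.x - t.x, p.y - t.y, p.z - t.z };
        float c = std::cos(angle), s = std::sin(angle);
        return { c*q.x + s*q.y, -s*q.x + c*q.y, q.z };
    }

    // Same order as the original transformToWorldSpace: the root's
    // transform is applied first, this node's transform last.
    void toWorld(Vec3* p) const {
        if (parent) parent->toWorld(p);
        *p = applyForward(*p);
    }

    // Fixed traversal: undo THIS node first, then recurse to the parent,
    // i.e. the inverse of a product is the reverse product of inverses.
    void toObject(Vec3* p) const {
        *p = applyInverse(*p);
        if (parent) parent->toObject(p);
    }
};
```

With a rotated root and a rotated child, `toObject(toWorld(p))` recovers `p`; swapping the two statements in `toObject` back to the original order breaks the round trip exactly as described in the thread.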