Camera in Object Space / Octree Traversal

I’m breaking apart a volume using an octree and want to traverse it in far-to-near order for transparency reasons. The problem I discovered is that, because of perspective, knowing the view direction isn’t enough. (For example, consider looking down on three bricks in a row; you have to render the side ones first since the middle one occludes both.)

It would seem that I need to know the camera position in object space (or at least the ray it looks down). Am I right?

Short of inverting the model-view matrix, is there a way to figure out how I should traverse the tree?

Hi,

Well, I guess there isn't really any way around sorting your polygons by distance from the camera.
Computing that in real time might be hard on your CPU (depends on the size of your level).
One solution I can think of is what I want to try (for collision detection, but it's the same idea): store your polygons (or rather polygon indices) in order of appearance when moving along each of the three axes.
Then, at each frame, cast a ray from the camera to the center of your octree; it shouldn't intersect more than one side of the octree. Knowing which side it hits, you can decide which list to use and which direction to walk through it.
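
Something like this, roughly, for picking the list (the names are just placeholders, and I haven't tested it -- it only chooses which precomputed list to walk, based on the dominant axis of the camera-to-octree ray):

#include <math.h>

/* Returns 0..5 for +X, -X, +Y, -Y, +Z, -Z: index of the precomputed
   polygon list to walk for the current camera position. */
int pickList(const float camera[3], const float octreeCenter[3])
{
    float dir[3];
    int axis = 0, i;

    for (i = 0; i < 3; ++i)
        dir[i] = octreeCenter[i] - camera[i];

    /* dominant axis = component with the largest magnitude */
    for (i = 1; i < 3; ++i)
        if (fabsf(dir[i]) > fabsf(dir[axis]))
            axis = i;

    return 2 * axis + (dir[axis] < 0.0f ? 1 : 0);
}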

I haven't done it myself yet, but I think it should work.

Good luck.

wizzo

The view POSITION is what you need to know. Sort the octree nodes by each node's closest distance to the viewer position, and render back to front. Nodes to the sides will be farther away than nodes at the same depth in the center.
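
For example, at each node you can visit the children farthest-first; something along these lines (Node and drawNodeContents are just placeholders). Ordering the eight children by distance from the viewer to each child's center gives a valid back-to-front order for sibling octants, since they are equal-size boxes around the parent's center:

#include <stddef.h>

typedef struct Node {
    float center[3];          /* center of this node's cube       */
    struct Node *child[8];    /* NULL where there is no child     */
    /* ... renderable contents ... */
} Node;

static float distSq(const float a[3], const float b[3])
{
    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return dx * dx + dy * dy + dz * dz;
}

void drawBackToFront(const Node *node, const float eye[3])
{
    int order[8], i, j, isLeaf = 1;
    float d[8];

    if (node == NULL)
        return;

    for (i = 0; i < 8; ++i)
        if (node->child[i]) isLeaf = 0;

    if (isLeaf) {
        /* drawNodeContents(node);   hypothetical leaf draw */
        return;
    }

    /* squared distance from the viewer to each child's center */
    for (i = 0; i < 8; ++i) {
        order[i] = i;
        d[i] = node->child[i] ? distSq(eye, node->child[i]->center) : -1.0f;
    }

    /* tiny insertion sort: farthest child first */
    for (i = 1; i < 8; ++i)
        for (j = i; j > 0 && d[order[j]] > d[order[j - 1]]; --j) {
            int t = order[j]; order[j] = order[j - 1]; order[j - 1] = t;
        }

    for (i = 0; i < 8; ++i)
        drawBackToFront(node->child[order[i]], eye);
}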

Right. The crux of my question is how to find where the viewer is given that the MVM has probably been transformed since the camera was created. I wound up just inverting the model-view matrix and using that to transform the camera position into local coordinates.
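
For what it's worth, if the model-view matrix only ever contains rotations and translations (no scale or shear), you don't even need a general 4x4 inverse: the object-space eye position is just -R^T * t. A rough, untested sketch:

#include <GL/gl.h>

/* Recover the camera position in object space from the current
   model-view matrix. Assumes the matrix is rotation + translation
   only, so inverse(M) * (0,0,0,1) reduces to -R^T * t. */
void getCameraObjectSpace(float eye[3])
{
    float m[16];                          /* column-major, OpenGL style */
    glGetFloatv(GL_MODELVIEW_MATRIX, m);

    /* t = (m[12], m[13], m[14]); R is the upper-left 3x3 */
    eye[0] = -(m[0] * m[12] + m[1] * m[13] + m[2]  * m[14]);
    eye[1] = -(m[4] * m[12] + m[5] * m[13] + m[6]  * m[14]);
    eye[2] = -(m[8] * m[12] + m[9] * m[13] + m[10] * m[14]);
}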