Visibility testing is a common task that most 3D applications need to perform. For that purpose, people mainly use trees, such as BSP trees and octrees. My idea is that a program somehow uploads a parametric representation of tree nodes (without their contents - polygons), and OpenGL will somehow tell the application which tree nodes are visible. For example, each tree node could be a cube represented by its center, the distance from the center to the sides of the cube, and a unique ID number assigned by the application so the node can be identified. Later, when needed, the application could ask OpenGL whether a certain tree node is visible, or somehow get a list of visible tree nodes.
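A minimal sketch of the record I have in mind, with a CPU-side stand-in for the imagined visibility query (everything here is hypothetical - `TreeNode` and `treenode_visible` are names I made up, and the test is simplified to an axis-aligned [-1,1] view volume, i.e. what clip space looks like after the perspective divide):

```c
/* Hypothetical tree node as described: a cube given by its center,
 * the distance from the center to its sides, and an
 * application-assigned ID.  None of this is real OpenGL API. */
typedef struct {
    float center[3];
    float half_size;    /* distance from center to each face */
    unsigned int id;    /* assigned by the application */
} TreeNode;

/* CPU-side stand-in for the imagined "is this tree node visible?"
 * query: reject the cube if it lies entirely outside the
 * axis-aligned volume [-1,1] on any axis. */
int treenode_visible(const TreeNode *n)
{
    for (int i = 0; i < 3; ++i) {
        if (n->center[i] - n->half_size >  1.0f) return 0;
        if (n->center[i] + n->half_size < -1.0f) return 0;
    }
    return 1;
}
```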
Tell me what you think about it - whether it's cool, lame, or not practical.
But OpenGL does know the projection and the modelview matrix, so it can check whether a certain point is in the field of view. Then why can’t it check whether a “center” of a cube is in the field of view?
The existence of glGet(GL_PROJECTION_MATRIX) and glGet(GL_MODELVIEW_MATRIX) leads me to the conclusion that OpenGL does have those matrices stored somewhere. How could it give them to a program if it didn't have them stored itself?
Another point: OpenGL multiplies each vertex by the modelview and projection matrices. How could it multiply the vertices by those matrices if it didn't have them stored somewhere?
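That per-vertex multiply, and the "is this point in view?" check built on top of it, can be sketched on the CPU without any GL calls. The matrix is assumed to be the combined modelview-projection matrix in column-major order (the layout glGetFloatv returns); the point is visible when its clip coordinates satisfy -w <= x,y,z <= w:

```c
/* Multiply a homogeneous point by a 4x4 column-major matrix,
 * the same layout glGetFloatv(GL_MODELVIEW_MATRIX, ...) uses. */
void transform_point(const float m[16], const float p[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r]*p[0] + m[4+r]*p[1] + m[8+r]*p[2] + m[12+r]*p[3];
}

/* A point survives clipping when its clip-space coordinates lie
 * inside the volume -w <= x,y,z <= w. */
int point_in_view(const float mvp[16], const float p[4])
{
    float c[4];
    transform_point(mvp, p, c);
    return -c[3] <= c[0] && c[0] <= c[3]
        && -c[3] <= c[1] && c[1] <= c[3]
        && -c[3] <= c[2] && c[2] <= c[3];
}
```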
[This message has been edited by Toxigun (edited 03-16-2002).]
Sure, OpenGL knows about the projection and modelview matrices, for the reasons you gave.
The problem with your idea is that what you described belongs in a scene graph API, not an immediate-mode API like OpenGL. OpenGL is meant to be as generic as possible, and adding specialized scene graph functions is far from generic.
But really, this has been discussed before, and basically everyone thinks GL is in an excellent low-level state. If you want to cull using fancy algorithms, you can have your pick!
Ditto for bitmaps, 3d file formats, OOP (C++), COM, shadows, collision detect, …
>>But OpenGL does know the projection and the modelview matrix, so it can check whether a certain point is in the field of view. Then why can’t it check whether a “center” of a cube is in the field of view?<<
It does. Anything that's outside the frustum planes will not get drawn, though you will still send the data to the driver to get checked (which is a bad thing).
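To avoid sending that data at all, the same check can be done app-side. One common sketch (the plane-extraction trick often attributed to Gribb and Hartmann, here against a column-major combined modelview-projection matrix as glGetFloatv returns it): pull the six frustum planes out of the matrix and reject anything behind any plane.

```c
/* Extract the six view-frustum planes (left, right, bottom, top,
 * near, far) from a column-major combined modelview-projection
 * matrix.  Each plane is (a, b, c, d) with ax + by + cz + d >= 0
 * meaning "inside". */
void extract_planes(const float m[16], float planes[6][4])
{
    /* row r of a column-major matrix is m[r], m[4+r], m[8+r], m[12+r] */
    float row[4][4];
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            row[r][c] = m[4*c + r];
    for (int c = 0; c < 4; ++c) {
        planes[0][c] = row[3][c] + row[0][c];  /* left   */
        planes[1][c] = row[3][c] - row[0][c];  /* right  */
        planes[2][c] = row[3][c] + row[1][c];  /* bottom */
        planes[3][c] = row[3][c] - row[1][c];  /* top    */
        planes[4][c] = row[3][c] + row[2][c];  /* near   */
        planes[5][c] = row[3][c] - row[2][c];  /* far    */
    }
}

/* A point is inside the frustum when it is on the positive side
 * of all six planes. */
int point_in_frustum(const float planes[6][4], const float p[3])
{
    for (int i = 0; i < 6; ++i)
        if (planes[i][0]*p[0] + planes[i][1]*p[1]
          + planes[i][2]*p[2] + planes[i][3] < 0.0f)
            return 0;
    return 1;
}
```

The same plane set extends naturally to sphere or box tests for culling whole tree nodes before their contents ever reach the driver.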
For further info, check the FAQ (includes source, which we all love).
I don’t really see why this is being treated as a joke. Don’t get me wrong, I don’t really see the need to have this accelerated by OpenGL either. But with extensions like HP_occlusion_test and NV_occlusion_query, those could also be considered something better left to a scene graph. Instead of asking “Are the primitives I’ve just submitted visible given what is already in the depth buffer?” it would be “Are the primitives I just submitted visible after transformation and clipping?” But again, I don’t see the need for it. You could just as easily have your question answered using data you’ve already given to OpenGL (modelview and projection matrices, view frustum). I think his real point was to have the GPU or whatever do the culling for him, to offload it from the CPU. As far as the tree idea goes, well, that’s just plain nuts. But again, for the third time, I really see no use for it.
NV_occlusion_query does exactly that. You render bounding volumes of your objects, and can ask OpenGL later whether any pixels of the bounding volumes would have been visible. The problem is that you need to have some occluders drawn before the test makes much sense.
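A rough sketch of that flow, assuming a current GL context with the NV_occlusion_query entry points already loaded (`drawBoundingBox` is a placeholder for your own code, and error handling is omitted):

```c
GLuint query;
glGenOcclusionQueriesNV(1, &query);

/* ... render the occluders normally first ... */

/* Draw the bounding volume inside the query, with color and
 * depth writes disabled so it leaves no trace in the framebuffer. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glBeginOcclusionQueryNV(query);
drawBoundingBox();                 /* placeholder for your own code */
glEndOcclusionQueryNV();
glDepthMask(GL_TRUE);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* ... do other work, then read back the result ... */
GLuint pixels;
glGetOcclusionQueryuivNV(query, GL_PIXEL_COUNT_NV, &pixels);
if (pixels > 0) {
    /* bounding volume was at least partly visible: draw the object */
}
```

Querying the result too soon stalls the pipeline, which is why it pays to batch several queries and do other work before reading any of them back.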
If you are interested in this topic, I suggest you find some papers on occlusion culling (Ned Greene’s hierarchical z-buffer, or Zhang’s hierarchical occlusion maps would be some starting points)…
I dunno if I got it straight, but you want OpenGL to “graphically” figure out which nodes in a BSP tree are visible or not? Even though it takes just a few processor instructions to do the very same thing?
I may have misunderstood your post, though (I am not a native English speaker).
(I don’t think he means the IBM occlusion extension - not sure though…)
I do sort of understand where Toxic is coming from with the modelview/projection matrix. I have done work in the past where I was doing lots of clipping and occlusion detection in world space, which was fine, except that most objects’ verts were stored in a local coordinate system. So when it came to doing collision detection and the like, even when I was only using the bounding box, I still had to apply the modelview - or more specifically the local-to-world transformation - to the bounding box verts. It would sort of be nice if this could be done in hardware. But alas, life goes on.
PS. I haven’t tried any of this in a vertex program. My occlusion code is quite old, from long before the days of vertex programs and shaders and the like.
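The chore described above can be sketched like this: push the eight corners of a local-space bounding box through a 4x4 local-to-world matrix (column-major, like OpenGL’s) and rebuild a world-space AABB from the results. This is my own illustration, not anyone’s actual occlusion code.

```c
/* Transform a local-space axis-aligned bounding box (lo..hi) by a
 * column-major 4x4 local-to-world matrix and return the world-space
 * AABB that encloses all eight transformed corners.
 * Assumes an affine matrix (no projective component). */
void transform_aabb(const float m[16],
                    const float lo[3], const float hi[3],
                    float out_lo[3], float out_hi[3])
{
    for (int i = 0; i < 3; ++i) { out_lo[i] =  1e30f; out_hi[i] = -1e30f; }
    for (int c = 0; c < 8; ++c) {
        /* pick each corner from the min/max extents via the bits of c */
        float p[3] = { (c & 1) ? hi[0] : lo[0],
                       (c & 2) ? hi[1] : lo[1],
                       (c & 4) ? hi[2] : lo[2] };
        for (int r = 0; r < 3; ++r) {
            float v = m[r]*p[0] + m[4+r]*p[1] + m[8+r]*p[2] + m[12+r];
            if (v < out_lo[r]) out_lo[r] = v;
            if (v > out_hi[r]) out_hi[r] = v;
        }
    }
}
```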