An idea: hardware-accelerated tree-based visibility testing

I think visibility testing is a common task that most 3D applications need to perform. For that purpose, people mainly use trees, such as BSP trees and octrees. My idea is that a program somehow uploads a parametric representation of the tree nodes (without their contents, i.e. polygons), and OpenGL somehow tells the application which tree nodes are visible. For example, each tree node could be a cube represented by its center, the distance from the center to the sides of the cube, and a unique ID number that identifies it and is assigned to each tree node by the application. Later, when needed, the application could ask OpenGL whether a certain tree node is visible, or somehow get a list of visible tree nodes.

Tell me what you think about it: cool, lame, or not practical?
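For what it's worth, the proposed scheme is easy to do entirely in application code. A minimal sketch, assuming the node representation described above (center, half-extent, application-assigned ID) and a set of frustum planes supplied by the app; all names here are made up for illustration:

```c
#include <assert.h>
#include <math.h>

/* A tree node as proposed: an axis-aligned cube plus an app-assigned ID. */
typedef struct {
    float cx, cy, cz;   /* center */
    float half;         /* distance from center to each face */
    unsigned id;        /* unique ID assigned by the application */
} TreeNode;

/* A plane ax + by + cz + d = 0, normal pointing into the view volume. */
typedef struct { float a, b, c, d; } Plane;

/* Returns 1 if the cube is at least partly on the inside of every plane
   (possibly visible), 0 if it lies fully outside some plane. */
int node_visible(const TreeNode *n, const Plane *planes, int nplanes)
{
    for (int i = 0; i < nplanes; ++i) {
        const Plane *p = &planes[i];
        /* Signed distance from the center to the plane, plus the cube's
           projected radius along the plane normal ((|a|+|b|+|c|) * half
           for an axis-aligned cube). */
        float dist   = p->a * n->cx + p->b * n->cy + p->c * n->cz + p->d;
        float radius = (fabsf(p->a) + fabsf(p->b) + fabsf(p->c)) * n->half;
        if (dist + radius < 0.0f)
            return 0;   /* completely behind this plane */
    }
    return 1;
}
```

The application would walk its tree, call `node_visible` on each node, and recurse only into visible ones.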

Lame.

OpenGL draws stuff. Exactly what you want to draw is your problem.

OpenGL doesn’t know what a bounding volume is, so why bother?

But OpenGL does know the projection and the modelview matrix, so it can check whether a certain point is in the field of view. Then why can’t it check whether a “center” of a cube is in the field of view?

You are joking, right?
This is a joke, yes?

>>But OpenGL does know the projection and the modelview matrix<<

YOU know the projection and modelview matrix, surely?

The existence of glGet(GL_PROJECTION_MATRIX) and glGet(GL_MODELVIEW_MATRIX) leads me to the conclusion that OpenGL does have those matrices stored somewhere. How could it give them to a program if it didn't have them stored itself?

Another possibility: OpenGL multiplies each vertex by the modelview and projection matrices. How could it multiply the vertices by those matrices if it didn't have them stored somewhere?

[This message has been edited by Toxigun (edited 03-16-2002).]
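That transform is indeed easy to reproduce application-side: multiply the vertex by the modelview and then the projection matrix (both retrievable with glGetFloatv, in column-major layout) and test the clip-space result. A minimal sketch with a hand-written multiply; the matrices would normally come from glGet rather than being built by hand:

```c
#include <assert.h>

/* Multiply a column-major 4x4 matrix (OpenGL's glGetFloatv layout)
   by a column vector: out = m * v. */
void mat4_mul_vec4(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[0*4+row]*v[0] + m[1*4+row]*v[1]
                 + m[2*4+row]*v[2] + m[3*4+row]*v[3];
}

/* After projection * modelview, a point is inside the view volume iff
   -w <= x,y,z <= w in clip coordinates (OpenGL's clip test). */
int point_in_view(const float mv[16], const float proj[16], const float p[4])
{
    float eye[4], clip[4];
    mat4_mul_vec4(mv, p, eye);
    mat4_mul_vec4(proj, eye, clip);
    float w = clip[3];
    return -w <= clip[0] && clip[0] <= w
        && -w <= clip[1] && clip[1] <= w
        && -w <= clip[2] && clip[2] <= w;
}
```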

Dammit! I overslept again… I didn't know it was April 1st!

Sure OpenGL knows about the projection and modelview matrix, for the reasons you gave.

The problem with your idea is that what you described belongs in a scene graph API, not an immediate-mode API like OpenGL. OpenGL is meant to be as generic as possible, and adding specialized scene graph functions is far from generic.

It’s not the 1st of April. I wish it was, because then I’d have been paid (skint at the moment).

Please go to the beginners forum, toxicshock, or whatever your name is.

*** feel the wrath of the advanced forum ‘specialists’! ***

[This message has been edited by knackered (edited 03-16-2002).]

You are using these functions right???

glBeginScene();

glEndScene();

But really, this has been discussed before, and basically everyone thinks GL is in an excellent low-level state. If you want to cull using fancy algorithms, you can have your pick!

Ditto for bitmaps, 3D file formats, OOP (C++), COM, shadows, collision detection, …

V-man

>>But OpenGL does know the projection and the modelview matrix, so it can check whether a certain point is in the field of view. Then why can’t it check whether a “center” of a cube is in the field of view?<<

It does: anything that’s outside the frustum planes will not get drawn, though you will still send the data to the driver to get checked (which is a bad thing).
For further info, check the FAQ (it includes source, which we all love).
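The driver-side clip mentioned here can be replicated on the CPU so the data is never sent at all: the six frustum planes fall straight out of the combined projection * modelview matrix. A sketch of the standard plane-extraction trick (each plane is the matrix's fourth row plus or minus another row), assuming column-major OpenGL-style matrices:

```c
#include <assert.h>

/* A plane ax + by + cz + d >= 0 means "inside". */
typedef struct { float a, b, c, d; } Plane;

/* Build one frustum plane from row 3 of the clip matrix +/- row `row`. */
static Plane plane_from_rows(const float m[16], int row, float sign)
{
    Plane p;
    p.a = m[0*4+3] + sign * m[0*4+row];
    p.b = m[1*4+3] + sign * m[1*4+row];
    p.c = m[2*4+3] + sign * m[2*4+row];
    p.d = m[3*4+3] + sign * m[3*4+row];
    return p;
}

/* Extract all six planes from clip = projection * modelview,
   column-major as returned by glGetFloatv. */
void extract_frustum(const float clip[16], Plane out[6])
{
    out[0] = plane_from_rows(clip, 0, +1.0f);  /* left   */
    out[1] = plane_from_rows(clip, 0, -1.0f);  /* right  */
    out[2] = plane_from_rows(clip, 1, +1.0f);  /* bottom */
    out[3] = plane_from_rows(clip, 1, -1.0f);  /* top    */
    out[4] = plane_from_rows(clip, 2, +1.0f);  /* near   */
    out[5] = plane_from_rows(clip, 2, -1.0f);  /* far    */
}

int point_inside(const Plane pl[6], float x, float y, float z)
{
    for (int i = 0; i < 6; ++i)
        if (pl[i].a*x + pl[i].b*y + pl[i].c*z + pl[i].d < 0.0f)
            return 0;
    return 1;
}
```

With an identity clip matrix the planes bound the unit cube, which makes the extraction easy to sanity-check.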

Give the man a scene graph: http://www.sgi.com/software/inventor/

I don’t really see why this is being treated as a joke. Don’t get me wrong, I don’t really see the need to have this accelerated by OpenGL. But with extensions like HP_occlusion_test and NV_occlusion_query around, these could also be considered something better left in a scene graph. Instead of asking “Are the primitives I’ve just submitted visible given what is already in the depth buffer?” it would be “Are the primitives I just submitted visible after transformation and clipping?” But again, I don’t see the need for it; you could just as easily answer the question yourself using data you’ve already given to OpenGL (the modelview and projection matrices, and the view frustum). I think his real point was to have the GPU do the culling to offload it from the CPU. As for the tree idea, well, that’s just plain nuts. But again, for the third time, I really see no use for it.

NV_occlusion_query does exactly that. You render bounding volumes of your objects, and can ask OpenGL later whether any pixels of the bounding volumes would have been visible. The problem is that you need to have some occluders drawn before the test makes much sense.
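The usage pattern follows the extension spec fairly directly. A sketch of one query, assuming a live GL context with the NV_occlusion_query entry points loaded; drawBoundingBox() and drawObject() are hypothetical application helpers (this is not a runnable standalone program):

```c
GLuint query;
GLuint pixels;

glGenOcclusionQueriesNV(1, &query);

/* Draw the occluders first.  Then test the bounding volume with all
   color and depth writes disabled, so the test itself is invisible. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

glBeginOcclusionQueryNV(query);
drawBoundingBox();              /* hypothetical helper */
glEndOcclusionQueryNV();

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

/* This call blocks until the result is ready; poll
   GL_PIXEL_COUNT_AVAILABLE_NV instead to avoid the stall. */
glGetOcclusionQueryuivNV(query, GL_PIXEL_COUNT_NV, &pixels);
if (pixels > 0)
    drawObject();               /* hypothetical helper */
```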

If you are interested in this topic, I suggest you find some papers on occlusion culling (Ned Greene’s hierarchical z-buffer, or Zhang’s hierarchical occlusion maps would be some starting points)…

Michael

I suspect you’re all reading too much into what the original poster was saying.

I don’t know if I got it straight, but you want OpenGL to “graphically” figure out which nodes in a BSP tree are visible or not? Even though it only takes a few processor instructions to do the very same?

I may have misunderstood your post, though (I am not a native English speaker).

(I don’t think he means the IBM occlusion extension, not sure though…)

Great!

I hereby request the following extensions:
GL_ARB_a_better_malloc
GL_ARB_linked_list
GL_ARB_tree
GL_ARB_bsp_tree
GL_ARB_quadtree
GL_ARB_octree
GL_ARB_capped_icosahedron_tree
GL_ARB_potentially_visible_set
GL_ARB_bounding_volume
GL_ARB_bounding_volume_tetraeder
GL_ARB_bounding_volume_cone
GL_ARB_bounding_volume_twelve_sided_ADnD_dice
GL_ARB_why_do_I_have_to_write_my_own_code
GL_ARB_quake3_engine_out_of_the_box_here_you_are
GL_ARB_menu
GL_ARB_listbox
GL_ARB_rich_edit_control
GL_ARB_mouse_event
WGL_ARB_file_io_and_positional_sound_with_mp3_support
WGL_ARB_make_windows_run_faster

You forgot the main one:

WGL_ARB_disable_blue_screen

Y.

GL_ARB_nonsense

glDisable(GL_NONSENSE_ARB);

GL_ARB_ontopic

glEnable(GL_ONTOPIC_ARB);

Sounds like the return of the long-forgotten Direct3D Retained Mode…

I do sort of understand where Toxic is coming from with the modelview/projection matrix. I have done work in the past where I was doing lots of clipping and occlusion detection in world space, which was fine, except that most objects’ verts were stored in a local system. So when it came to doing collision detection and the like, even when I was only using the bounding box, I still had to apply the modelview, or more specifically the local-to-world transformation, to the bounding box verts. It would sort of be nice if this could be done in hardware. But alas, life goes on.

PS. I haven’t tried any of this in a vertex program. My occlusion code is quite old, from long before the days of vertex programs and shaders and the like.
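The local-to-world step described in that post is cheap enough to sketch: run the eight corners of the local-space box through the local-to-world matrix and take the min/max to get a world-space box. This assumes a column-major matrix (the glGet layout) with no projective component; the names are made up for illustration:

```c
#include <assert.h>
#include <float.h>

/* Transform a local-space AABB (min/max corners) into world space by
   transforming all eight corners by a column-major 4x4 matrix and
   taking the component-wise min/max of the results. */
void transform_aabb(const float m[16],
                    const float lmin[3], const float lmax[3],
                    float wmin[3], float wmax[3])
{
    for (int i = 0; i < 3; ++i) { wmin[i] = FLT_MAX; wmax[i] = -FLT_MAX; }
    for (int corner = 0; corner < 8; ++corner) {
        /* Pick each corner from the min/max extents by bit pattern. */
        float v[3] = {
            (corner & 1) ? lmax[0] : lmin[0],
            (corner & 2) ? lmax[1] : lmin[1],
            (corner & 4) ? lmax[2] : lmin[2],
        };
        for (int row = 0; row < 3; ++row) {
            float w = m[0*4+row]*v[0] + m[1*4+row]*v[1]
                    + m[2*4+row]*v[2] + m[3*4+row];   /* point has w = 1 */
            if (w < wmin[row]) wmin[row] = w;
            if (w > wmax[row]) wmax[row] = w;
        }
    }
}
```

The resulting world-space box is conservative (it can be looser than the rotated box), but it is exactly what a quick cull or collision broad-phase needs.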