help w/ ARB_occlusion_query

It is my understanding that OpenGL 2.0 now has an ARB extension built in that handles occlusion culling. I looked it up in the 2.0 spec doc and saw some reference to it, but unfortunately there was nothing that really explained, at least to me, how to implement it. I was wondering if someone could give a “brief” example of how it is used.

I understand that you must wrap things in a BeginQuery() and an EndQuery(). But I do not quite understand what the enum <target> and uint <id> parameters are supposed to represent. The document says that <target> is SAMPLES_PASSED, but what does that mean? I am guessing that <id> can just be a #defined value that is used to represent a query, but I’m not completely sure on that part either.

Then the documentation talks about a GenQueries() function. The function takes a sizei <n> parameter. Not sure what that means, though I am guessing it is just an int representing the number of polygons, possibly? It also takes a uint <*ids> parameter. Again, not sure what this is referring to, but I think it is some type of #defined value like in the BeginQuery() function. After the documentation talks about the GenQueries() function it then talks about a DeleteQueries() function. When is that needed? How do these four functions interact?

Here is what I am hoping to accomplish. I have a scene that consists of several objects. What I want to do is create a simple bounding rectangle, as seen from the perspective of the camera, around each object, then determine the Z depth order of those bounding rectangles. Starting from the closest, I want to use the closest bounding rectangle as an occluder and test how much of each object (now represented by its bounding rectangle) is seen, discarding any that show only a minimal number of pixels. Then I continue through the Z depth order, testing against the bounding rectangles of the objects deeper within the scene. In a sense, it is a kind of scene graph. So it is these bounding rectangles that I want to pass to the occlusion functions.

Can anyone give me some help on how to use the Occlusion functions?

Thanks in advance.

First you create a query:
GLuint query;
glGenQueriesARB(1, &query);

Then you use it as such:
glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
// Draw your stuff in between here …
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

Then you can query the result:
GLuint samples;
glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples);

What you get in “samples” is the number of samples that survived the depth test and stencil test. Note that in order to get the best performance you can poll GL_QUERY_RESULT_AVAILABLE_ARB to see if the result is available yet, and do some other work while waiting for the query to complete.
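A possible shape for that polling, as a sketch: this assumes a current GL context, the ARB_occlusion_query entry points loaded, and that `query` came from glGenQueriesARB as above. It also shows the DeleteQueries step asked about earlier, which simply frees the query name when you are done with it.

```c
/* Poll until the result is ready, doing useful work in the meantime
   instead of stalling the pipeline. */
GLuint available = GL_FALSE;
do {
    /* ... do other CPU work here while the GPU finishes ... */
    glGetQueryObjectuivARB(query, GL_QUERY_RESULT_AVAILABLE_ARB, &available);
} while (!available);

/* Now this read will not block. */
GLuint samples = 0;
glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples);

/* When you no longer need the query object at all, free its name: */
glDeleteQueriesARB(1, &query);
```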

The spec of the occlusion query extension is self-explanatory. Everything you need to know in order to make good occlusion queries is in there. So have a look at it!

By the way, this is not a GL 2.0 feature but a 1.5 one (or even older, I don’t really remember).

Originally posted by jide:
[b]The spec of the occlusion query extension is self-explanatory. Everything you need to know in order to make good occlusion queries is in there. So have a look at it!

By the way, this is not a GL 2.0 feature but a 1.5 one (or even older, I don’t really remember).[/b]
I did read the specs, in the GL 2.0 document, as I stated in my post. My questions come from the specs, as I am a little confused about the meaning of some of the required parameters.

Thanks Humus for the comments. It helps. :slight_smile:

First you create a query:
glGenQueriesARB(1, &query);

But what does query represent? Do you just make a #define for query and then pass it into the function? Or is it something else? Sorry if this seems trivial, but sometimes I have a hard time wrapping my brain around something unless it is explained in detail. I do appreciate your help. :slight_smile:

Then you use it as such:
glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
// Draw your stuff in between here …
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

Ohhhhh… OK, now I understand. :slight_smile: So in my example of wanting to draw a bounding rectangle for each object, I would draw it here. But then, if I understand this correctly, I wouldn’t need to do multiple passes. I would draw ALL of my bounding rectangles, or all of my geometry, here and then it would tell me what was seen? But that doesn’t make sense; how do you differentiate between individual objects? Or is it more that each individual object is drawn inside its own query? But if that is the case, what is used to determine whether it is hidden or not?

Then you can query the result:
GLuint samples;
glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples);

I think before I can fully understand the implications of this part I need to fully understand the part before it, but this seems fairly straightforward. The “samples” value would be how many pixels of the object are seen, and I could take that number and decide whether I want to draw the object or not. Does that sound right?

Thanks again for your help, and to anyone else that contributes. :smiley:

Did you read this one

http://oss.sgi.com/projects/ogl-sample/registry/NV/occlusion_query.txt

?

Read it and look at the example at the end of the file.

The logic of the last paragraph you wrote seems sound.

Hope that helps at least a bit this time.

I think I have figured it out! I was able to find some example code that used the NV extension in previous versions of OpenGL, and I am guessing that it is basically the same thing, just that it is now part of OpenGL. Is my understanding correct?

Or maybe it was this one:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/occlusion_query.txt

Sorry, I did that too quickly… In fact I never used the NV OQ extension, just the ARB one.

As Humus said, OQ happens in two parts.

First, you draw all the bounding boxes of the objects you want to put through the OQ test. BeginQuery and EndQuery need to be called around each of the bounding boxes.

Next, once all of that first step is done, you can check the results of each query. You understood this part well.

There’s more, but that’s sufficient to start with. An extra read of the specs will help you with the rest.

Hope that helps :slight_smile:
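In code, the two phases could look roughly like this. This is just a sketch, assuming the ARB_occlusion_query entry points are loaded; NUM_OBJECTS, DrawBoundingBox() and DrawObject() are placeholders for your own scene data, not real functions. The key point is one query id per object, which is what lets you differentiate between them:

```c
enum { NUM_OBJECTS = 64 };           /* hypothetical object count */
GLuint queries[NUM_OBJECTS];         /* one query id per bounding box */
glGenQueriesARB(NUM_OBJECTS, queries);

/* Phase 1: issue one query around each bounding box. Disable color and
   depth writes so the boxes themselves never show up on screen. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
for (int i = 0; i < NUM_OBJECTS; ++i) {
    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, queries[i]);
    DrawBoundingBox(i);              /* placeholder: the 12 tris of box i */
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);
}
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

/* Phase 2: read back each result and draw only what passed. */
for (int i = 0; i < NUM_OBJECTS; ++i) {
    GLuint samples = 0;
    glGetQueryObjectuivARB(queries[i], GL_QUERY_RESULT_ARB, &samples);
    if (samples > 0)
        DrawObject(i);               /* placeholder: the real geometry */
}
glDeleteQueriesARB(NUM_OBJECTS, queries);
```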

Thanks a lot jide!! I did not know of those documents. They were very helpful and I think I understand now. :slight_smile: Thanks again everyone for your help.

I don’t know if using bounding shapes is a good idea.
Yesterday’s cards can easily do tens of millions of tris/sec, and 99% of apps don’t get close to that maximum. The bounding shape is going to cover more pixels on screen (thus consuming more fillrate than the normal mesh), and it may report an object as visible when it isn’t.
A better method, also mentioned in those papers, is to draw the object itself, e.g. in a depth pass with a query attached. This way you see exactly whether it is visible, and if you were drawing the depth pass first anyway, you lose nothing(*)

(*) Broken on NVIDIA GeForce FX cards and gives worse performance; hopefully new official drivers will fix it.

I don’t know if using bounding shapes is a good idea.
Yesterday’s cards can easily do tens of millions of tris/sec, and 99% of apps don’t get close to that maximum. The bounding shape is going to cover more pixels on screen (thus consuming more fillrate than the normal mesh), and it may report an object as visible when it isn’t.
A better method, also mentioned in those papers, is to draw the object itself, e.g. in a depth pass with a query attached. This way you see exactly whether it is visible, and if you were drawing the depth pass first anyway, you lose nothing(*)

The problem with that, though, is that I have scenes with several different models that can have millions of triangles per model. I can break the models up into individual objects and plan on passing the bounding cube of each individual object, so that the bounding cubes more closely fit the geometry. Each bounding cube consists of just the usual 12 triangles needed to make a cube, so it seems to me that this will be faster than sending the actual geometry. Then I will query how many pixels of each cube are seen and discard any that fall below a predetermined number of pixels that is considered insignificant. Finally, taking the objects that pass the query test, the actual geometry is sent down to the rasterizer to be displayed on screen. Now, this is how I have it all planned out; how I actually get it implemented may be another story. <lol>
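The decision step I have in mind would look something like this. It is just a sketch of the logic: should_draw() and count_drawn() are made-up helpers, MIN_PIXELS is my “insignificant” threshold, and the sample counts are stubbed in where the real values would come from glGetQueryObjectuivARB on each bounding-cube query:

```c
#include <stddef.h>

/* Hypothetical threshold: bounding cubes showing fewer pixels than this
   are considered insignificant and their objects are culled. */
#define MIN_PIXELS 25u

/* Hypothetical helper: decide whether an object's real geometry is worth
   drawing, given how many samples of its bounding cube passed the depth
   test. */
static int should_draw(unsigned samples_passed)
{
    return samples_passed >= MIN_PIXELS;
}

/* Stubbed query results for four objects (in the real program these
   would be read back per-query from glGetQueryObjectuivARB). */
static const unsigned samples[] = { 0u, 12u, 25u, 4096u };

static size_t count_drawn(const unsigned *s, size_t n)
{
    size_t drawn = 0;
    for (size_t i = 0; i < n; ++i)
        if (should_draw(s[i]))
            ++drawn;   /* here the real mesh would be rendered */
    return drawn;
}
```

With the stubbed counts above, only the two cubes at or above the threshold would have their real geometry sent to the rasterizer.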