New targets for ARB occlusion query

The occlusion query extension is written in such a way as to allow for the specification of different types of queries. I suggest adding bounding region or bounding frustum queries.

Background

At the moment, occlusion queries can only be used to count the number of fragments that pass the fragment shader and the subsequent fragment tests. This behaviour is selected by the target parameter “SAMPLES_PASSED_ARB”.
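For reference, a minimal sketch of how this is used today (assuming the ARB entry points have already been loaded; drawObject() is just a placeholder for the application’s draw calls):

    GLuint query;
    GLuint samples = 0;

    glGenQueriesARB(1, &query);

    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
    drawObject();                                  /* placeholder draw call */
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);

    /* This blocks until the result is available; GL_QUERY_RESULT_AVAILABLE_ARB
       can be polled instead to avoid stalling. */
    glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples);

    if (samples > 0) {
        /* at least one fragment passed all fragment tests */
    }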

The ARB query was preceded by extensions from HP and NV. The original HP occlusion query only reports whether or not any fragments passed all fragment tests, the NV occlusion query returns a count of the fragments that passed, and the ARB query was designed to be extensible to other types of queries.

Suggestion

I would like to see queries that return some form of coordinate range for rendered fragments. For example, the screen-space region or the minimum and maximum eye space x, y, and z coordinates of rendered fragments.

Usage Examples

I’ll describe a few ways in which this could be useful.

Case 1: “Where did the object appear?”. One could render an object and check if a portion of the object appears on an area of the screen (or even within a volume of space).

Case 2: “Optimising shadows”. Much like shadow maps, render an object from the point of view of a light source. Query the range of each coordinate to determine a frustum. Everything outside this frustum is definitely not shadowed by the object.

Case 3: “Optimising portals” (similar to case 2). Render a portal. Query the eye space range of each coordinate to determine a frustum that can be used for conservative object culling and geometry clipping (reducing the amount of data that makes it to the fragment processing stage).

I think that returning the eye space range or frustum is ideal, but the simpler query of the 2D screen-space region (for basic scissoring) would also be useful. Either result could be derived from the other.

Other notes

Here are some suggestions for new targets: SAMPLES_PASSED_REGION_ARB, SAMPLES_PASSED_FRUSTUM_ARB.

If floating-point data is returned (probably for a frustum query, but not for a region query), additional functions would be required, such as: void GetQueryObjectfvARB(uint id, enum pname, float *params);
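For illustration, usage of the frustum query might look something like the sketch below. The target name, the float result function, and the six-float min/max layout are all part of this suggestion (shown here with GL/gl prefixes), not existing API, and drawOccluder() is just a placeholder:

    GLuint query;
    GLfloat bounds[6];   /* suggested layout: min x/y/z then max x/y/z in eye space */

    glGenQueriesARB(1, &query);

    glBeginQueryARB(GL_SAMPLES_PASSED_FRUSTUM_ARB, query);   /* proposed target */
    drawOccluder();                                          /* placeholder draw call */
    glEndQueryARB(GL_SAMPLES_PASSED_FRUSTUM_ARB);

    /* proposed float variant of the result retrieval */
    glGetQueryObjectfvARB(query, GL_QUERY_RESULT_ARB, bounds);

    /* bounds could then be used to build a culling frustum or a scissor rectangle */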

I’m not sure how amenable this is to hardware implementation. However, it seems to me that it would require conditional writes to global variables for each fragment that passes all fragment tests.

All 3 cases you mentioned can be implemented using feedback mode.
You simply render all objects in feedback mode and add tokens to separate them in the returned data. Unfortunately this requires finding the minimum and maximum screen coordinates on the CPU, but if you use meshes with a very low polygon count or even bounding boxes, then you will have very little data to process on the CPU and it will work fast.

Actually it will work much faster than rendering the object and querying minimum and maximum fragment coordinates, since in feedback mode nothing gets rasterized.
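A rough sketch of that approach (assuming a GL_2D feedback buffer, polygon-only proxy geometry, and a placeholder drawBoundingBox() helper):

    #include <float.h>

    #define FB_SIZE 4096
    GLfloat fb[FB_SIZE];
    GLfloat minX = FLT_MAX, minY = FLT_MAX, maxX = -FLT_MAX, maxY = -FLT_MAX;
    GLint count, i = 0;

    glFeedbackBuffer(FB_SIZE, GL_2D, fb);
    glRenderMode(GL_FEEDBACK);

    glPassThrough(1.0f);       /* token separating this object in the returned data */
    drawBoundingBox();         /* placeholder: low-polygon proxy for the object */

    count = glRenderMode(GL_RENDER);   /* number of values written to fb */

    /* find the minimum and maximum window coordinates on the CPU */
    while (i < count) {
        GLfloat token = fb[i++];
        if (token == GL_PASS_THROUGH_TOKEN) {
            i += 1;                          /* skip the user token value */
        } else if (token == GL_POLYGON_TOKEN) {
            GLint n = (GLint)fb[i++];        /* vertices in this polygon */
            while (n-- > 0) {
                GLfloat x = fb[i++], y = fb[i++];
                if (x < minX) minX = x;  if (x > maxX) maxX = x;
                if (y < minY) minY = y;  if (y > maxY) maxY = y;
            }
        } else {
            break;   /* point/line/bitmap tokens omitted for brevity */
        }
    }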

Originally posted by k_szczech:
All 3 cases you mentioned can be implemented using feedback mode.
Aside from CPU processing, I think that there are a few limitations to using feedback mode.

Feedback mode won’t (easily) account for occlusion by other objects (stored in the depth buffer). I guess that this isn’t so important for case 2, but I think that it’s fairly important for case 3.

Also, can feedback mode account for changes to vertex and fragment data that occur in the GL? For example, if vertex positions are modified by a vertex shader or if fragments are discarded for volume rendering.

Yes, you can use vertex shaders in combination with feedback mode, but fragment shaders are not used in this mode - no rendering takes place, only vertex processing.

case 1:
You can use feedback to get the object’s location and an occlusion query to check if it got occluded.
This will not be as precise as getting the minimum and maximum visible fragment coordinates, but knowing the exact area that the object covers on screen is not very valuable information. You can always split your model into smaller parts and perform such a test for each one individually. This will give more precise results.

case 2:
Feedback mode + vertex shader will be just enough for that.

case 3:
This is pretty much like case 1 - you can subdivide the portal into a grid of smaller portals. This way you can get a good approximation for both the frustum and the scissor test.

Sure, it would be nice to have the possibility to check minimum and maximum fragment coordinates, but on the other hand I think that you may eventually end up in some worst-case scenario, when the cost of processing fragments for the occlusion query exceeds what you save in comparison to higher-level optimizations.

By the way - I’m not sure about that case 2 - the way I see it, when you get the object’s frustum from the light source’s point of view, then everything outside this frustum does NOT receive shadow from that object. That means everything outside this frustum WILL receive light and requires rendering, and everything inside this frustum requires checking the shadowmap to determine if it’s in shadow or not, so it requires rendering, too.

Originally posted by k_szczech:
By the way - I’m not sure about that case 2 - the way I see it, when you get the object’s frustum from the light source’s point of view, then everything outside this frustum does NOT receive shadow from that object. That means everything outside this frustum WILL receive light and requires rendering, and everything inside this frustum requires checking the shadowmap to determine if it’s in shadow or not, so it requires rendering, too.
Oops. That was a typo. :smiley: It should read “definitely not shadowed” (corrected now).

Originally posted by k_szczech:
Yes, you can use vertex shaders in combination with feedback mode, but fragment shaders are not used in this mode - no rendering takes place, only vertex processing.
Thanks for the clarification; I didn’t know how GLSL vertex shaders affect feedback mode. :slight_smile:

Originally posted by k_szczech:
case 1:
You can use feedback to get the object’s location and an occlusion query to check if it got occluded.
This will not be as precise as getting the minimum and maximum visible fragment coordinates, but knowing the exact area that the object covers on screen is not very valuable information. You can always split your model into smaller parts and perform such a test for each one individually. This will give more precise results.

I wonder about the performance and complexity of this approach. You need to send vertex data down the pipeline twice (once for feedback and once for the occlusion query, which could be bad if complex vertex shaders are used). You may end up making a lot of query calls in a short period of time (which could stall the pipeline?). Also, you would ideally partition the model in screen space rather than model space (which could be difficult to do?).
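For concreteness, the combined approach I have in mind looks roughly like this (getScreenBoundsViaFeedback() and drawBoundingBox() are placeholders; the feedback pass would be parsed as in the earlier sketch):

    GLuint query;
    GLuint visibleSamples = 0;
    GLfloat minX, minY, maxX, maxY;

    /* pass 1: feedback gives the proxy's window-space bounding rectangle */
    getScreenBoundsViaFeedback(&minX, &minY, &maxX, &maxY);   /* placeholder */

    /* pass 2: occlusion query tells us whether any of the proxy survives the depth test */
    glGenQueriesARB(1, &query);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);      /* leave the framebuffer untouched */
    glDepthMask(GL_FALSE);

    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
    drawBoundingBox();                                        /* placeholder proxy draw */
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);

    glDepthMask(GL_TRUE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &visibleSamples);

    if (visibleSamples > 0) {
        /* object is (at least partly) visible inside [minX, maxX] x [minY, maxY] */
    }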

Originally posted by k_szczech:
Sure, it would be nice to have the possibility to check minimum and maximum fragment coordinates, but on the other hand I think that you may eventually end up in some worst-case scenario, when the cost of processing fragments for the occlusion query exceeds what you save in comparison to higher-level optimizations.
That may be, but I think that having this option is desirable. It would be easy to use and may not present significant overhead compared to application-level techniques in many cases. Also, it would support rendering techniques that may make use of fragment discard to shape an object.