Culling in OpenGL



posixoptions
05-17-2010, 06:58 PM
I was just wondering what kind of culling OpenGL can do for me to help with performance-related issues?

I'm aware I can do culling myself at the object level; however, I have only two objects and they are both complex enough to impose a severe performance penalty -- so when the objects are in view, performance is terrible.

I already have GL_CULL_FACE enabled, but is there anything else I can make OpenGL do? Is there some easily implementable way of making OpenGL cull/prune away polygons that are not visible, e.g. polygons that are hidden behind other polygons and so don't need to be drawn at that moment?

Thanks in advance! :-)

Alfonse Reinheart
05-17-2010, 08:32 PM
is there anything else I can make OpenGL do? Is there some easily implementable way of making OpenGL cull/prune away polygons that are not visible, e.g. polygons that are hidden behind other polygons and so don't need to be drawn at that moment?

Nothing easily implementable. Sorry.

CortS
05-17-2010, 09:17 PM
You may want to look into occlusion queries (ARB_occlusion_query (http://www.opengl.org/registry/specs/ARB/occlusion_query.txt)). They're not trivial to implement correctly -- there's no glDrawStuffFaster() -- but when used correctly, they do exactly what you're looking for.
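For what it's worth, here's a minimal sketch of the basic pattern -- not a drop-in implementation. The query object is assumed to have been created with glGenQueries, and drawBoundingBox()/drawMesh() are placeholders for your own drawing code. Also note that reading the result immediately stalls the pipeline; in a real renderer you'd poll GL_QUERY_RESULT_AVAILABLE or reuse last frame's result.

// Rough sketch of one way to use occlusion queries (core since GL 1.5,
// or the ARB_occlusion_query *ARB entry points on older contexts):
// draw a cheap bounding volume first, then only draw the real mesh if
// any of its samples passed the depth test.
#include <GL/gl.h>   // or your extension loader of choice (GLEW, etc.)

void drawWithOcclusionQuery(GLuint query,
                            void (*drawBoundingBox)(void),
                            void (*drawMesh)(void))
{
    // Pass 1: render the bounding volume invisibly, counting samples.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox();                 // cheap proxy geometry
    glEndQuery(GL_SAMPLES_PASSED);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Pass 2: only draw the expensive mesh if the proxy was visible.
    // Blocking here is the naive version -- see the note above.
    GLuint samplesPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
    if (samplesPassed > 0)
        drawMesh();
}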

charliejay
05-18-2010, 12:56 AM
What's your overdraw? If it's sufficiently high, any kind of sorting of your polygons that results in a mostly front-to-back drawing order might help...
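As a rough illustration only (Object, center and cameraPos are placeholders, not your actual data structures), sorting batches or sub-meshes by squared distance to the camera before drawing gets you most of the benefit without having to sort individual polygons every frame:

// Sketch: sort coarse draw batches nearest-first so the depth test can
// reject most of the hidden fragments of whatever is drawn later.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Object {
    Vec3 center;   // rough center of the batch's bounding volume
    // ... mesh data, draw() call, etc.
};

void sortFrontToBack(std::vector<Object*>& objects, const Vec3& cameraPos)
{
    std::sort(objects.begin(), objects.end(),
        [&](const Object* a, const Object* b) {
            auto distSq = [&](const Vec3& p) {
                float dx = p.x - cameraPos.x;
                float dy = p.y - cameraPos.y;
                float dz = p.z - cameraPos.z;
                return dx * dx + dy * dy + dz * dz;
            };
            return distSq(a->center) < distSq(b->center);  // nearest first
        });
}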

Dark Photon
05-18-2010, 05:42 AM
What's your overdraw? If it's sufficiently high, any kind of sorting of your polygons that results in a mostly front-to-back drawing order might help...
And on those last two notes, to make maximum use of the GPU's ability to "pre-reject" triangles and fragments early (aka ZCULL and EarlyZ):
- Clear the depth+stencil buffer
- Don't change the depth value in your fragment shader
- Don't change the direction of the depth test while writing depth
- Don't enable stencil writes when doing stencil testing
- Don't render to a 2D texture array (??)
- Write the depth buffer with the same test direction as is used for testing
- Don't render a lot of little features
- Don't allocate too many depth buffers
- Don't use 32F depth buffers
- Don't reference gl_FragCoord.z in your fragment shader
- Don't enable depth or stencil writes, or enable occlusion queries, AND:
  - use alpha test, or
  - call discard, or
  - use alpha-to-coverage, or
  - use a SAMPLE_MASK != 0xFFFFFFFF
- If you can, try to render polygons in a roughly front-to-back order.
(Blatantly ripped from the NVidia GPU programming guide (http://developer.nvidia.com/object/gpu_programming_guide.html).)

Also, if your fragment shading is "expensive", then consider doing a "depth pre-pass" to set the depth buffer only, then rerender for shading with a DepthFunc of EQUAL. That way you don't pay to shade any fragments that you can't see. This first pass can also be "double-speed" if you follow the rules, so it's not as expensive as you might think.
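Something along these lines, as a sketch only -- drawScene() stands in for your own geometry submission, and both passes must produce identical depth values (same vertex transform), otherwise the EQUAL test will drop fragments:

// Sketch of a depth pre-pass: lay down depth cheaply, then shade only
// the fragments that are actually visible.
#include <GL/gl.h>

void renderWithDepthPrePass(void (*drawScene)(void))
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Pass 1: depth only -- no color writes, trivial fragment work.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawScene();                       // cheap shader (or no fragment work)

    // Pass 2: full shading, but only for fragments that "won" pass 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);             // depth buffer is already correct
    glDepthFunc(GL_EQUAL);
    drawScene();                       // expensive shader

    // Restore the usual state for whatever comes next.
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}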