Slope scale depth bias in OpenGL 3.2 core

Is it possible to implement slope scale depth bias in the OpenGL 3.2 core profile, and will it help improve the quality of shadows?

The thing is, I tried to implement cascaded shadow maps with a standard depth map (GL_DEPTH_COMPONENT32F) as a fallback method for when the target machine is not fast enough to use VSM (rendering 3 VSM splits + a ping-pong blur totally kills the performance).

Since my meshes have many unclosed surfaces (like leaves, skirts, hair, etc.), it is not possible to use the render-back-faces-only trick to avoid z-fighting.

Also, applying too much uniform bias when doing the depth comparison will lead to the shadow detaching from the ground, which I would like to avoid.
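
To be clear, this is the comparison I mean, written as a plain C sketch of the shader logic (the names are mine, not from any particular implementation):

/* receiverDepth: fragment depth in light space; storedDepth: value read
 * from the shadow map. Too small a bias causes acne; too large a bias
 * detaches the shadow from the ground. */
float shadowTest(float receiverDepth, float storedDepth, float uniformBias)
{
    return (receiverDepth - uniformBias <= storedDepth) ? 1.0f : 0.0f;
}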

Thanks in advance.

Is it possible to implement slope scale depth bias in the OpenGL 3.2 core profile

What is a “slope scale depth bias?”

Is it possible to implement slope scale depth bias in the OpenGL 3.2 core profile, and will it help improve the quality of shadows?

OpenGL calls it PolygonOffset. Off the top of my head, you do something like:


glEnable(GL_POLYGON_OFFSET_FILL);
/* factor scales the offset by the polygon's depth slope; units adds a constant offset */
glPolygonOffset(factor, units);

I vaguely remember reading that glPolygonOffset(0.4, 1.0) is a reasonable pair of numbers to start with.
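
Something like this, with the offset scoped to the shadow-map pass only (renderShadowCasters() is a stand-in for your own draw code):

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(0.4f, 1.0f);        /* factor, units; tweak to taste */
renderShadowCasters();              /* stand-in: draw into the depth map */
glDisable(GL_POLYGON_OFFSET_FILL);  /* don't offset the main scene pass */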

This is what I am talking about:

Quote from "Common Techniques to Improve Shadow Depth Maps" on the MSDN website:

Slope-Scale Depth Bias

As previously mentioned, self-shadowing can lead to shadow acne. Adding too much bias can result in Peter Panning. Additionally, polygons with steep slopes (relative to the light) suffer more from projective aliasing than polygons with shallow slopes (relative to the light). Because of this, each depth map value may need a different offset depending on the polygon’s slope relative to the light.

Direct3D 10 hardware has the ability to bias a polygon based on its slope with respect to the view direction. This has the effect of applying a large bias to a polygon that is viewed edge-on to the light direction, but not applying any bias to a polygon facing the light directly. Figure 10 illustrates how two neighboring pixels can alternate between shadowed and unshadowed when testing against the same unbiased slope.

Looks like it's Direct3D 10 only for now.

If you wanted this, surely you could do it manually in your pixel shader when applying the shadow map? I've been considering making my own spin on the technique using the light distance rather than a depth buffer; it should be more accurate and linear.
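
Roughly, I'd compute the bias from the surface slope like this; a plain C sketch with made-up names, but the same expression drops straight into a fragment shader:

#include <math.h>

/* nDotL: clamped dot(normal, lightDir); 1 = facing the light, 0 = edge-on.
 * tan(acos(x)) == sqrt(1 - x*x) / x, so clamp x away from 0 to keep the
 * slope term finite on surfaces nearly parallel to the light. */
float slopeScaledBias(float nDotL, float constantBias, float slopeScale, float maxBias)
{
    float x = nDotL < 0.01f ? 0.01f : (nDotL > 1.0f ? 1.0f : nDotL);
    float slope = sqrtf(1.0f - x * x) / x;
    float bias = constantBias + slopeScale * slope;
    return bias < maxBias ? bias : maxBias;
}

You'd then subtract that bias from the receiver's depth before comparing against the shadow map.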

Looks like it's Direct3D 10 only for now.

I posted a response to this thread yesterday; is it showing up on your computer? This feature is not Direct3D 10 only: it has been in core OpenGL in every version from 1.1 to 4.1 (and as an EXT extension in 1.0); it just has a different name, polygon offset vs. slope-scaled bias.

Be aware that polygon offset sucks because of a number of factors:

- The spec allows it to be implementation-dependent, so the same values may give different results on different hardware.
- You may encounter floating-point precision problems when moving between 16-bit and 24-bit depth buffers.
- The depth buffer is non-linear, so the same values will give different results at different depths.

If you can find values that work well for you, then great, but it's not a general solution and you shouldn't expect it to be one.
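
To make the first point concrete, the offset the spec describes works out to this (a paraphrase, using the spec's own symbols):

/* o = m * factor + r * units
 * m: maximum depth slope of the polygon, computed per primitive
 * r: smallest difference guaranteed to produce a resolvable offset,
 *    which is implementation-defined -- hence the portability problem */
float appliedOffset(float m, float r, float factor, float units)
{
    return m * factor + r * units;
}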

Thanks everyone for your answers.

On my HD4670 I'm using glPolygonOffset(1.0, 4096.0) (taken from NVIDIA's cascaded shadow map sample code; I don't even know what the 4096.0 is supposed to mean).

Most of the artifacts are gone, except on surfaces that are dangerously close to parallel with the light view (which I thought this slope-scale thing was supposed to automatically fix).

It still has to be tested on my brother's GTS250 too; I hope the implementation isn't much different.

Check this out:

Description, and a recommendation to start with 1.1, 4.0. Tweak to taste. There's also a projection matrix trick that offsets objects ~1 Z-buffer depth unit forward, which gets around PolygonOffset's Achilles' heel (if it even bites you). Pointers:
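
In case it helps, here's my understanding of that projection trick as a sketch (the linked write-up may do it differently). For a standard perspective matrix, clip-space w is -z_eye, so NDC depth works out to -m22 - m23/z_eye; nudging the m22 element therefore shifts every fragment's depth by a constant amount in NDC, independent of distance:

#include <GL/gl.h>

/* Shift all depths ~one depth-buffer unit toward the viewer by tweaking
 * the z-scale element of a column-major perspective matrix. NDC depth
 * spans [-1, 1], so one unit of an n-bit buffer is 2 / 2^n. */
void offsetProjection(GLfloat proj[16], int depthBits)
{
    GLfloat epsilon = 2.0f / (GLfloat)(1 << depthBits);
    proj[10] += epsilon; /* m22: NDC depth becomes (old depth - epsilon) */
}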