Cascaded shadow maps

I want to implement this in OpenGL. I am familiar with the algorithm, but am trying to figure out which types of textures to use.

Should I render from the light's point of view into a renderbuffer, or just a texture attached as GL_DEPTH_ATTACHMENT? Can I use a renderbuffer as a texture to sample from in the lighting pass? This is basically what you do in DirectX.

Is the Nvidia sample (Dimitrov's code) a good way to go? People have complained about the slowness of glFramebufferTextureLayer in the render loop, but if we pre-bind the textures to different framebuffers before the render loop, should I use the approach there?

Any samples that use shaders, not the fixed pipeline version?

If the previous post is a bit of a mess, a simpler question:

What is the easiest way to render depth info to a texture/buffer/whatever and then turn around and sample that depth 'texture' in a shader?

I am assuming that I will be rendering the depth info using a framebuffer.

I'd like something non-card-specific, something standard!
Thanks!

Create depth renderable texture (GL_DEPTH_COMPONENT internalformat) and render to it. Renderbuffers aren’t useful anymore.

I will be working on depth shadow mapping sample tomorrow.
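To make the depth-renderable-texture suggestion above concrete, here is a minimal C sketch. It assumes a current GL context with framebuffer object support (GL 3.0+ core or EXT_framebuffer_object); SHADOW_W and SHADOW_H are placeholder dimensions, not names from the thread.

```c
/* Sketch: create a depth-renderable texture and attach it to an FBO
 * as GL_DEPTH_ATTACHMENT. Assumes a current GL context with FBO
 * support; SHADOW_W/SHADOW_H are placeholder dimensions. */
GLuint depthTex, fbo;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
             SHADOW_W, SHADOW_H, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   /* depth-only pass: no color buffer */
glReadBuffer(GL_NONE);

/* render the shadow pass here, then rebind the default framebuffer
 * and bind depthTex to a texture unit for the lighting pass */
```

Since this is a depth-only pass, disabling the draw/read buffers keeps the FBO complete with no color attachment.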

Renderbuffers aren’t useful anymore.

Of course they are. They give OpenGL important information: you do not intend to use this buffer as a texture, so it can adjust the format as needed for a rendering surface.

If you need to texture with something, make a texture. If all you need is a rendering surface, make a renderbuffer.

So what's useful in it? What you are saying is that renderbuffers are useful for the OpenGL driver, not for the OpenGL programmer…

Texture. You want to feed the results into a subsequent pass in a shader sampler (texture unit) for random-access lookups, so you want to use a texture for this.

Renderbuffers cannot be read directly in a subsequent pass (…AFAIK).

Is the Nvidia sample (Dimitrov’s code) a good way to go?

For basic GLSL syntax, sure. For behavior, don’t know. I wasn’t very impressed with this paper, so didn’t pay much attention to the source snippet. I’d recommend other sources such as ShaderX4-7 and GPU Gems 3 for behavior. Then just code it up to suit you.

People have complained about the slowness of glFramebufferTextureLayer in the render loop,

Where are people complaining about that? I'd like to read those posts. I personally don't see this slowdown. It would be useful (and insightful, I suspect) to identify which GPUs and driver versions they are running. It could also be that they are doing naive things like changing the resolution and/or format they are rendering to with the FBO, which is expensive.

…but if we pre-bind the textures to different framebuffers before the render loop should I use the approach there?

Based on my testing so far, I wouldn’t sweat it. Just create one FBO for your CSM split rendering and don’t use it for anything else.
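For the pre-binding idea specifically, one common arrangement is to attach each layer of the depth texture array to its own FBO once, up front, so the render loop only switches FBO bindings. A hedged C sketch, where NUM_SPLITS, SHADOW_W/SHADOW_H, and texArray are placeholder names:

```c
/* Sketch: pre-attach each layer of a depth texture array to its own
 * FBO before the render loop, so the loop never calls
 * glFramebufferTextureLayer. NUM_SPLITS and texArray (a bound
 * GL_TEXTURE_2D_ARRAY depth texture) are placeholders. */
GLuint fbos[NUM_SPLITS];
glGenFramebuffers(NUM_SPLITS, fbos);
for (int i = 0; i < NUM_SPLITS; ++i) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbos[i]);
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              texArray, 0, i);   /* mip 0, layer i */
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);
}

/* render loop, per split:
 *   glBindFramebuffer(GL_FRAMEBUFFER, fbos[i]);
 *   ...set split's light view/projection, draw casters... */
```

Whether this beats a single FBO with per-frame layer attachment is driver-dependent, so it is worth timing both.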

If you are going to use a hardware depth texture, start with GL_DEPTH_COMPONENT24, since that's what GPUs have been doing well for ages. Per NVidia docs, avoid 32-bit float depth (GL_DEPTH_COMPONENT32F) as it can hurt ZCULL efficiency.

Any samples that use shaders, not the fixed pipeline version?

Feel free to ask any questions you might have. The adaptation is pretty straightforward.

There are some GLSL CSM shaders on the ShaderX7 DVD. They don't use hardware depth compare, as hardware depth textures are just a first stopping-off point.

Good terms to google for basic depth-texture shadow mapping in GLSL with hardware depth compare enabled: sampler2DShadow, sampler2DRectShadow, sampler2DArrayShadow. Of course, if you're not using hardware depth compare (e.g. for better filtering), then you'd just use sampler2D, sampler2DRect, sampler2DArray, etc. Basically, if using a hardware depth texture AND hardware depth compare is enabled, use a sampler…Shadow. Otherwise use a sampler… (non-Shadow).
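A minimal fragment-shader sketch of the sampler…Shadow path, in the era-appropriate GLSL of this thread. The names shadowMap and shadowCoord are assumptions, set up by the application (shadowCoord is the light-space coordinate from the vertex shader):

```glsl
// Sketch: hardware depth compare via sampler2DShadow (GLSL 1.20-era).
// shadowMap must be a depth texture with GL_TEXTURE_COMPARE_MODE
// enabled; shadowCoord is an assumed varying from the vertex shader.
uniform sampler2DShadow shadowMap;
varying vec4 shadowCoord;   // light-space position, projective coords

void main()
{
    // shadow2DProj returns the depth-comparison result (0.0..1.0;
    // filtered on hardware that does PCF on depth textures)
    float lit = shadow2DProj(shadowMap, shadowCoord).r;
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```

With a non-Shadow sampler2D you would instead fetch the raw depth value and do the comparison (and any filtering) yourself in the shader.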

Framebuffer object, yes.

http://www.opengl.org/wiki/Framebuffer_Object
http://www.opengl.org/wiki/GL_EXT_framebuffer_object

Thanks a lot, everyone. That clarifies things a lot.

D. Photon, the glFramebufferTextureLayer is slow thread is here:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=276074&page=1

How are the ‘Shadow’ family of samplers different from regular texture samplers? Do you have to use them on depth textures, or do they provide some automated filtering? Confused about this…

Thanks

Has anyone else here read the Frank Luna DirectX book (Introduction to 3D Game Programming with Direct3D)?
I was just thinking that OpenGL is screaming for a book like this. Some of the ones out there are good, but Luna's book laid down some basic techniques in about the clearest presentation I've seen!

Thanks. He was rendering to a 3D (volume) texture, not a 2D texture array. That could be it. Also may be a GPU/driver version thing.

How are the ‘Shadow’ family of samplers different from regular texture samplers? Do you have to use them on depth textures, or do they provide some automated filtering? Confused about this…

Use a sampler…Shadow IFF:

  1. You're providing a depth texture (GL_DEPTH_COMPONENT), AND
  2. You've enabled hardware depth comparisons (GL_TEXTURE_COMPARE_MODE).

The implication is correct. You can use a non-Shadow sampler to fetch values from a depth texture without the depth comparison, IIRC.
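Those two conditions boil down to one texture parameter. A hedged C sketch, where depthTex is a placeholder for a depth texture id:

```c
/* Sketch: toggle a depth texture between "Shadow sampler" use and
 * plain sampler use. With compare mode on, lookups return a 0/1
 * comparison result (sample with sampler2DShadow); with it off,
 * they return the raw depth value (sample with sampler2D).
 * depthTex is a placeholder texture id. */
glBindTexture(GL_TEXTURE_2D, depthTex);

/* enable hardware depth compare -> use sampler2DShadow */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE,
                GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

/* ...or disable it -> fetch raw depth with sampler2D */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
```

Mismatching the two (compare mode on with a non-Shadow sampler, or vice versa) gives undefined results, which is a common source of all-black or all-white shadow maps.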

I think I am starting to get it, thanks.

I'm using the approach from Dimitrov's code, but I am binding the layers to framebuffers in advance.

Does this sound like a good approach, or should I ditch the 3D textures altogether for speed?

I hadn't thought of it before, but when tapping a 2D texture you do bilinear interpolation from four nearby texels; does a 3D texture tap eight texels in a box surrounding your point? In that case, using a 3D texture just for its 2D layers would be a really inefficient approach?
(If using GL_LINEAR, that is. Does using GL_NEAREST on the 3rd dimension alleviate this? But it's not possible to set the filtering mode per dimension, right?)

I am very confused :stuck_out_tongue:

Is using GL_TEXTURE_2D_ARRAY_EXT a better way to go?

The other option is to manually create several 2D textures, but the code is a bit uglier…

*** Just noticed in the Dimitrov code that he is using TEXTURE_2D_ARRAY_EXT!

Thanks

Try both and see. If no difference, then if you don’t need a 3D texture, I wouldn’t use one. Because…

I hadn't thought of it before, but when tapping a 2D texture you do bilinear interpolation from four nearby texels; does a 3D texture tap eight texels in a box surrounding your point? In that case, using a 3D texture just for its 2D layers would be a really inefficient approach?

Very likely so, if 3D depth lookups/comparisons are even defined behavior.

(If using GL_LINEAR. Does using GL_NEAREST on the 3rd dimension alleviate this possible situation? But its not possible to set the interpolation type per dimension, right?)

AFAIK, no.

I am very confused :stuck_out_tongue:

No I think you’ve got it. Just a few little points to clear up.

Is using GL_TEXTURE_2D_ARRAY_EXT a better way to go?

That’s closer to what you need. So generally I’d use what you need unless you can find a compelling perf/support reason to use something else.

For example if you have to support really old cards (no texture arrays), then you may want to give 2D texture atlases a look or…

The other option is to manually create several 2D textures, but the code is a bit uglier…

Yeah. And you have to eat up N samplers in your shader, where N is the number of splits. So it’s harder to write a generic shadow application shader that “just works” with a startup-defined number of splits.
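The texture-array advantage can be sketched in GLSL: one sampler2DArrayShadow covers all splits, so the split count can be a plain uniform instead of being baked into a list of samplers. The names splitFar, shadowCoord, and viewDepth are assumptions for application-supplied data, and the array size of 8 is an arbitrary cap:

```glsl
// Sketch: generic CSM lookup with a single array shadow sampler,
// so the number of splits is startup-defined (capped at 8 here).
// splitFar[], shadowCoord[], viewDepth are assumed app-supplied.
#extension GL_EXT_texture_array : enable

uniform sampler2DArrayShadow shadowMaps;
uniform int   numSplits;
uniform float splitFar[8];      // far distance of each split
varying vec4  shadowCoord[8];   // light-space coords per split
varying float viewDepth;        // eye-space depth of the fragment

float shadowTerm()
{
    // pick the split this fragment falls in
    int split = numSplits - 1;
    for (int i = 0; i < numSplits; ++i)
        if (viewDepth <= splitFar[i]) { split = i; break; }

    vec4 c = shadowCoord[split];
    // for array shadow lookups the layer goes in .z and the
    // reference depth to compare against goes in .w
    return shadow2DArray(shadowMaps,
                         vec4(c.xy, float(split), c.z)).r;
}
```

With N separate 2D textures you would instead need N sampler uniforms and a branch (or N shader variants) to select among them.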

Thanks Dark Photon. Very helpful!

I am getting a signal now!

Now to try and figure out the light view matrix, given that I'm coming from a left-handed coordinate world…

Yeah, got it! Still have to work on the bias and the filtering.
Next step is to try harder shadow maps for point, cylinder and hemisphere lights.
Also have normal maps + bloom working. Couldn’t have done it without the help on the forum!

Results:

(embedded YouTube video)

Nice!

Thanks!
And again. Got deferred rendering going. Love it. Conceptually very simplifying.
Was able to do spot, point and directional shadow maps with a single base class and a unified abstraction!
(embedded YouTube video)