Need some advice about cascaded shadowmap

The only complete cascaded shadow map sample I can find on the internet is the one in the NVIDIA OpenGL SDK 10, but unfortunately it fails to render correctly on my ATI HD4670, while giving correct results on my GF9800/GTS250.

Running the program on the HD4670 produces a lot of shader compilation errors, and the result looks like the application only renders the first slice of the shadow map.

The code itself is very complex (for me) and involves using an FBO with a texture array.

They also use glFramebufferTextureLayerEXT(GL_FRAMEBUFFER_EXT,
GL_DEPTH_ATTACHMENT_EXT, depth_tex_ar, 0, i), which is the part I don't understand.

Does the above function (the non-EXT version) even exist in the GL 3.2 core spec?

Can somebody give me an example (the OpenGL 3.2 way) of how to set up an FBO with a texture array of depth components, and of selecting which depth layer to render to?

If the texture array / layer rendering method is not possible on the ATI HD4670, please suggest an alternative approach.

Thanks in advance.

The most common way to lay out cascades is in a texture atlas. Have a look at this presentation:

http://www.slideshare.net/repii/02-g-d-c09-shadow-and-decals-frostbite-final3flat

Also, just for getting to grips with FBOs, see some of the presentations from AMD and Nvidia, e.g.

http://developer.amd.com/media/gpu_assets/FramebufferObjects.pdf
http://download.nvidia.com/developer/presentations/2005/GDC/OpenGL_Day/OpenGL_FrameBuffer_Object.pdf

http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/OpenGL/simple_framebuffer_object.zip

Hope this helps.

Thanks for your answer.

I already know how to use FBOs (I'm already using them for standard shadow mapping, deferred shading, and post-processing effects), but I can't find an example of using one with an array texture.

Using a texture atlas sounds like a great idea, but I would like to start with something simple first.

Yeah, well, the cons of using atlases (vs. texture arrays) are: you have to be careful about filtering across split-map boundaries, your split maps obviously can’t be as big as they can be with texture arrays because you’re packing multiple split maps into one texture, and you’d have to reduce your split-map resolution as you increase the number of splits just so all the maps would fit.
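To make the packing concrete, here's a small sketch of the bookkeeping an atlas needs (all names here are hypothetical, just for illustration): computing the viewport rectangle of each split inside a square atlas. You'd pass the result to glViewport()/glScissor() before rendering that split, and derive the matching scale/offset for the lookup shader from the same numbers.

```c
/* Hypothetical helper: where split number `split` lives inside a square
 * shadow-map atlas laid out as splits_per_row x splits_per_row cells. */
typedef struct { int x, y, w, h; } Viewport;

Viewport atlas_viewport(int atlas_size, int splits_per_row, int split)
{
    int cell = atlas_size / splits_per_row;   /* side length of one split map */
    Viewport v;
    v.x = (split % splits_per_row) * cell;    /* column within the atlas */
    v.y = (split / splits_per_row) * cell;    /* row within the atlas */
    v.w = cell;
    v.h = cell;
    return v;
}
```

With four splits in a 2048x2048 atlas, each split map is only 1024x1024 — exactly the resolution cost mentioned above.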

Rendering Cascaded Shadow Map splits into slices (layers) of a 2D texture array is really pretty simple (the concept is one texture that’s just a stack of 2D textures).

Allocate one like this:

  glTexImage3D( GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
                w, h, num_layers,                  // one layer per CSM split
                0,                                 // border must be 0
                GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );  // no initial data

Target one specific layer of the texture for FBO rendering like this:

  glFramebufferTextureLayer( GL_FRAMEBUFFER,
                             GL_DEPTH_ATTACHMENT,
                             tex_handle, 0, layer );  // mip level 0, layer = split index

And assuming you’ve enabled hardware shadow comparisons on this texture (GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_REF_TO_TEXTURE), access the texture array in a GLSL shader using:

  uniform sampler2DArrayShadow shadow_map;

  float visibility = shadow2DArray( shadow_map, texcoord.xyzw ).r;

where texcoord.xy is the light-space texture coordinate, .z is the texture layer (i.e. CSM split), and .w is the reference depth for the shadow comparison. Of course, there are lots of variations (other depth texture formats, using non-depth-texture formats and doing your own shadow comparison, shadow map filtering, etc.) – just trying to give you the most basic form.
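For completeness, choosing which split (array layer) a fragment falls in is usually just a comparison of its view-space depth against the splits' far distances. This is typically done in the shader, but here's the idea as plain C (the function name and split distances are made up for illustration):

```c
/* Hypothetical helper: pick the first cascade whose far distance covers
 * the fragment's view-space depth; clamp to the last split otherwise. */
int pick_split(const float *split_far, int num_splits, float view_depth)
{
    for (int i = 0; i < num_splits; ++i)
        if (view_depth <= split_far[i])
            return i;           /* nearest split that still contains this depth */
    return num_splits - 1;      /* beyond the last far plane */
}
```

The returned index is what ends up in the layer component of the shadow-map lookup coordinate.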

If you have any other specific questions, just ask!

I know this thread is old, but I stumbled over a line in the spec of glTexImage3D that says:

GL_INVALID_ENUM is generated if format is not an accepted format constant. Format constants other than GL_STENCIL_INDEX and GL_DEPTH_COMPONENT are accepted.

So is

glTexImage3D( GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
    w,h,num_layers, GL_FALSE,
    GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0 );

potentially wrong?

I know this thread is old, but I stumbled over a line in the spec of glTexImage3D that says:

First, you could have made a new thread for this.

Second, what spec contains that line? I checked the core 4.2 specification, and it doesn’t say that.

No, that should be perfectly legal. You can have 2D arrays or cube arrays of depth maps.

www.opengl.org/sdk/docs/man4/xhtml/glTexImage3D.xml
Look at the second point in the errors section.

I implemented this yesterday and it works; maybe I don’t understand the spec?

/Edit:
The spec also says:
“GL_INVALID_OPERATION is generated if format or internalFormat is GL_DEPTH_COMPONENT, GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32.”

The man page seems to be wrong, but there is nothing like what you’ve mentioned in the GL spec. First of all, the spec never uses the GL_ prefix, so I don’t know where you got that from.

I copied this section from this page: www.opengl.org/sdk/docs/man4/xhtml/glTexImage3D.xml
Isn’t this the official spec?

Isn’t this the official spec?

No. That’s the man page. It’s just documentation; the official OpenGL Specification is here.

Anything that calls itself a “specification” will not just be a list of functions and what they do.

No, it is not; it’s just a man page that seems to contain some mistakes.

Here are the specs: http://www.opengl.org/registry/
The latest core spec is this: http://www.opengl.org/registry/doc/glspec42.core.20110822.pdf