Rendering scene from another viewpoint into depth texture

Hey guys.
I’m trying to get shadow mapping working in my engine.
Now I’m trying to render the scene from different viewpoints (one per light source).

Is there a performant way to do this?

My thought:
Iterating through all meshes and drawing them with a specific depth shader. But I think that’s not the best solution.

Thank you in advance for your answer :slight_smile:

There isn’t an alternative to rendering the scene (or the relevant portion of it) for each light source.

The primary optimisation is that for static meshes with a static light source, there’s no point in generating the depth map each frame. And if the light source moves slowly, you can re-generate the depth map every N frames rather than every frame.
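
A minimal sketch of that caching idea in C++/OpenGL (the ShadowLight struct, renderDepthPass() helper and refresh period are placeholders of my own, not code from your engine):

```cpp
#include <glad/glad.h>  // or whichever GL loader the engine uses

struct ShadowLight {
    bool   isStatic      = false;  // light never moves
    int    refreshPeriod = 1;      // re-render every N frames for slowly moving lights
    bool   depthMapValid = false;  // set to false when the light or static geometry changes
    GLuint depthFbo      = 0;      // FBO with a depth texture attachment
};

// Hypothetical engine hook: draws the scene with the depth-only program.
void renderDepthPass(const ShadowLight& light);

void updateShadowMap(ShadowLight& light, long frame, bool staticGeometryChanged)
{
    if (staticGeometryChanged)
        light.depthMapValid = false;

    // Static light over static geometry: render once, then reuse.
    if (light.isStatic && light.depthMapValid)
        return;

    // Slowly moving light: only refresh every N frames.
    if (!light.isStatic && (frame % light.refreshPeriod) != 0)
        return;

    glBindFramebuffer(GL_FRAMEBUFFER, light.depthFbo);
    glClear(GL_DEPTH_BUFFER_BIT);
    renderDepthPass(light);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    light.depthMapValid = true;
}
```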

Another important optimisation is to avoid unnecessary detail. If a mesh isn’t visible, anything which doesn’t affect its silhouette is irrelevant (e.g. a wall of a building with inset windows could be reduced to a rectangle). If the mesh itself is visible, you need to ensure that self-shadowing is correct (but you may still be able to simplify the mesh, e.g. by removing edges which exist because of texture seams rather than geometry).

If the scene normally requires multiple draw calls because of different shaders or uniforms, that need often doesn’t apply if you only need the depth value. E.g. the only fragment shaders required are a trivial one for opaque meshes and another for meshes with an alpha map. So you may be able to use fewer draw calls than for normal rendering.
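
For illustration, those two fragment shaders can be as small as this (GLSL embedded as C++ string literals; the uniform and varying names are my own, and for fully opaque geometry the fragment shader can even be omitted entirely):

```cpp
// Opaque meshes: no colour output is needed, the depth write does all the work.
const char* depthFragOpaque = R"(
#version 330 core
void main() { }
)";

// Alpha-mapped meshes: discard fragments the alpha map treats as holes,
// so they don't write depth and don't cast shadows.
const char* depthFragAlphaTested = R"(
#version 330 core
in vec2 vTexCoord;
uniform sampler2D uAlphaMap;
void main()
{
    if (texture(uAlphaMap, vTexCoord).a < 0.5)
        discard;
}
)";
```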

If you have complex vertex shaders (e.g. skeletal animation), you can use transform-feedback to capture the results so that the computations don’t need to be performed repeatedly.
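
A sketch of the capture side, assuming the skinning vertex shader writes its result to an output I’ll call skinnedPosition (all names here are mine):

```cpp
#include <glad/glad.h>  // or whichever GL loader the engine uses

// At program-creation time, before linking, tell GL which output to capture.
void setUpSkinningCapture(GLuint skinningProgram)
{
    const char* varyings[] = { "skinnedPosition" };
    glTransformFeedbackVaryings(skinningProgram, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(skinningProgram);
}

// Once per frame, after the bone matrices have been updated.
void captureSkinnedPositions(GLuint skinningProgram, GLuint skinnedVbo, GLsizei vertexCount)
{
    glUseProgram(skinningProgram);
    glEnable(GL_RASTERIZER_DISCARD);       // we only want the vertex shader outputs
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, skinnedVbo);
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, vertexCount);   // one captured output per input vertex
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
}
```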

Thanks for your reply :slight_smile:

The primary optimisation is that for static meshes with a static light source, there’s no point in generating the depth map each frame. And if the light source moves slowly, you can re-generate the depth map every N frames rather than every frame.

But what about the scenario where my character is under one of these static lights and moves?

If the scene normally requires multiple draw calls because of different shaders or uniforms, that need often doesn’t apply if you only need the depth value. E.g. the only fragment shaders required are a trivial one for opaque meshes and another for meshes with an alpha map. So you may be able to use fewer draw calls than for normal rendering.

I now have a shader program using only a vertex shader for creating a depth buffer.
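It basically just transforms positions into the light’s clip space, roughly along these lines (GLSL as a C++ string literal; the uniform names are placeholders):

```cpp
const char* depthVertSrc = R"(
#version 330 core
layout(location = 0) in vec3 aPosition;
uniform mat4 uModel;          // per-object model matrix
uniform mat4 uLightViewProj;  // light's view-projection matrix
void main()
{
    gl_Position = uLightViewProj * uModel * vec4(aPosition, 1.0);
}
)";
```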

If you have complex vertex shaders (e.g. skeletal animation), you can use transform-feedback to capture the results so that the computations don’t need to be performed repeatedly.

I once coded a project with transform feedback, but I never really managed to understand its meaning or usage.

Then that’s dynamic geometry, which needs to be rendered each frame. But you don’t need to re-render all of the static geometry along with it. You can have one depth buffer for static geometry and another for dynamic geometry, and use both when calculating the shadow in the actual rendering stage (a fragment is only lit if it passes both depth tests).
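
In the lighting shader that combination can look roughly like this (GLSL as a C++ string literal, trimmed to the relevant part; the sampler and varying names are just placeholders):

```cpp
const char* shadowLookupSrc = R"(
uniform sampler2DShadow uStaticShadowMap;
uniform sampler2DShadow uDynamicShadowMap;
in vec4 vShadowCoord;   // fragment position in the light's clip space, with bias applied

float shadowFactor()
{
    // textureProj returns the depth-comparison result:
    // 0.0 = fully occluded in that map, 1.0 = fully lit.
    float staticLit  = textureProj(uStaticShadowMap,  vShadowCoord);
    float dynamicLit = textureProj(uDynamicShadowMap, vShadowCoord);
    // Lit only if it passes BOTH depth tests.
    return min(staticLit, dynamicLit);
}
)";
```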

So you probably only need one draw call for each light source. If you’re performing broad-phase frustum culling in the client, you can use glMultiDrawElements() to draw an arbitrary subset of the scene for each view.
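
Roughly like this, assuming all meshes share a single VAO and index buffer (the Mesh bookkeeping below is just an illustration, not anything from your engine):

```cpp
#include <glad/glad.h>  // or whichever GL loader the engine uses
#include <cstdint>
#include <vector>

struct Mesh {
    GLsizei   indexCount;        // number of indices for this mesh
    uintptr_t indexOffsetBytes;  // byte offset into the shared index buffer
};

// Draw only the meshes that survived broad-phase frustum culling, in one call.
void drawVisibleDepthOnly(const std::vector<Mesh>& visible)
{
    std::vector<GLsizei>     counts;
    std::vector<const void*> offsets;
    counts.reserve(visible.size());
    offsets.reserve(visible.size());

    for (const Mesh& m : visible) {
        counts.push_back(m.indexCount);
        // With an element buffer bound, the "indices" pointers are byte offsets.
        offsets.push_back(reinterpret_cast<const void*>(m.indexOffsetBytes));
    }

    // Assumes the shared VAO and the depth-only program are already bound.
    glMultiDrawElements(GL_TRIANGLES, counts.data(), GL_UNSIGNED_INT,
                        offsets.data(), static_cast<GLsizei>(counts.size()));
}
```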

I once coded a project with transform feedback, but I never really managed to understand its meaning or usage.

Transform feedback just captures the outputs from the vertex shader (or the geometry shader, if one is present). If you need to use the same results multiple times (e.g. for different views or passes), this avoids having to repeat (most of) the calculations for each view or pass (for different views, vertex positions still need to be transformed for each view).
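
Schematically, once the skinned positions have been captured into a buffer, each shadow pass just reuses that buffer as ordinary static vertex data (the VAO, program and uniform names here are placeholders):

```cpp
#include <glad/glad.h>  // or whichever GL loader the engine uses

void drawCapturedMeshForLight(GLuint skinnedVao, GLuint depthOnlyProgram,
                              GLint lightViewProjLoc, const float* lightViewProj,
                              GLsizei indexCount)
{
    glBindVertexArray(skinnedVao);        // attribute 0 reads the captured buffer
    glUseProgram(depthOnlyProgram);       // trivial position-only vertex shader
    glUniformMatrix4fv(lightViewProjLoc, 1, GL_FALSE, lightViewProj);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    // Call again with a different view-projection for each additional light;
    // the skinning maths never has to run a second time.
}
```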

If the vertex shader isn’t doing anything besides transforming vertex positions, there’s no advantage to using transform feedback.