Shadow mapping not working for textures that aren't the size of the screen

First of all, hello everyone! Just joined this forum :slight_smile:

I am working on a 3D engine (using LWJGL for Java). As a rendering technique I use deferred rendering, which I have managed to get working. I am now trying to add shadow mapping and I have run into a problem: shadow mapping works only if the shadow map texture is EXACTLY the same size as the screen.

Here are the steps I take for rendering and shadow mapping (a rough code sketch follows the list):

  1. Render geometry for g-buffer (and store color, normal and depth)
  2. For each light in the scene (for now):
    • bind the shadowmap
    • draw all the geometry and store depth and depth squared in it (in preparation for variance shadow mapping)
    • bind a framebuffer (which I will use for post-processing)
    • render each light using the g-buffer and additive blending
  3. Post-process (FXAA)
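
In (very rough) code, the frame looks something like this. This is only a sketch: the FBO handles, the Light class and the draw*() helpers are placeholders, not my actual engine code (LWJGL 3 style bindings).

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

void renderFrame() {
    // 1. Geometry pass: fill the g-buffer (color, normal, depth)
    glBindFramebuffer(GL_FRAMEBUFFER, gBufferFbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawSceneGeometry();

    // 2. For each light: shadow pass, then additive lighting pass
    for (Light light : lights) {
        // 2a. Render depth and depth squared into the light's shadow map FBO
        glBindFramebuffer(GL_FRAMEBUFFER, light.shadowMapFbo);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawSceneDepthAndDepthSquared(light);

        // 2b. Accumulate this light into the post-processing FBO
        glBindFramebuffer(GL_FRAMEBUFFER, postProcessFbo);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);     // additive blending
        drawLightingPass(light);         // reads the g-buffer and the shadow map
        glDisable(GL_BLEND);
    }

    // 3. Post-process (FXAA) into the default framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    drawFxaaPass();
}
// (note: there is no glViewport() call anywhere in this loop)
```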

In the lighting shaders I do the following (a CPU-side sketch of the same math follows the list):

  1. Reconstruct world position from depth texture (g-buffer one)
  2. Multiply worldPos by light matrix (containing an orthoProj * lightView)
  3. Transform to texture coordinates: (lightSpacePos.xyz / lightSpacePos.w) * 0.5 + 0.5, where lightSpacePos is the result of step 2
  4. Use the step() function to decide whether the pixel is in shadow
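
For reference, here is the same math done on the CPU with JOML, purely as an illustration: the ortho bounds and the vector parameters are made-up values, and in the engine this actually runs in the lighting fragment shader.

```java
import org.joml.Matrix4f;
import org.joml.Vector3f;
import org.joml.Vector4f;

public final class ShadowCoords {

    // Turns a reconstructed world-space position into [0, 1] shadow-map coordinates.
    public static Vector3f toShadowMapCoords(Vector3f worldPos,
                                             Vector3f lightPos,
                                             Vector3f lightTarget) {
        Matrix4f orthoProj = new Matrix4f().ortho(-20f, 20f, -20f, 20f, 1f, 100f);
        Matrix4f lightView = new Matrix4f().lookAt(lightPos, lightTarget,
                                                   new Vector3f(0f, 1f, 0f));
        // Step 2: light matrix = orthoProj * lightView, applied to the world position
        Matrix4f lightMatrix = new Matrix4f(orthoProj).mul(lightView);
        Vector4f clip = lightMatrix.transform(new Vector4f(worldPos, 1.0f));

        // Step 3: perspective divide, then bias NDC [-1, 1] into texture space [0, 1]
        return new Vector3f(clip.x / clip.w, clip.y / clip.w, clip.z / clip.w)
                .mul(0.5f).add(0.5f, 0.5f, 0.5f);
        // Step 4 happens in the shader: compare .z against the depth stored in the
        // shadow map using step() to decide whether the pixel is in shadow.
    }
}
```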

As I said, the mapping works, but only if the shadow map is the same size as the screen.

Thank you!

Are you calling glViewport() when binding the shadow map framebuffer?

I am not calling glMatrix / glColor / glViewport or any GL functions of that kind.
I am passing projection, view and transform matrices to the shaders and doing the calculations there.
The viewport, I guess, is still in [-1 … 1] coordinates. :smiley:

If you need any code, tell me :slight_smile:
Thanks!

[QUOTE=marcu.iulian13;1270063]I am not calling glMatrix / glColor / glViewport or any GL functions of that kind.
I am passing projection, view and transform matrices to the shaders and doing the calculations there.[/quote]

Well, there’s your problem: one of those functions is not like the other.

glColor and the like were deprecated in OpenGL 3.0 and removed in 3.1. glViewport was not; it is still something you need to call. The vertex shader does not replace the viewport transform, so you still have to call glViewport to establish your viewport.

The viewport is not in normalized coordinates. It’s in the space of the image you’re rendering to. The viewport transform defines how your vertices (output from the vertex shader) are mapped to the screen/render target.

So if you want to render to a target that isn’t the exact size of the screen, you need to change the viewport to match the size of the render target, then change it back when you switch FBOs.
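
Concretely, something along these lines (a sketch with placeholder handle and size names, LWJGL-style):

```java
// Shadow pass: viewport must match the shadow map's dimensions
glBindFramebuffer(GL_FRAMEBUFFER, shadowMapFbo);
glViewport(0, 0, shadowMapWidth, shadowMapHeight);   // e.g. 1024 x 1024
// ... render depth into the shadow map ...

// Lighting / post-processing pass: viewport back to the screen-sized target
glBindFramebuffer(GL_FRAMEBUFFER, postProcessFbo);
glViewport(0, 0, screenWidth, screenHeight);
// ... render the lights ...
```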

Oh, wow. I really didn’t know that :slight_smile: I’ll give it a try and tell you how it goes.

Later edit: Worked like a charm! Thank you for the quick answer!

Thank you!

To elaborate: a viewport’s bounds are in integer pixel coordinates relative to the lower-left corner of the rendering target (e.g. window, off-screen surface, or FBO). The first time that a context is bound, the viewport’s bounds are set to the bounds of the rendering target (i.e. x and y are set to zero and width and height are set to the width and height of the target). Thereafter, the viewport’s bounds won’t change unless you call e.g. glViewport(); they don’t change automatically when you bind a FBO, or when you bind a context to a different window, or if the window size changes.
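
If you'd rather not hard-code the screen size, one option (again just a sketch with placeholder names, LWJGL 3 bindings) is to query the current viewport before the shadow pass and restore it afterwards:

```java
// Save the current viewport (x, y, width, height)
int[] prevViewport = new int[4];
glGetIntegerv(GL_VIEWPORT, prevViewport);

// Shadow pass with a viewport matching the shadow map
glBindFramebuffer(GL_FRAMEBUFFER, shadowMapFbo);
glViewport(0, 0, shadowMapWidth, shadowMapHeight);
// ... render the shadow map ...

// Switch back and restore the saved viewport
glBindFramebuffer(GL_FRAMEBUFFER, previousFbo);
glViewport(prevViewport[0], prevViewport[1], prevViewport[2], prevViewport[3]);
```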