Deferred Rendering Z Fighting

Hello all,

I was working on my renderer, which includes a deferred rendering option for lights using MRT. Unfortunately it's not working: it produces a grainy image, and I've narrowed the problem down to the position and normal images.

The albedo image seems to come through unscathed. I could share some code if that’d help, but I’m really looking for ideas of how this could happen. I hate to blame my drivers at first blush. :dejection:

The position image displays this:
[ATTACH=CONFIG]1648[/ATTACH]

Thanks,
Devin

[QUOTE=devmane144;1289894]...it's not working and producing a grainy image, and I've narrowed it down to the images for both the positions and normals... The position image displays this:

[ATTACH=CONFIG]2589[/ATTACH][/QUOTE]

Could be many things.
But at first glance, it looks like you have a coordinate-system issue (some kind of origin looks to be at the middle bottom of the house).
So, ensure all the data is stored in the same coordinate system, or that all the calculations happen in the same coordinate system.

And ensure that they are the same calculations as well.

For example, a common optimization might be to precalculate the MVP matrix on the CPU. If some passes use this precalculated MVP whereas other passes do the MVP calculation themselves in their shader, you might have small floating point differences between what the CPU calculates and what the GPU calculates, resulting in this kind of Z-fighting.
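The mechanism here is that floating-point arithmetic is not associative, so a matrix product grouped as (P·V)·M on the CPU can differ in its last bits from P·(V·M) computed in the shader. A minimal Python sketch of the underlying effect, with scalar floats standing in for matrix elements (the specific values are just illustrative):

```python
# Floating-point addition is not associative: regrouping the same
# operations changes the last bits of the result. The same thing
# happens when a matrix product is grouped differently on CPU vs GPU.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # grouping 1: e.g. CPU precomputes a partial product first
right = a + (b + c)  # grouping 2: e.g. shader groups the operations differently

print(left == right)   # False
print(left, right)     # differs only in the last bits
```

Differences this small are invisible almost everywhere except in a depth comparison between two passes, which is exactly where they show up as Z-fighting.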

You're correct; I was doing some of the calculation on the CPU. So I made a small tweak to ensure the VP matrix doesn't change between drawn models within a frame.

The results in the picture I posted last are actually exactly what I expected, minus the fuzz of course. I am certain that the problem is not in the steps after the G-Buffer generation.

The VP matrix and the model matrix go separately into the shader. This generates the images representing albedo, position, and normals, and those shaders look like this:


//VSHADER
#version 330 core

layout(location = 0) in vec3 verPos;
layout(location = 1) in vec2 UV;
layout(location = 2) in vec3 normal;

uniform mat4 modelMatrix;
uniform mat4 VP;

out vec3 worldPos;
out vec2 f_UV;
out vec3 worldNorm;

void main(){
    worldPos = (modelMatrix * vec4(verPos, 1.0)).xyz;
    f_UV = UV;
    // NOTE: correct only for rigid/uniformly-scaled models; with non-uniform
    // scaling this should use the inverse-transpose of modelMatrix instead
    worldNorm = (modelMatrix * vec4(normal, 0.0)).xyz;
    gl_Position = VP * modelMatrix * vec4(verPos, 1.0);
}


//FSHADER
#version 330 core

in vec3 worldPos;
in vec2 f_UV;
in vec3 worldNorm;

uniform sampler2D TextSamp2D;

layout (location = 0) out vec4 gAlbedoSpec;
layout (location = 1) out vec3 gPos;
layout (location = 2) out vec3 gNorm;

void main(){
    gPos = worldPos.xyz;
    gNorm = normalize(worldNorm);
    gAlbedoSpec.xyz = texture(TextSamp2D, f_UV).rgb;
    gAlbedoSpec.w = 1.0;
}

I can post more code if it would help, but the process looks roughly like this:

The G-Buffer is made like this.

  1. Create one RGBA texture for Albedo
  2. Create 2 RGB32F textures for the positions and normals
    a. I also tried making these RGBA as a test.
    b. All of the above use GL_LINEAR filtering, with S and T wrap clamped to edge.
  3. Create a renderbuffer for the depth values.

On Loop

  1. Bind the G-Buffer framebuffer
  2. Clear the Color and Depth buffers
  3. Draw to G-Buffer
    a. I am certain that the problem comes from this step or earlier.
  4. Use G-Buffer to draw proper models

I should also mention that the albedo draws the way I expect it to.

[ATTACH=CONFIG]1649[/ATTACH]

Hello all,

I figured it out! The actual problem was that the fragment shader writing to the MRT declared vec3 outputs for the two textures that were going haywire. I posted the working shader below. Apparently if you don't define the alpha component, the outputs don't get written correctly, even though I had purposely removed the alpha channel from the target textures. I'm sure that's written down somewhere, but I wasn't aware of it! :smiley:

Thank you everybody for all of the suggestions!


//FSHADER
#version 330 core
 
in vec3 worldPos;
in vec2 f_UV;
in vec3 worldNorm;
 
uniform sampler2D TextSamp2D;
 
layout (location = 0) out vec4 gAlbedoSpec;
layout (location = 1) out vec4 gPos;
layout (location = 2) out vec4 gNorm;
 
void main(){
    gPos = vec4(worldPos.xyz, 1.0);
    gNorm = vec4(normalize(worldNorm), 1.0);
    gAlbedoSpec.xyz = texture(TextSamp2D, f_UV).rgb;
    gAlbedoSpec.w = 1.0;
}

Do you have GL_BLEND enabled? Try disabling it. (I’ve hit that before myself with MRT rasterization and deferred)

Also for testing, disable GL_ALPHA_TEST, GL_SAMPLE_ALPHA_TO_COVERAGE, and in general anything which might make use of the fragment alpha value during fragment or pixel operations in the pipeline.

If you establish that this is related to the problem you were seeing, note that you can control whether GL_BLEND is on and off on a per-render-target basis, as well as which blend function is active for each render target.

I disabled GL_ALPHA_TEST, GL_SAMPLE_ALPHA_TO_COVERAGE, and GL_BLEND using glDisable, and also tried disabling GL_BLEND per render target using glDisablei, to try to get the weirdness to go away. Disabling GL_BLEND with glDisable did make the weird artifacts go away, but the unfortunate side problem is that I want blending for how I draw skyboxes.

The only thing that's still a mystery to me is why the GL_RGB16 textures require an alpha output from the shader. Maybe I used glDisablei wrong. I tried glDisablei(GL_BLEND, "number of the color attachment here") to disable blending for that render target. Is that about right?

Edit: I can words good.

If blending is enabled for a particular colour buffer, then:

  • If the blending function uses source alpha (GL_SRC_ALPHA or GL_ONE_MINUS_SRC_ALPHA), the corresponding fragment shader output must be a vec4 and the fragment shader must assign to the alpha component.
  • If the blending function uses destination alpha (GL_DST_ALPHA or GL_ONE_MINUS_DST_ALPHA), the colour buffer should have an alpha channel.
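Why an undefined source alpha corrupts the written values can be sketched numerically. Below is the standard GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blend equation reproduced in Python; the 0.37 is a hypothetical stand-in for whatever garbage the undefined shader alpha happens to contain for a given fragment:

```python
def blend(src_rgb, src_a, dst_rgb):
    """GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blending:
    out = src * src_a + dst * (1 - src_a), applied per channel."""
    return tuple(s * src_a + d * (1.0 - src_a)
                 for s, d in zip(src_rgb, dst_rgb))

world_pos = (1.0, 2.0, 3.0)   # value the G-buffer pass meant to store
cleared   = (0.0, 0.0, 0.0)   # destination buffer after clearing

# With a well-defined alpha of 1.0, the source value passes through intact:
print(blend(world_pos, 1.0, cleared))   # (1.0, 2.0, 3.0)

# With an undefined alpha (arbitrary per fragment), the stored position is
# scaled unpredictably -- which reads back as a "grainy" G-buffer:
print(blend(world_pos, 0.37, cleared))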

glEnable/glDisable and glBlendFunc should set the state for all buffers, but it’s not beyond the bounds of possibility that a driver bug causes them to only affect buffer zero; use the indexed versions (glEnablei, glDisablei, glBlendFunci) to be safe.

Those rules make total sense to me; thank you for sharing!

I’ll leave my blending alone for now then, and I’ll leave in the w components for the shader.

That's interesting, though, that the state can differ between buffers. I might use that for something sometime. I always just assumed that you had to glEnable/glDisable per render context, not per draw buffer.