Deferred Rendering - Framebuffer w/ Renderbuffer (Help Optimizing)

I’m fairly new to framebuffers/renderbuffers, but I have deferred rendering mostly working. My implementation uses framebuffers in conjunction with a renderbuffer, which I’ve had a hard time with.

I currently draw the meshes into the G-buffer, then feed its textures through the lighting pass, which outputs to an offscreen buffer that finally gets blitted to the screen. However, this isn’t optimal: as I understand it, positions can be reconstructed from just the depth information, but I haven’t a clue how to do that with my current setup (or whether it’s even possible).

Here’s my buffer(s) creation:

void CoolFunc (int width, int height)
{
#pragma region GBuffer
	glBindTexture (GL_TEXTURE_2D, m_gtextures[GBufferID::POSITIONS]);
	glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_FLOAT, nullptr);
	// Required for sampling: the default min filter expects mipmaps, which these render targets don't have
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glBindTexture (GL_TEXTURE_2D, 0);

	glBindTexture (GL_TEXTURE_2D, m_gtextures[GBufferID::NORMALS]);
	glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_FLOAT, nullptr);
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glBindTexture (GL_TEXTURE_2D, 0);

	glBindRenderbuffer (GL_RENDERBUFFER, m_gBufferRBO);
	glRenderbufferStorage (GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
	glBindRenderbuffer (GL_RENDERBUFFER, 0);

	glBindFramebuffer (GL_FRAMEBUFFER, m_gBufferFBO);
	glFramebufferRenderbuffer (GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, m_gBufferRBO);
	glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_gtextures[GBufferID::POSITIONS], 0);
	glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_gtextures[GBufferID::NORMALS], 0);

	assert (glCheckFramebufferStatus (GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);

	const std::vector<GLenum> gbufferAttachments { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
	glDrawBuffers (static_cast<GLsizei> (gbufferAttachments.size()), gbufferAttachments.data());

	glBindFramebuffer (GL_FRAMEBUFFER, 0);
#pragma endregion
#pragma region LBuffer
	glBindTexture (GL_TEXTURE_2D, m_ltextures);
	glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
	glBindTexture (GL_TEXTURE_2D, 0);

	glBindFramebuffer (GL_FRAMEBUFFER, m_lBufferFBO);
	glFramebufferRenderbuffer (GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, m_gBufferRBO);
	glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_ltextures, 0);

	assert (glCheckFramebufferStatus (GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);

	const std::vector<GLenum> lbufferAttachments { GL_COLOR_ATTACHMENT0 };
	glDrawBuffers (static_cast<GLsizei> (lbufferAttachments.size()), lbufferAttachments.data());

	glBindFramebuffer (GL_FRAMEBUFFER, 0);
#pragma endregion
}

Pseudo-ish render loop:

void RenderCoolFunc (void)
{
    // All data is sent to shaders (Mostly using UniformBufferObjects)
	
#pragma region Phase 1 - Geometry to GBuffer
	glUseProgram (Program::GEOMETRY);
	glBindFramebuffer (GL_FRAMEBUFFER, m_gBufferFBO);
	// Clear all buffers
	glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

	// Do some pipeline switch enabling i.e. depth and stencil test... etc.

	glBindVertexArray (...);
	glBindBuffer(GL_DRAW_INDIRECT_BUFFER, ...);
	// One beautiful draw call
	glMultiDrawElementsIndirect (...);

	// Safety Unbind
	glBindBuffer(GL_DRAW_INDIRECT_BUFFER, 0);
	glBindVertexArray (0);
	glBindFramebuffer (GL_FRAMEBUFFER, 0);
	glUseProgram (0);
#pragma endregion

#pragma region Phase 2 - GBuffer to LBuffer

	glBindFramebuffer(GL_FRAMEBUFFER, m_lBufferFBO);

#pragma region Pass 1 - Global light
	glUseProgram (Program::GLIGHT);

	// Do More pipeline switch enabling...

	glActiveTexture (GL_TEXTURE0);
	glBindTexture (GL_TEXTURE_2D, m_gtextures[GBufferID::POSITIONS]);
	glUniform1i (glGetUniformLocation (Program::GLIGHT, "u_position"), 0);

	glActiveTexture (GL_TEXTURE1);
	glBindTexture (GL_TEXTURE_2D, m_gtextures[GBufferID::NORMALS]);
	glUniform1i (glGetUniformLocation (Program::GLIGHT, "u_normal"), 1);

	glBindVertexArray (...);
	glDrawArrays (GL_TRIANGLE_FAN, 0, 4); // Fullscreen Quad

	//// Safety Unbind
	glBindVertexArray (0);
	glBindFramebuffer(GL_FRAMEBUFFER, 0);
	glUseProgram (0);
#pragma endregion
#pragma endregion

#pragma region Final Phase - LBuffer to Screen
	glBindFramebuffer (GL_READ_FRAMEBUFFER, m_lBufferFBO);
	glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);

	glBlitFramebuffer (0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
	glBindFramebuffer (GL_FRAMEBUFFER, 0);
#pragma endregion
}

Any help is greatly appreciated!
Also, would there be any way to optimize the normals, or anything else for that matter?

It’s not necessary to store positions in the G-buffer, as they can be calculated from gl_FragCoord.xy and the depth. However, you need to be careful about the accuracy of the depth values.

Also, using GL_RGB32F for normals is overkill. Use GL_RGB10_A2, GL_RGB8 or GL_RGB8_SNORM instead.

I understand that storing the vec3 positions isn’t necessary; that’s what I’m hoping to change. But I don’t know how I’d go about doing that with my current approach. I’ve tried what I’ve seen most people do, at least to the best of my understanding, but with very little success.

From what I’ve read, I have to pass the depth as a texture into the shaders and use texture2D (u_depth, gl_FragCoord.xy), which is what you’re saying, right?

Also, using GL_RGB32F for normals is overkill. Use GL_RGB10_A2, GL_RGB8 or GL_RGB8_SNORM instead.

Ah, that is good to know.

GL_RGB8 or GL_RGB8_SNORM

Actually, do not use either of them. 3 channel formats are not required formats for render targets. As such, you can’t rely on implementations to allow you to render to them.

If you don’t like RGB10_A2, you should use RGBA8_SNORM or RGBA8.
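
For concreteness, here is a sketch of what the GL_RGB10_A2 route looks like in the shaders. The names v_normal / o_normal / u_normal / v_uv are assumptions, not from the original post; with a signed format like RGBA8_SNORM the remapping below isn’t needed.


// Geometry-pass fragment shader (sketch). Host side, allocate the attachment
// with GL_RGB10_A2 instead of GL_RGB32F:
//   glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB10_A2, width, height, 0,
//                 GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, nullptr);
in vec3 v_normal;
layout (location = 1) out vec4 o_normal;

void main ()
{
    // Remap [-1,1] into the [0,1] range an unsigned-normalized format stores
    o_normal = vec4 (normalize (v_normal) * 0.5 + 0.5, 0.0);
}

// Lighting-pass side: undo the remap after sampling, e.g.
//   vec3 n = normalize (texture (u_normal, v_uv).xyz * 2.0 - 1.0);
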

[QUOTE=SharkByte;1280763]I understand that storing the vec3 positions isn’t necessary that’s what I’m hoping to change. But I don’t know how I’d go about doing that with current approach. I’ve tried what I’ve seen most do, at least to the best of my understanding but with very little success.

From what I’ve read, I have to pass the depth as a texture into the shaders and use texture2D (u_depth, gl_FragCoord.xy) which is what you’re saying right?[/QUOTE]

Yes, exactly. If you store your depth buffer in a texture instead of a renderbuffer, you can bind that to one of your 2nd-pass shader sampler2D inputs and then pull in the value with texture() (or texture2D(), for old GLSL versions). Note that texture() takes normalized 0…1 texture coordinates, so scale gl_FragCoord.xy by the inverse viewport size (or use texelFetch() with the integer pixel coordinates). It’ll read in as a 0…1 value (representing depth values from near to far).

Then, you use a little math in the 2nd-pass shader to convert that Z depth value plus XY position into a 3D eye-space position. There are a number of ways to do this. For instance, use this if you are rendering your G-buffer with a perspective projection (the usual case):


vec3 PositionFromDepth_DarkPhoton(in float depth)
{
  vec2 ndc;             // Reconstructed NDC-space position
  vec3 eye;             // Reconstructed EYE-space position
 
  eye.z = near * far / ((depth * (far - near)) - far);
 
  ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
  ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;
 
  eye.x = ( (-ndc.x * eye.z) * (right-left)/(2*near)
            - eye.z * (right+left)/(2*near) );
  eye.y = ( (-ndc.y * eye.z) * (top-bottom)/(2*near)
            - eye.z * (top+bottom)/(2*near) );
 
  return eye;
}

which you can simplify a bit by factoring out -eye.z/(2*near).

And of course, if you are rendering your G-buffer with a “symmetric” perspective projection, the eye.x/.y lines simplify down to:


eye.x = (-ndc.x * eye.z) * right/near;
eye.y = (-ndc.y * eye.z) * top/near;

NOTES:

  • depth is the 0…1 depth value you read in from the depth texture sampler2D via texture()
  • left/right/bottom/top/near/far are the perspective projection inputs (see glFrustum)
  • widthInv and heightInv are 1.0/width and 1.0/height (respectively), where width and height are the dimensions of the viewport you used when rendering the G-buffer.

Note that when you change your G-buffer depth buffer from a renderbuffer to a texture, you should be able to keep your existing GL_DEPTH24_STENCIL8 format, or use any depth buffer format you want (GL_DEPTH_COMPONENT* or GL_DEPTH_STENCIL) – subject to what your GPU+driver supports.
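
To tie that back to the setup in the first post, here is a minimal sketch of swapping the G-buffer’s renderbuffer for a depth texture. It reuses the member names from the original code; m_gDepthTexture, the texture unit, and the u_depth uniform name are assumptions.

```cpp
// In CoolFunc: create a depth/stencil *texture* instead of m_gBufferRBO,
// so the lighting pass can sample it (m_gDepthTexture is an assumed name)
glBindTexture (GL_TEXTURE_2D, m_gDepthTexture);
glTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
              GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindTexture (GL_TEXTURE_2D, 0);

// Attach it in place of the glFramebufferRenderbuffer call
glBindFramebuffer (GL_FRAMEBUFFER, m_gBufferFBO);
glFramebufferTexture2D (GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                        GL_TEXTURE_2D, m_gDepthTexture, 0);
glBindFramebuffer (GL_FRAMEBUFFER, 0);

// In Phase 2, bind it alongside the other G-buffer textures (the POSITIONS
// attachment can then be dropped entirely)
glActiveTexture (GL_TEXTURE2);
glBindTexture (GL_TEXTURE_2D, m_gDepthTexture);
glUniform1i (glGetUniformLocation (Program::GLIGHT, "u_depth"), 2);

// In the lighting shader, read it with e.g.
//   float depth = texture (u_depth, gl_FragCoord.xy * vec2 (widthInv, heightInv)).r;
// and feed that to PositionFromDepth_DarkPhoton()
```
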