sRGB and normals

Hoping someone can clarify: I have been following these excellent tutorials http://research.ncl.ac.uk/game/mastersdegree/modulegraphicsforgames/ and am confused over the handling of normals if you set up

glEnable(GL_FRAMEBUFFER_SRGB_EXT);

and intend passing normal values through to a deferred renderer. As I understand it, the sRGB setting gamma-corrects values fetched from samplers, in which case it would also gamma-correct normal values when they are fetched from samplers.

Part of the problem is that I have set the color texture of the FBO:

Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA8, Vars.W, Vars.H, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, null);

i.e. without sRGB, and yet the

glEnable(GL_FRAMEBUFFER_SRGB_EXT);

effectively gamma corrects regardless, so it will also be gamma-correcting normal values even though I have not specified sRGB for the normals FBO attachment.

The solution (inspired by the above link) I am playing with is:

  1. hand 8-bit RGB color/normal samplers through to the first stage and write the values to the G-buffer with sRGB disabled

  2. in the light pass, render light cones and, where there is volume, calculate the light values (assuming linear-space RGB) and write them to a 16-bit FBO with sRGB enabled

  3. render the 16-bit final product to a screen-aligned quad

Will this work, or have I missed something? Also, is it possible to glEnable/glDisable around individual FBO attachments as they are created, i.e.:

Gl.glEnable(Gl.GL_FRAMEBUFFER_SRGB_EXT);
Gl.glGenTextures(1, out fboc);
if (fboc < 1) Console.WriteLine("Error: GL did not assign fbo color texture");
Gl.glActiveTexture(Gl.GL_TEXTURE1);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA8, Vars.W, Vars.H, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, null);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);
Gl.glTexParameterf(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_S, Gl.GL_CLAMP_TO_EDGE);
Gl.glTexParameterf(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_T, Gl.GL_CLAMP_TO_EDGE);
Gl.glFramebufferTexture2DEXT(Gl.GL_FRAMEBUFFER_EXT, Gl.GL_COLOR_ATTACHMENT1_EXT, Gl.GL_TEXTURE_2D, fboc, 0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glDisable(Gl.GL_FRAMEBUFFER_SRGB_EXT);

effectively gamma corrects regardless

No, it does not. Or if it does, then this is a driver bug and needs to be reported as such.

The GL_FRAMEBUFFER_SRGB enable only matters to buffers that are in the sRGB format.

write them to a 16-bit FBO with sRGB enabled

I assume by “16-bit FBO”, you mean 16-bit floats. If so, you can’t create an image with 16-bit floats in the sRGB colorspace. Equally importantly, there’s no reason to do so; just write linear values.

The final output image needs to be sRGB. Intermediate buffers don’t have to be.
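
To make that concrete, a minimal sketch in the style of the code above (assuming your binding exposes the EXT_texture_sRGB constant GL_SRGB8_ALPHA8_EXT, raw value 0x8C43; each call assumes the relevant texture is bound):

// Normals / data buffer: plain RGBA8. GL_FRAMEBUFFER_SRGB has no effect on writes to this.
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA8, Vars.W, Vars.H, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, null);

// Color buffer that should end up gamma-corrected: sRGB format. Writes to it are
// converted from linear to sRGB, but only while GL_FRAMEBUFFER_SRGB is enabled.
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_SRGB8_ALPHA8_EXT, Vars.W, Vars.H, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, null);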

OpenGL 3.3 core context; look at the spec, page 276:

If pname is FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, param will contain the encoding of components of the specified attachment, one of LINEAR or SRGB for linear or sRGB-encoded components, respectively. Only color buffer components may be sRGB-encoded; such components are treated as described in sections 4.1.7 and 4.1.8. For the default framebuffer, color encoding is determined by the implementation. For framebuffer objects, components are sRGB-encoded if the internal format of a color attachment is one of the color-renderable SRGB formats described in section 3.8.17.

Section 3.8.17 says those internal formats are only: SRGB, SRGB8, SRGB_ALPHA, SRGB8_ALPHA8, COMPRESSED_SRGB, or COMPRESSED_SRGB_ALPHA.

So: the default framebuffer can be gamma-corrected. Attachments with an RGBA8 format in a custom FBO shouldn’t be corrected; attachments with SRGB8_ALPHA8 formats will be corrected (when FRAMEBUFFER_SRGB is enabled).
MRT with one SRGB8_ALPHA8 texture and one RGBA8 texture should get gamma correction on the first and stay linear on the second.
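
If you want to check what the driver actually thinks an attachment is, you can query its encoding. A hedged sketch: FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING is a GL 3.0 / ARB_framebuffer_object pname (raw value 0x8210), so it may not be accepted on a pure EXT_framebuffer_object path, and whether the call takes an int[] or an out int depends on your binding:

int[] encoding = new int[1];
Gl.glGetFramebufferAttachmentParameterivEXT(Gl.GL_FRAMEBUFFER_EXT, Gl.GL_COLOR_ATTACHMENT1_EXT,
    0x8210 /* GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING */, encoding);
// 0x2601 = LINEAR, 0x8C40 = SRGB
Console.WriteLine(encoding[0] == 0x8C40 ? "attachment is sRGB-encoded" : "attachment is linear");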

Thus, what you’re seeing right now, with your RGBA8 normals being gamma-corrected, is probably because you’re using the EXT version, Gl.glFramebufferTexture2DEXT(), instead of the core version. Or it’s a driver bug (not following the spec).

About your idea for avoiding this bug/limitation of the EXT path: create the albedo texture as SRGB8_ALPHA8 and the normals texture as RGBA8. Draw to them with sRGB disabled. Then render the light accumulation to an RGBA16F or R11F_G11F_B10F texture (which naturally does NOT have an sRGB variant), again with sRGB disabled. Finally, render onto the default FBO with a quad, either with FRAMEBUFFER_SRGB enabled or with a custom shader that does the gamma correction (while you keep FRAMEBUFFER_SRGB disabled).
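
A sketch of that layout, in the style of the code earlier in the thread (format constants hedged: SRGB8_ALPHA8 is 0x8C43 and RGBA16F is 0x881A if your binding only has the EXT/ARB-suffixed names; each glTexImage2D assumes the relevant texture is bound):

// G-buffer targets (geometry pass runs with GL_FRAMEBUFFER_SRGB disabled):
// albedo: sRGB format, so the lighting pass reads it back as linear
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_SRGB8_ALPHA8_EXT, Vars.W, Vars.H, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, null);
// normals: plain RGBA8, never touched by sRGB conversion
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA8, Vars.W, Vars.H, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, null);

// Light accumulation target (lighting pass, GL_FRAMEBUFFER_SRGB still disabled):
// half-float, linear; there is no sRGB variant of this format
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA16F_ARB, Vars.W, Vars.H, 0, Gl.GL_RGBA, Gl.GL_FLOAT, null);

// Final pass: draw the accumulation texture to the default framebuffer on a quad,
// with GL_FRAMEBUFFER_SRGB enabled (or with pow(col, 1.0/2.2) in the shader instead).
Gl.glEnable(Gl.GL_FRAMEBUFFER_SRGB_EXT);
// ... draw fullscreen quad ...
Gl.glDisable(Gl.GL_FRAMEBUFFER_SRGB_EXT);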

About your idea for avoiding this bug/limitation of the EXT path: create the albedo texture as SRGB8_ALPHA8 and the normals texture as RGBA8. Draw to them with sRGB disabled. Then render the light accumulation to an RGBA16F or R11F_G11F_B10F texture (which naturally does NOT have an sRGB variant), again with sRGB disabled.

That’s a problem. If sRGB is disabled for the write to the SRGB8_ALPHA8 texture, then the values written are not converted into the sRGB colorspace. However, when those values are read later, because the format is sRGB, the read will assume they are in the sRGB colorspace and will therefore linearize them. That’s bad, unless the value you wrote was actually in the sRGB colorspace. Somehow.

I pondered whether to edit my post to clarify :).
The textures he’d be sampling to fill that albedo color contain sRGB data, but he’ll have to declare them as RGBA8 (blending for decals will be mathematically wrong, but visually no worse than what everyone is used to). So, this way he’ll be copying sRGB into sRGB bit-wise. (Texture filtering isn’t guaranteed to be sRGB-correct anyway.) Then, when drawing the lights, sampling from the albedo will have the GPU convert the sRGB color to linear, just in time for his linear-space lighting calculations.
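
In code terms, a sketch of the loading side only (texW, texH and pixels are hypothetical placeholders for whatever your image loader produces):

// Source diffuse map: the file holds sRGB data, but declare it as RGBA8 so that
// sampling it in the geometry pass returns the raw sRGB bytes unchanged.
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA8, texW, texH, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, pixels);
// Geometry pass: GL_FRAMEBUFFER_SRGB stays disabled, so those bytes land bit-wise in the
// SRGB8_ALPHA8 albedo attachment. When the lighting pass samples that attachment,
// the GPU converts sRGB to linear at that point.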

Let me see if I have this right. I have my image program set up to display textures gamma-corrected (which means they look washed out if viewed under default settings). If I display them in OpenGL without gamma-correcting them, they are exceptionally dark. If I render to the quad and apply

vec4 gamma = vec4(0.454545, 0.454545, 0.454545, 1.0); // 1.0 / 2.2
vec4 final = pow(vec4(col, 1.0), gamma);

they appear as they do in the image program with the gamma-correction settings. This is also how they appear if sRGB is enabled (with the gamma calculation above unused). If I have understood you, enabling sRGB is actually only affecting the default FBO (gl_FragColor), because I have not specified compatible sRGB formats in the G-buffer. That seems to mean sRGB is performing essentially the same gamma calculation encoded above, at essentially the same point in the rendering process (only for free).
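
Roughly, yes. For reference, the per-channel transfer the hardware applies for sRGB is the piecewise curve below, which is close to, but not exactly, a plain 1/2.2 power; a small C# sketch for comparison (the function names are mine):

// Exact sRGB encode, as applied per channel when writing to an sRGB buffer
// with GL_FRAMEBUFFER_SRGB enabled.
static float SrgbEncode(float linear)
{
    return linear <= 0.0031308f
        ? 12.92f * linear
        : 1.055f * (float)Math.Pow(linear, 1.0 / 2.4) - 0.055f;
}

// The approximation used in the shader snippet above.
static float GammaApprox(float linear)
{
    return (float)Math.Pow(linear, 1.0 / 2.2);
}

The two curves are very close over most of the range and differ mainly near black.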

So I guess the question is: if I use compatible sRGB formats in the G-buffer, will modifications of the color (light calculations, etc.) actually take place in gamma space rather than linear space?

I think we’re all getting rather confused about this stuff.

A color value can be in either linear colorspace or some non-linear colorspace. For simplicity’s sake, we will assume that if it’s not linear, it is in sRGB (which is what virtually every image application deals with).

If a texture uses an sRGB format, what this means is that the color values stored in it are in the sRGB colorspace. Reads from this texture will convert from the sRGB colorspace to linear. So any texture sampling operation will produce linear color values.

Writes to an image that is in the sRGB colorspace will… write the given values directly as you gave them. Unless GL_FRAMEBUFFER_SRGB is currently enabled. If it is, then the values written are assumed to be linear colorspace values, and therefore the values will be converted to the sRGB colorspace. Note that this only works for images that specifically use sRGB image formats; writes to other images should not be affected. If they are, then it is a driver bug.

So, the way it’s supposed to work is quite simple. Textures that represent some form of color are in the sRGB colorspace and therefore use an sRGB image format. When you read from them in the first pass of deferred rendering, you get a linear color value. You then write this linear value to the G-buffer as a linear value.

When you do the lighting pass, you read a linear colorspace value. Then you do lighting computations in the linear colorspace (which, incidentally, is the only colorspace where lighting makes sense). When you write your final color value, you can’t just write the linear value, because your display is non-linear; you write it to an sRGB image (here, the default framebuffer) with GL_FRAMEBUFFER_SRGB enabled, so the linear value is converted to sRGB on the way out.

The OpenGL spec does not guarantee that the sRGB-to-linear conversion happens before filtering, but all recent hardware does it that way. Filtering textures in a non-linear color space can give visibly different results.

To avoid quantisation artefacts, it’s better to write colors out as sRGB8 (or to a wider linear float format).