OpenGL and sRGB confusion

I would like to ask for a clarification about the sRGB color space and its 8-bit-per-channel representation in OpenGL. After digging into the subject, I found out that storing images as sRGB in the back buffer is done simply to compensate for the opposite gamma operation applied by the monitor, which "imitates" what old CRTs did by nature, for compatibility reasons. The fact that the human eye has a similar non-linear response is just a coincidence and has nothing to do with gamma correction (as many confusing articles claim), since the output ends up linear again anyway.

Assuming that this understanding is right, in OpenGL we have the GL_SRGB8_ALPHA8 format for textures, which has 8 bits per channel. However, since the 0~255 range is the same as for a linear RGB texture, does this mean that to convert a linear 8-bit-per-channel texture to sRGB the color values remain unchanged, and a simple "flag" just tells OpenGL: this 0~255 range is not linear, so interpret it as a curve? And what about sRGB 16-bit-per-channel images (e.g. loaded from 16-bit PNGs)?

Also, the documentation of an engine which uses OpenGL states:

[ul]
[li]An intermediate buffer storing 8-bit color data should be sRGB. Failure to do so will result in artifacts when converting the data back to sRGB for viewing.[/li]
[li]An intermediate buffer storing floating-point color data should be linear. There is no point to the sRGB conversion.[/li]
[/ul]

Regarding floating-point textures that should be linear: these will never be shown directly on screen, so not storing them in sRGB makes sense.
I don't get why artifacts would appear when converting 8-bit data back to sRGB though, as the 0~255 range is the same and a change of "interpretation" of these values would suffice.

I too want to know what happens to textures in the hardware when they are marked as sRGB.

I believe the artifacts being mentioned refer to storing the RGB data in an actual 8-bit linear format, rather than having the original 8 bits reside in a larger 16-bit format; the 8-bit linear format lacks the accuracy needed for conversion to sRGB for display.

[QUOTE]I don't get why artifacts would appear when converting 8-bit data back to sRGB though, as the 0~255 range is the same and a change of "interpretation" of these values would suffice.[/QUOTE]

The range is the same. The values are not.

An sRGB value of 128 corresponds to a linearRGB value of only 56. That distinction is important because, in an 8-bit linear buffer, the entire lower half of the gamma-corrected range gets squeezed into those 56 values, so you're effectively losing color definition in half of your RGB range relative to the gamma-corrected version. And remember: the gamma-corrected version is what maps to how your display shows your image.

Essentially, sRGB gives you more effective precision in the darker values of its range than in the lighter ones. And that's where you need that precision.

[QUOTE]However, since the 0~255 range is the same as for a linear RGB texture, does this mean that to convert a linear 8-bit-per-channel texture to sRGB the color values remain unchanged, and a simple "flag" just tells OpenGL: this 0~255 range is not linear, so interpret it as a curve?[/QUOTE]

It depends on what operation you’re talking about.

An sRGB texture is a texture that stores its RGB information in the sRGB colorspace. However, shader operations are assumed to want data in the linearRGB colorspace, not sRGB. So using an sRGB format means that texture fetches will [i]convert[/i] the pixels they read from sRGB to linearRGB.
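To make that concrete, here's a rough sketch of creating such a texture (my own example, not from the thread; it assumes a current GL context and that width, height and pixels already hold an 8-bit sRGB-encoded image):

[code]
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* pixels holds 8-bit data that is already sRGB-encoded; because the internal
   format is GL_SRGB8_ALPHA8, texture fetches in the shader return linearRGB. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
[/code]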

Writes from a fragment shader to an FBO-attached image using an sRGB format may or may not perform conversion. Here, conversion has to be explicitly enabled with GL_FRAMEBUFFER_SRGB. The idea is that some operations will generate values in the sRGB colorspace (GUIs, for example, since most images were created in the sRGB colorspace), while others will generate values in linearRGB (normal rendering). So you have an option to turn conversion on or off.
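In code that's essentially just the one enable (a sketch of mine; it assumes the default framebuffer or the bound FBO attachment is sRGB-capable):

[code]
/* Conversion on write is opt-in: enable it for passes that produce
   linearRGB values, disable it for passes that already output sRGB. */
glEnable(GL_FRAMEBUFFER_SRGB);    /* linear shader output -> sRGB storage */
/* ... normal (linear) rendering ... */
glDisable(GL_FRAMEBUFFER_SRGB);   /* e.g. a GUI pass whose values are already sRGB */
/* ... draw GUI ... */
[/code]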

The conversion also allows blending to read sRGB destination pixels, convert them to linear, blend with the incoming linearRGB values, and then convert them back to sRGB for writing.

Uploads to and downloads from an sRGB image will write and read the pixel values in the sRGB colorspace directly.
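For example (a sketch continuing from the texture above; srgb_pixels is a placeholder for data that is already sRGB-encoded):

[code]
/* Uploads and downloads move the bytes as-is: no sRGB <-> linear conversion. */
unsigned char srgb_pixels[16 * 16 * 4];   /* placeholder: already sRGB-encoded data */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 16, 16,
                GL_RGBA, GL_UNSIGNED_BYTE, srgb_pixels);

unsigned char readback[16 * 16 * 4];
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, readback);
/* readback contains the same sRGB-encoded bytes that were uploaded */
[/code]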

Thank you for the links! They were helpful in further clarifying the sRGB colorspace. However, my doubt about the artifacts upon conversion still remains:

[QUOTE=Alfonse Reinheart;1289203]The range is the same. The values are not.

An sRGB value of 128 corresponds to a linearRGB value of only 56. That distinction is important because, in an 8-bit linear buffer, the entire lower half of the gamma-corrected range gets squeezed into those 56 values, so you're effectively losing color definition in half of your RGB range relative to the gamma-corrected version.[/QUOTE]

The values are indeed different if you want to "represent" the same actual color value in both linear space and sRGB, thus losing precision as you wrote. However, what I believe OpenGL is doing (correct me if I'm wrong) is "converting" linear <-> sRGB, which roughly means applying pow(1/2.2) or pow(2.2). Doing so, the converted value would end up being the same value in the 0~255 range in both linear and sRGB.

For example:

linear: 0.218 -> pow(1/2.2) -> sRGB: 0.5 (value in the 0~255 range: 56 for both)

After monitor gamma pow(2.2): 0.218 again

So if I set up an intermediate buffer as linear RGB8 and then use it to draw into an SRGB8 back buffer, OpenGL would not lose precision in the process, since it could leave the bit value as it is and simply look it up in the sRGB colorspace instead. But I'm pretty sure I'm missing something in this process?

[QUOTE]linear: 0.218 -> pow(1/2.2) -> sRGB: 0.5 (value in the 0~255 range: 56 for both)[/QUOTE]

No, that’s not how it works. The linearRGB texture will store the value 0.218. The sRGB texture will store the value 0.5.

0.218, stored in an 8-bit integer will be 56. 0.5, stored in an 8-bit integer will be 128. 56 is not 128.

Think of it like unit conversion. 1 inch is 2.54 centimeters. If you have a buffer that stores inches and you store 1 inch in it, then it stores the value 1. If you have an inch buffer and store 2.54 centimeters in it, it will convert 2.54 cm to 1 in, then store the value 1.

Colorspaces are like units. 0.218-linearRGB is a value, and 0.5-sRGB is a value. They both mean the same thing, but they do not have the same numerical value. An 8-bit linearRGB buffer will store this value as 56. An 8-bit sRGB buffer will store this value as 128.
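Just to put actual numbers on that (my own quick check, using the simple 2.2 gamma approximation used earlier in this thread rather than the exact piecewise sRGB curve):

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    double linear = 0.218;
    double srgb   = pow(linear, 1.0 / 2.2);   /* encode: roughly 0.5 */

    /* The same color, stored in the two different 8-bit encodings: */
    printf("8-bit linearRGB buffer stores: %d\n", (int)(linear * 255.0 + 0.5)); /* 56  */
    printf("8-bit sRGB buffer stores:      %d\n", (int)(srgb   * 255.0 + 0.5)); /* 128 */
    return 0;
}
[/code]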

[QUOTE=Alfonse Reinheart;1289212]
Colorspaces are like units. 0.218-linearRGB is a value, and 0.5-sRGB is a value. They both mean the same thing, but they do not have the same numerical value. An 8-bit linearRGB buffer will store this value as 56. An 8-bit sRGB buffer will store this value as 128.[/QUOTE]

Thanks! I think I finally understood what sRGB is all about, and the fact that the human eye distinguishes darker tones better than brighter ones does seem to be a happy coincidence after all, since using 8 bits in sRGB gives you more dark shades (which will still remain the same after monitor gamma, only being "unit converted" to linear).
Artifacts when converting 8-bit linearRGB to 8-bit sRGB also make sense now, because the two have different dark/bright precision, so for example two dark sRGB shades will end up being the same shade in linearRGB due to lack of precision.
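For instance, here's a quick sketch I tried (my own example, using the exact piecewise sRGB decode) showing how the darkest sRGB codes all collapse onto just a couple of 8-bit linear codes:

[code]
#include <math.h>
#include <stdio.h>

/* Exact sRGB -> linear decode (piecewise curve with the linear toe). */
static double srgb_to_linear(double s)
{
    return (s <= 0.04045) ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4);
}

int main(void)
{
    /* The first dozen or so sRGB codes land on linear code 0 or 1, so an
       8-bit linear buffer merges shades that were distinct in sRGB. */
    for (int code = 0; code <= 12; ++code) {
        double lin = srgb_to_linear(code / 255.0);
        printf("sRGB %2d -> 8-bit linear %d\n", code, (int)(lin * 255.0 + 0.5));
    }
    return 0;
}
[/code]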

Just to be clear, are sRGB textures all stored efficiently in memory with 8 bits per channel, and converted only temporarily when actually requested for use in rendering, with some kind of "cache" being replaced for every new texture that is bound?

The thing to bear in mind is that values stored in image files and values written to the framebuffer have always been sRGB (or something close to it). Explicit sRGB support means that OpenGL converts the values to and from linear intensities when reading and writing. This also affects linear filtering and blending (which historically were being done incorrectly, as they should be applied to linear intensities, not gamma-encoded values).
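A small illustration of that filtering point (my own example, again with the 2.2 approximation): averaging a black and a white texel directly on the sRGB-encoded values gives a visibly darker result than averaging the linear intensities and re-encoding.

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Black and white happen to be 0.0 and 1.0 in both encodings,
       so the decode step is trivial here. */
    double naive  = (0.0 + 1.0) / 2.0;        /* averaging the sRGB-encoded values */
    double linear = (0.0 + 1.0) / 2.0;        /* averaging the linear intensities  */
    double proper = pow(linear, 1.0 / 2.2);   /* re-encode for display (2.2 approx) */

    printf("naive sRGB average : %d\n", (int)(naive  * 255.0 + 0.5)); /* 128       */
    printf("linear average     : %d\n", (int)(proper * 255.0 + 0.5)); /* about 186 */
    return 0;
}
[/code]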

If a texture has a sRGB format, the implementation will convert from sRGB to linear as each pixel is read (ideally before any linear filtering, if enabled). If the framebuffer is sRGB (i.e. the attachment is sRGB and GL_FRAMEBUFFER_SRGB is enabled), values read for use in blending are converted from sRGB to linear, and the result is converted from linear to sRGB before being written.
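Putting both conversions together, a rough sketch of such a pass might look like this (assuming tex is the GL_SRGB8_ALPHA8 texture from earlier and the bound framebuffer has an sRGB format):

[code]
/* Sample an sRGB texture (fetches return linear), blend in linear,
   and store the blended result back as sRGB. */
glEnable(GL_FRAMEBUFFER_SRGB);                       /* destination converted on read/write */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, tex);                   /* GL_SRGB8_ALPHA8 texture */
/* ... draw calls: the shader itself works entirely in linearRGB ... */
[/code]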

I wouldn’t expect the conversions to be cached. As sRGB textures are limited to 8 bits per component, the hardware could reasonably use a lookup table to perform the conversion.
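Along the lines of that lookup-table remark, here's what such a 256-entry decode table could look like in software (purely illustrative; the actual hardware implementation isn't specified anywhere):

[code]
#include <math.h>

/* Illustrative 256-entry sRGB -> linear decode table, one entry per
   possible 8-bit component value. */
static float srgb_decode_lut[256];

static void build_srgb_lut(void)
{
    for (int i = 0; i < 256; ++i) {
        float s = i / 255.0f;
        srgb_decode_lut[i] = (s <= 0.04045f)
            ? s / 12.92f
            : powf((s + 0.055f) / 1.055f, 2.4f);
    }
}
[/code]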