I would like to ask for a clarification about the sRGB color space and its 8-bit-per-channel representation in OpenGL. After digging into the subject, I found out that storing images as sRGB in the back buffer is only there to compensate for the opposite gamma operation applied by the monitor, which "imitates" what old CRTs did by nature, for compatibility reasons. The fact that the human eye has a similar non-linear response is just a coincidence and has nothing to do with gamma correction (as many confusing articles claim), since the output ends up linear again anyway.
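If I put that understanding into code, it looks roughly like this (a minimal sketch using the common gamma-2.2 power approximation rather than the exact piecewise sRGB curve, so the function names and the 2.2 exponent are simplifications on my part):

    #include <math.h>

    /* My mental model of the compensation, using the pure-power
       approximation: values are encoded before storage so that the
       display's decode gamma cancels it and the light leaving the
       monitor is linear again. */
    static float encode_gamma(float linear)   { return powf(linear, 1.0f / 2.2f); }
    static float display_gamma(float encoded) { return powf(encoded, 2.2f); }
    /* display_gamma(encode_gamma(x)) ~= x for any x in [0, 1] */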

Assuming this understanding is right: in OpenGL we have the GL_SRGB8_ALPHA8 format for textures, which is 8 bits per channel. Since its 0–255 range is the same as that of a linear RGB texture, does this mean that converting a linear 8-bit-per-channel texture to sRGB leaves the color values unchanged, and a simple "flag" tells OpenGL: this 0–255 range is not linear, so interpret it through a curve? And what about 16-bit-per-channel sRGB images (e.g. loaded from 16-bit PNGs)?
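If that is the case, I imagine the decode that happens when sampling a GL_SRGB8_ALPHA8 texel looks roughly like this (a sketch of the standard piecewise sRGB formula; that the driver/hardware does exactly this is an assumption on my part):

    #include <math.h>

    /* Sketch of the sRGB -> linear decode I assume the GPU performs when
       sampling a GL_SRGB8_ALPHA8 texel: the stored bytes are the same
       0-255 values as in GL_RGBA8, only their interpretation goes through
       this curve. */
    static float srgb_byte_to_linear(unsigned char v)
    {
        float c = v / 255.0f;                     /* normalize to 0..1 */
        if (c <= 0.04045f)
            return c / 12.92f;                    /* linear toe near black */
        return powf((c + 0.055f) / 1.055f, 2.4f); /* power segment */
    }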

Also, the documentation of an engine which uses OpenGL states:

  • An intermediate buffer storing 8-bit color data should be sRGB. Failure to do so will result in artifacts when converting the data back to sRGB for viewing.
  • An intermediate buffer storing floating-point color data should be linear. There is no point to the sRGB conversion.
Regarding the floating-point textures that should stay linear: these will never be shown directly on screen, so not storing them as sRGB makes sense to me.
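For reference, these are the two kinds of intermediate color attachments I understand the documentation to be describing (a sketch only; it assumes a texture is already bound to GL_TEXTURE_2D, and width/height are placeholders):

    /* 8-bit intermediate buffer: allocated as sRGB, so the hardware
       decodes on sampling and, with glEnable(GL_FRAMEBUFFER_SRGB),
       encodes on write. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Floating-point intermediate buffer: kept linear, no sRGB
       conversion involved. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);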
What I don't get is why artifacts would appear when converting 8-bit data back to sRGB, since the 0–255 range is the same and a change of "interpretation" of those values would seem to suffice.
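Concretely, this is the kind of 8-bit round trip I am picturing (a quick sketch using the standard sRGB encode formula; the quantization back to bytes is my own naive guess at what such a conversion would do):

    #include <math.h>
    #include <stdio.h>

    /* Standard linear -> sRGB encode (inverse of the decode curve). */
    static float linear_to_srgb(float c)
    {
        if (c <= 0.0031308f)
            return 12.92f * c;
        return 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }

    /* The conversion I am picturing: each 8-bit linear value is turned
       into a float, encoded, and quantized back to an 8-bit sRGB value
       for viewing. Both ends use the same 0-255 range, which is why I
       don't see where the artifacts would come from. */
    int main(void)
    {
        for (int v = 0; v < 256; ++v) {
            float linear = v / 255.0f;
            int srgb = (int)(linear_to_srgb(linear) * 255.0f + 0.5f);
            printf("linear %3d -> sRGB %3d\n", v, srgb);
        }
        return 0;
    }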