Rigorous gamma bookkeeping

OpenGL has functions for gamma correction, but they aren’t very handy when you have several sources and want to blend them with each other.
It would be useful to be able to attach a gamma value to any 1D, 2D, 3D, … texture/array.
That would make it possible for an algorithm to sort out mixing (addition, multiplication, etc.) of different textures/arrays automatically, without extra effort on the user’s part.
(Here “user” means a programmer writing software for graphics operations.)

Of course it would need some rules, such as:
When adding two textures with different gamma values, the texture whose gamma value is furthest from gamma == 1 would be converted to the gamma value of the other.
Then both can be used in a mathematical expression without distortions caused by differing gamma values.
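To make the rule concrete, here is a minimal C sketch of what such an automatic conversion could look like. The function names and the power-law model are my own illustration, not an existing API, and note that, as replies below point out, the sum itself is only physically meaningful at gamma == 1.

[code]
#include <math.h>

/* Hypothetical sketch of the proposed rule. A texel v stored with
 * gamma g decodes to linear as pow(v, g), so re-encoding from gamma
 * a to gamma b is pow(pow(v, a), 1.0 / b) == pow(v, a / b). */
float convert_gamma(float v, float from_gamma, float to_gamma)
{
    return powf(v, from_gamma / to_gamma);
}

/* Mix two texels by addition, aligning gammas first: the value whose
 * gamma is furthest from 1.0 is converted to the other's gamma. */
float add_with_gamma(float a, float gamma_a, float b, float gamma_b)
{
    if (fabsf(gamma_a - 1.0f) > fabsf(gamma_b - 1.0f))
        a = convert_gamma(a, gamma_a, gamma_b);  /* result in gamma_b */
    else
        b = convert_gamma(b, gamma_b, gamma_a);  /* result in gamma_a */
    return a + b;
}
[/code]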

I don’t know about the other APIs, but the following functionality would be very handy:
letting the graphics card query the gamma value of the display devices in use and adapt to it automatically.
This of course requires display connections that support such a query.

Please give some feedback on whether this has already been done, and whether it’s a good idea/way of doing things.

When adding two textures with different gamma values, the texture whose gamma value is furthest from gamma == 1 would be converted to the gamma value of the other.
Then both can be used in a mathematical expression without distortions caused by differing gamma values.

You do not want to perform operations on colors in a non-linear colorspace. That’s why, when you use sRGB textures, blending, interpolation, and so forth all happen in linear colorspace. That is, the hardware reads texels, converts them to linear, and then performs the operation in question.

Math operations on colors are only meaningful in linear space. That’s why gamma-correct textures and framebuffers are so important.
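A tiny self-contained example of why this matters: averaging black and white in the encoded space gives a different, wrong answer than averaging in linear space. The plain 2.2 power law below is only an approximation of the real (piecewise) sRGB curve, used for illustration.

[code]
#include <math.h>
#include <stdio.h>

static float decode(float v) { return powf(v, 2.2f); }        /* to linear */
static float encode(float v) { return powf(v, 1.0f / 2.2f); } /* to gamma  */

int main(void)
{
    float black = 0.0f, white = 1.0f;

    /* Wrong: average the gamma-encoded values directly. */
    float naive = (black + white) / 2.0f;                      /* 0.500 */

    /* Right: decode to linear, average, re-encode for display. */
    float correct = encode((decode(black) + decode(white)) / 2.0f);

    printf("naive %.3f vs linear-correct %.3f\n", naive, correct); /* ~0.729 */
    return 0;
}
[/code]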

That would be pretty pointless, IMO.
I may be a little ignorant on the matter, but I didn’t even see much point in putting the sRGB extensions in GL.
They don’t really save you all that much. You can do these calculations manually in the shader or as a post-processing step, so why put it in the driver?
Sure, it may save a few opcodes, but is that worth it?

You’re absolutely right. Let’s rip out sRGB. And while we’re at it, let’s take out texture filtering altogether. After all:

“You can do these calculations manually in the shader or as a post-processing step, so why put it in the driver?
Sure, it may save a few opcodes, but is that worth it?”

</sarcasm>

Just because you can do something in a shader doesn’t make it a good idea. sRGB reads/writes are, currently, 100% free. They cost no actual performance, and the conversion is cheap enough that texture units can do them with minimal hardware support.

Compare that with every possible shader implementation:

1: Call pow(). The pow function is not known for its speed (see the sketch after this list).

2: Use a texture lookup table. Which uses up precious bandwidth and texture unit resources that could be used for things that matter.

3: Use a uniform lookup table. This uses up precious uniform space that could be used for something else.

None of these are as fast as, well, free.
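For reference, here is roughly what option 1 costs when written out by hand: the standard sRGB transfer functions, each needing a branch and a pow() per channel on every read and write. This is a plain C transcription of the well-known piecewise formulas, not anyone’s driver code.

[code]
#include <math.h>

/* sRGB-encoded value -> linear, per the standard piecewise curve. */
float srgb_to_linear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

/* Linear value -> sRGB encoding, the inverse of the above. */
float linear_to_srgb(float c)
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}
[/code]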

And even then, there are things that simply aren’t covered. Like sRGB-correct filtering. That is, doing filtering in a linear colorspace, even though the RGB values are in sRGB space. This is automatic and free with the sRGB texture format. None of the above can do that without moving all of the filtering logic into the shader.
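Getting that behavior takes nothing more than choosing an sRGB internal format. A minimal sketch, assuming a GL 2.1+ context (or EXT_texture_sRGB) and sRGB-encoded 8-bit RGBA data:

[code]
#include <GL/gl.h>

GLuint make_srgb_texture(GLsizei width, GLsizei height, const void *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* GL_SRGB8_ALPHA8: the texture unit decodes texels to linear
     * before filtering; the shader just sees linear values. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
[/code]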

Blending is also not covered. sRGB framebuffers can blend in linear colorspace: the destination color, stored as sRGB, is converted to linear, blending happens, and the output is converted back to sRGB upon writing.
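On the output side this is a single enable, assuming an sRGB-capable framebuffer (GL 3.0 / ARB_framebuffer_sRGB):

[code]
#include <GL/gl.h>

void enable_linear_blending(void)
{
    /* With GL_FRAMEBUFFER_SRGB on, the blender decodes the sRGB
     * destination to linear, blends, and re-encodes on write. */
    glEnable(GL_FRAMEBUFFER_SRGB);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
[/code]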

You said, “Sure, it may save a few opcodes, but is that worth it?” Worth what, exactly? All we do is have two extra texture formats and use them where appropriate. I wasn’t aware that this was a burden. All implementers have to do is put two 256-entry lookup tables in their texture/blending units and do the format conversion with a simple lookup. It can be built directly into the logic that turns normalized colors into floating-point values, and vice versa for writes.
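For illustration, the entire decode table fits in a dozen lines of C; the hardware version is the same idea baked into the texture and blend units:

[code]
#include <math.h>

/* One linear float per 8-bit sRGB code, filled once at startup. */
static float srgb_lut[256];

void build_srgb_lut(void)
{
    for (int i = 0; i < 256; i++) {
        float c = i / 255.0f;
        srgb_lut[i] = (c <= 0.04045f) ? c / 12.92f
                                      : powf((c + 0.055f) / 1.055f, 2.4f);
    }
}
[/code]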

So what exactly is it that we’re losing in order to have sRGB textures? What is the tradeoff?

Increasing the complexity of the API, moving away from ‘general purpose’ back toward ‘fixed function’.

You could well have a GLSL function that does sRGB-to-linear conversion through an internal LUT (and would thus be fast, avoiding the cost of pow and whatnot), though you are onto something with texture filtering.
Wouldn’t keeping textures in linear space fix the problem altogether, though?

Why isn’t blending covered?
Can’t you render everything offscreen in linear colorspace and only go to sRGB with the final ‘blit’ to the window?
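For clarity, this is the kind of frame structure being suggested; the FBO setup, scene drawing, and the final fullscreen pass are assumed to exist elsewhere, and the names are illustrative:

[code]
#include <GL/gl.h>

void render_frame(GLuint linear_fbo, GLuint present_program)
{
    /* 1. Render everything into a linear offscreen target,
     *    e.g. a GL_RGBA16F color attachment; no sRGB anywhere. */
    glBindFramebuffer(GL_FRAMEBUFFER, linear_fbo);
    /* ... draw the scene ... */

    /* 2. Final "blit": a fullscreen pass whose fragment shader
     *    applies linear-to-sRGB just before hitting the window. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glUseProgram(present_program);
    /* ... draw a fullscreen triangle sampling the linear texture ... */
}
[/code]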

This approach is similarly reflected in the multitude of texturing functions we have at our disposal: some pull division into the sampler, some add texture offsets, some do coordinate normalization. Yeah, they all save you some time, and are now ‘free’, and then you end up with a humongous API that is untestable, and people bitch about GL implementations sucking.

As to the original suggestion, such a thing might be possible on current hardware. If the sRGB conversion table is not hard-coded into the texture/blend/read units, but instead could be changed, then it would be possible to make the texture’s gamma a texture/sampler parameter.

I highly doubt that querying a monitor’s gamma is going to happen in the near future though. Most display devices are built for a gamma of 2.2, which is close enough to sRGB that you wouldn’t be able to tell the difference.

Wouldn’t keeping textures in linear space fix the problem altogether, though?

Artists work in the sRGB colorspace. They save their images in sRGB. Are you going to tell them and Photoshop/GIMP/Paint.Net/pretty much every image creation program that they should be working in linear colorspace? Or are you suggesting that the tool pipeline convert all these images to linear?

And quite frankly, it’s more convenient that way, as the color values are pushed higher up in terms of brightness. That is, there’s more precision at the bright end of the color spectrum, where you really need it, than at the low end. So you’re losing good precision by going to linear colorspace, unless you use floating-point textures. At which point, you have a performance tradeoff.

Why isn’t blending covered?
Can’t you render everything offscreen in linear colorspace and only go to sRGB with the final ‘blit’ to the window?

And waste all that performance? Remember: the sRGB conversion is free. And not everyone needs to render everything offscreen and blit at the end.

This approach is similarly reflected in the multitude of texturing functions we have at our disposal: some pull division into the sampler, some add texture offsets, some do coordinate normalization. Yeah, they all save you some time, and are now ‘free’, and then you end up with a humongous API that is untestable, and people bitch about GL implementations sucking.

But throwing everything away isn’t a solution either. That’s throwing out good performance that you could be using.

You have to be judicious in what you lose and what you keep. You have to be practical about what constitutes a legitimate feature and what doesn’t. sRGB is very justified: it entails little API “bloat”, costs virtually nothing in performance, and solves a very important problem in graphical rendering.

You don’t throw things out just because you can. You can do a lot of things. Why have different texture types at all? Buffer textures are good enough; just have everything be buffer textures. If you need 2D/3D/Cube/Array/etc sampling, just implement it yourself in the shader.

And quite frankly, it’s more convenient that way, as the color values are pushed higher up in terms of brightness. That is, there’s more precision at the bright end of the color spectrum, where you really need it, than at the low end.

sRGB does not give you more (absolute) brightness.
In fact, ‘gamma-corrected’ images spend more bits on the darker colors. See it as a kind of color compression that spends more bits where the human eye is more sensitive.
‘Gamma-corrected’ images only appear brighter when they are not displayed on a device that implements the infamous 2.2 gamma curve.
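That bit allocation is easy to verify: with the standard sRGB curve, roughly 188 of the 256 codes land in the darker half (linear value below 0.5). A quick self-contained check:

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    int dark = 0;
    for (int i = 0; i < 256; i++) {
        float c = i / 255.0f;
        float lin = (c <= 0.04045f) ? c / 12.92f
                                    : powf((c + 0.055f) / 1.055f, 2.4f);
        if (lin < 0.5f) dark++;   /* code spent on the darker half */
    }
    printf("%d of 256 sRGB codes encode linear values below 0.5\n", dark);
    return 0;
}
[/code]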

Gamma correction is about being precise, accurate.

If you mean that gamma correction means correcting to one particular value, then you have missed the point a little bit.
I want the graphics card and software to be able to do correct gamma automatically and fast.

@Reinheart and everybody:
Most of the time that’s true. But it’s not really certain. What if, a few years from now, a superior and cheap OLED process comes into use that has a response curve other than gamma = 2.2?

I mean, a monitor is supposed to show the image, not calculate it. Any calculation that can be avoided on the display’s electronics should be done by the graphics card instead. This is just best practice.

Now for my original suggestion.
The gamma correction and blending idea is inspired by Blender’s (still work-in-progress) color management.

It would be nice if, in the future, OpenGL and OpenCL could be used to speed up the process. Photoshop already uses the graphics card for image manipulation, GIMP is also planning to do this, and Blender probably will eventually as well.

Blender 2.5 color management: http://www.blendernation.com/2009/12/05/blender-2-5-color-management/

Here is a bit of information from the article:

A new feature in Blender 2.5 that’s hard to spot is color management. It ‘linearizes’ colors for internal processes such as rendering and compositing and applies the appropriate gamma corrections for your display device.

From Blender.org:

[quote]Blender 2.5 includes a first version of Color Management. Currently this is limited to ensuring a Linear Workflow during the render pipeline – gamma-corrected inputs are linearized before rendering, the renderer and compositor work in linear RGB color space, and the result is gamma-corrected back to sRGB color space for display in the image editor.
Future work may include support for display profiles, LUTs, and finer-grained control over input/output conversions.[/quote]

Being able to assign a gamma value to each texture would be very welcome in this case.

There are more articles about gamma correction on BlenderNation:
BlenderNation search for “gamma correction”: http://www.blendernation.com/index.php?s=gamma+correction

And about the forum:
being able to switch to another browser tab to look up the name of a web page would be very practical.

Assuming that a gamma correction of 2.2 is enough is very dangerous.

What if, a few years from now, a superior and cheap OLED process comes into use that has a response curve other than gamma = 2.2?

You don’t buy a display device if it can’t provide decent color reproduction. And since just about everything is built around 2.2 gamma these days (sRGB is a basic part of the JPEG and MPEG standards, so that’s just about every movie and image), that display device would damage the color reproduction of every image it attempted to display.

Some really cheap device might use it, but you get what you pay for. The default 2.2 gamma isn’t going anywhere anytime soon.

I mean, a monitor is supposed to show the image, not calculate it.

And that image comes in a colorspace. The monitor’s job is to display that image as the user intended: in a specific colorspace. In this case, with a gamma around 2.2. A monitor that cannot do this with decent color reproduction is broken.

Also, I would point out that most desktop monitors have at least basic gamma correction support. So the user can set the gamma for the monitor if he wants.

Being able to assign a gamma value to each texture would be very welcome in this case.

It’s only welcome if there is hardware support for it. Otherwise it’s a performance drag; it would be better to simply convert your colors to sRGB offline and let the hardware’s sRGB-to-linear conversion handle things.

Assuming that a gamma correction of 2.2 is enough is very dangerous.

But that’s what Blender is doing. All their vaunted “color management” does is use sRGB textures and framebuffers. They say that future versions may include the ability to specify a gamma (that’s what the LUT means), but that’s rather vague.

I would like to weigh in and say that sRGB is all about being approximately correct instead of exactly wrong. The point of sRGB is not exact color reproduction (despite what the spec says); it is about coping with a freaking power law to a first-order approximation.

For instance, take the distance attenuation of a light source. It should be 1/r^2, so distance attenuation is a power law itself. But if you output the calculated values into a framebuffer that is going to be displayed by a monitor with gamma 2.2, your distance attenuation (against a black background) becomes 1/r^4.4, way steeper than what you had in mind. Another example is the addition of light contributions, say diffuse plus specular, or the sum of multiple light sources. You enable your trusty GL_ONE, GL_ONE blending function, but strangely the lighting is all washed out and saturates far too fast, and that’s because you are not computing C = A + B; what you are really computing is C = ( A^0.4545 + B^0.4545 )^2.2. Good luck with that blending function.
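To put numbers on the blending example: adding two 40% lights should give 0.8, but summing the gamma-encoded values and letting a 2.2 display decode the result yields about 1.84, which clamps to full white. A plain 2.2 power law stands in for the display here:

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    float A = 0.4f, B = 0.4f;

    float correct = A + B;                                   /* 0.80 */

    /* What GL_ONE, GL_ONE does on gamma-encoded framebuffer values,
     * as seen after the monitor applies its 2.2 curve. */
    float stored  = powf(A, 1.0f / 2.2f) + powf(B, 1.0f / 2.2f);
    float broken  = powf(stored, 2.2f);                      /* ~1.84 */

    printf("linear sum %.2f, gamma-space sum %.2f\n", correct, broken);
    return 0;
}
[/code]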

So sRGB came to the rescue. And it really doesn’t matter whether the true gamma is 2.1, 2.2, or 2.3: sRGB is inherently approximate, because its very birth consisted of declaring the average 1998 CRT display the standard.

I find it very sad that the language of the OpenGL spec seems to miss the point by hinting that sRGB is about precision and that “lower than 8-bit precision sRGB conversion would not be needed”. That is exactly what I needed when I had dynamically baked terrain textures in 5/6/5 bits (to save memory). I remember that the ATI 9700 would sRGB-convert these, but newer cards suddenly and silently stopped doing the conversion. So I had to convert manually in the pixel shader, which I think is bad because it breaks orthogonality (in the same way as when the shader has to divide short texture coords by 32767). I didn’t even use pow(x, 2.2); I used a simple square, *=, which amounts to gamma 2.0 and is entirely sufficient: users wouldn’t notice a difference visually.