Using the GL_ARB_imaging Extension

How exactly do you use this extension to increase the brightness/contrast of your OpenGL scene? I’ve tried calling glMatrixMode(GL_COLOR) and then glTranslatef(2.0f, 2.0f, 2.0f), but my scene doesn’t get any brighter. Nothing changes. Am I missing something? Can someone point me to some simple examples using the GL_ARB_imaging extension, or maybe post some here? Thanks.

The color matrix is only applied to pixel transfers, such as glDrawPixels and glTexImage. But you can do this translation with a 1x1 texture whose color value is 0.2, added to the fragment's color in the last texture unit. Look up ARB_texture_env_combine.
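For what it's worth, here is a minimal sketch of that texture_env_combine setup, assuming ARB_multitexture is also available and that biasTex is a hypothetical 1x1 GL_RGB texture whose single texel holds the bias value (e.g. 0.2):

glActiveTextureARB(GL_TEXTURE1_ARB);    /* the last enabled texture unit in this example */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, biasTex);  /* hypothetical 1x1 bias texture */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD);           /* result = Arg0 + Arg1 */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);  /* color from the previous unit */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);       /* the 1x1 bias texel */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
glActiveTextureARB(GL_TEXTURE0_ARB);
/* Since the texture is 1x1, the texture coordinates for this unit don't matter. */

Note that the GL_ADD result is clamped to [0,1], so this can only bias (brighten) the color, not scale it.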

Bob, I think that the color matrix is a per-fragment processing stage available during rendering, not only in image specification. So ReAKtor’s method should work, even though it will be extremely slow, if available at all, on hardware prior to the GeForce FX and Radeon 9500.

ReAKtor, I’ve already replied at comp.graphics.api.opengl (assuming the problem is the same).

I looked in both the Red book and the latest GL spec, and don’t see anything that indicates it operates on anything other than pixel transfers. But of course, I could have missed something.

Hmm, I’ve read the spec again and it seems you’re right: the color matrix stage only takes effect on the pixel transfer operations. That would explain why it isn’t working.

Well then, if color matrix can’t do it, some pixel shading capability can.

Thanks for the replies. Well, now I’m really confused. I’m learning from the OpenGL SuperBible, 2nd Edition; the chapter in question is chapter 15. I’ve just found a piece which seems to confirm what bob says:

“The Imaging Subset operations are part of the pixel transfer portion of the OpenGL rendering pipeline. This means that they don’t apply to the vector drawing primitives; however, you can draw a scene and then use the glReadPixels and glDrawPixels functions to apply the imaging operations to your scene. On systems with hardware acceleration for OpenGL imaging, you can still maintain high display rates.”

Ah. It sounds a bit more complicated now.
So is this not the best way of adding a brightness/contrast control?

It is probably the easiest way, but certainly not the fastest.

If you want to adjust brightness and contrast, it’s actually better to do it on your monitor.
You should set up software brightness and contrast only when your application needs to change them during execution. That is, if they never change, let the user adjust his monitor; if they may change (for instance for “glowing” effects), then use OpenGL for it.

If you can deal with the extra rendering times, here are a few tips:

  • “Brightness” (usually defined as biasing the colors) can be implemented by blending a more or less bright quad over the entire display (framebuffer = framebuffer + quad, where the quad color can be anything between black and white); a sketch follows this list.

  • “Contrast” can be implemented by reading the entire framebuffer into a texture and drawing it back with RGB scaling (needs texture_env_combine). This can be costly (though not too costly on modern cards), and limits the choice of contrast to 1.0x, 2.0x, and 4.0x.
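For example, here is a minimal sketch of the brightness pass from the first bullet, assuming the projection has already been set so that a quad from -1 to +1 covers the whole window, and that blending is enabled only for this pass:

glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);     /* framebuffer = quad_color + framebuffer */
glColor3f(0.2f, 0.2f, 0.2f);     /* example bias: +0.2 per channel */
glBegin(GL_QUADS);
glVertex2f(-1.f, -1.f);
glVertex2f(+1.f, -1.f);
glVertex2f(+1.f, +1.f);
glVertex2f(-1.f, +1.f);
glEnd();
glDisable(GL_BLEND);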

There are probably better solutions (render to texture + fragment program, for instance) if you are allowed to use modern extensions/functions.

As vincoof said, it is best to use these techniques when you want to achieve a special effect during a limited period of time, or you will hurt your overall frame rate too much.

Brightness/contrast control is inherently a “whole framebuffer” operation, not a per-fragment operation (at least not in the general case), so it will always cost you some extra time.


Related to the “whole-quad” technique, contrast does not need to read the buffer back. You can do it with two quads.

To lower contrast, you can render a first quad that scales the picture, then a quad that biases it.

To increase the contrast, render a first quad to bias the picture, and a second quad to scale it.

Originally posted by vincoof:
Related to the “whole-quad” technique, Contrast does not need to read the buffer back. You can do it with two quads

Yes, you are correct. You can use:

glBlendFunc( GL_DST_COLOR, GL_ONE );

and then draw a grey quad, which gives:

dst = dst_col*quad_col + 1*dst_col = (1 + quad_col)*dst_col

…so we have a scaling range of 1.0 to 2.0 in (usually) 256 steps

Haven’t thought about it, actually. May use it myself in a thing I’m doing now.
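For reference, a minimal sketch of that scaling pass, under the same fullscreen-quad assumption as above (the grey level 0.5 is just an example and gives a 1.5x scale):

glEnable(GL_BLEND);
glBlendFunc(GL_DST_COLOR, GL_ONE);   /* dst = quad_col*dst + 1*dst = (1 + quad_col)*dst */
glColor3f(0.5f, 0.5f, 0.5f);
glBegin(GL_QUADS);
glVertex2f(-1.f, -1.f);
glVertex2f(+1.f, -1.f);
glVertex2f(+1.f, +1.f);
glVertex2f(-1.f, +1.f);
glEnd();
glDisable(GL_BLEND);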

Well in fact, when increasing contrast I think you need 4 quads instead of two, and maybe more. That is due to OpenGL’s color clamping to [0,1].

Let’s call ‘contrast_percentage’ the factor of contrast, in percent. 100 means that the picture does not change, 50 means that the contrast decreases, 200 means that the contrast increases.

For decreasing contrast:
GLfloat factor = contrast_percentage / 100.f;
GLfloat scale  = factor;
GLfloat bias   = 0.5f * (1.f - factor);
glBlendFunc(GL_DST_COLOR, GL_ZERO);
glColor3f(scale, scale, scale);
/* render quad */
glBlendFunc(GL_ONE, GL_ONE);
glColor3f(bias, bias, bias);
/* render quad */

The order is very important since color clamping applies after each framebuffer operation. For example, with factor = 0.5 you could in theory bias by 0.5 first and then scale by 0.5, which is the same formula on paper; but a pixel at 0.9 would clamp to 1.0 after the bias and end up at 0.5 instead of the correct 0.9*0.5 + 0.25 = 0.7.

I’ve also coded and tested the ‘increasing contrast’ algorithm. I’ll post it later.

Here is what I’ve written, and as far as I can tell it works:

if ( getContrast() > 100 )
{
    GLfloat contrast = 0.01f * getContrast();
    GLfloat gamma    = 2.f * contrast / (1.f + contrast);
    GLfloat delta    = (1.f + contrast) * 0.5f;

    //**** Scale to gamma.
    glColor3f( 1.f, 1.f, 1.f );
    glBlendFunc( GL_DST_COLOR, GL_ONE );
    while ( gamma > 2.f )
    {
        glBegin( GL_QUADS );
        glVertex2f( -1.f, -1.f );
        glVertex2f( +1.f, -1.f );
        glVertex2f( +1.f, +1.f );
        glVertex2f( -1.f, +1.f );
        glEnd();

        gamma /= 2.f;
    }
    gamma -= 1.f;
    glColor3f( gamma, gamma, gamma );
    glBegin( GL_QUADS );
    glVertex2f( -1.f, -1.f );
    glVertex2f( +1.f, -1.f );
    glVertex2f( +1.f, +1.f );
    glVertex2f( -1.f, +1.f );
    glEnd();

    //**** Invert.
    glColor3f( 1.f, 1.f, 1.f );
    glBlendFunc( GL_ONE_MINUS_DST_COLOR, GL_ZERO );
    glBegin( GL_QUADS );
    glVertex2f( -1.f, -1.f );
    glVertex2f( +1.f, -1.f );
    glVertex2f( +1.f, +1.f );
    glVertex2f( -1.f, +1.f );
    glEnd();

    //**** Scale to delta.
    glColor3f( 1.f, 1.f, 1.f );
    glBlendFunc( GL_DST_COLOR, GL_ONE );
    while ( delta > 2.f )
    {
        glBegin( GL_QUADS );
        glVertex2f( -1.f, -1.f );
        glVertex2f( +1.f, -1.f );
        glVertex2f( +1.f, +1.f );
        glVertex2f( -1.f, +1.f );
        glEnd();

        delta /= 2.f;
    }
    delta -= 1.f;
    glColor3f( delta, delta, delta );
    glBegin( GL_QUADS );
    glVertex2f( -1.f, -1.f );
    glVertex2f( +1.f, -1.f );
    glVertex2f( +1.f, +1.f );
    glVertex2f( -1.f, +1.f );
    glEnd();

    //**** Invert.
    glColor3f( 1.f, 1.f, 1.f );
    glBlendFunc( GL_ONE_MINUS_DST_COLOR, GL_ZERO );
    glBegin( GL_QUADS );
    glVertex2f( -1.f, -1.f );
    glVertex2f( +1.f, -1.f );
    glVertex2f( +1.f, +1.f );
    glVertex2f( -1.f, +1.f );
    glEnd();
}
else if ( getContrast() < 100 )
{
    GLfloat contrast = 0.01f * getContrast();
    GLfloat alpha    = contrast;
    GLfloat beta     = 0.5f * (1.f - contrast);

    //**** Scale to alpha.
    glColor3f( alpha, alpha, alpha );
    glBlendFunc( GL_DST_COLOR, GL_ZERO );
    glBegin( GL_QUADS );
    glVertex2f( -1.f, -1.f );
    glVertex2f( +1.f, -1.f );
    glVertex2f( +1.f, +1.f );
    glVertex2f( -1.f, +1.f );
    glEnd();

    //**** Bias to beta.
    glColor3f( beta, beta, beta );
    glBlendFunc( GL_ONE, GL_ONE );
    glBegin( GL_QUADS );
    glVertex2f( -1.f, -1.f );
    glVertex2f( +1.f, -1.f );
    glVertex2f( +1.f, +1.f );
    glVertex2f( -1.f, +1.f );
    glEnd();

    glColor3f( 1.f, 1.f, 1.f );
}
// else { Do nothing if ( getContrast() == 100 ) }


Very interesting ideas. Well, I decided to carry on playing with the GL_ARB_imaging extension. I wrote code that doesn’t draw polygons but loads a bitmap and displays it on the screen using glDrawPixels. Then I used the color matrix to increase the brightness of the bitmap. It worked, but I was very surprised at just how slow it was. I know you guys said it would be slow, but I didn’t expect it to be that slow, especially on my GeForce3 Ti 200. It updated the screen roughly once every six seconds. Is this normal? Why would anyone use it if it’s so slow? Maybe my code isn’t the fastest. This is my main drawing function:

GLint DrawGLScene(GLvoid)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glRasterPos2i(0, 0);
    //glRasterPos2i(scroll++, 0);

    temp += 0.1f;

    if(GLimage == TRUE)
    {
        glMatrixMode(GL_COLOR);
        glLoadIdentity();
        glPushMatrix();
        //glTranslatef(0.1f, 0.1f, 0.1f);
        glScalef(temp, temp, temp);
        //glScalef(temp, 0.0f, 0.0f);
        glDrawPixels(BitmapInfo->bmiHeader.biWidth,
                     BitmapInfo->bmiHeader.biHeight,
                     GL_BGR, GL_UNSIGNED_BYTE, BitmapBits);
        glPopMatrix();
    }

    return TRUE;
}


Ah. Silly me. I managed to get it to speed up by lowering the screen resolution from 1024x768 to 640x480.

It now updates about once a second, which is much better than what it was doing. It also seems to speed up when using black-and-white images. Lots of colour must slow down the process.

Reaktor,

I believe that the color matrix operations are implemented in software (as are most imaging functions), which means that instead of simply transferring data over the bus, the driver has to do a color * matrix multiplication for every single pixel. That works out to roughly 3 million CPU multiplications for a 640x480 RGB image, probably including int<->float conversions, clamping, etc. (am I right?)

marcus, you’re right about the software implementation of the color matrix.
I’ve read a post from some NVIDIA guy saying that “hardwiring the color matrix would eat too many transistors from now on”. I read that about a year ago.

The software implementation on such resolution is extremely slow. It’s just like pixel shading capabilities in software.

As a side note, the trick described above works on a GeForce2 GTS at ~35 FPS in 1024x768.
