Interactive Scale & Bias

I have an application that needs to perform interactive scale and bias of a rendered image. I am running on a Radeon 9800 Pro that does not support dependent texture lookups or shaders, and I am not able to upgrade the driver. Currently I use an algorithm that draws the image “scale” number of times and accumulates the result, with each iteration biased by “bias/scale”. This gives the following result:

P = Final Pixel Value
I = Image Pixel Value
P = (I + bias/scale) * scale

This is very fast, but very inflexible. I am considering drawing the image and then using glCopyPixels or glCopyTexSubImage, applying the scale and bias through the pixel transfer options. This would result in cleaner code and a more flexible algorithm. My current algorithm ties up the blending functionality, so blending is off limits while that code runs; the copy commands would remove that restriction.
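In case it helps frame the question, the copy-based version I have in mind is roughly this (untested; “width” and “height” are just my image dimensions):

glPixelTransferf(GL_RED_SCALE,   scale);
glPixelTransferf(GL_GREEN_SCALE, scale);
glPixelTransferf(GL_BLUE_SCALE,  scale);
glPixelTransferf(GL_RED_BIAS,    bias);
glPixelTransferf(GL_GREEN_BIAS,  bias);
glPixelTransferf(GL_BLUE_BIAS,   bias);

glRasterPos2i(0, 0);                         // copy the rendered image over itself;
glCopyPixels(0, 0, width, height, GL_COLOR); // scale and bias are applied during the transfer

glPixelTransferf(GL_RED_SCALE,   1.0f);      // restore the defaults afterwards
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE,  1.0f);
glPixelTransferf(GL_RED_BIAS,    0.0f);
glPixelTransferf(GL_GREEN_BIAS,  0.0f);
glPixelTransferf(GL_BLUE_BIAS,   0.0f);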

Has anybody had success with glCopy* for performing interactive scale and bias? Any other ideas I’ve overlooked?

“a Radeon 9800 Pro that does not support dependent texture lookups or shaders”? What weird driver are you using?

It supports the ATI shaders, but not GLSL. I have a prototype for this that uses shaders, but I would like to avoid the “assembler” code I would need to write for the ATI shaders. I am also considering using the register combiners to scale and bias the image. I hadn’t thought of it until last night, but it seems like a pretty good option. I already use them to scale and bias gradient information for volume rendering, so a simple level and width should be pretty easy.

There are hacks to use the regular Catalyst drivers on laptops, in case that is the reason you can’t upgrade. Look for “dhmodtool” or similar.

And when in doubt, scoot on over here:
http://www.delphi3d.net/hardware/viewreport.php?report=1573

I think I have settled on using the register combiners for the task if the image fits in texture memory, and glCopyPixels otherwise.

Thanks for the Delphi link. That will come in very handy with our other platforms.

My plan fell apart this afternoon when I realized I get an “invalid value” error if I try to use non-integer values in the following call:

glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE_ARB, 2.0f); // WORKS
glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE_ARB, 1.1f); // INVALID VALUE

Anybody know why this is occurring? I do not see anything in the Blue Book about the valid range for the scale value.

Originally posted by jtipton:
[b] My plan fell apart this afternoon when I realized I get an “invalid value” error if I try to use non-integer values in the following call:

glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE_ARB, 2.0f); // WORKS
glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE_ARB, 1.1f); // INVALID VALUE

Anybody know why this is occurring? I do not see anything in the Blue Book about the valid range for the scale value. [/b]
It seems that there is some language missing from the core spec (this may be worth reporting to the editor). If you read the original extension spec, you’ll see that the valid values are 1.0, 2.0, and 4.0. The idea is that this selects a fixed hardware scale factor that just does a shift.

GL_COMBINE should be flexible enough for what you want, though. Use one texture unit with MODULATE and the scale factor loaded into GL_TEXTURE_ENV_COLOR (and the other knobs and switches set appropriately). The second texture unit is set to ADD with the bias loaded into the colour. You will need to watch out for clamping, though (the x2 and x4 functions are handy here to work in a smaller range and scale up at the end, although of course you lose precision).
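A rough, untested sketch of that setup, with “scale” and “bias” standing in for your values (alpha settings and the clamping workarounds left out); each unit still needs a texture enabled and bound for its stage to run:

// Unit 0: MODULATE the texture by the scale factor in the constant colour
glActiveTextureARB(GL_TEXTURE0_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_CONSTANT_ARB);
float scaleColour[] = { scale, scale, scale, 1 };
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, scaleColour);

// Unit 1: ADD the bias from this unit's constant colour
// (for a negative bias, switch to GL_SUBTRACT_ARB and load -bias)
glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_CONSTANT_ARB);
float biasColour[] = { bias, bias, bias, 1 };
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, biasColour);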

The GL spec states that only 1, 2, and 4 are allowed.

Thanks for the help. I am going to try the multi-texture approach next. I was trying to perform both the scale and bias in the same combiner to avoid clamping after the scale was applied.

Alright, here is stumbling block #2: the constant color is clamped when it is set, so I can’t scale the input by a value greater than 1.

I have an input image loaded into texture memory. Here is my code that does not work:

glClientActiveTextureARB(GL_TEXTURE0_ARB);
glClientActiveTextureARB(GL_TEXTURE1_ARB);
        
glActiveTextureARB(GL_TEXTURE0_ARB);   // Unit 0: add or subtract bias/scale to the texture value
glEnable(GL_TEXTURE_3D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_CONSTANT_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PREVIOUS_ARB);
if(bias < 0)
{
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_SUBTRACT_ARB);
    float c2[] = { -bias/scale, -bias/scale, -bias/scale, 0 } ;
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c2);
}
else
{
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD);
    float c2[] = { bias/scale, bias/scale, bias/scale, 0 } ;
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c2);
}
texture->bind();
        
glActiveTextureARB(GL_TEXTURE1_ARB);   // Unit 1: multiply the previous result by scale
glEnable(GL_TEXTURE_3D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_CONSTANT_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PREVIOUS_ARB);
float c1[] = { scale, scale, scale, 0 } ;
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c1); // Is clamping occurring here?
texture->bind();
        
glBegin(GL_QUADS);
{
    glTexCoord3i(0, 0, 0); glVertex2i(0, 0);
    glTexCoord3i(1, 0, 0); glVertex2i(1, 0);
    glTexCoord3i(1, 1, 0); glVertex2i(1, 1);
    glTexCoord3i(0, 1, 0); glVertex2i(0, 1);
}
glEnd();

Here is my current algorithm, which does work, using the framebuffer to accumulate the image “scale” number of times:

glClientActiveTextureARB(GL_TEXTURE0_ARB);
        
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_3D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_CONSTANT_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PREVIOUS_ARB);
if(bias < 0)
{
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_SUBTRACT_ARB);
    float c2[] = { -bias/scale, -bias/scale, -bias/scale, 0 } ;
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c2);
}
else
{
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD);
    float c2[] = { bias/scale, bias/scale, bias/scale, 0 } ;
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c2);
}
texture->bind();
 
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE);
glBlendEquationEXT(GL_FUNC_ADD_EXT);
      
for(; scale > 0; --scale)
{
    glColor4f(0, 0, 0, scale);

    glBegin(GL_QUADS);
    {
        glTexCoord3i(0, 0, 0); glVertex2i(0, 0);
        glTexCoord3i(1, 0, 0); glVertex2i(1, 0);
        glTexCoord3i(1, 1, 0); glVertex2i(1, 1);
        glTexCoord3i(0, 1, 0); glVertex2i(0, 1);
    }
    glEnd();   // glEnd must pair with glBegin inside the loop
}

The part I am trying to get rid of is the use of the blending operations and the loop.

If scale > 1, you can get more range by dividing it by 4 and setting GL_RGB_SCALE to 4.0. With your current code you can do that on each stage to handle scales up to 16.
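For example, the GL_TEXTURE1_ARB stage from your first listing could be changed to something like this (untested sketch):

glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_CONSTANT_ARB);
float c1[] = { scale / 4.0f, scale / 4.0f, scale / 4.0f, 1 }; // stays within [0,1] for scale <= 4
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c1);
glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE_ARB, 4.0f);            // stage result multiplied back up by 4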

If you’re having clamping problems due to using two stages, have a look at GL_ATI_texture_env_combine3 , assuming your dinosaur of a driver supports it.

In the even more unlikely case that your driver supports GL_ARB_color_buffer_float, you can use it to disable colour clamping entirely.
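If it is there, turning the clamps off is a one-liner per stage (a sketch, assuming the extension and its entry point are available):

// Requires GL_ARB_color_buffer_float; values also need a floating-point
// colour buffer to survive in the framebuffer.
glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB, GL_FALSE);
glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);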

I have the ATI extension. It looks promising, since I can do both the scale and bias in the same stage. It will still suffer from the same clamping issue when I set the constant color, though.

Allowed:

float color[] = {0.5, 0.5, 0.5, 1};
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, color);

Allowed:

float color[] = {1, 1, 1, 1};
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, color);

Not Allowed:

float color[] = {2, 2, 2, 1};
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, color);
// color is clamped to (1, 1, 1, 1)
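For reference, the single-stage combine3 setup I am looking at is roughly this (untested; since the stage only has one constant, I am assuming the bias goes in the primary color, and both values are still clamped to [0,1]):

// GL_MODULATE_ADD_ATI computes Arg0 * Arg2 + Arg1 in a single stage
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE_ADD_ATI);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);           // Arg0 = image
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB, GL_CONSTANT_ARB);      // Arg2 = scale
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PRIMARY_COLOR_ARB); // Arg1 = bias
float scaleColor[] = { scale, scale, scale, 1 }; // clamped at 1.0 -- the same problem
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, scaleColor);
glColor4f(bias, bias, bias, 1);                  // also clamped to [0,1]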

I am also currently developing on a laptop with an X1600. The dhmodtool is really great; it makes it possible to use all the drivers that are usually released only for desktop PCs. I highly recommend upgrading your driver using that tool.

Not only does it make programming easier, it is also a great advantage if you want to play games on your laptop. Just do it; it is not complicated and you won’t regret it.

Jan.

This needs to run on systems that are already out in the field. Because these are medical devices, there is an endless list of validation tests and deployment issues involved in upgrading a driver on a system. This is one of those times when the realities of doing business conflict with the joys of engineering.