Float textures, framebuffers, GLSL: color clamps to 1

Hi,
I am using GLEW 1.7.0, GLSL 1.20, and FBOs with floating-point textures to do general-purpose GPU computing. My algorithm works perfectly on NVIDIA hardware (GeForce GT 220) but not on AMD (ATI Radeon HD 5800).

I made sure GL_ARB_texture_float is supported and that all GLEW function pointers resolved. I tried many floating-point texture formats (luminance and RGBA, in 32-bit and 16-bit half-float variants); they all clamp to 1.0 on the AMD hardware.

I added these calls hoping to disable clamping, but it didn't help:


glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB, GL_FALSE);
glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);
glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);

I create my texture this way:


glGenTextures(1, &m_gluintTextureID);
glBindTexture(GL_TEXTURE_2D, m_gluintTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, m_uiTextureSizeWidth, m_uiTextureSizeHeight, 0, GL_LUMINANCE, GL_FLOAT, NULL );

Then I create the framebuffer and attach the texture:


glGenFramebuffersEXT(1, &m_uiFramebufferID);
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, m_uiFramebufferID );
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, pTexture->getID(), 0);

Then I check the status, which reports success (GL_FRAMEBUFFER_COMPLETE_EXT):


GLenum statusEXT = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);

Finally, I unbind the FBO:


glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );

When it comes to drawing, I do this:


glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, m_uiFramebufferID );
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, pTexture->getID(), 0);
// Bind the GLSL shader calling cwc::glShader::begin
// Activate source texture and draw a fullscreen quad
// Unbind everything such as cwc::glShader::end and
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );

A GLSL fragment shader used to test the floating-point texture could look like this (I use gl_FragData instead of gl_FragColor because I often have multiple color attachments):


//	FRAGMENT SHADER
#version 120

void main (void)  
{
    gl_FragData[0] = vec4(1.5, 0.0, 0.0, 0.0);
}

Then I read back the result (before unbinding the FBO):


GLfloat* pkfDebug = new GLfloat[iColSize * iRowSize];
glReadBuffer( GL_COLOR_ATTACHMENT0_EXT );
glReadPixels(0, 0, iRowSize, iColSize, GL_LUMINANCE, GL_FLOAT, pkfDebug);

On NVIDIA I read back 1.5; on AMD I get 1.0.

I use the latest Catalyst drivers: 11.10 for Windows 7 64-bit. Any hints on how to keep the floating-point textures from clamping would make me very happy.

Thanks

Laurent

You don’t need this:


glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB, GL_FALSE);
glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);
glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);

On NVIDIA I read back 1.5; on AMD I get 1.0.

That would be a driver bug. Have you tried another format such as GL_RGBA32F_ARB?
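
For reference, allocating the texture as RGBA32F is a one-line change (a sketch; width and height stand in for your texture dimensions):


glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0, GL_RGBA, GL_FLOAT, NULL);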

Completely blind shot here but


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

I’m pretty sure GL_CLAMP isn’t supposed to be used as a WRAP parameter any more; GL_CLAMP_TO_BORDER or GL_CLAMP_TO_EDGE should be used instead. Maybe that’s making the driver go wild.
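
That is, the wrap setup from your first post would become:


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);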

EDIT:
Actually, it’s probably this:

format
Specifies the format of the pixel data. The following symbolic values are accepted: GL_STENCIL_INDEX, GL_DEPTH_COMPONENT, GL_DEPTH_STENCIL, GL_RED, GL_GREEN, GL_BLUE, GL_RGB, GL_BGR, GL_RGBA, and GL_BGRA.

In your call to


GLfloat* pkfDebug = new GLfloat[iColSize * iRowSize];
glReadBuffer( GL_COLOR_ATTACHMENT0_EXT );
glReadPixels(0, 0, iRowSize, iColSize, GL_LUMINANCE, GL_FLOAT, pkfDebug);

GL_LUMINANCE is probably messing it up. Try GL_RED instead.
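
That is, keeping your variable names:


glReadPixels(0, 0, iRowSize, iColSize, GL_RED, GL_FLOAT, pkfDebug);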

Thanks a lot for the help!

Using GL_CLAMP_TO_EDGE did the trick. I left GL_LUMINANCE in glReadPixels.

I’m surprised that made it work, since the documentation states GL_CLAMP is valid: OpenGL glTexParameter documentation

It’s the same for glReadPixels; the documentation there states GL_LUMINANCE is valid: OpenGL glReadPixels documentation

Where did you see that GL_CLAMP should be replaced with GL_CLAMP_TO_EDGE?

I’m not exactly sure, which probably means I read it in the GL 3.3 specification.

Notice that the man page you linked covers, if I’m not mistaken, earlier GL versions. You didn’t explicitly say which GL context version you were using (just the GLSL version), so I assume you may have just picked the highest one available to you.

If you check the 3.3 man pages:
http://www.opengl.org/sdk/docs/man3/

for glTexParameter, you’ll see that GL_CLAMP is no longer listed as a valid token: http://www.opengl.org/sdk/docs/man3/xhtml/glTexParameter.xml

There are apparently other tokens that changed from the man version you posted, like the compare mode.

Btw, since you are using 3.3 and GLEW, you should already have the entry points defined without the EXT and ARB suffixes, so you can just use glBindFramebuffer, GL_COLOR_ATTACHMENT0, etc. Although they should act the same, I’m pretty sure a driver could mess up and hide a bug behind one set of entry points but not the other (I observed a similar effect with earlier AMD drivers).
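
For example, the FBO setup from the first post would look like this with core entry points (a sketch reusing your names):


glGenFramebuffers(1, &m_uiFramebufferID);
glBindFramebuffer(GL_FRAMEBUFFER, m_uiFramebufferID);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, pTexture->getID(), 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER); // expect GL_FRAMEBUFFER_COMPLETE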

I got too excited this morning…

Using GL_LUMINANCE would work only with RGBA32F textures (and is not the best way to read back the 128-bit data anyway).

To keep the GL_LUMINANCE32F_ARB textures from clamping, I tested every parameter change: using GL_RED in glReadPixels was the solution.

I also removed the glClampColorARB calls.

Thanks, I will use up to date documentation now.

Alright, one more thing. I am pretty sure (as in, as sure as I was before with the other tokens) that the correct token to use in glTexImage2D is not GL_LUMINANCE32F_ARB but GL_R32F, at least if you are using a 3.3+ core context.

From what I understand, LUMINANCE32F_ARB got “promoted” to R32F.

It’s very possible that drivers will mess up with the older token too.

In other words, if you are on core 3.3, use R32F for texture storage allocation and GL_RED + GL_FLOAT when reading the pixels. That should shield you from further token-hell madness.
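
A sketch of that combination (width and height are placeholders for your texture dimensions):


// Allocate a single-channel 32-bit float texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, NULL);

// ... render to it, then read back one float per pixel
GLfloat* pixels = new GLfloat[width * height];
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, pixels);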

From what I understand, LUMINANCE32F_ARB got “promoted” to R32F.

No, it did not. LUMINANCE and RED don’t mean the same thing. LUMINANCE was dropped in 3.1 because it was weirdly specified. LUMINANCE wasn’t a color-renderable format anyway, while RED is. And if you really want, you can emulate LUMINANCE with RED via texture swizzling.
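
For instance, this swizzle makes a RED texture sample as (r, r, r, 1), LUMINANCE-style (a sketch; needs GL 3.3 or ARB_texture_swizzle):


GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);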

This is a nice tutorial you got there! I shall spend some time reading it.

Learning Modern 3D Graphics Programming Through OpenGL

I still have trouble reading unclamped data from GL_RGBA32F textures. I am now using OpenGL 3.3 tokens.

Right after writing to the FBO I call:


glReadBuffer( GL_COLOR_ATTACHMENT0 );
glReadPixels(0, 0, uiPixelWidth, uiPixelHeight, GL_RGBA, GL_FLOAT, m_pkfPixels );

The data comes back clamped to [0, 1].

The only way I managed to read back the unclamped data is through a PBO:


//  Copy data from the FBO into the PBO (stays on the GPU)
glBindBuffer( GL_PIXEL_PACK_BUFFER, gluiPBOId );
glReadPixels(0, 0, uiPixelWidth, uiPixelHeight, GL_RGBA, GL_FLOAT, NULL);
glBindBuffer( GL_PIXEL_PACK_BUFFER, 0 );


// And then map the PBO and copy the data to CPU memory
glBindBuffer( GL_PIXEL_PACK_BUFFER, gluiPBOId );
GLfloat* ptr = (GLfloat*)glMapBuffer( GL_PIXEL_PACK_BUFFER, GL_READ_ONLY );

memcpy( m_pkfPixels, ptr, uiPixelWidth * uiPixelHeight * 4 * sizeof(GLfloat) );

glUnmapBuffer( GL_PIXEL_PACK_BUFFER );
glBindBuffer( GL_PIXEL_PACK_BUFFER, 0 );

Isn’t glReadPixels supposed to be able to read unclamped float data?

Sounds like a driver bug. If the only difference is that you use a buffer object in one case, then clearly something is going wrong in the driver. However, if you were rendering to a GL_R32F framebuffer, consider reading with GL_RED rather than GL_RGBA.
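
For a single-channel R32F attachment that would be (a sketch; note the destination buffer holds one float per pixel, not four):


GLfloat* pixels = new GLfloat[uiPixelWidth * uiPixelHeight];
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, uiPixelWidth, uiPixelHeight, GL_RED, GL_FLOAT, pixels);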