Hi,
I’m a bit confused about texture formats and types. As I understand it, the internal
texture format is independent of the type of the data I supply when uploading the
texture to the GPU. So I would expect it to be no problem to upload a texture as
GL_UNSIGNED_SHORT, apply a shader by rendering into an FBO, and read the color buffer
back into a float array. In practice, however, this only works for me when both types,
the input texture type and the output texture type, are float.
E.g. I want to use single-precision float values in the shader, so I upload the
texture with:
float* inputData = getSomeInputData();
float* outputData = new float[width * height];

// upload the input texture as 32-bit float luminance
glBindTexture(GL_TEXTURE_RECTANGLE_NV, inputTexID);
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV,
             width, height, 0, GL_LUMINANCE, GL_FLOAT, inputData);

// allocate the output texture with the same internal format
glBindTexture(GL_TEXTURE_RECTANGLE_NV, outputTexID);
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV,
             width, height, 0, GL_LUMINANCE, GL_FLOAT, 0);

// attach output texture to the FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_RECTANGLE_NV, outputTexID, 0);

// perform the computation
cgGLBindProgram(fragmentProgram);
cgGLSetTextureParameter(inputParam, inputTexID);
cgGLEnableTextureParameter(inputParam);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glBegin(GL_QUADS);
    glVertex2f(0.0, 0.0);
    glVertex2f(texSize, 0.0);
    glVertex2f(texSize, texSize);
    glVertex2f(0.0, texSize);
glEnd();

// read back results and print them out
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glReadPixels(0, 0, width, height, GL_LUMINANCE, GL_FLOAT, outputData);
I’m testing with a simple passthrough shader:
void passthrough(in float2 coords : WPOS,
                 uniform samplerRECT input,
                 out float output0 : COLOR0)
{
    float value = texRECT(input, coords).r;
    output0 = value;
}
This works fine. But when I change the type of the input data, as in the following
code snippet, the result consists only of zeros:
// changing the input data to unsigned short
unsigned short* inputData = getSomeInputData(); // now returns unsigned short*
float* outputData = new float[width * height];

// upload the input texture, this time from unsigned short data
glBindTexture(GL_TEXTURE_RECTANGLE_NV, inputTexID);
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV,
             width, height, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, inputData);

// allocate the output texture with the same internal format
glBindTexture(GL_TEXTURE_RECTANGLE_NV, outputTexID);
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV,
             width, height, 0, GL_LUMINANCE, GL_FLOAT, 0);

// attach output texture to the FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_RECTANGLE_NV, outputTexID, 0);

// perform the computation
cgGLBindProgram(fragmentProgram);
cgGLSetTextureParameter(inputParam, inputTexID);
cgGLEnableTextureParameter(inputParam);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glBegin(GL_QUADS);
    glVertex2f(0.0, 0.0);
    glVertex2f(texSize, 0.0);
    glVertex2f(texSize, texSize);
    glVertex2f(0.0, texSize);
glEnd();

// read back results and print them out
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glReadPixels(0, 0, width, height, GL_LUMINANCE, GL_FLOAT, outputData);
// printing 'outputData' now always gives 0.0
I do not want to cast inputData to float on the CPU before uploading, because that
takes too much time.
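For reference, the CPU-side conversion I am trying to avoid looks roughly like this
(convertToFloat is just a sketch, not real code from my program; whether the shorts
would additionally need to be normalized by 65535 is part of what I'm unsure about):

```cpp
#include <cstddef>
#include <vector>

// CPU-side fallback: copy every unsigned short into a float buffer
// so it can be uploaded with GL_FLOAT. This is an extra pass over
// width*height elements plus a second buffer, which I'd rather have
// the driver handle during the glTexImage2D upload.
// Here the raw values are kept as-is (no normalization by 65535).
std::vector<float> convertToFloat(const unsigned short* src, std::size_t count)
{
    std::vector<float> dst(count);
    for (std::size_t i = 0; i < count; ++i)
        dst[i] = static_cast<float>(src[i]);
    return dst;
}
```

For my texture sizes this extra pass and the second buffer are exactly the overhead
I'd like to skip.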
My question is whether this kind of type conversion is allowed at all, and if not,
how it can be done properly.
I’d appreciate any hints.
greetings
florg