ARB_texture_float dependent on type?

I have an array of pixel values ranging from 0.0 → 4095.0 stored as a float array.
When I upload a texture like this:



glTexImage3D(  GL_TEXTURE_3D, 0, GL_ALPHA16F_ARB, POTwidth, POTheight, POTdepth, 0, GL_ALPHA, GL_FLOAT, pixels );


it all works as I would expect: the texture holds values ranging from 0.0 → 4095.0 and my shader behaves correctly.

However, if I have an array of unsigned short pixels ranging from 0 → 4095 (the same values as stored in the float array) and do this:



glTexImage3D(  GL_TEXTURE_3D, 0, GL_ALPHA16F_ARB, POTwidth, POTheight, POTdepth, 0, GL_ALPHA, GL_UNSIGNED_SHORT, pixels );


the results are different.

I have queried the internal format and it is reporting back the requested format (GL_ALPHA16F_ARB).
I get the feeling that it is somehow clamping the pixels in the short array. I have read through the spec for ARB_texture_float and it doesn’t mention anything about treating the uploaded pixel values differently based upon the defined type.

At the moment I am using the short array and creating a temporary float array to upload each slice individually using glTexSubImage3D, but this seems like a pretty crappy solution.
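
For reference, the per-slice workaround looks roughly like this (shortPixels is just a stand-in name for my unsigned short array, and the GL_ALPHA16F_ARB texture is allocated beforehand):

float *slice = malloc( POTwidth * POTheight * sizeof(float) );
for ( int z = 0; z < POTdepth; ++z )
{
    const unsigned short *src = shortPixels + (size_t)z * POTwidth * POTheight;
    for ( int i = 0; i < POTwidth * POTheight; ++i )
        slice[i] = (float)src[i];               /* keep the raw 0 → 4095 range */

    /* upload one slice into the already-allocated GL_ALPHA16F_ARB texture */
    glTexSubImage3D( GL_TEXTURE_3D, 0, 0, 0, z, POTwidth, POTheight, 1, GL_ALPHA, GL_FLOAT, slice );
}
free( slice );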

I am assuming I am doing something dumb here, but can’t find anything in the spec to point me in the right direction.

Can anybody help me out here? Feel free to yell if I am doing something insanely stupid…

Thanks in advance
/andy

Actually, you should use
glTexImage3D( GL_TEXTURE_3D, 0, GL_ALPHA32F_ARB, POTwidth, POTheight, POTdepth, 0, GL_ALPHA, GL_FLOAT, pixels );

notice the GL_ALPHA32F_ARB internal format, so that no data conversion (32-bit float → 16-bit half float) takes place.

For your second case, take a look at http://www.opengl.org/wiki/GL_EXT_texture_integer

Take a look at the glPixelTransfer discussion in the spec. In the “Convert to float” stage, it references a table explaining the conversion you’re seeing in your second example.

To summarize, byte/short/int types are treated as normalized values between [0…1] when fed as inputs to the pixel stage, so your resulting ALPHA16F texture has taken the 16 bits of integer precision and crushed them into 10 bits of mantissa, with all values between [0…1].
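
Concretely, the “Convert to float” stage divides GL_UNSIGNED_SHORT input by 65535 (2^16 - 1), so:

/* effective conversion applied to each GL_UNSIGNED_SHORT value during pixel transfer */
float converted = (float)value / 65535.0f;     /* 4095 ends up as ~0.0625, not 4095.0 */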

If you look at the ARB_texture_float spec, it says that the final clamp to [0…1] during pixel transfer is skipped for float internal formats. So, one way to get what you’re after in the second example is to use GL_RED_SCALE (etc) to manually denormalize the input.
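
Since you’re uploading GL_ALPHA data, GL_ALPHA_SCALE is the one that matters here. An untested sketch of that approach:

/* undo the 1/65535 normalization applied to GL_UNSIGNED_SHORT input;
   the final [0…1] clamp is skipped for float internal formats, so the
   scaled values survive */
glPixelTransferf( GL_ALPHA_SCALE, 65535.0f );

glTexImage3D( GL_TEXTURE_3D, 0, GL_ALPHA16F_ARB, POTwidth, POTheight, POTdepth, 0, GL_ALPHA, GL_UNSIGNED_SHORT, pixels );

glPixelTransferf( GL_ALPHA_SCALE, 1.0f );      /* restore the default */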

EXT_texture_integer provides new client integer formats (ALPHA_INTEGER etc) that are not treated as normalized inputs. However, you’re not allowed to convert them to float internal formats. And integer internal formats have restrictions: no filtering is allowed, and you need SM4 hardware.
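
If you do go the EXT_texture_integer route on SM4 hardware, the upload would look something like this (untested sketch; the shader then has to sample the texture through a usampler3D rather than a sampler3D):

/* unsigned integer internal format: the shorts are stored as-is, no normalization,
   but only GL_NEAREST filtering is allowed */
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

glTexImage3D( GL_TEXTURE_3D, 0, GL_ALPHA16UI_EXT, POTwidth, POTheight, POTdepth, 0, GL_ALPHA_INTEGER_EXT, GL_UNSIGNED_SHORT, pixels );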

Thanks guys.
Exactly what I was looking for.
/a