But this does not load the values into the texture; all the values in the texture are 0. How do we create a texture that will hold unsigned integer values?
http://www.opengl.org/sdk/docs/man/
Check out glTexImage2D.
The internal format should be GL_RGBA16, and if the graphics card supports it, the pixels you supply are uploaded directly.
If you use GL_RGBA, the pixels you supply will probably be converted to GL_RGBA8.
To know whether the GPU supports it, you would need to check the documents from NVIDIA and ATI. Other companies don't publish this info, AFAIK.
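For reference, a minimal sketch of that kind of upload (width, height and pixels are placeholder names for your own data, not anything from the original post):

/* Upload 16-bit-per-channel RGBA data with an explicit sized internal format. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16,   /* sized internal format        */
             width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT,    /* layout/type of the data you pass */
             pixels);                       /* pointer to GLushort RGBA data    */

Note that GL_RGBA16 is still a normalized fixed-point format, so the shader sees the values scaled into [0, 1] rather than raw integers.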
I work on an NVIDIA GeForce 8800 under Linux. It does support the EXT_texture_integer and EXT_gpu_shader4 extensions, but I still cannot get the texture to load unsigned integer values. I tried GL_RGBA16 as the internal format, but it gave me weird results: some values were loaded into the texture, in a seemingly random manner, and all the other values were still zero.
I feel there should be a format like GL_RGBA32 to make this work. Any clues?
Check gluErrorString to see if you have a problem in your texture declaration. However, all values on the GPU are float values unless you work with EXT_texture_integer (internal format = GL_RGBA32UI_EXT). You can also explicitly avoid clamping for textures using the ARB_color_buffer_float extension.
So try the “right” internal format and check gluErrorString if it doesn’t work!
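As a rough sketch of that (assuming the EXT_texture_integer tokens from glext.h; width, height and pixels are placeholder names for your GLuint RGBA data), the declaration would look something like this:

#include <GL/gl.h>
#include <GL/glext.h>   /* GL_RGBA32UI_EXT, GL_RGBA_INTEGER_EXT */
#include <GL/glu.h>     /* gluErrorString */
#include <stdio.h>

static GLuint create_uint_texture(GLsizei width, GLsizei height, const GLuint *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Integer textures cannot be filtered; use nearest sampling. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGBA32UI_EXT,                       /* unsigned integer internal format */
                 width, height, 0,
                 GL_RGBA_INTEGER_EXT, GL_UNSIGNED_INT,  /* integer pixel format and type    */
                 pixels);                               /* pointer to GLuint RGBA data      */
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        printf("glTexImage2D: %s\n", (const char *) gluErrorString(err));
    return tex;
}

In the shader you would then declare the sampler as a usampler2D (EXT_gpu_shader4) rather than a regular sampler2D. Also note that passing GL_RGBA instead of GL_RGBA_INTEGER_EXT as the pixel format together with an integer internal format generates GL_INVALID_OPERATION, which is a common reason the texture stays full of zeros.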