Alright, I've written some code to convert 32-bit floats to 16-bit floats; the C type I use for those 16-bit floats is unsigned short int (2 bytes). I just write the correct binary representation into it, as described in the GL_ARB_texture_float spec.
So I build an array tab[256][256] of unsigned short int and then I create my texture like this:
glGenTextures(1, &atan_texture);
glBindTexture(GL_TEXTURE_2D, atan_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA16F_ARB, 256, 256, 0, GL_ALPHA, GL_FLOAT, tab);
But the program crashes on the glTexImage2D call…
Also, when I replace GL_FLOAT with GL_UNSIGNED_SHORT the texture is created fine, but I need OpenGL to know that this texture contains float data.
What am I doing wrong here?
Can fragment shaders access 32-bit float textures? (I read somewhere that only vertex shaders can handle 32-bit float textures, and that fragment shaders can only handle 16-bit floats.)
Cheers, Jeff.