Loading GL_RGBA16F textures with glTexImage2D

I have a texture made up of RGBA half floats. What is the correct way to send it to glTexImage2D?

I thought:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_HALF_FLOAT, data);

but according to the manual (http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml),
GL_HALF_FLOAT is not listed as a supported type, only GL_FLOAT. Is it only possible to load single-precision float data?

Thanks,

James

It actually works with GL_FLOAT. Yes, it confused me a bit too, especially because when you need to read back a half-float texture you pass GL_HALF_FLOAT. But it makes sense: with glTexImage2D you already specify the precision with GL_RGBA16F (the sized internalFormat), whereas with glGetTexImage you don't pass an internalFormat, only the pixel format and type, so you still need to specify the precision somewhere, and that's what GL_HALF_FLOAT is for.
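For illustration, a minimal readback sketch of what I mean; assume tex is an already-uploaded GL_RGBA16F texture of size w x h (those names are just placeholders here):

// Sketch: read a GL_RGBA16F texture back as raw half floats.
// GLhalf is the 16-bit type from the GL headers; the GL_HALF_FLOAT type token
// tells GL how to fill the client-side buffer.
GLhalf pixels[w * h * 4];
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_HALF_FLOAT, pixels);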

This is correct.

The internalFormat parameter describes the texture as it’s stored by the GPU/driver/OpenGL.
The format and type parameters describe the data that you’re sending in the last parameter of your glTexImage call.

These do not necessarily have to match; your driver will do a conversion if required.
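For example, a rough sketch of that distinction (w, h and the array name are placeholders): the format/type pair describes the client array, the sized internalFormat describes the GPU-side storage, and the driver converts between them during the upload.

// Client data: 32-bit floats per channel, described by GL_RGBA + GL_FLOAT.
float data32[w * h * 4];
/* fill data32 */
// GPU storage: 16-bit floats per channel (GL_RGBA16F); the driver converts
// each 32-bit value down to 16 bits as it uploads.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, data32);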

Thanks for the answers, guys, but I am still confused.

The pixel data I have prepared on the CPU is already an array of 16-bit half floats. I thought the last three arguments of glTexImage2D tell OpenGL how to read my data, so I want it read as format: GL_RGBA, type: GL_HALF_FLOAT, data: a pointer to 16-bit half-float data.

Surely I need to tell OpenGL that the data I have prepared is half float and not single float? What if I were to pass an array of 32-bit float data for OpenGL to convert to GL_RGBA16F instead; how would OpenGL know the difference in the data's size? I'm sorry if this doesn't make sense; maybe some pseudo-code will help:


GLhalf data[w*h*4]; // 16-bit half floats (GLhalf is the 16-bit type from the GL headers)
//initialise data
...
//upload
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, data);


float data[w*h*4]; // 32-bit floats
//initialise data
...
//upload
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, data);

>Nowhere-01
How does OpenGL know that GLhalf data[] and float data[] are different sizes?

[QUOTE=James A.;1252352]The pixel data I have prepared on the CPU is already an array of 16-bit half floats. I thought the last three arguments of glTexImage2D tell OpenGL how to read my data, so I want it read as format: GL_RGBA, type: GL_HALF_FLOAT, data: a pointer to 16-bit half-float data.

Surely I need to tell OpenGL that the data I have prepared is half float and not single float?[/QUOTE]
Yes; if data points to half-float values, the type parameter should be GL_HALF_FLOAT.
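To put your two pseudo-code cases side by side (a sketch only; the array names are placeholders), OpenGL never inspects the pointer itself, it trusts the type token you pass:

// Client data is 16-bit half floats: say so with GL_HALF_FLOAT,
// and the upload is essentially a straight copy into the RGBA16F texture.
GLhalf half_data[w * h * 4];
/* fill half_data */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_HALF_FLOAT, half_data);

// Client data is 32-bit floats: say so with GL_FLOAT,
// and the driver converts each value down to the 16-bit internal format.
float float_data[w * h * 4];
/* fill float_data */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, float_data);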

I believe that its omission from the manual page is an oversight, i.e. it simply wasn't added when GL_ARB_half_float_pixel was promoted to core. The specification implies that GL_HALF_FLOAT is valid for any command which accepts format/type arguments describing client-side data (table 8.2 in the 4.3 core specification).

As long as you have support for GL_ARB_half_float_pixel or OpenGL 3.0(ish), it should accept GL_HALF_FLOAT as the type argument.

What problem are you having? Is glGetError() returning errors, or is it crashing/uploading incorrect data? Does the code work when other data types are used?
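If you do want to sanity-check a particular driver, one minimal pattern (just a sketch) is to drain the error queue around the call:

// Sketch: check whether this driver accepts GL_HALF_FLOAT for the upload.
while (glGetError() != GL_NO_ERROR) { /* clear any stale errors */ }
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_HALF_FLOAT, data);
GLenum err = glGetError();
if (err == GL_INVALID_ENUM) {
    // GL_HALF_FLOAT wasn't accepted as the type argument by this driver/context.
}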

Thanks for clearing that up,

I am not having any problems. I'm sorry if this seems like an odd question, but I was just checking whether GL_HALF_FLOAT was the correct way to do it, and since it wasn't listed on the manual page I thought I would ask.

Just be sure to test the code on a variety of GPUs to make sure things are handled consistently by the various manufacturers. It'll really bite you in the butt if you go too far and then find out that something which is supported by one doesn't work as expected on another. Newer API features are often buggy like this.