
Thread: loading GL_RGBA16F to glTexImage2D

  1. #1
    Junior Member Regular Contributor
    Join Date
    Apr 2006
    Location
    Kyoto.
    Posts
    129

    loading GL_RGBA16F to glTexImage2D

    I have a texture made up of RGBA half floats; what is the correct way to send it to glTexImage2D?

    I thought:
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_HALF_FLOAT, data);

    but according to the manual
    http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml

    GL_HALF_FLOAT is not listed as a supported type, only GL_FLOAT. Is it only possible to load single-precision float data?

    Thanks,

    James

  2. #2
    Member Regular Contributor Nowhere-01's Avatar
    Join Date
    Feb 2011
    Location
    Novosibirsk
    Posts
    251
    It actually works with GL_FLOAT. Yes, it confused me a bit too, especially because when you read back a half-float texture with glGetTexImage you pass GL_HALF_FLOAT. But it makes sense: with glTexImage2D you already specify the precision with GL_RGBA16F (the sized internalFormat), whereas glGetTexImage takes no internalFormat, only a pixel format and type, so the precision still has to be specified somewhere, and that's where GL_HALF_FLOAT comes in.
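
    Roughly what I mean (untested sketch; floatData and halfData are just placeholder buffers of the right size, and a GL_TEXTURE_2D is assumed to be bound):

    Code :
    // Upload: the sized internalFormat (GL_RGBA16F) fixes the precision stored
    // on the GPU, while format/type (GL_RGBA, GL_FLOAT) describe the client data.
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, floatData);

    // Readback: there is no internalFormat parameter here, so the precision you
    // want back is carried by the type argument instead.
    glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_HALF_FLOAT, halfData);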
    Last edited by Nowhere-01; 07-03-2013 at 12:47 AM.

  3. #3
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,217
    Quote Originally Posted by Nowhere-01 View Post
    It actually works with GL_FLOAT. Yes, it confused me a bit too, especially because when you read back a half-float texture with glGetTexImage you pass GL_HALF_FLOAT. But it makes sense: with glTexImage2D you already specify the precision with GL_RGBA16F (the sized internalFormat), whereas glGetTexImage takes no internalFormat, only a pixel format and type, so the precision still has to be specified somewhere, and that's where GL_HALF_FLOAT comes in.
    This is correct.

    The internalFormat parameter describes the texture as it's stored by the GPU/driver/OpenGL.
    The format and type parameters describe the data that you're sending in the last parameter of your glTexImage call.

    The two do not have to match; your driver will do a conversion if required.
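
    For example (just a sketch; byteData here stands for a w*h*4 array of unsigned bytes):

    Code :
    // Stored by the GPU as 16 bits per channel (internalFormat = GL_RGBA16F),
    // but the client data being uploaded is 8-bit unsigned bytes; the driver
    // converts during the transfer.
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, byteData);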

  4. #4
    Junior Member Regular Contributor
    Join Date
    Apr 2006
    Location
    Kyoto.
    Posts
    129
    Thanks for the answers guys, but I am still confused.

    My pixel data that I have prepared on the CPU is already an array of 16-bit half floats. I thought the last three arguments to glTexImage2D tell OpenGL how to read my data, so I want it read as format: GL_RGBA, type: GL_HALF_FLOAT, data: a pointer to 16-bit half-float data.

    Surely I need to tell OpenGL that the data I have prepared is half float and not single float? What if I were to pass an array of 32-bit float data for OpenGL to convert to GL_RGBA16F instead; how would OpenGL know the difference in the data's size? I'm sorry if this doesn't make sense, maybe some pseudocode will help:

    Code :
    GLhalf data[w*h*4]; // 16-bit half floats (GLhalf is an unsigned 16-bit type)
    //initialise data
    ...
    //upload -- type says GL_FLOAT, but the actual data is half floats
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, data);

    Code :
    float data[w*h*4]; // 32-bit floats
    //initialise data
    ...
    //upload -- same call, but here the data really is 32-bit floats
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, data);

    >Nowhere-01
    How does OpenGL know that GLhalf data[] and float data[] are different sizes?


  5. #5
    Member Regular Contributor
    Join Date
    Jun 2013
    Posts
    498
    Quote Originally Posted by James A. View Post
    My pixel data that I have prepared on the CPU is already an array of 16-bit half floats. I thought the last three arguments to glTexImage2D tell OpenGL how to read my data, so I want it read as format: GL_RGBA, type: GL_HALF_FLOAT, data: a pointer to 16-bit half-float data.

    Surely I need to tell OpenGL that the data I have prepared is half float and not single float?
    Yes; if data points to half-float values, the type parameter should be GL_HALF_FLOAT.

    I believe that its omission from the manual page is an oversight, i.e. it simply wasn't added when GL_ARB_half_float_pixel was promoted to core. The specification implies that GL_HALF_FLOAT is valid for any command which accepts format/type arguments describing client-side data (table 8.2 in the 4.3 core specification).
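
    In other words, your original call should be fine (sketch: this assumes your headers define GLhalf, an unsigned 16-bit type; older headers may call it GLhalfARB, and you still need to fill the array with half-float bit patterns yourself):

    Code :
    GLhalf data[w*h*4]; // raw 16-bit half-float values, 4 channels per pixel
    //initialise data
    ...
    // GL_HALF_FLOAT describes the client data; GL_RGBA16F describes the GPU storage
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_HALF_FLOAT, data);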

  6. #6
    Member Regular Contributor
    Join Date
    Aug 2008
    Posts
    456
    As long as you have support for GL_ARB_half_float_pixel or OpenGL 3.0(ish), it should accept GL_HALF_FLOAT as the type argument.


    What problem are you having? Is glGetError() returning errors, or is it crashing/uploading incorrect data? Does the code work when other data types are used?
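
    If you do want to check, something like this (rough sketch, assuming a 3.0+ context, the usual headers, and the same w/h/data as above):

    Code :
    // Version check -- GL_MAJOR_VERSION itself needs a 3.0+ context; on older
    // contexts check the extension string for GL_ARB_half_float_pixel instead.
    GLint major = 0;
    glGetIntegerv( GL_MAJOR_VERSION, &major );

    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_HALF_FLOAT, data);
    GLenum err = glGetError();
    if( err != GL_NO_ERROR )
    {
        // GL_INVALID_ENUM here would suggest the driver doesn't accept
        // GL_HALF_FLOAT as the type argument.
        printf( "glTexImage2D failed: 0x%x\n", err );
    }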

  7. #7
    Junior Member Regular Contributor
    Join Date
    Apr 2006
    Location
    Kyoto.
    Posts
    129
    Thanks for clearing that up,

    I am not having any problems. I'm sorry if this seems like an odd question; I was just checking whether HALF_FLOAT was the correct way to do it, and since it wasn't listed in the manual I thought I would ask.

  8. #8
    Junior Member Newbie
    Join Date
    Jun 2013
    Posts
    25
    Just be sure to test the code on a variety of GPUs to make sure things are being handled consistently by the various manufacturers. It'll really bite you in the butt if you get a long way in and then find out that something supported by one vendor doesn't work as expected on another. Newer API features are often buggy like this.
