Thread: Precision loss using GL_RGBA32F texture.


  1. #1
    RickA (Junior Member, NL)

    Precision loss using GL_RGBA32F texture.

    Hello,

    I am working on a two-pass algorithm in which I write into a GL_RGBA32F texture at the end of the first pass. I have noticed that when reading from that texture, both on the GPU during the second pass and during a test read-back to the CPU, the 12 least-significant bits of each value are set to 0.

    For example, if I output 0.509 (= 0.509000... = 0x3F024DD3) to all of the channels:

    Code :
    out vec4 fragColor;
    void main() {
      // write the same constant into all four channels of the FP32 render target
      fragColor = vec4(0.509);
    }

    and then read this back on the CPU:

    Code :
    // the 1024x1024 GL_RGBA32F texture is bound to GL_TEXTURE_2D at this point
    float* bufferFloat = new float[1024 * 1024 * 4];
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, bufferFloat);
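
    For reference, the render target is meant to be set up roughly like the sketch below; the identifiers are placeholders rather than our actual framework code:

    Code :
    GLuint tex = 0, fbo = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // GL_RGBA32F as the sized internal format requests full 32-bit floats per channel
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1024, 1024, 0, GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    // the first pass renders into this color attachment
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);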

    I get value 0.50878906 (= 0x3F024000). The last 12 bits have been set to 0!
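
    (The hex patterns above come from reinterpreting the float bits; the helper isn't part of the real code, but it is something along these lines:)

    Code :
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    // print a float together with its raw IEEE 754 bit pattern
    void printBits(float f) {
      uint32_t bits;
      memcpy(&bits, &f, sizeof(bits));
      printf("%f = 0x%08X\n", f, (unsigned)bits);
    }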

    This all happens using regular OpenGL (no specific profile) on Windows 7, on a system with two NVIDIA Quadro 5000 cards running driver version 326.19.

    Is there some precision setting that I've forgotten to set that could cause this, or is something else wrong?

    Kind regards,
    Rick Appleton

  2. #2
    RickA (Junior Member, NL)

    While preparing a sample showcasing the problem, I found the issue. The framework we are using silently created a GL_RGBA16F texture instead of a GL_RGBA32F texture. That explains the precision loss: a 16-bit half float has only a 10-bit mantissa, so the low-order mantissa bits are discarded and come back as zeros once the value is expanded to a 32-bit float on read-back.
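
    A quick way to catch this kind of silent substitution is to ask GL which internal format was actually allocated, for example (assuming tex is the texture object in question):

    Code :
    GLint fmt = 0;
    glBindTexture(GL_TEXTURE_2D, tex);
    // query the internal format the driver actually stored for mip level 0
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
    if (fmt == GL_RGBA16F) {
      printf("half-float texture (GL_RGBA16F) - only a 10-bit mantissa per channel\n");
    } else if (fmt == GL_RGBA32F) {
      printf("full 32-bit float texture (GL_RGBA32F)\n");
    }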
