Thread: Blending bug with uintBitsToFloat

  1. #1
    Junior Member (Newbie)
    Join Date: Apr 2012
    Posts: 6

    Blending bug with uintBitsToFloat

    I'm running into a situation where I'm packing a specific bit pattern into a channel of a 32-bit float texture. When I retrieve this value in another shader, I find that the bit pattern has changed. I've narrowed the condition down to whether or not blending is enabled. My problem is that I need blending enabled, but I also need my bit pattern preserved. If my destination draw buffer contains 0, my source texel contains my bit pattern, and I've set glBlendFunc(GL_ONE, GL_ZERO), I would expect glBlendEquation(GL_FUNC_ADD) to preserve my bit pattern, but this is not what I am seeing.
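
    For reference, here is a minimal sketch of the pack/unpack round trip and the "no-op" blend state described above. It is not taken from the attached blend_bug.cpp; the shader version, the texture format, and the 0xDEADBEEFu test pattern are assumptions for illustration only.

    Code :
    #include <GL/glew.h>

    // Fragment shader that writes an arbitrary bit pattern into the red channel
    // of a float color attachment (a GL_RGBA32F texture is assumed here).
    static const char* writeFS = R"(
    #version 330 core
    out vec4 fragColor;
    void main() {
        uint pattern = 0xDEADBEEFu;  // arbitrary example pattern, not the one from the post
        fragColor = vec4(uintBitsToFloat(pattern), 0.0, 0.0, 1.0);
    }
    )";

    // Fragment shader that fetches the texel back and checks whether the bits
    // survived: green if intact, red otherwise.
    static const char* checkFS = R"(
    #version 330 core
    uniform sampler2D tex;
    out vec4 fragColor;
    void main() {
        uint bits = floatBitsToUint(texelFetch(tex, ivec2(gl_FragCoord.xy), 0).r);
        fragColor = (bits == 0xDEADBEEFu) ? vec4(0.0, 1.0, 0.0, 1.0)
                                          : vec4(1.0, 0.0, 0.0, 1.0);
    }
    )";

    // The blend state from the post: source * 1 + destination * 0 under
    // GL_FUNC_ADD, which in exact arithmetic should leave the source bits untouched.
    static void enableNoOpBlending()
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ZERO);
        glBlendEquation(GL_FUNC_ADD);
    }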

    Below is a minimal program that reproduces the problem (source files and shaders):

    blend_bug.cpp

    Compile the program like this:

    Code :
    g++ blend_bug.cpp -std=c++11 -g -Wall -O0 -lGL -lSDL -lGLEW -lGLU -o blend_bug

    And run it like this:

    To get correct behavior (green screen):
    Code :
    ./blend_bug

    To get incorrect behavior (red screen):
    Code :
    ./blend_bug 1

    I've tried to make this easy to reproduce. Any help is greatly appreciated!

    System config:
    Linux x86-64
    Geforce GTX 680
    NVIDIA Driver 352.30 (latest)
    OpenGL 4.5
    Last edited by amoffat; 08-19-2015 at 02:17 PM. Reason: updating system config
