
Thread: Shader storage buffer bug in Catalyst 13.4


  1. #1
    Intern Contributor
    Join Date
    Jul 2003
    Location
    Faro, Portugal
    Posts
    90


    Hello guys,

    AMD added support for OpenGL 4.3 in their latest 13.4 driver, but it seems there is a bug in their implementation of shader storage buffer objects. I have a shader that increments an atomic counter and uses it as an index to write into a shader storage buffer that holds point data. The buffer is big enough to hold 100,000 points, but the shader is only writing the first 1696 points.

    I can see this because, after running the shader, I map the atomic counter and the point buffer to look at their contents. The point buffer is only filled in the first 1696 positions, although the atomic counter was incremented beyond that value. It seems the shader is assuming that the array has only 1696 elements; this is confirmed by the bokehPoints.length() value that I'm writing into the buffer, which returns precisely 1696. The extension specification says that we can query the size of the attached shader storage buffer with glGetInteger64i_v(GL_SHADER_STORAGE_BUFFER_SIZE, 1, &value), which in my case returns the expected value of 3,200,000, i.e. sizeof(Point) * 100000, and not sizeof(Point) * 1696.

    So, to sum up: the shader is assuming that the array is smaller than it actually is. Note that I have no problems running this on NVIDIA cards, so I'd like to know if anyone has experienced the same problem with AMD, or whether it's me who is doing something wrong.

    This is how the buffer is created:

    Code :
    glGenBuffers(1, &bokehPointsBufferID);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, bokehPointsBufferID);
    glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(Point) * 100000, NULL, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);

    These are the rendering commands:

    Code :
    //Reset the value of the atomic counter and bind its buffer to binding point 0.
    GLuint resetValue = 0;
    glNamedBufferSubDataEXT(depthOfField.getBokehPointsAtomicCounter(), 0, sizeof(GLuint), &resetValue);
    glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, depthOfField.getBokehPointsAtomicCounter());
     
    //Bind the points buffer.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, bokehPointsBufferID);
     
    //Draw full screen quad.
    //NOTE: I know glBegin/glEnd is evil.
    glBegin(GL_QUADS);
    glVertex3f(0, 0, 0);
    glVertex3f(1, 0, 0);
    glVertex3f(1, 1, 0);
    glVertex3f(0, 1, 0);
    glEnd();
     
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT | GL_ATOMIC_COUNTER_BARRIER_BIT);

    Declaration of the atomic counter and shader buffer object in the shader:

    Code :
    struct BokehPoint
    {
        vec4    position;
        vec4    colorAndRadius;
    };
     
    layout (binding = 0, offset = 0) uniform atomic_uint bokehPointCounter;
     
    layout (packed, binding = 1) buffer BokehPointBuffer
    {
        writeonly BokehPoint bokehPoints[];
    };
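
    A side note on the declaration above: with the packed layout, member offsets and the array stride are implementation-dependent, so a stride disagreement between driver and application could plausibly affect what bokehPoints.length() reports. A variant worth trying (just a sketch, untested on this driver) is std430, where the layout is fully specified:

    Code :
    // Same interface block with std430: the array stride is exactly
    // 32 bytes (two vec4s), as mandated by the spec.
    layout (std430, binding = 1) buffer BokehPointBuffer
    {
        writeonly BokehPoint bokehPoints[];
    };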

    And this is how I write to the buffer in the shader:

    Code :
    uint index = atomicCounterIncrement(bokehPointCounter);
    bokehPoints[index].position = vec4(texCoord, 0.0, float(bokehPoints.length()));
    bokehPoints[index].colorAndRadius = vec4(color * scale, radius);

  2. #2
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    What happens if you use binding 0 rather than binding 1 for your storage buffer?

  3. #3
    Intern Contributor
    Join Date
    Jul 2003
    Location
    Faro, Portugal
    Posts
    90
    Quote Originally Posted by Alfonse Reinheart View Post
    What happens if you use binding 0 rather than binding 1 for your storage buffer?
    The problem remains.

  4. #4
    Intern Contributor
    Join Date
    Jul 2003
    Location
    Faro, Portugal
    Posts
    90
    So, anybody got any more ideas? I'm really stuck on this.

  5. #5
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    Wait for AMD to fix their drivers. Or rewrite your code to not use SSBOs.
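
    If you go the second route, the usual substitute is image load/store on a buffer texture. A rough sketch of the shader side (hypothetical names; it assumes the host creates a GL_RGBA32F buffer texture over the same buffer object with glTexBuffer and binds it with glBindImageTexture):

    Code :
    // Two rgba32f texels per point instead of the SSBO struct.
    layout (binding = 0, offset = 0) uniform atomic_uint bokehPointCounter;
    layout (rgba32f, binding = 1) writeonly uniform imageBuffer bokehPointImage;
     
    void storeBokehPoint(vec4 position, vec4 colorAndRadius)
    {
        uint index = atomicCounterIncrement(bokehPointCounter);
        imageStore(bokehPointImage, int(index) * 2 + 0, position);
        imageStore(bokehPointImage, int(index) * 2 + 1, colorAndRadius);
    }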

  6. #6
    Intern Contributor
    Join Date
    Jul 2003
    Location
    Faro, Portugal
    Posts
    90
    Oh well, it doesn't seem like I have a choice.
    I've tested their new beta driver, version 13.6, and it still has the same issue, so there's no hope for the near future. Let's hope they fix it in a later release.

    Thanks for the help.
