Part of the Khronos Group
OpenGL.org


Thread: Problem with UBO and glBindBufferBase

  1. #1
    Junior Member Regular Contributor Kopelrativ's Avatar
    Join Date
    Apr 2011
    Posts
    214

    Problem with UBO and glBindBufferBase

    I have a problem with when or how to use glBindBufferBase. I have set up a UBO as follows:
    Code :
        // Create the buffer and allocate storage, but don't fill it yet
        glGenBuffers(1, &fUBOBuffer);
        glBindBuffer(GL_UNIFORM_BUFFER, fUBOBuffer);
        glBufferData(GL_UNIFORM_BUFFER, sizeof(Data), NULL, GL_STREAM_DRAW);
        glBindBuffer(GL_UNIFORM_BUFFER, 0);
        // Associate the shader's "GlobalData" block with binding point 0
        fUBOidx = glGetUniformBlockIndex(fProgram, "GlobalData");
        glUniformBlockBinding(fProgram, fUBOidx, 0);
     
        // Attach the buffer to binding point 0
        glBindBufferBase(GL_UNIFORM_BUFFER, 0, fUBOBuffer);
    fUBOidx will get a valid value (neither 0 nor GL_INVALID_INDEX), and there is no error reported by OpenGL. Then I transfer data (projection matrix and view matrix) once every frame. I only have one UBO, and use slot 0 for it.
    Code :
        glBindBuffer(GL_UNIFORM_BUFFER, fUBOBuffer);
        glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Data), &data);
        glBindBuffer(GL_UNIFORM_BUFFER, 0);
    The problem is that the data isn't always available to the shader. If I play around, additionally calling glBindBufferBase() from "other places", it seems to improve the situation. But I don't think that should be needed? Am I missing something stupid?

    Does it depend on the currently bound program? I can't find any documentation saying that it does.

    Is anything needed to be fulfilled before the call, except having a value for the UBO?

    Does the UBO have to be bound?

    The application was working fine when I used traditional uniforms.

  2. #2
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,181
    This sounds a lot like the problem I noted here: http://www.opengl.org/discussion_boa...35#post1240435

    I haven't gotten round to checking things out any further with that one yet, unfortunately.

  3. #3
    Junior Member Regular Contributor Kopelrativ's Avatar
    Join Date
    Apr 2011
    Posts
    214
    The problem seems to happen with an AMD Radeon card. It is the same for Linux and Windows. Driver version is 8.96, and OpenGL version is 4.2.11631.

    I tested my application with an NVIDIA card, and then it is fine to simply call glBindBufferBase once, directly after the creation of the UBO.

  4. #4
    Junior Member Regular Contributor Kopelrativ's Avatar
    Join Date
    Apr 2011
    Posts
    214
    I found a similar problem here: http://stackoverflow.com/questions/9...niform-buffers

    Funnily enough, that problem also occurs when changing the camera view, which is the same trigger as mine. I don't know what the camera has to do with it (except, of course, that the view matrix will be updated). I reduced the UBO to only one integer, and it is intermittently set to 0 while the expected value is a constant 4 (I never set it to anything but 4). The 0 happens when changing the camera view, but it stabilizes at 4 again immediately afterwards (in the next frame). I suppose this is a side effect of something else in the application, but I can't see why the UBO would stop working.

  5. #5
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,181
    What I have determined is that calling glBindBufferBase before drawing anything that uses the UBO "fixes" the problem. The "glBindBuffer (GL_UNIFORM_BUFFER, 0);" call after updating is irrelevant; the behaviour is the same whether or not that call is made.

    A further, possibly related, issue is that using glMapBufferRange to update the UBO is unreliable; sometimes it fails to get the mapping (with the very same flags and options as used for other buffer object types, which do work). This is separate from the main issue, as the glBindBufferBase call remains necessary even when updating with glBufferData or glBufferSubData. The rest of the code I was testing was essentially copied and pasted from the ArcSynthesis example (with some variables renamed), so I'm reasonably confident that it's not an issue in my own implementation.

    Likewise an AMD card here; same behaviour on both Catalyst 12.3 and 12.6.

  6. #6
    Junior Member Regular Contributor Kopelrativ's Avatar
    Join Date
    Apr 2011
    Posts
    214
    As I have a repeatable problem that depends on a change of the view matrix, I was able to trace it to the root cause (through arduous trial and error). When the view changes, my application allocates and frees some OpenGL resources dynamically depending on what is needed. In this process, there was a call to glDeleteBuffers() for buffer 0. If I add a conditional statement so that glDeleteBuffers is not called for buffer 0, the problem goes away.

    According to the documentation, buffer name 0 should be silently ignored. My guess is that there is a bug in the AMD drivers.

    Regarding performance, my shader execution time went from 28 ms to 24 ms when the view matrix and projection matrix were defined in the UBO instead of as old-style uniforms!

    I am extremely happy right now :-), and the use of a UBO made the source code much better.

  7. #7
    Junior Member Regular Contributor Kopelrativ's Avatar
    Join Date
    Apr 2011
    Posts
    214
    On NVIDIA, the execution time for this shader improved from 39 ms to 24 ms! Too good to be true?

    I have an idea why. In the setup process for each frame, I enable one shader program at a time and transfer its uniforms. That means there are a couple of "unnecessary" calls to glUseProgram. Could that state change be what costs the time?
