
Thread: 512MB buffer size limitation and GLSL

  1. #1
     Junior Member / Regular Contributor
     Join Date: Jul 2010
     Posts: 132

    512MB buffer size limitation and GLSL

    Hi,

    It seems there is a limitation on buffer memory size in the NVIDIA drivers: although I can successfully allocate more than 512MB for a buffer object, running my shaders always results in GL_INVALID_VALUE if such a large buffer is bound. Simply having the buffer bound is enough to trigger the error, even if the shader that is executed does not write to the memory.

    Say for instance that I allocate a buffer of size=641MB this way:

    Code :
    glBindBufferARB(GL_TEXTURE_BUFFER, bufferid);
     
    glBufferDataARB(GL_TEXTURE_BUFFER, size, NULL, GL_STATIC_DRAW); // this buffer may be resized from time to time
     
    glBindBufferARB(GL_TEXTURE_BUFFER, 0);
    Following which I do:

    Code :
    glBindTexture(GL_TEXTURE_BUFFER, textureId);
    glTexBufferARB(GL_TEXTURE_BUFFER, GL_R32UI, bufferid);
    glBindTexture(GL_TEXTURE_BUFFER, 0);
    I then bind the shader image normally:

    Code :
    glBindImageTexture(1, textureId, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI); // unit 1, level 0, non-layered, read/write access

    and the shader code that I am running afterwards looks like this:

    Code :
    layout(r32ui) coherent uniform uimageBuffer u_Records;
     
    imageStore( u_Records, int(index) , uvec4(0) );
     
    // index is a value going from 0 to 6350*6350-1, so only the first 161,290,000 bytes of the buffer are written

    In fact, even if I don't write any value in the shader (e.g. the shader does nothing), I still get a GL_INVALID_VALUE error once my draw call has been executed (and only then).
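
    For reference, this is roughly how I detect the error (a minimal sketch; the glDrawArrays call is a placeholder for my actual draw call):

    Code :
    // Drain the error queue after the draw, since GL can queue several errors.
    glDrawArrays(GL_TRIANGLES, 0, 3); // placeholder for the real draw call
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        fprintf(stderr, "GL error after draw: 0x%04X\n", err); // prints 0x0501 = GL_INVALID_VALUE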

    Is there a limitation on image buffers with a size greater than 512MB?

    Thanks,
    Fred

  2. #2
     Junior Member / Regular Contributor
     Join Date: Jul 2010
     Posts: 132
    Never mind, I figured it out. glGetIntegerv(GL_MAX_TEXTURE_BUFFER_SIZE, &i); gives me 134,217,728 texels, which at 4 bytes per R32UI texel is exactly 512MB. It is odd that glBufferData does not raise an error, though.
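
    A minimal sketch of that check (note the limit is expressed in texels, not bytes; the 4-byte texel size is specific to GL_R32UI):

    Code :
    GLint maxTexels = 0;
    glGetIntegerv(GL_MAX_TEXTURE_BUFFER_SIZE, &maxTexels); // limit is in texels, not bytes
    // For GL_R32UI each texel is one 32-bit uint, so the byte limit is:
    size_t maxBytes = (size_t)maxTexels * sizeof(GLuint); // 134217728 * 4 = 512MB here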

  3. #3
     Senior Member / OpenGL Pro
     Join Date: Jan 2007
     Posts: 1,215
    You're not using glBufferData, though; you're using glBufferDataARB, which is the old extension version defined in GL_ARB_vertex_buffer_object. OpenGL doesn't actually guarantee that these two functions behave identically on all implementations, and since GL_ARB_vertex_buffer_object was specified before texture buffers existed, you probably shouldn't expect the old extension to have much awareness of texture buffers.

    My recommendation is to stop using the -ARB versions of GL's buffer object functions and switch to the core versions (i.e. without the -ARB suffix) instead.
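
    For example, the setup from your first post using only core entry points might look like this (a sketch reusing your bufferid, textureId and size):

    Code :
    glBindBuffer(GL_TEXTURE_BUFFER, bufferid);
    glBufferData(GL_TEXTURE_BUFFER, size, NULL, GL_STATIC_DRAW); // core version, no -ARB suffix
    glBindBuffer(GL_TEXTURE_BUFFER, 0);
     
    glBindTexture(GL_TEXTURE_BUFFER, textureId);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_R32UI, bufferid); // core since OpenGL 3.1
    glBindTexture(GL_TEXTURE_BUFFER, 0);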

    Another potential reason is that GL_TEXTURE_BUFFER is just a binding point, and a buffer object can be validly bound to multiple binding points. So, for example, you can legitimately create a huge texture buffer but subsequently bind the same buffer object to GL_ARRAY_BUFFER and expect usage of it on that binding point to work. From the behaviour you get, you can infer that GL (or at least your GL implementation) doesn't perform its validation until you actually make a draw call. You'll need to cross-check this with the GL spec to determine whether that's allowed behaviour.
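
    A sketch of what I mean (the same buffer object bound to two different targets; neither bind is an error in itself):

    Code :
    // A buffer object has no intrinsic type; targets are just attachment points.
    glBindBuffer(GL_TEXTURE_BUFFER, bufferid); // used as a texture buffer here...
    glBindBuffer(GL_ARRAY_BUFFER, bufferid);   // ...and as a vertex buffer here
    // Whether a particular use is valid can therefore only be checked when the
    // buffer is actually sourced, which is why the error surfaces at draw time.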
