OpenGL.org

Thread: Being told when glBufferData is effectively complete?

  1. #1
    fred_em — Junior Member Regular Contributor (joined Jul 2010, 130 posts)

    Being told when glBufferData is effectively complete?

    Hi,

    Is there a way to be notified that a buffer transfer is complete with OpenGL?
    As part of a simple test that I am doing this weekend, I have the following code:

    for (int n = 0; n < 50; n++)
    {
        glDrawElements(lots_of_stuff);  // this draw call reads from buffer B[n]
        glBufferData("B[n]", newData);  // then upload new data into B[n]
    }

    I am updating the buffer after (i.e. not before) glDrawElements, so that the glDrawElements call above, which depends on the buffer, doesn't stall the GPU. It seems to work, but I wonder whether the transfer sometimes takes long enough that a later glDrawElements has to wait for the buffer contents to arrive.

    So I could play with two buffers and switch over when the buffer contents have been uploaded to GPU memory. But I need to be notified of this. Is this possible?
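    The two-buffer idea could be sketched like this (my illustration only, not tested code; `buf`, `count`, `size`, and `newData` are assumed names): draw from one buffer while uploading into the other, then swap roles each iteration.

```c
/* Ping-pong between two buffer objects: draw from one while the
 * driver is free to transfer new data into the other. */
GLuint buf[2];   /* assume both already created with glGenBuffers */
int cur = 0;

for (int frame = 0; frame < 50; frame++)
{
    /* Draw from the current buffer. */
    glBindBuffer(GL_ARRAY_BUFFER, buf[cur]);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0);

    /* Upload the next frame's data into the *other* buffer, which
     * no pending draw call is reading from. */
    glBindBuffer(GL_ARRAY_BUFFER, buf[1 - cur]);
    glBufferData(GL_ARRAY_BUFFER, size, newData, GL_DYNAMIC_DRAW);

    cur = 1 - cur;   /* swap roles for the next iteration */
}
```

    This avoids needing an explicit completion notification: the upload target is never the buffer the in-flight draw depends on.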

    Thanks,
    Fred

  2. #2
    Aleksandar — Senior Member OpenGL Pro (joined Jul 2009, 1,136 posts)
    The buffer transfer does not happen when you issue the command, but when the driver decides to transfer the data (i.e. when the buffer is actually needed). That is part of the driver's optimization, but it can sometimes be problematic, since you have no control over when the data is transferred. Flush/Finish don't affect this behavior.

  3. #3
    fred_em — Junior Member Regular Contributor (joined Jul 2010, 130 posts)
    Quote Originally Posted by Aleksandar View Post
    Buffer data does not happen when you issue the command, but when the driver decides to transfer the data (when the buffer is actually needed)
    Strictly speaking, I don't believe this is the case; otherwise things would be very unoptimized. Consider the following scenario:

    glBufferData(large_buffer);
    glDrawElements(something_very_large);  // this call does NOT use 'large_buffer'
    glDrawElements(something);             // this call DOES use 'large_buffer' (through shaders)

    If the driver decides to transfer 'large_buffer' only after glDrawElements(something_very_large) has been processed (note I'm saying 'processed', not 'issued'), that is a bit silly, since the GPU's DMA engine could handle the transfer while the GPU is busy rendering.

    I think GL_DYNAMIC_DRAW might actually hint the driver to double-buffer internally, since I have noticed that DYNAMIC_DRAW buffers use more memory than STATIC_DRAW ones.

    Otherwise, isn't it possible to do what I want with sync objects / fences? I am not familiar with them.

    In a background thread, I would do something like this (made-up function names):

    sync = glCreateSync(bufferId);
    glBufferData(bufferId, ...);
    glBlockUntilBufferUploadComplete(sync);
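    For reference, the real ARB_sync API has no per-buffer fence like the hypothetical calls above: a fence is inserted into the command stream and signals once all commands issued before it have completed. A sketch of how that looks (note this fences command completion generally, not the buffer upload specifically):

```c
/* Issue the upload (and any commands that consume the buffer)... */
glBufferData(GL_ARRAY_BUFFER, size, newData, GL_DYNAMIC_DRAW);

/* ...then insert a fence after the commands we care about. */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

/* Later (possibly from another thread sharing the context), wait on it. */
GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                            1000000000 /* 1 s timeout, in nanoseconds */);
if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED) {
    /* Everything issued before the fence has completed on the GPU. */
}
glDeleteSync(fence);
```

    Since the driver may defer the actual host-to-GPU transfer, a signaled fence only guarantees the fenced commands finished, which is usually what you actually need before reusing or orphaning a buffer.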

  4. #4
    Aleksandar — Senior Member OpenGL Pro (joined Jul 2009, 1,136 posts)
    You can believe it or not; it is a question of faith. But programming is not a religion: you can check everything.
    Measure the graphics memory allocation size before and just after "storing" data into a buffer. If you are using NV hardware, I can guess fairly confidently what will happen. Maybe DYNAMIC_DRAW would change something in the allocation policy, but that should also be tried.

    Regarding sync objects, please read the ARB_sync spec and look up "nvidia quadro dual copy engines" to see how they work.

  5. #5
    Dan Bartlett — Member Regular Contributor (joined Aug 2008, 445 posts)
    Quote Originally Posted by Aleksandar View Post
    Buffer data does not happen when you issue the command, but when the driver decides to transfer the data (when the buffer is actually needed). That is the part of the optimization, but sometimes can be problematic since you don't have any control over transferring data. Flush/Finish don't affect this behavior.
    You mean the driver transferring data to the GPU? Because OpenGL guarantees that the data specified by glBufferData will be copied before glBufferData returns control, so you can modify it straight away.

    Quote Originally Posted by fred_em View Post
    I am updating the buffer after (eg. not before) glDrawElements, so that the glDrawElements call, above, which is dependent on the buffer, doesn't stall the GPU. It seems to be OK but somehow I wonder if sometimes the transfer doesn't take so freaking long that glDrawElements has to wait for the buffer contents to be there.

    So I could play with two buffers and switch over when the buffer contents have been uploaded to GPU memory. But I need to be notified of this. Is this possible?
    You could orphan the buffer before loading new data into it, but if you're keeping the buffer the same length and reloading the full data each time with glBufferData, it's possible the driver is doing this anyway, so you might not gain anything. If you are using glBufferSubData to reload the data, orphaning is more likely to provide a boost. Orphaning a buffer lets OpenGL know that you're not going to use the existing data any further, so it can carry on using the old data internally while letting you modify a different internal buffer at the same time. See http://www.opengl.org/wiki/Buffer_Ob...-specification :


    Code :
    glBufferData(target, SameSizeAsBefore, NULL, GL_STREAM_DRAW);     // orphan
    glBufferData(target, SameSizeAsBefore, newData, GL_STREAM_DRAW);  // or: glBufferSubData(target, 0, SameSizeAsBefore, newData)

    What in particular are you trying to achieve by looping and modifying the same buffer?
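    Putting those two lines together, a per-frame streaming update with orphaning typically looks something like this (a sketch under assumed names for the buffer and size; not code from the thread):

```c
/* Stream new vertex data into `vbo` each frame, orphaning the old
 * storage so the GPU can keep reading the previous frame's data
 * while the application writes the new data. */
void update_stream_vbo(GLuint vbo, const void *newData, GLsizeiptr size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* Orphan: same size, NULL data pointer. The driver may detach the
     * old storage (still in use by pending draws) and give us fresh
     * storage to fill, avoiding a sync stall. */
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);

    /* Refill the (possibly new) storage. */
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, newData);
}
```

    Keeping the size identical across frames matters here: it gives the driver the best chance of recycling storage from its internal pool instead of reallocating.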

  6. #6
    Aleksandar — Senior Member OpenGL Pro (joined Jul 2009, 1,136 posts)
    Quote Originally Posted by Dan Bartlett View Post
    You mean the driver transferring data to the GPU? Because OpenGL guarantees that the data specified by glBufferData will be copied before glBufferData returns control, so you can modify it straight away.
    Yes, exactly.
    There are two steps in sending data to graphics memory:
    1. Copying from the application's memory space to the driver's space (main memory).
    2. Transferring from the driver's space (main memory) to graphics memory.

    The first step is finished before glBufferData returns. It is also guaranteed that the second is finished before the buffer is actually used, but it certainly does not happen immediately after the glBufferData call.

    Also, I don't think what fred_em is doing is actually orphaning, according to the pseudo-code he provided.

  7. #7
    fred_em — Junior Member Regular Contributor (joined Jul 2010, 130 posts)
    Code A)

    Code :
    void *new_data = malloc(new_data_size);
    writeDataInto(new_data);
    glBufferData(target, new_data_size, new_data, usage);

    Code B)

    Code :
    glBufferData(target, new_data_size, NULL, usage);
    void *new_data = glMapBufferRange(target, 0, new_data_size,
                                      GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    writeDataInto(new_data);
    glUnmapBuffer(target);

    For me, the difference between A) and B) is that B) gives a new_data pointer in driver memory space, whereas A) internally does a memcpy from application memory to driver memory. That's all. Is that 'orphaning'? Does orphaning mean 'please give me driver memory directly', i.e. orphaning = buffer mapping?

    If I keep the same data size over and over again, the driver can just keep returning the same driver memory pointer (i.e. it doesn't have to realloc). Is this why it's best to keep the same size?

    As a matter of fact, I do not keep the same buffer size between frames. What do you guys recommend?
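    One common way to handle varying per-frame sizes (an editor's sketch, not an answer from the thread; names are assumed) is to allocate the buffer once at the maximum size you expect and refill only the portion actually used, so the allocation itself never changes:

```c
/* One-time setup: allocate at the maximum expected size. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, MAX_SIZE, NULL, GL_DYNAMIC_DRAW);

/* Per frame: orphan at the same (maximum) size, then refill only
 * the bytes this frame actually needs. */
glBufferData(GL_ARRAY_BUFFER, MAX_SIZE, NULL, GL_DYNAMIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, bytesUsedThisFrame, newData);
```

    The draw calls then only reference the first `bytesUsedThisFrame` bytes, so the unused tail of the allocation costs nothing beyond memory.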
