Thread: bindless graphics + glMapBufferRange

  1. #1
    Junior Member Regular Contributor
    Join Date: Aug 2003
    Location: Toronto, Canada
    Posts: 159

    bindless graphics + glMapBufferRange

    I'm sure the answer is in the details of the specs, but I'm still not 100% sure of this after reading through them.

    What is the behavior of Nvidia's bindless graphics when used with glMapBufferRange and INVALIDATE_BUFFER_BIT?

    I assume that if the buffer is currently in use, INVALIDATE_BUFFER_BIT will cause new storage to be allocated and the old storage to become owned by the driver (and freed once it's finished with it). However, shouldn't this make the buffer non-resident, like a call to BufferData()? If I'm caching the residency state of the buffer, do I check whether the GPU_ADDRESS of the buffer changed after a call to MapBufferRange, to know whether I'm now working with a new memory area (which is not yet resident)?
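    Something like this is what I have in mind (untested sketch; the entry points are from GL_NV_shader_buffer_load, and cachedAddr / bufSize / refill_and_recheck are just names I made up for illustration):

    Code:
    #include <GL/glew.h>

    static GLuint64EXT cachedAddr;   /* address cached when the buffer was made resident */

    /* Orphan-and-refill via an invalidating map, then check whether the
       storage (and therefore the GPU address / residency) changed. */
    void refill_and_recheck(GLuint buffer, GLsizeiptr bufSize)
    {
        glBindBuffer(GL_ARRAY_BUFFER, buffer);

        void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, bufSize,
                                     GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        /* ... write the new vertex data into ptr ... */
        glUnmapBuffer(GL_ARRAY_BUFFER);

        GLuint64EXT newAddr = 0;
        glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &newAddr);
        if (newAddr != cachedAddr || !glIsBufferResidentNV(GL_ARRAY_BUFFER)) {
            /* New storage: make it resident again and re-cache its address. */
            glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
            glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &cachedAddr);
        }
    }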

    If I'm using INVALIDATE_BUFFER_BIT, do I need to worry about any synchronization issues from mapping the buffer while it's in use by bindless rendering? INVALIDATE_BUFFER_BIT seems to imply that I shouldn't, but the spec doesn't specifically cover this case.

    Thoughts?

  2. #2
    Senior Member OpenGL Guru
    Join Date: May 2009
    Posts: 4,948

    Re: bindless graphics + glMapBufferRange

    What is the behavior of Nvidia's bindless graphics when used with glMapBufferRange and INVALIDATE_BUFFER_BIT?
    It's not explicitly stated. However, the spec does say, "A buffer is also made non-resident implicitly as a result of being respecified via BufferData or being deleted." Invalidation can effectively be considered the equivalent of calling glBufferData, so I wouldn't expect the GPU address to remain valid.

    There's an easy way to test, though: create a buffer object, make it resident, map it with INVALIDATE_BUFFER_BIT, and then check whether it's still resident with `IsBufferResidentNV`.
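    A rough sketch of that test (assuming a GLEW-style loader and a context that exposes GL_NV_shader_buffer_load):

    Code:
    #include <stdio.h>
    #include <GL/glew.h>

    /* Make a buffer resident, orphan it with an invalidating map, and see
       whether residency and the GPU address survived. */
    void test_invalidate_vs_residency(void)
    {
        GLuint buf;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        glBufferData(GL_ARRAY_BUFFER, 1024, NULL, GL_STREAM_DRAW);
        glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);

        GLuint64EXT before = 0, after = 0;
        glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &before);

        void *p = glMapBufferRange(GL_ARRAY_BUFFER, 0, 1024,
                                   GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        (void)p;   /* a real app would write data here */
        glUnmapBuffer(GL_ARRAY_BUFFER);

        glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &after);
        printf("still resident: %d, address changed: %d\n",
               (int)glIsBufferResidentNV(GL_ARRAY_BUFFER), before != after);

        glDeleteBuffers(1, &buf);
    }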

    In general, though, I would suggest making the buffer non-resident explicitly before mapping with invalidate, or before mapping at all, considering the synchronization caveats that come up when mapping a resident buffer.
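    I.e., something along these lines (again just a sketch with made-up names; assumes the buffer is already bound to GL_ARRAY_BUFFER):

    Code:
    #include <string.h>
    #include <GL/glew.h>

    /* Drop residency before the invalidating map, re-establish it afterwards,
       and hand the (possibly new) GPU address back to the caller to re-cache. */
    GLuint64EXT orphan_fill_nonresident(GLsizeiptr size, const void *data)
    {
        glMakeBufferNonResidentNV(GL_ARRAY_BUFFER);

        void *p = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                                   GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        if (p) {
            memcpy(p, data, (size_t)size);
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }

        glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
        GLuint64EXT addr = 0;
        glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &addr);
        return addr;
    }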

  3. #3
    Senior Member OpenGL Guru Dark Photon
    Join Date: Oct 2004
    Location: Druidia
    Posts: 3,194

    Re: bindless graphics + glMapBufferRange

    Do the test, but from my experience I can tell you that invalidating a buffer doesn't make it non-resident. For streaming VBO uploads, I make the buffer resident once and then just use it (fill, orphan, fill, orphan, ...).
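    Roughly this pattern (just a sketch; the names, sizes, and single position attribute are made up, and note that it keeps reusing the GPU address cached at setup, which matches what I see in practice but isn't something the spec promises, hence "do the test"):

    Code:
    #include <string.h>
    #include <GL/glew.h>

    static GLuint      streamVBO;
    static GLuint64EXT streamAddr;
    enum { STREAM_SIZE = 4 * 1024 * 1024 };

    /* One-time setup: allocate the streaming VBO, make it resident once,
       cache its GPU address, and enable bindless vertex pulls. */
    void stream_setup(void)
    {
        glGenBuffers(1, &streamVBO);
        glBindBuffer(GL_ARRAY_BUFFER, streamVBO);
        glBufferData(GL_ARRAY_BUFFER, STREAM_SIZE, NULL, GL_STREAM_DRAW);
        glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
        glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &streamAddr);

        glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
        glEnableVertexAttribArray(0);
        glVertexAttribFormatNV(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat));
    }

    /* Per batch: orphan with an invalidating map, fill, point the GPU at the
       cached address, draw. */
    void stream_draw(const float *verts, GLsizei vertCount)
    {
        glBindBuffer(GL_ARRAY_BUFFER, streamVBO);
        void *p = glMapBufferRange(GL_ARRAY_BUFFER, 0, STREAM_SIZE,
                                   GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        if (p) {
            memcpy(p, verts, (size_t)vertCount * 3 * sizeof(float));
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }
        glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0, streamAddr, STREAM_SIZE);
        glDrawArrays(GL_TRIANGLES, 0, vertCount);
    }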

    Buffer residency (AFAIK) refers to the main storage behind the buffer object, the storage the GPU will actually use for rendering, not the app-side copies used for transfer that you rotate among by orphaning.

    That makes sense, since the whole reason you make the buffer resident is to get a GPU address so you can point the GPU directly at that main storage.

    However, I'm not a driver dev so I can't tell you for sure.

  4. #4
    Junior Member Regular Contributor
    Join Date: Aug 2003
    Location: Toronto, Canada
    Posts: 159

    Re: bindless graphics + glMapBufferRange

    The buffer doesn't become non-resident after an invalidating map. However, the GPU_ADDRESS I get back isn't changing either, which maybe is correct?

    I will say that the performance increase I'm getting is pretty significant: 2x-3x in the test cases I've tried so far.
