bindless graphics + glMapBufferRange



MalcolmB
11-16-2011, 04:47 PM
I'm sure the answer is in the details of the specs, but I'm still not 100% sure of this even after reading through them.

What is the behavior of Nvidia's bindless graphics when used with glMapBufferRange and INVALIDATE_BUFFER_BIT?

I assume that if the buffer is currently in use, INVALIDATE_BUFFER_BIT will cause a new backing store to be allocated, with the old one becoming owned by the driver (and freed once the GPU is finished with it). However, shouldn't this make the buffer non-resident, like a call to BufferData()? If I'm caching the residency state of the buffer, do I need to check whether its GPU_ADDRESS changed after a call to MapBufferRange, so I know when I'm working with a new memory area (one that is not yet resident)?
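A minimal sketch of the address check I have in mind (assuming a loader like GLEW has resolved the NV_shader_buffer_load entry points; `vbo` and `size` are placeholder names, and the buffer has already been made resident):

```c
GLuint64EXT before = 0, after = 0;

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &before);

void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
/* ... write fresh data through ptr ... */
glUnmapBuffer(GL_ARRAY_BUFFER);

glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &after);
if (before != after) {
    /* storage moved: cached address (and residency state) must be refreshed */
}
```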

If I'm using INVALIDATE_BUFFER_BIT, do I need to worry about any synchronization issues from mapping the buffer while it's in use by bindless rendering? INVALIDATE_BUFFER_BIT seems to imply there shouldn't be any, but the spec doesn't specifically talk about this case.

Thoughts?

Alfonse Reinheart
11-16-2011, 05:22 PM
What is the behavior of Nvidia's bindless graphics when used with glMapBufferRange and INVALIDATE_BUFFER_BIT?

It's not explicitly stated. However, the NV_shader_buffer_load spec does say, "A buffer is also made non-resident implicitly as a result of being respecified via BufferData or being deleted." Invalidation can effectively be considered the equivalent of calling glBufferData, so I wouldn't expect the GPU address to be valid anymore.

There's an easy way to test this, though: create a buffer object, make it resident, map it with INVALIDATE, then check whether it's still resident with `IsBufferResidentNV`.
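Something like this sketch (assuming a GLEW-style loader and a current GL context; `size` is a placeholder):

```c
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);
glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);

void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
(void)ptr;  /* we only care about the map's side effect here */
glUnmapBuffer(GL_ARRAY_BUFFER);

if (glIsBufferResidentNV(GL_ARRAY_BUFFER)) {
    /* residency survived the invalidating map */
} else {
    /* invalidation behaved like BufferData and dropped residency */
}
```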

In general though, I would suggest explicitly making the buffer non-resident before mapping with invalidate (or before mapping at all, given the synchronization caveats that come up when mapping a resident buffer).
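The conservative pattern I mean, as a sketch (same assumptions as the test above): drop residency, map and fill, then restore residency and re-query the address.

```c
GLuint64EXT addr = 0;

glBindBuffer(GL_ARRAY_BUFFER, buf);
glMakeBufferNonResidentNV(GL_ARRAY_BUFFER);

void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
/* ... fill ptr ... */
glUnmapBuffer(GL_ARRAY_BUFFER);

glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
/* the address may have changed, so re-query it */
glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &addr);
```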

Dark Photon
11-17-2011, 05:41 AM
Do the test, but in my experience invalidating a buffer doesn't make it non-resident. For streaming VBO uploads, I make the buffer resident once and then just use it (fill, orphan, fill, orphan, ...).
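A sketch of that streaming pattern (placeholder names; assumes the NV bindless entry points are loaded and <string.h> is included for memcpy):

```c
GLuint64EXT vboAddr = 0;

/* once, at setup: allocate, make resident, cache the GPU address */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, VBO_SIZE, NULL, GL_STREAM_DRAW);
glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &vboAddr);

/* every frame: orphan via an invalidating map, fill, draw from vboAddr */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, VBO_SIZE,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, verts, bytes);  /* this frame's vertex data */
glUnmapBuffer(GL_ARRAY_BUFFER);
```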

Residency for a buffer object (AFAIK) refers to the main buffer behind the object, the one the GPU will actually use for rendering, not the app-side copies used for transfer that you rotate among by orphaning.

That makes sense, as the whole reason you're making the buffer resident is to get a GPU address so you can point the GPU directly at that main buffer.

However, I'm not a driver dev so I can't tell you for sure.

MalcolmB
11-17-2011, 01:25 PM
The buffer doesn't become non-resident after an invalidating map. However, the GPU_ADDRESS I query isn't changing either, which is perhaps correct?

I will say that the performance increase I'm getting is pretty significant: 2x-3x in the test cases I've tried so far.