
View Full Version : Fences problem



Bobo.Bobo
05-27-2004, 06:12 AM
Hi,
I'm trying to find something about fences, but there is almost nothing out there.
I don't know how they work. I have the NVIDIA VAR demo; it uses 4 buffers and 4 fences. It puts data into one buffer and then calls SetFence with ALL_COMPLETED, but can you tell me how it knows which buffer is actually in use when testing with TestFence? There is no connection between fences and buffers; there is nothing like
SetFenceToBuffer or similar. And a "buffer" here is not a real buffer, it's a pointer into one big buffer divided into four smaller ones. There is nothing like BindFenceToThisMemory. I don't get it.

JanHH
05-27-2004, 08:04 AM
What are fences (in relation to OpenGL)?

Bobo.Bobo
05-27-2004, 08:10 AM
I mean the GL_NV_fence extension.

yooyo
05-27-2004, 08:45 AM
OpenGL is a client/server architecture. When you use VAR/PDR/PBO/VBO, the client actually has access to server memory. OpenGL also has a rendering queue: every GL call puts opcodes into the queue, and the GPU executes them in FIFO order. So when you put data in GPU memory and ask the GPU for an expensive operation, that data will be processed by the GPU, and the CPU should leave it alone until the GPU has finished with it.

Fences are a way to keep the CPU and GPU in sync. For example, if you start an expensive GPU operation (like rendering a bunch of triangles), you can set a fence after that call and later test the fence to check whether the operation has finished, so the memory buffer can be used again. Usually this buffer is in 3D-card memory, and any CPU access while the GPU is still using data from it leads to an app crash or corrupted data.

For example, if you have a big vertex buffer in GPU memory and call glDrawElements, do not touch that buffer until the GPU has finished the glDrawElements. The app has to wait by calling glFinish, or use some other buffer to store the new data. The bad thing about glFinish is that it forces the CPU to wait for the GPU to finish all pending operations in the rendering queue. But time is "expensive", and the CPU could be doing other computations while the GPU processes the request.

NVIDIA introduced fences so an app can check whether some of the operations in the queue have finished, and then reuse the same server-side memory buffers for another operation.

If the app runs out of free server memory buffers, it can call glFinishFenceNV to force the CPU to wait until the GPU has finished the operations on that buffer.

yooyo

jwatte
05-28-2004, 01:38 PM
The connection between fences and buffers is implicit in your command stream. When you finish a fence, you ensure that all operations issued up to the point where the fence was set have completed. This means that any data that was issued and drawn before the fence can now be overwritten. Any data issued AFTER the fence may still be in use by the card, and thus isn't safe.

This is in contrast to regular vertex arrays, where the data is synchronous -- once the function call returns, the GL has done what it needs with the data (say, copied it into a buffer) and you can immediately dispose of it. That causes unnecessary copying or unnecessary synchronization, which is why VAR/fence were invented.

The modern version is ARB_vertex_buffer_object, by the way, and it's implemented by more vendors than vertex array range is, so perhaps you should look at that extension instead? It's defined to let the driver do the synchronization without you worrying about it, and as such is a little friendlier to an application-level programmer. Whereas I, as a systems programmer, felt right at home with VAR/fence :-)

Note that if you double-buffer streaming data in a single large buffer, you only need a single fence, which you can test and immediately re-set each time you reach the middle or end of the buffer.

Bobo.Bobo
05-28-2004, 01:39 PM
I think I got it.
Thanks!