NVIDIA bug in glDeleteSync

I’ve found a nasty problem in NVIDIA’s implementation of ARB_sync.

I have two threads, A and B:

Thread A:
fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();
send fence to B

Thread B:
get fence from thread A
glWaitSync(fence, 0, GL_TIMEOUT_IGNORED);
glDeleteSync(fence);
...
glFinish();

The problem is that glFinish() takes 12 seconds, every single time. A framerate of 0.08 fps is not very good.
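For anyone who wants to reproduce it, here is the same pattern as a slightly fuller C sketch. It assumes two contexts that share objects, one current in each thread; the context setup, the actual rendering, and the handoff helpers send_fence_to_b()/receive_fence_from_a() are hypothetical placeholders for whatever thread-safe queue you use:

/* Thread A (its own shared context is current) */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();                      /* make sure the fence command reaches the GPU */
send_fence_to_b(fence);         /* hypothetical cross-thread handoff */

/* Thread B (its own shared context is current) */
GLsync fence = receive_fence_from_a();     /* hypothetical handoff */
glWaitSync(fence, 0, GL_TIMEOUT_IGNORED);  /* server-side wait; returns immediately */
glDeleteSync(fence);                       /* triggers the problem */
/* ... draw calls that depend on thread A's work ... */
glFinish();                                /* stalls for ~12 s on the affected driver */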

I tried the following, without success:
Thread B:
get fence from thread A
glWaitSync(fence, 0, GL_TIMEOUT_IGNORED);
glFlush();
glDeleteSync(fence);

This fixes the problem:
Thread B:
get fence from thread A
glWaitSync(fence, 0, GL_TIMEOUT_IGNORED);
glFinish();
glDeleteSync(fence);

or
Thread B:
get fence from thread A
glClientWaitSync(fence, ...);   (full call sketched after this list)
glDeleteSync(fence);

or
not calling glDeleteSync(fence) at all, which of course leaks about 280 bytes per sync object
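For reference, the glClientWaitSync variant spelled out with concrete arguments. The zero flags field (thread A already called glFlush, so GL_SYNC_FLUSH_COMMANDS_BIT is not needed) and the one-second timeout are my own choices for illustration:

GLenum r = glClientWaitSync(fence, 0, 1000000000ull); /* block up to 1 s */
if (r == GL_TIMEOUT_EXPIRED || r == GL_WAIT_FAILED) {
    /* handle failure; with the glFlush in thread A this should not happen */
}
glDeleteSync(fence); /* the client wait has completed, so deletion is safe */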

It looks like the glWaitSync is merely enqueued, glDeleteSync destroys the sync object immediately, and the enqueued wait then times out after a platform-dependent timeout of about 12 seconds.
I double-checked the specification of glDeleteSync:

If the fence command corresponding to the specified sync object has
completed, or if no ClientWaitSync or WaitSync commands are blocking
on <sync>, the object is deleted immediately. Otherwise, <sync> is
flagged for deletion and will be deleted when it is no longer
associated with any fence command and is no longer blocking any
ClientWaitSync or WaitSync command. In either case, after returning
from DeleteSync the <sync> name is invalid and can no longer be used
to refer to the sync object.

I think the observed behaviour must be a bug.

Tested with: GeForce GTX 260, Windows XP x64, driver 258.96.

NVIDIA, please take a look at it, thanks.

I agree this is a bug. You can work around the issue by deferring the glDeleteSync() until after you know the wait has completed. In the meantime we’ll investigate a driver fix.
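A minimal sketch of that deferral, assuming a per-thread list that gets polled once per frame; the PendingSync type and both function names are hypothetical. glGetSynciv with GL_SYNC_STATUS is a non-blocking query, and once the fence reports GL_SIGNALED the spec text quoted above says deletion happens immediately:

#include <stdlib.h>

typedef struct PendingSync {
    GLsync sync;
    struct PendingSync *next;
} PendingSync;

static PendingSync *g_pending = NULL;  /* fences not yet safe to delete */

void defer_delete_sync(GLsync sync)    /* call this instead of glDeleteSync */
{
    PendingSync *p = malloc(sizeof *p);
    p->sync = sync;
    p->next = g_pending;
    g_pending = p;
}

void collect_signaled_syncs(void)      /* call once per frame */
{
    PendingSync **pp = &g_pending;
    while (*pp) {
        GLint status = GL_UNSIGNALED;
        glGetSynciv((*pp)->sync, GL_SYNC_STATUS, sizeof status, NULL, &status);
        if (status == GL_SIGNALED) {
            PendingSync *done = *pp;
            glDeleteSync(done->sync);  /* the wait has completed; safe now */
            *pp = done->next;
            free(done);
        } else {
            pp = &(*pp)->next;
        }
    }
}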

Thanks.
