I’m using OpenGL timer queries to benchmark some of the render stages in my program. Each render stage has its own query object and is rendered independently of the other stages.
My code looks something like this:
GLuint queryObjectFirst;
GLuint queryObjectSecond;
glGenQueries(1, &queryObjectFirst);
glGenQueries(1, &queryObjectSecond);

while (true)
{
    glBeginQuery(GL_TIME_ELAPSED, queryObjectFirst);
    // ... render first thing ...
    glEndQuery(GL_TIME_ELAPSED);

    glBeginQuery(GL_TIME_ELAPSED, queryObjectSecond);
    // ... render second thing ...
    glEndQuery(GL_TIME_ELAPSED);

    glfwSwapBuffers();

    GLuint timeDifferenceFirst;
    glGetQueryObjectuiv(queryObjectFirst, GL_QUERY_RESULT, &timeDifferenceFirst);
    GLuint timeDifferenceSecond;
    glGetQueryObjectuiv(queryObjectSecond, GL_QUERY_RESULT, &timeDifferenceSecond);
}
Strangely, some timer results are still not available after the swap-buffers call. Calling glGetQueryObjectuiv with GL_QUERY_RESULT_AVAILABLE often returns GL_FALSE, and I have to spin in a while loop until it becomes GL_TRUE. I even tried calling glFinish() before retrieving the timer values, but it made little difference. The timer values themselves appear to be mostly accurate; I just wish I didn’t have to stall my program in that busy-wait loop.
I’m fairly confident that I’m otherwise using the queries correctly, because I get no warnings when running with a debug context. I’m using an NVIDIA GTX 460 with driver version 320.18.