Hi, I'm trying to determine the time between the moment I call glQueryCounter on the CPU and the moment it is actually executed on the GPU. The reference pages say that when you use glBeginQuery with GL_TIME_ELAPSED, the time counter is set to zero. Is that exactly when glBeginQuery is called on the CPU, or is it set to zero once the call is executed on the GPU? If the latter is correct, then using glBeginQuery and glEndQuery will return the same result as taking the difference between two glQueryCounter timestamps, which also means I can't measure the time between the CPU call and the GPU execution. If that's the case, is there any other way of measuring that delay?
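To make the comparison concrete, here is a minimal sketch of the two approaches I mean. It assumes a current OpenGL 3.3+ context, a `queries` array created with glGenQueries, and a hypothetical `drawScene()` workload; it won't run on its own:

// Approach 1: elapsed-time query. If the timer is reset when the GPU
// *executes* the begin (rather than when the CPU issues it)...
glBeginQuery(GL_TIME_ELAPSED, queries[0]);
drawScene();                              // hypothetical workload
glEndQuery(GL_TIME_ELAPSED);

// Approach 2: two timestamps, each written when the GPU reaches it.
glQueryCounter(queries[1], GL_TIMESTAMP);
drawScene();
glQueryCounter(queries[2], GL_TIMESTAMP);

// Later, once the results are available:
GLuint64 elapsed, t0, t1;
glGetQueryObjectui64v(queries[0], GL_QUERY_RESULT, &elapsed);
glGetQueryObjectui64v(queries[1], GL_QUERY_RESULT, &t0);
glGetQueryObjectui64v(queries[2], GL_QUERY_RESULT, &t1);
// ...then elapsed should be approximately t1 - t0, and neither
// measurement says anything about when the CPU issued the calls.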
Normally, an OpenGL application would look like this:
CPU: A------B---------------
GPU: ------A---------------B
A is glBeginQuery, B is glEndQuery.
The CPU calls A, then issues a bunch of other OpenGL calls, and finally calls B. Meanwhile, the GPU is still processing OpenGL calls issued before A, and the calls between A and B take longer to execute on the GPU than they took to issue on the CPU. This means CPU time and GPU time are unsynchronized. However, GL_TIMESTAMP and GL_TIME_ELAPSED can be used to measure the time between A and B on the GPU. So right now I can measure the CPU time between A and B with std::chrono::high_resolution_clock and the GPU time with a query, but how can I get the time from the CPU A to the GPU A?
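For reference, the closest thing I've found is that glGetInteger64v with GL_TIMESTAMP reads the GL clock synchronously on the CPU, while glQueryCounter records the same clock when the GPU reaches the command. A sketch of how those two readings might be compared (assuming a current context and a query object `q` from glGenQueries; not standalone code):

GLint64 cpuTime;
glGetInteger64v(GL_TIMESTAMP, &cpuTime);  // GL clock "now", read on the CPU
glQueryCounter(q, GL_TIMESTAMP);          // same clock, written at GPU execution

// Later, once GL_QUERY_RESULT_AVAILABLE reports true:
GLuint64 gpuTime;
glGetQueryObjectui64v(q, GL_QUERY_RESULT, &gpuTime);
GLint64 delay = (GLint64)gpuTime - cpuTime;  // ns between issue and execution

Is this a valid way to measure the CPU-to-GPU delay, or does glGetInteger64v(GL_TIMESTAMP) not correspond to the moment the call is issued?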