Clear OpenGL command queue

Is there any way to erase all the pending commands sent to OpenGL?

For instance, I am rendering a high definition scene (0.5 FPS or lower), but when the application gets a mouse event it switches to low definition (25 FPS or higher). I want to cancel the rendering of the current frame so I do not have to wait until it ends, removing all the pending primitives from the queue.

I tried to render the scene in small parts, checking for user interaction after each part, but a glFinish is needed after each part, and that decreases performance dramatically.

Any idea?

Why do you need to call glFinish after each part? By calling glFinish, you stop and wait for OpenGL to finish everything before continuing.

Because otherwise all the geometry is sent to OpenGL and I have to wait until everything is rendered. Notice that sending the primitives to OpenGL is asynchronous, and we only send about 700 quads. We use blending and a fragment shader for volume rendering.

Try glFlush() instead of glFinish(): the spec says that glFlush triggers the rendering of pending commands (without guaranteeing that they have been completed when it returns).
So at most you have a single batch of commands still pending, but it should give better performance than glFinish().
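Something along these lines, as a rough sketch (render_Part, numParts, PollEvents and user_event_occured are placeholders for whatever your app already has):

// Sketch: render the frame in parts, flushing between parts so the GPU
// starts working while we check for input, without blocking like glFinish.
for (int part = 0; part < numParts; part++)
{
    render_Part(part);
    glFlush();                    // kick the commands off, but do not block

    PollEvents();                 // placeholder: process pending window events
    if (user_event_occured)
        break;                    // abandon the rest of the high-res frame
}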

There is no way to abort what you have already sent.

glFlush would send the data to the hardware but not necessarily block, so you could still stuff the queue and increase latency.

The only acceptable way I can think of doing this is to manage the contents of the command queue using fences. If you send data and keep a running track of fences before you send any more, perhaps passing through your event loop once, then you could stop sending data. You’d need to ensure that you had a good chunk of data between fences. This is where glFlush comes in: you want to call it before you go around your loop. The fence returns when a batch of data is done rendering, and you use it to ensure you only ever have a couple of batches in the queue; glFlush makes sure the graphics hardware gets busy while you do the event handling.
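For illustration, a rough sketch of that throttling idea using the GL_NV_fence extension (render_Batch, totalBatchs, PollEvents, user_event_occured and the two-batch limit are assumptions, and you should check that the extension is actually available):

// Sketch: never allow more than MAX_IN_FLIGHT fenced batches in the queue.
#define MAX_IN_FLIGHT 2

GLuint fences[MAX_IN_FLIGHT];
glGenFencesNV(MAX_IN_FLIGHT, fences);

for (int i = 0; i < totalBatchs; i++)
{
    // Before issuing batch i, wait for batch i - MAX_IN_FLIGHT to finish,
    // so the command queue stays short and input latency stays bounded.
    if (i >= MAX_IN_FLIGHT)
        glFinishFenceNV(fences[i % MAX_IN_FLIGHT]);

    render_Batch(i);
    glSetFenceNV(fences[i % MAX_IN_FLIGHT], GL_ALL_COMPLETED_NV);
    glFlush();                    // make sure the hardware gets busy on it

    PollEvents();                 // placeholder for the event handling pass
    if (user_event_occured)
        break;                    // at most MAX_IN_FLIGHT batches still pending
}

glDeleteFencesNV(MAX_IN_FLIGHT, fences);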

Once you send drawing commands to the GL there’s no way to “take it back”.

If I understand correctly, your problem is that the UI is sluggish in the event that you have sent (HI RES) drawing commands to the GL and the user is messing with the UI.

In the event of user input, perhaps you can flag your shader to discard all fragments, effectively doing much less work.
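As a hedged sketch of what that could look like (the u_discardAll uniform and the volumeProgram handle are made-up names):

/* Fragment shader (GLSL), sketch only:
     uniform bool u_discardAll;
     void main() {
         if (u_discardAll) discard;   // skip the expensive volume work
         ... the usual volume-rendering code ...
     }
*/

/* Application side, when a user event arrives: */
glUseProgram(volumeProgram);
glUniform1i(glGetUniformLocation(volumeProgram, "u_discardAll"), 1);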

How do you do that when you’ve already sent your shader and attribs?

An unreliable hack might abuse mislabeled VBOs of some sort, but it would be very naughty, with no guarantee, and could potentially slow a well-written app even further, or simply not work.

Anyhoo, I strongly believe that using fences to manage the number of fenced data sets sent to the pipe at any one time is the way to go. This is actually a great example of where fences can be used to good effect.

Change the desktop resolution, and catch the exception, which launches the render thread again.
:wink:

You can also try to close your window… :wink:

No, really, there is no “legal” way to do that.

Phew, yeah…I don’t know where the hell that idea came from.

Just an idea, but could you run the OpenGL visualization in a separate thread (inside a “child” HWND view), while handling the main (frame) window and the message pump from the main thread of the app?

Sure, it could still take 500ms to complete drawing something, but at least the UI would be responsive as usual during the time.
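A very rough Win32 sketch of that arrangement (all names except the Win32/wgl calls are placeholders, error handling is omitted, and the pixel format is assumed to be set on the child view already):

/* Render thread: owns the GL context, draws for as long as it likes. */
DWORD WINAPI RenderThread(LPVOID param)
{
    HWND  childView = (HWND)param;            /* the "child" HWND viewport */
    HDC   dc        = GetDC(childView);
    HGLRC rc        = wglCreateContext(dc);   /* current in THIS thread only */
    wglMakeCurrent(dc, rc);

    while (!quitRequested)                    /* flag set by the UI thread */
        RenderScene();                        /* may take 500 ms, UI stays responsive */

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    ReleaseDC(childView, dc);
    return 0;
}

/* Main (UI) thread: create the windows, start rendering, pump messages. */
CreateThread(NULL, 0, RenderThread, (LPVOID)childView, 0, NULL);
while (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); }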

You could have your rendering and user interaction in separate threads.

It sounds to me like you need to break the rendering down into small batches and also run the user interface and graphics in separate threads.

In the graphics thread, RenderScene should look something like this.

void RenderScene()
{
    bool firstEvent = false;
    int  i;

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (i = 0; i < totalBatchs; i++)
    {
        if (user_event_occured == false)
            render_HighQuality(i);
        else
        {
            if (firstEvent == false)
            {
                // First time we see the event: throw away what has been
                // drawn so far and restart the frame in low quality.
                firstEvent = true;
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                i = -1;     // Restart (the for loop increments it back to 0)
                continue;
            }
            else
                render_LowQuality(i);
        }
    }

    SwapBuffers();   // platform swap, e.g. SwapBuffers(hDC) on Windows
}

or even simpler

void RenderScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    if (user_event_occured == false)
    {
        // High quality pass: abandon it as soon as the user does something;
        // the next call will then take the low quality path.
        for (int i = 0; i < totalBatchs; i++)
        {
            render_HighQuality(i);

            if (user_event_occured == true)
                break;
        }
    }
    else
    {
        // Low quality pass: render every batch cheaply.
        for (int i = 0; i < totalBatchs; i++)
        {
            render_LowQuality(i);
        }
    }

    SwapBuffers();   // platform swap, e.g. SwapBuffers(hDC) on Windows
}

V-man, that’s similar to my suggestion but broken, I think: you don’t have a sync mechanism, so there’s nothing to block the graphics and make sure input latency is low. You’ll just fill the command queue as you spin in your polling loop for events, and still go out to lunch completing the rendering when you finally get an event, probably after you dispatch it. Now, you might block eventually, but there’s no guarantee; only a second swap does that. Your suggestion is better than nothing, but not without problems.

You either need to issue a heavyweight glFinish after each batch or use a more efficient method like fences as I suggested.

As for polling in a separate thread, it doesn’t work unless you limit the contents of the command queue; otherwise your render thread must complete anyway before it can visually respond to any input, regardless of how efficiently the input is processed. The best way, again, is to use fences to make sure some earlier batch has completed before issuing another (this would allow you to have multiple batches in the queue, but not unlimited data).

Hi Dorbie, yes, the first one needs a slight fix, but I think the idea is visible.
I was trying to put together the idea of using threads and aborting the rendering. The people above me just said “use another thread for the UI”.

You don’t need to use fences if you know rendering will be slow.
You can test performance every 100 frames and store that info somewhere. Perhaps test the rendering performance of each batch and store it in something like mybatch.TimeToRenderInMilliseconds.
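For example, a minimal sketch of that per-batch timing (mybatch, frameCount and the timeGetTime usage are just illustrative; the glFinish calls are only there so the measurement means something):

/* Re-measure batch i every 100 frames and remember how long it took. */
if (frameCount % 100 == 0)
{
    glFinish();                   /* drain the queue before starting the clock */
    DWORD start = timeGetTime();

    render_HighQuality(i);
    glFinish();                   /* wait until this batch has really finished */

    mybatch[i].TimeToRenderInMilliseconds = timeGetTime() - start;
}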

I admit using fences is the superior way to do it.