Request for Advice - Needing two OpenGL windows to display to...

Hello,

For an Oculus Rift project I need to have two windows open.

The first window is the Oculus Rift window which needs to have the highest framerate possible.

The second window is displayed on the user’s main monitor; it is not a priority window, but it needs to mirror what is going on in the first window.

I read up on creating multiple contexts with OpenGL, but from what I gather that kills the framerate.

I was thinking that somehow I could do a glBlitFramebuffer from the Oculus Rift window’s output to a smaller view on the user’s main display, but is that the best way to go?
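For reference, the blit itself might look roughly like this. This is only a fragment, not a complete program; it assumes a current GL context with framebuffer functions loaded (e.g. via glad or GLEW), and `sceneFbo` and the sizes are placeholders, not a real API:

```c
/* Sketch: blit an already-rendered FBO into the window's default
 * framebuffer, scaling down. `sceneFbo` is assumed to hold the
 * rendered Rift frame; all sizes are placeholders. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* the window's backbuffer */
glBlitFramebuffer(0, 0, 1920, 1080,          /* source rect: full Rift frame */
                  0, 0, 640, 360,            /* dest rect: smaller preview  */
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
```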

Thank you for your time.

OpenGL is not responsible for creating windows; that is a platform-dependent task which is usually handled by a third-party library. And simply having multiple windows does not kill frame rates either. You might only run into a problem if you want to use VSync.

Hello,

I understand that OpenGL isn’t responsible for creating windows; I was looking for advice on having multiple contexts versus having one context, copying from multiple contexts/framebuffer objects, etc.

Poor wording on my part, but the question remains.

I will need to look into whether the Oculus Rift needs vsync or not; that may be a sticking point.

It’s going to depend on the performance characteristics of your specific GL driver, so look to your GPU driver vendor for their best recommendation.

But generally speaking, I’d prefer one context per thread per GPU. Context swapping is expensive and you want to avoid it when possible. However, you should bench it yourself and verify/refute that on your HW/driver setup. Try creating two contexts in a thread and alternating between them N times with MakeCurrent and rendering a small batch. Then compare with the performance of rendering a small batch 2*N times to the same context.
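The benchmark suggested above could be sketched roughly as follows. GLFW is an assumption here (any context-creation library works), and `drawSmallBatch()` is a placeholder for whatever small draw you want to time. Note it needs a display to run:

```c
/* Benchmark sketch: N MakeCurrent alternations between two contexts
 * vs. 2*N batches in one context. GLFW and drawSmallBatch() are
 * assumptions/placeholders, not part of the original post. */
#include <GLFW/glfw3.h>
#include <stdio.h>

static void drawSmallBatch(void)
{
    glClear(GL_COLOR_BUFFER_BIT);  /* stand-in for a small real draw */
    glFinish();                    /* force the work through so timing is honest */
}

int main(void)
{
    const int N = 1000;

    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   /* hidden windows are fine here */
    GLFWwindow *a = glfwCreateWindow(64, 64, "a", NULL, NULL);
    GLFWwindow *b = glfwCreateWindow(64, 64, "b", NULL, NULL);
    if (!a || !b) return 1;

    /* Case 1: alternate between two contexts N times with MakeCurrent. */
    double t0 = glfwGetTime();
    for (int i = 0; i < N; ++i) {
        glfwMakeContextCurrent(a); drawSmallBatch();
        glfwMakeContextCurrent(b); drawSmallBatch();
    }
    double tSwap = glfwGetTime() - t0;

    /* Case 2: the same total work (2*N batches) in a single context. */
    glfwMakeContextCurrent(a);
    t0 = glfwGetTime();
    for (int i = 0; i < 2 * N; ++i) drawSmallBatch();
    double tSingle = glfwGetTime() - t0;

    printf("alternating contexts: %.3f s, single context: %.3f s\n",
           tSwap, tSingle);
    glfwTerminate();
    return 0;
}
```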

[QUOTE=Dark Photon;1262232]It’s going to depend on the performance characteristics of your specific GL driver, so look to your GPU driver vendor for their best recommendation.

But generally speaking, I’d prefer one context per thread per GPU. Context swapping is expensive and you want to avoid it when possible. However, you should bench it yourself and verify/refute that on your HW/driver setup. Try creating two contexts in a thread and alternating between them N times with MakeCurrent and rendering a small batch. Then compare with the performance of rendering a small batch 2*N times to the same context.[/QUOTE]

Thank you! I’ll get started on this.

One last question: what if I am limited to one context because, for example, rendering performance becomes an issue?

Have you seen or implemented any type of application where you have one rendering context and were somehow able to copy output from one window to another?

I’d imagine the second window would suffer a hit but in my case I am strictly concerned about the primary window’s performance remaining as high as possible.

Just curious…

Don’t know what GPU hardware you’re running, but take a look at NV_copy_image. Also, IIRC NVidia’s CUDA may have some facilities for copying data between GPUs/contexts – not sure. There’s also a good chapter in OpenGL Insights that talks about efficient buffer transfers and (IIRC) copying between devices; I recommend you web-search for it.
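On Windows, the cross-context copy from NV_copy_image might look roughly like this. This is a sketch, assuming the driver exposes the extension; the function names come from the wglext.h header, but all the handles, texture IDs, and sizes here are placeholders:

```c
/* Sketch (Windows/WGL, NV_copy_image assumed supported by the driver):
 * copy a texture's contents between two contexts without a MakeCurrent
 * round-trip. All handles and sizes are placeholders. */
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

void copyRiftToPreview(HGLRC riftCtx, GLuint riftTex,
                       HGLRC previewCtx, GLuint previewTex,
                       GLsizei width, GLsizei height)
{
    PFNWGLCOPYIMAGESUBDATANVPROC pWglCopyImageSubDataNV =
        (PFNWGLCOPYIMAGESUBDATANVPROC)
            wglGetProcAddress("wglCopyImageSubDataNV");
    if (!pWglCopyImageSubDataNV)
        return;  /* extension not available on this driver */

    pWglCopyImageSubDataNV(riftCtx,    riftTex,    GL_TEXTURE_2D, 0, 0, 0, 0,
                           previewCtx, previewTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                           width, height, 1);
}
```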

Thanks, I will check this out.

Create two OpenGL windows with a shared context, then render the first window’s output to a texture so you can draw it in the second window. You’ll need to disable vsync on the second window (or draw to the front buffer); otherwise you’ll be limited to the refresh rate divided by the number of windows.
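A minimal sketch of that approach, assuming GLFW for window/context creation and an extension loader for the FBO functions (neither is specified in the thread). One subtlety: framebuffer objects are container objects and are NOT shared between contexts, only the texture is, so the preview context has to wrap the shared texture in its own FBO. This also does one MakeCurrent pair per frame, so per the earlier advice you should benchmark it on your own driver. Window sizes are placeholders:

```c
/* Sketch: two GLFW windows sharing GL objects; the Rift frame is
 * rendered into a texture, then blitted into each window's backbuffer.
 * Assumes GL function pointers are loaded (e.g. via glad/GLEW). */
#include <GLFW/glfw3.h>

#define RIFT_W 1920
#define RIFT_H 1080

int main(void)
{
    if (!glfwInit()) return 1;
    GLFWwindow *rift = glfwCreateWindow(RIFT_W, RIFT_H, "Rift", NULL, NULL);
    /* Passing the first window as `share` enables object sharing. */
    GLFWwindow *preview = glfwCreateWindow(640, 360, "Preview", NULL, rift);
    if (!rift || !preview) return 1;

    glfwMakeContextCurrent(rift);
    glfwSwapInterval(1);                 /* vsync the Rift window */

    GLuint tex, riftFbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, RIFT_W, RIFT_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGenFramebuffers(1, &riftFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, riftFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* FBOs aren't shared, so wrap the *shared* texture in a second FBO. */
    glfwMakeContextCurrent(preview);
    glfwSwapInterval(0);                 /* no vsync: don't stall the Rift */
    GLuint previewFbo;
    glGenFramebuffers(1, &previewFbo);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, previewFbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    while (!glfwWindowShouldClose(rift)) {
        glfwMakeContextCurrent(rift);
        glBindFramebuffer(GL_FRAMEBUFFER, riftFbo);
        /* ... render the Rift scene into the texture here ... */
        glBindFramebuffer(GL_READ_FRAMEBUFFER, riftFbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, RIFT_W, RIFT_H, 0, 0, RIFT_W, RIFT_H,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
        glfwSwapBuffers(rift);

        glfwMakeContextCurrent(preview);
        glBindFramebuffer(GL_READ_FRAMEBUFFER, previewFbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, RIFT_W, RIFT_H, 0, 0, 640, 360,
                          GL_COLOR_BUFFER_BIT, GL_LINEAR);
        glfwSwapBuffers(preview);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```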