What are multiple windows good for?

Silly subject, perhaps. The real question is:

I want to add support for multiple windows in GLFW, and I want to know what real applications will use them for, in order to make the GLFW API as useful and powerful as possible.

Just adding support for multiple windows is not difficult. What’s more interesting is questions like support for “child windows”, shared lists, etc. Here are a few more specific questions:

  1. If “child windows” are useful, what kind of functionality do you expect? Input event propagation from child to parent? Parent closed => child closed? Window Z-ordering?

  2. Are shared display lists / texture objects a useful feature? (This one seems very useful - and yet it is something that GLUT lacks.)

  3. Is it OK that “multiple windows” and “fullscreen window” are mutually exclusive? (This would probably help implementations a lot on some platforms.)

If someone could point me to some actual OpenGL applications that use multiple windows, I would be happy.

Multiple windows are useful in a lot of applications, such as CAD and many simulations (3D view + plan view).
My own approach is to allow creation of as many windows as you like, and as many ‘views’ within each window as you like (analogous to viewports, except each view has its own render states). You should translate any user input messages from the window to whichever view has ‘focus’.
I also use a single gl context, which I make current to each window as and when I render to them. This saves on state save/restores.
I don’t recommend using separate threads for separate windows either, as context switches will mean state switches on the graphics card at the same frequency as the threads are switched between! So that’s not very efficient. One thread should look after all windows, stepping between them after each render has completed.
This is just how I do it, but it does work nicely.
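A minimal Win32/WGL sketch of that layout (window creation and the draw/message-pump helpers are assumed; both windows need compatible pixel formats for one HGLRC to be current to either DC):

/* One GL context, two windows, one thread (sketch only). */
HDC   dc1 = GetDC( hwnd1 );
HDC   dc2 = GetDC( hwnd2 );
HGLRC rc  = wglCreateContext( dc1 );    /* the single context used for both windows */

for( ;; )
{
    wglMakeCurrent( dc1, rc );          /* render window 1 */
    draw_view_1();                      /* hypothetical per-view draw call */
    SwapBuffers( dc1 );

    wglMakeCurrent( dc2, rc );          /* same context, next window: no duplicated GL state */
    draw_view_2();
    SwapBuffers( dc2 );

    pump_window_messages();             /* hypothetical PeekMessage/DispatchMessage loop */
}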


Thanks knackered! This was useful information.

You should translate any user input messages from the window to whichever view has ‘focus’.

Do you mean that input events should be sent to the “correct” window? I.e. check which window the event occurred in, and use that window’s callback in this case? That was how I planned to do it. I just happened to stumble across the GLUT source, which posts input events to “parent” windows too - something that I don’t really see any use for (I mean, the GLFW user can manage this by assigning a single input event callback for all windows, for instance).

I also use a single gl context, which I make current to each window as and when I render to them. This saves on state save/restores.

This was a new concept for me - I didn’t even know it was possible. It will complicate the implementation a bit (not much), but from an API point of view it’s very simple, e.g.:

/* open the first window as usual */
win1 = glfwOpenWindow( ... );

...

/* ask that the second window share win1's GL context */
glfwOpenWindowHint( GLFW_SHARE_CONTEXT, win1 );
win2 = glfwOpenWindow( ... );

...

/* close the windows one at a time */
glfwSelectWindow( win1 );
glfwCloseWindow();
glfwSelectWindow( win2 );
glfwCloseWindow();

What I need is a way to manage the context(s) so that a context does not get destroyed until the last window that uses it is closed.
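One way to get that (a hypothetical internal sketch, not part of the proposed API): reference-count each context and destroy it only when the last window using it is closed.

/* Hypothetical internal bookkeeping - names are made up for illustration. */
typedef struct
{
    HGLRC rc;         /* the underlying GL context (WGL used as an example) */
    int   refcount;   /* number of open windows currently sharing this context */
} _GLFWcontext;

static void _glfwRetainContext( _GLFWcontext *ctx )
{
    ctx->refcount++;                    /* called when a window starts using the context */
}

static void _glfwReleaseContext( _GLFWcontext *ctx )
{
    if( --ctx->refcount == 0 )          /* the last window using it has been closed */
    {
        wglDeleteContext( ctx->rc );
        free( ctx );
    }
}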

How do you manage the views? Do you have to reset the viewport etc. every time you draw to a window (since it’s part of the context)?

One thread should look after all windows, stepping between them after each render has completed.

Sounds very reasonable - that’s how I’d do it.

I also use a single gl context, which I make current to each window as and when I render to them. This saves on state save/restores.

This approach is not portable – if you remove a GL context from a window on Mac OS X you will also remove the rendered image.

Absolutely. Also, I have found that (under Windows) one thread per window works out. Sure, you get a hit from the context switch and subsequent pipeline stall - but all in all, I can get more or less half the max frame-rate of a single window when I have two (with equivalent fill-rate), etc.

I think the one thread per window is also anatomically neater as well as a little more portable.
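A rough Win32 sketch of that arrangement (helper names are assumed; each window gets a private context, made current once on its own thread):

/* One thread per window, each with its own context (sketch only). */
DWORD WINAPI window_thread( LPVOID param )
{
    HWND  hwnd = (HWND)param;
    HDC   dc   = GetDC( hwnd );
    HGLRC rc   = wglCreateContext( dc );    /* private context for this window */

    wglMakeCurrent( dc, rc );               /* stays current for the thread's lifetime */
    while( window_is_open( hwnd ) )         /* hypothetical "keep running" test */
    {
        draw_scene();                       /* hypothetical draw call */
        SwapBuffers( dc );
    }
    wglMakeCurrent( NULL, NULL );
    wglDeleteContext( rc );
    ReleaseDC( hwnd, dc );
    return 0;
}

/* main thread: CreateThread( NULL, 0, window_thread, (LPVOID)hwnd, 0, NULL ) per window */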

Originally posted by OneSadCookie:
This approach is not portable – if you remove a GL context from a window on Mac OS X you will also remove the rendered image.

Remove a GL context? What do you mean? Making a GL render context NOT current to a window removes the rendered image from that window, on a Mac? Without you calling a swap? Without you being able to do anything about it in the paint message handler?
That’s one hell of a limitation of that particular OS, in my opinion.

Robbo:
I think the one thread per window is also anatomically neater as well as a little more portable

And since when does something being anatomically neater become relevant in realtime graphics? Your code should be as fast as possible, not as neat as possible. Seriously, the method of one-thread-for-all-windows coupled with a single shared render context benchmarks much faster in all the tests I’ve done on MS Windows.
The only time coding should sacrifice small amounts of performance for usability is in the API itself, i.e. the interface mechanisms to your renderer, not the internal workings of it. This isn’t a beauty contest.

Yes, but you have the overhead (more code) of having to manage a swap chain. In the context within which I work, I have in-process ActiveX controls which run one render context on one thread. I don’t have any context management to perform at all, or any “overall” structural render-target chain.

Because it’s being done for you - many times per window per frame, rather than once per window using your own ‘swap chain’.

Another reason for a single-threaded render approach: view-dependent attributes in your nodes (such as distance from the near plane, “am I visible”, etc.). If two views are being rendered at the same time, the code to manage those attributes becomes messy itself.

So, I’m a bit confused here. I have some research to do, obviously.

Originally posted by OneSadCookie:
This approach is not portable – if you remove a GL context from a window on Mac OS X you will also remove the rendered image.

Could you please check into this in more detail, Cookie?

Oh, and nobody has mentioned child windows. I would be only too happy to exclude them from the API.

knackered, I like your approach. I am currently working on an application with multiple viewports, and I was looking for a nice way to avoid this nasty multiple-context-state problem. Your method seems to solve it in a very nice way.
On what systems have you tested it? Win2K, XP?
Does it work only with NVIDIA drivers, or does it always work with Win32? (Have you tested it?)

Knackered, I think you are considering only the simplest context in which multiple windows will be used, that being where you have a kind of passive MDI and a nice little ringlet going around rendering each one when it needs an update.

In our “serious” applications, it makes more sense for each window to have its own thread and its own render context, so we don’t have to sync render/update between all windows whenever something changes.

Each window encapsulates the behaviour of external imaging equipment (real-time) and is easier to deal with in this way.

I’m not going to rise to the bait and argue which of us does the most ‘serious’ applications, Robbo.

As for your assertion that my own method requires you to constantly render to all windows, well that simply ain’t true. Each view has a ‘dirty’ flag, which is set when the data the view presents (or the viewpoint) has changed in some way; only then is the render context made current to the window the view resides in, and the view rendered to it.
I’m sure that your multithreaded approach works fine for you, but in the applications I have to write (where I could be mixing opengl with direct3d on the same desktop), my single threaded approach solves many problems.
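A minimal sketch of that dirty-flag scheme (struct and function names are assumed):

/* Only touch a window when one of its views actually needs repainting. */
typedef struct
{
    HDC dc;       /* the window's device context */
    int dirty;    /* set whenever the view's data or viewpoint changes */
} View;

void render_dirty_views( View *views, int count, HGLRC shared_rc )
{
    for( int i = 0; i < count; i++ )
    {
        if( !views[i].dirty )
            continue;                           /* nothing changed - skip this window */
        wglMakeCurrent( views[i].dc, shared_rc );
        draw_view( &views[i] );                 /* hypothetical per-view draw call */
        SwapBuffers( views[i].dc );
        views[i].dirty = 0;
    }
}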


Don’t most ‘serious’ modelling applications just use one OpenGL window?
E.g. 3ds Max, Blender, etc.
This one window is split into various regions (e.g. 3D view, left side view, etc.) with glViewport(...) - see the sketch below.
Just something to think about.
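For illustration, a minimal sketch of that split (the camera-setup and draw functions are assumed):

/* Four views in one window, one context: just move the viewport per view. */
void draw_four_views( int w, int h )
{
    const int vw = w / 2, vh = h / 2;

    glViewport( 0, vh, vw, vh );        /* top-left: plan view */
    set_plan_camera();                  /* hypothetical camera setup */
    draw_scene();                       /* hypothetical draw call */

    glViewport( vw, vh, vw, vh );       /* top-right: front view */
    set_front_camera();
    draw_scene();

    glViewport( 0, 0, vw, vh );         /* bottom-left: left side view */
    set_side_camera();
    draw_scene();

    glViewport( vw, 0, vw, vh );        /* bottom-right: perspective 3D view */
    set_perspective_camera();
    draw_scene();
}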

Remove a GL context? What do you mean? Making a GL render context NOT current to a window removes the rendered image from that window, on a Mac? Without you calling a swap? Without you being able to do anything about it in the paint message handler?

It’s merely a side-effect of having a modern composited window-system.

You don’t get repaint events when windows are moved or re-ordered; the window server has the image of what was in your window and handles re-compositing it appropriately.

OpenGL sits in a layer even lower than the window system; basically, the window server knows nothing about it.

If you want to keep the image in the window, you can glReadPixels() it back and blit it to the window before you remove the context.
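A sketch of that, assuming the drawable’s width and height are known:

/* Preserve the last rendered image before detaching the context (sketch only). */
GLubyte *pixels = malloc( width * height * 4 );
glReadPixels( 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels );
/* ...hand 'pixels' to the window system to draw into the window's backing
   store, then make the context non-current... */
free( pixels );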

That’s one hell of a limitation of that particular OS, in my opinion.

There’s nothing in any spec saying that your technique should work, so you’re relying on undefined behavior. That’s what’s often known as a “hack”. It’s only a limitation in the sense that your particular hack doesn’t work.

Hi again - thanks, everyone, for your responses! Interesting what different approaches different people have taken.

Zed, I agree (a single window is a nice solution), but there are situations where multiple windows are preferable. E.g.:

  1. You do not know beforehand how many windows you will need (e.g. scientific plots)
  2. You want the possibility to manage each view independently (e.g. maximize, iconify, place side-by-side, etc.). Especially useful when you have many views (>4).

Regarding GLFW, portability is way more important than functionality (that is why you can’t change video modes in GLFW without destroying the GL context, for instance). Therefore I agree with OneSadCookie that context sharing between windows is not a good idea to support in the GLFW API.

If I’m not mistaken, one thread per window might not be a very bad idea for future applications (= applications that will be used on tomorrow’s hardware), since it seems like new GL accelerators will have HW support for multiple threads/contexts (e.g. the PV10 has something like that, doesn’t it?). Then the context-switching overhead would be negligible, so it would not really be an issue anymore.

Originally posted by OneSadCookie:
There’s nothing in any spec saying that your technique shoud work, so you’re relying on undefined behavior. That’s what’s often known as a “hack”. It’s only a limitation in the sense that your particular hack doesn’t work.

Mmm, no, that is something that would be known as a “hack” on the Mac, obviously, but the spec for wglMakeCurrent on the little-known “Microsoft Windows” system makes it clear that sharing render contexts between windows is perfectly legal and handled elegantly.
I’m mildly interested in how the Mac handles things (I always considered Apple the Atari of the OS world: full of good intentions and ideas, but a bit of an anachronism). From the way you describe how Mac OS handles windows (you give it a bitmap for it to display, while you render into that bitmap?), it sounds pretty limiting to me - a backward step? No clipping to overlapping windows means more rendering to do which won’t even be seen, unless the occluding window is moved.
It’s a good job that my renderer is designed for MS Windows only…

Marcus, I wouldn’t leave the child windows out if I were you. In terms of usability, they are probably considered the best option for applications like modelling programs, where data is examined from several viewpoints.

Simple multiple windows totally suck at this - imagine having to minimize all your windows if you want to use another application. Sure, this can be worked around, but honestly it’s a job for the window manager. The one-window solution in 3ds/Blender is probably there just because of their DOS/Unix roots. It only adds trouble for the user, since in each program the views are managed slightly differently.

I know these sound like little things, but in an actual working situation where several programs are used simultaneously such things can make an enormous difference.

-Ilkka

JustHanging, I think multiple “viewports” of the same scene and multiple windows are slightly different issues. Obviously, with a CAD program (like MAX), you would have a single window/context split into several viewports. This makes sense because they are usually displaying aspects of the same environment/model.