Designing a GUI

I’m in the process of designing some GUI widgets specifically for my game. Among the less mundane features I’m implementing are texture-mapped widgets and alpha blending. My main question, though, is about event handling. One approach I thought of was to have the user (most likely only me) register callback functions for each event they’re interested in processing. A pointer to their function would then be stored and called when the proper event is triggered. As for triggering the event, the first implementation that came to mind was something like this:

In the OpenGL mouse handling code, check for, e.g., a left mouse button down event. For each widget on screen, call its left-mouse-button-down handler, passing in the mouse coordinates. If the coordinates lie within the bounds of the widget, the event has been triggered, and the user’s callback gets executed and its value returned. The process is much the same for every other callback.
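
Roughly, the shape of what I have now (simplified for this post; all the names are made up):

    #include <vector>

    // Hypothetical callback type and widget; illustrative only.
    typedef void (*MouseCallback)(int x, int y);

    struct Widget {
        int x, y, w, h;             // screen-space bounds
        MouseCallback onLeftDown;   // user-registered callback, may be 0

        bool contains(int mx, int my) const {
            return mx >= x && mx < x + w && my >= y && my < y + h;
        }
    };

    std::vector<Widget*> g_widgets; // every widget currently on screen

    // Called from the mouse handler on a left-button-down event.
    void dispatchLeftDown(int mx, int my) {
        for (size_t i = 0; i < g_widgets.size(); ++i) {
            Widget* w = g_widgets[i];
            if (w->contains(mx, my) && w->onLeftDown)
                w->onLeftDown(mx, my);  // event triggered: run the user's code
        }
    }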

Now, it works, but I was wondering if there was a way to make this simpler. Ideally, when a dialog box (which contains all the widgets) is on the screen, it could take over processing the mouse input, and the keyboard input in the case of an edit box. By taking over the input, the class could itself check for the proper events and call the right methods, instead of having the user get so involved in the process. Then, after the events have been processed, it could relay or propagate the event message in some way, so that if the user didn’t want to lose total control of their input while dialog boxes are on the screen, they could still receive the input notifications.
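
What I’m imagining is something like this (again, every name here is invented): the dialog reports whether it consumed the event, and the application decides whether to see it anyway.

    struct Dialog {
        int x, y, w, h;
        bool contains(int mx, int my) const {
            return mx >= x && mx < x + w && my >= y && my < y + h;
        }
        void routeToChildren(int mx, int my) { /* internal widget dispatch */ }

        // Returns true if the dialog consumed the event.
        bool handleLeftDown(int mx, int my) {
            if (!contains(mx, my)) return false;  // not ours
            routeToChildren(mx, my);              // user never sees this part
            return true;
        }
    };

    void gameHandleLeftDown(int x, int y);        // the game's own handler

    Dialog* g_activeDialog = 0;

    void appLeftDown(int mx, int my) {
        bool consumed = g_activeDialog && g_activeDialog->handleLeftDown(mx, my);
        // Relay regardless, or only when not consumed, whichever the user wants.
        if (!consumed)
            gameHandleLeftDown(mx, my);           // normal game input path
    }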

Just conceptually speaking (I’m not asking for source code): is there a way to at least get close to what I’m trying to achieve, or is my current approach “as good as it gets”? Perhaps there’s a better method altogether. Thanks for your help; I greatly appreciate it.

One last note: I know how to do it in DirectX. I can have the user pass in a pointer to the input device (roughly speaking), from which I can automagically poll the mouse. Is there a similar way in OpenGL?
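
For reference, the DirectX-side polling I mean looks roughly like this, assuming a device that was created and acquired elsewhere:

    #define DIRECTINPUT_VERSION 0x0800
    #include <dinput.h>

    void pollMouse(IDirectInputDevice8* device) {
        DIMOUSESTATE state;
        if (SUCCEEDED(device->GetDeviceState(sizeof(state), &state))) {
            // state.lX / state.lY are relative motion;
            // state.rgbButtons[0] is the left button.
        }
    }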

Are you into OOP at all? If you are, look up the observer pattern and event publisher/subscriber. Java’s Swing and AWT use this model for event handling. It’s a very clean way of doing it. I don’t really see what this has to do with OpenGL, though…
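
A bare-bones C++ version of that observer / publisher-subscriber idea might look like this (names are illustrative, not from any library):

    #include <vector>

    // The subscriber interface: anything that wants mouse events implements this.
    struct MouseListener {
        virtual ~MouseListener() {}
        virtual void mousePressed(int x, int y) = 0;
    };

    // The publisher: a widget (or input manager) keeps a list of listeners
    // and notifies every one of them when the event fires.
    class MousePublisher {
        std::vector<MouseListener*> listeners;
    public:
        void addListener(MouseListener* l) { listeners.push_back(l); }
        void firePressed(int x, int y) {
            for (size_t i = 0; i < listeners.size(); ++i)
                listeners[i]->mousePressed(x, y);
        }
    };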

The whole thing is done in C++, so yes to OOP =) As for how it relates to OpenGL… my question is basically: how do I take over input control? In DirectX, there’s an input class that I just instantiate and pass to my GUI, which can then use that pointer to muck with the mouse/keyboard. But with OpenGL you use glutKeyboardFunc, glutMouseFunc, etc. to specify a function to call for each input event. I’m wondering if there’s a more elegant way of doing things than making static functions in a GUI class and having the glutKeyboardFunc/glutMouseFunc callbacks hook into those.
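
For what it’s worth, the static-function approach I’m describing ends up looking something like this (a sketch; the Gui class and handler names are mine):

    #include <GL/glut.h>

    class Gui {
        static Gui* s_instance;  // GLUT callbacks are plain C functions, so we
                                 // need one global hook point back into the class
        static void mouseThunk(int button, int state, int x, int y) {
            if (s_instance) s_instance->onMouse(button, state, x, y);
        }
        static void keyThunk(unsigned char key, int x, int y) {
            if (s_instance) s_instance->onKey(key, x, y);
        }
    public:
        void install() {
            s_instance = this;
            glutMouseFunc(mouseThunk);
            glutKeyboardFunc(keyThunk);
        }
        void onMouse(int button, int state, int x, int y) { /* widget dispatch */ }
        void onKey(unsigned char key, int x, int y)        { /* focus widget */ }
    };

    Gui* Gui::s_instance = 0;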

It sounds to me like you’re confusing OpenGL and GLUT. GLUT is not part of OpenGL. It is a utility kit written to make cross-platform OpenGL apps easier. I would suggest that you abandon GLUT and just use the straight Win32 API or use MFC. Then you can handle mouse clicks in the appropriate window procedure.
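
In a plain Win32 window procedure, the mouse messages arrive like this (a rough sketch):

    #include <windows.h>
    #include <windowsx.h>  // for GET_X_LPARAM / GET_Y_LPARAM

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
        switch (msg) {
        case WM_LBUTTONDOWN: {
            int x = GET_X_LPARAM(lParam);  // client-area coordinates
            int y = GET_Y_LPARAM(lParam);
            // hand (x, y) to the GUI's hit-testing/dispatch code here
            return 0;
        }
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }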

Your approach is one way; another is to have a hierarchy of widgets, starting with a base class.

The base class might have some event-processing capability. Either this class or a derived one has the ability to be a container, i.e. to hold other widgets. This way, when you build your GUI, an event gets passed to the top level and through the internal event handlers: if a widget is a container, it passes the event down to its children. You can put the bounding-box smarts where you like, in the container or in the button, and just brute-force it. Maybe do it in both, and complex button shapes can go on to do more refined tests.
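
Something like this, roughly (illustrative names, not a full implementation):

    #include <vector>

    class Widget {
    public:
        Widget(int x, int y, int w, int h) : px(x), py(y), pw(w), ph(h) {}
        virtual ~Widget() {}
        virtual bool contains(int x, int y) const {
            return x >= px && x < px + pw && y >= py && y < py + ph;
        }
        // Returns true if the event was handled here (or by a child).
        virtual bool onLeftDown(int x, int y) { return contains(x, y); }
    protected:
        int px, py, pw, ph;
    };

    class Container : public Widget {
        std::vector<Widget*> children;
    public:
        Container(int x, int y, int w, int h) : Widget(x, y, w, h) {}
        void add(Widget* w) { children.push_back(w); }
        // Cheap bounding-box test first, then let each child run its own
        // (possibly more refined) test.
        virtual bool onLeftDown(int x, int y) {
            if (!contains(x, y)) return false;
            for (size_t i = 0; i < children.size(); ++i)
                if (children[i]->onLeftDown(x, y))
                    return true;
            return false;
        }
    };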

Remember that you need to track input focus.

As the mouse moves, you may want to pass the position along to the widget code; some widgets may highlight based only on mouse position, depending on their properties.

When the button is pressed, you may want to stick input focus to that widget but continue to bounds-test and highlight accordingly.

Finally, when the button release happens, you want to either release input focus or trigger the event, based on the current position.

This is the behaviour everyone has come to expect under most circumstances. It’s relatively simple, but can be tricky if you don’t plan for it.
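
A minimal sketch of that press/track/release logic, with a stripped-down Widget just for this example and hitTest() assumed to return the topmost widget under the point:

    struct Widget {
        int x, y, w, h;
        bool highlighted;
        bool contains(int mx, int my) const {
            return mx >= x && mx < x + w && my >= y && my < y + h;
        }
        void fire() { /* trigger the widget's event */ }
    };

    Widget* hitTest(int x, int y);  // topmost widget under the point, or 0

    Widget* g_focus = 0;            // widget that grabbed input on button-down

    void onMouseMove(int x, int y) {
        if (g_focus)  // focus held: highlight only while still over the widget
            g_focus->highlighted = g_focus->contains(x, y);
        else if (Widget* w = hitTest(x, y))
            w->highlighted = true;  // plain hover feedback
    }

    void onLeftDown(int x, int y) {
        g_focus = hitTest(x, y);    // stick input focus to the pressed widget
    }

    void onLeftUp(int x, int y) {
        if (g_focus && g_focus->contains(x, y))
            g_focus->fire();        // released over the same widget: trigger it
        g_focus = 0;                // either way, release input focus
    }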
