GUI implementation; is this a good idea?

Hi there,

I am porting an application with a graphical user interface. The new implementation will use GLFW for windowing and input and OpenGL for rendering. I was looking at the requirements defined by the GUI system and I am unsure about the best way to implement the rendering logic with OpenGL. Here is a quick overview of what I need:

  • The GUI consists of several components (buttons, labels, progress bars, etc.)
  • Each component has an axis-aligned bounding box and is not allowed to render outside of that box
  • Components will render text, images, colored quads, triangles, circles, etc.
  • Changes happen very infrequently; depending on user input, the application may not need to re-render anything for several seconds

I wanted to implement the bounding boxes with scissor rects. However, this will make it difficult to batch draw calls, and texture changes (for images and text rendering) may also happen frequently.
So I was thinking: maybe I just render the entire GUI into a texture, and only when necessary. Then I render this texture across the GLFW window to display the GUI. Maybe that way the frequent glScissor calls and the lack of batching aren't that much of a problem, right?
Is this a good idea? Is there a better solution that I am not aware of? Any suggestions?

Thanks in advance!

Typically, you’d want to constrain rendering not only to the component’s bounding box, but also to the bounding boxes of all of its ancestors. This is an issue mainly for scrollable views; in most other cases, a component’s bounding box will be contained within that of its parent.

In which case, you can cache intermediate results using render-to-texture.
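For the clipping part, the effective scissor rect of a component is just the intersection of its own box with the boxes of all of its ancestors. A minimal sketch of that, assuming a simple axis-aligned rectangle type (the names here are mine, not from any library):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical axis-aligned rectangle; names are illustrative only.
struct Rect {
    int x, y, w, h;
};

// Intersection of two rects; an empty result has w <= 0 or h <= 0.
Rect intersect(const Rect& a, const Rect& b) {
    int x0 = std::max(a.x, b.x);
    int y0 = std::max(a.y, b.y);
    int x1 = std::min(a.x + a.w, b.x + b.w);
    int y1 = std::min(a.y + a.h, b.y + b.h);
    return {x0, y0, x1 - x0, y1 - y0};
}

// The scissor rect for a component is the intersection of its own
// box with the boxes of all of its ancestors (e.g. scroll views).
Rect effectiveScissor(const std::vector<Rect>& ancestorsAndSelf) {
    Rect r = ancestorsAndSelf.front();
    for (size_t i = 1; i < ancestorsAndSelf.size(); ++i)
        r = intersect(r, ancestorsAndSelf[i]);
    return r;
}
```

The resulting rect is what you'd pass to glScissor (after flipping Y into GL's window coordinates).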

[QUOTE=Cornix;1288094]
The bounding boxes I wanted to implement with scissor rects. This however will make it difficult to batch draw calls.[/QUOTE]
You can implement scissoring in the fragment shader, adding a per-vertex attribute containing either a bounding rectangle or an index into an array of bounding rectangles.
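A rough sketch of what that could look like, assuming the clip rect is passed along as a flat vec4 (xmin, ymin, xmax, ymax) in window space; the GLSL is embedded as a string the way you'd feed it to glShaderSource, and the C++ function mirrors the per-fragment test on the CPU:

```cpp
// Illustrative fragment-shader scissoring: each vertex carries its
// component's clip rect as (xmin, ymin, xmax, ymax); the fragment
// shader discards anything outside it. Names are assumptions.
static const char* kFragSrc = R"(#version 330 core
flat in vec4 vClipRect;   // (xmin, ymin, xmax, ymax) in window space
in vec4 vColor;
out vec4 fragColor;
void main() {
    if (gl_FragCoord.x < vClipRect.x || gl_FragCoord.x > vClipRect.z ||
        gl_FragCoord.y < vClipRect.y || gl_FragCoord.y > vClipRect.w)
        discard;
    fragColor = vColor;
}
)";

// CPU-side equivalent of the shader's discard test, for reference.
bool insideClipRect(float x, float y,
                    float xmin, float ymin, float xmax, float ymax) {
    return x >= xmin && x <= xmax && y >= ymin && y <= ymax;
}
```

Since every vertex carries its own clip rect (or an index into a uniform/UBO array of rects), components with different bounding boxes can share one draw call.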

Another option is to start by rendering the bounding box hierarchy into the depth buffer, then render the components’ contents with glDepthFunc(GL_EQUAL).
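A minimal sketch of how the two passes could fit together, assuming some maximum number of bounding boxes (the function name and scheme details are illustrative):

```cpp
// Map a bounding-box ID to a depth value in [0, 1). With a 24-bit
// depth buffer this stays exact for far more IDs than a GUI needs.
float depthForBoxId(unsigned id, unsigned maxBoxes) {
    return (id + 0.5f) / maxBoxes;   // centre of the ID's "slot"
}
// Pass 1 (pseudo-GL): render each bounding box as a quad at
//   z = depthForBoxId(id), with glColorMask all false and
//   glDepthFunc(GL_ALWAYS), so only the depth buffer is written.
// Pass 2: glDepthFunc(GL_EQUAL); render each component's contents at
//   the same z. Fragments outside their own box fail the depth test.
```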

The GUI library already calculates bounding boxes up the ancestor tree; components that are completely occluded aren't even passed to the rendering function.

That was my question: is it a “good” idea to use render-to-texture for GUIs? Would I actually “win” anything, or is there some cost I am not keeping in mind?

[QUOTE=GClements;1288103]You can implement scissoring in the fragment shader, adding a per-vertex attribute containing either a bounding rectangle or an index into an array of bounding rectangles.

Another option is to start by rendering the bounding box hierarchy into the depth buffer, then render the components’ contents with glDepthFunc(GL_EQUAL).[/QUOTE]
That is a great idea! I would have never thought about the depth-testing method.
So I would have a first pass that “renders” the bounding boxes into the depth buffer, giving each bounding box a unique ID. Then I render the actual components with glDepthFunc(GL_EQUAL), using a Z-value equal to the ID of their respective bounding box. I guess precision could become a problem since depth is usually a float, right? Could the Z-coordinate be an integer while X and Y remain floats?

Thank you very much for your answer!

I suspect that the only case where it’s likely to be worth using render-to-texture for a single component is for large amounts of text, where somewhere between a few hundred and a few thousand quads could be replaced with a single quad. It may be worth using it to cache entire top-level windows if the amount of variation between components makes it hard to batch them, so you end up needing to use a draw call per component.

The value stored in the depth buffer is usually a 24-bit unsigned normalised value. In terms of consistency, so long as you’re using identical inputs (vertex Z/W coordinates) with identical transformations (or no transformation), you should get identical results. Floating-point arithmetic has its quirks, but it is deterministic.
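To illustrate why precision isn't a practical concern here: a 24-bit unorm depth buffer stores one of 2^24 evenly spaced values, so depth slots derived from box IDs land on well-separated, reproducible values. A toy check (simulated quantisation, not real GL):

```cpp
#include <cstdint>
#include <cmath>

// Simulate storing a depth value in a 24-bit unorm depth buffer.
uint32_t toUnorm24(float d) {
    return (uint32_t)std::lround((double)d * ((1u << 24) - 1));
}

// Depth slot for a bounding-box ID, as in the two-pass scheme.
float depthForId(uint32_t id, uint32_t maxIds) {
    return (id + 0.5f) / maxIds;
}
```

Even with thousands of IDs, adjacent slots are thousands of unorm steps apart, so identical inputs always quantise to the identical stored value, which is all GL_EQUAL needs.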

I was thinking of using render-to-texture for the entire GUI tree, all components at once; I wouldn't do it for individual components. With my current concept I may end up needing several draw calls per component.
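In that case the caching logic reduces to a dirty flag: re-render into the texture only when something changed, otherwise just redraw the cached quad. A sketch with the GL calls reduced to comments (the struct and names are made up):

```cpp
// Whole-GUI render-to-texture cache; GL resource handling is elided.
struct GuiCache {
    bool dirty = true;        // set whenever any component changes
    // GLuint fbo, colorTex;  // framebuffer + texture holding the GUI

    // Returns true if the texture was actually re-rendered.
    bool refresh() {
        if (!dirty)
            return false;     // nothing changed: reuse the texture
        // glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        // ... render the whole GUI tree, scissor rects and all ...
        // glBindFramebuffer(GL_FRAMEBUFFER, 0);
        dirty = false;
        return true;
    }
};
// Each frame: cache.refresh(), then draw one textured quad with colorTex.
```

With update rates measured in seconds, the per-component draw calls and glScissor changes then only cost anything on the rare frames where refresh() actually re-renders.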

Thanks again for your answers.