How do I draw icons in the Core Profile?

Hello All,

I have some code that draws thousands of screen-space, screen-aligned icons using glBitmap calls compiled into GL display lists. However, I want to port that code to another application that uses the GL core profile (where both glBitmap and display lists are unavailable), and I want to know the best way of accomplishing this. As far as I am aware, the following two options are available:

  1. Place each individual quad (as two triangles) into a VBO and draw it in ortho (with the correct modelview/projection matrices to scale and align the quads in screen space), then do simple texturing in the fragment shader. In this case you can either use one VBO per quad or a single VBO containing all quads drawn with indices, but either way you have to draw each quad individually because each quad needs its own matrices. That leads to a state change per quad and a corresponding performance hit, which I would like to avoid.

  2. Place all the quads into a single VBO (as a collection of thousands of points) and then use vertex and geometry shaders to create screen-scaled, screen-aligned quads, then perform the texturing in the fragment shader. This has the advantage of a single GL state for all quads (you can transform the points with the same matrices, since the geometry shader does the alignment), but it has the disadvantage of requiring a geometry shader, and there are concerns on my team about support across all platforms/hardware. My team would prefer to avoid geometry shaders unless absolutely necessary. (A sketch of what I mean is below.)
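To make option 2 concrete, this is roughly the geometry shader I have in mind (a sketch only; the uniform name is a placeholder):

[CODE]
// Geometry shader sketch for option 2: expand each point into a
// screen-aligned quad. uIconSize is an assumed uniform holding the
// icon's half-size in NDC units.
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform vec2 uIconSize;

out vec2 vTexCoord;

void main()
{
    // The vertex shader has already applied modelview/projection,
    // so gl_Position here is in clip space.
    vec4 centre = gl_in[0].gl_Position;
    vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                             vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i)
    {
        // Offset in clip space, scaled by w so the icon keeps a
        // constant size on screen after the perspective divide.
        gl_Position = centre + vec4(corners[i] * uIconSize * centre.w, 0.0, 0.0);
        vTexCoord   = corners[i] * 0.5 + 0.5;
        EmitVertex();
    }
    EndPrimitive();
}
[/CODE]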

Is there another way to accomplish this task? Drawing icons quickly and efficiently seems to me like a common requirement for applications, so what am I missing?

The most straightforward option is basically option 1 but using screen-space coordinates rather than giving each quad a separate matrix. That way, you can render them all with a single draw call.

Calculating the vertex coordinates in the client will be far more efficient than using one draw call per quad.
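The vertex shader for that approach can be trivial; something like this sketch, assuming the positions are supplied in pixels and uViewport is a uniform you add holding the viewport size:

[CODE]
// Vertex shader sketch: positions are supplied in pixel (screen-space)
// coordinates, so every icon can go into one VBO and one draw call.
// uViewport (viewport size in pixels) is an assumed uniform name.
#version 330 core
layout(location = 0) in vec2 aPositionPx;   // quad corner in pixels
layout(location = 1) in vec2 aTexCoord;

uniform vec2 uViewport;

out vec2 vTexCoord;

void main()
{
    // Map pixel coordinates to normalised device coordinates.
    vec2 ndc = aPositionPx / uViewport * 2.0 - 1.0;
    gl_Position = vec4(ndc, 0.0, 1.0);
    vTexCoord = aTexCoord;
}
[/CODE]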

An alternative to option 2 is to use instanced rendering rather than a geometry shader, but that has roughly the same compatibility issues as using a geometry shader.
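For reference, the vertex shader for the instanced version could look roughly like this (a sketch; attribute and uniform names are made up, and the icon origin would be a per-instance attribute configured with glVertexAttribDivisor on the client side):

[CODE]
// Instancing sketch: one 4-vertex quad drawn with glDrawArraysInstanced
// (or glDrawElementsInstanced), with the icon origin as a per-instance
// attribute (glVertexAttribDivisor(1, 1)). Names are placeholders.
#version 330 core
layout(location = 0) in vec2 aCorner;       // quad corner in [-1, 1]
layout(location = 1) in vec3 aIconOrigin;   // per-instance world-space origin

uniform mat4 uModelViewProjection;
uniform vec2 uIconSizeNdc;                  // icon half-size in NDC

out vec2 vTexCoord;

void main()
{
    vec4 centre = uModelViewProjection * vec4(aIconOrigin, 1.0);
    // Offset in clip space, scaled by w to keep a constant screen size.
    gl_Position = centre + vec4(aCorner * uIconSizeNdc * centre.w, 0.0, 0.0);
    vTexCoord   = aCorner * 0.5 + 0.5;
}
[/CODE]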

A more compatible (but probably less efficient) version of option 2 is “DIY instancing”, where you store the vertex data in a uniform array or texture which is indexed using expressions based upon gl_VertexID.
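A rough sketch of the "DIY instancing" variant, with per-icon data in a uniform array (the array size and names are arbitrary; for large icon counts you would use a texture or buffer texture instead):

[CODE]
// "DIY instancing" sketch: draw 6 * iconCount vertices with no vertex
// attributes (an empty VAO bound) and index the per-icon data with
// gl_VertexID. Names and the array size are for illustration only.
#version 330 core

uniform mat4 uModelViewProjection;
uniform vec2 uIconSizeNdc;                  // icon half-size in NDC
uniform vec3 uIconPositions[128];           // world-space icon origins

// Corner offsets for the two triangles of a quad.
const vec2 corners[6] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(1.0, 1.0),
                               vec2(-1.0, -1.0), vec2(1.0,  1.0), vec2(-1.0, 1.0));

out vec2 vTexCoord;

void main()
{
    int icon   = gl_VertexID / 6;           // which icon
    int corner = gl_VertexID % 6;           // which corner of that icon

    vec4 centre = uModelViewProjection * vec4(uIconPositions[icon], 1.0);
    vec2 offset = corners[corner] * uIconSizeNdc * centre.w;

    gl_Position = centre + vec4(offset, 0.0, 0.0);
    vTexCoord   = corners[corner] * 0.5 + 0.5;
}
[/CODE]

That would be drawn with a single glDrawArrays(GL_TRIANGLES, 0, 6 * iconCount) call.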

Not really an option, as you would need to recalculate the screen-space coordinates each time the camera is moved; I neglected to mention that the camera does move. (Or am I missing something?)

[QUOTE=GClements;1278961]
An alternative to option 2 is to use instanced rendering rather than a geometry shader, but that has roughly the same compatibility issues as using a geometry shader.

A more compatible (but probably less efficient) version of option 2 is “DIY instancing”, where you store the vertex data in a uniform array or texture which is indexed using expressions based upon gl_VertexID.[/QUOTE]

I considered instancing but came to the same conclusions.

Part of me thinks I am just missing something, as it seems there just has to be a way to do this. But perhaps it is simply not a priority in most other applications.

In which case, specify the vertex positions in “world” coordinates (or whatever coordinate system comes before the camera transformation), so that all icons share the same transformation matrix.

And in the worst case (where each icon moves independently each frame), updating four vertices on the client will still be faster than another draw call.

Depending on whether icons can go partially off screen and need clipping or not, and their maximum size in pixels, you might be able to draw the icons as points. That way you only need a single vertex per icon.

You definitely don’t need a geometry shader, and instancing (the API way) is not a good idea with just four vertices per instance. Many GPUs cannot combine instances in the same warp/wavefront, so in the worst case you end up using only 1/16 of the SIMD width.

[QUOTE=GClements;1278968]In which case, specify the vertex positions in “world” coordinates (or whatever coordinate system comes before the camera transformation), so that all icons share the same transformation matrix.
[/QUOTE]

That is basically my option 2, but without the geometry shader, isn’t it? But option 2 only works because they are points; I don’t think it works with actual geometry.

If I make multiple axis-aligned quads (pick any consistent axes, say X-Y aligned), arrange them in 3D world space, and create a VBO, will the same transform be able to scale and align all of them into screen space? It seems to me the scale won’t work: depending on the camera angle and the distance from the camera to the object, won’t the transform need to change (to handle the scale)? That would mean a state change and drawing each icon independently.

[QUOTE=Firadeoclus;1278968]
Depending on whether icons can go partially off screen and need clipping or not, and their maximum size in pixels, you might be able to draw the icons as points. That way you only need a single vertex.[/QUOTE]

You mean using GL_POINT_SPRITE? Is that part of the Core Profile? I thought it was deprecated; I didn’t see it on the reference card.

So you’re actually positioning the sprites in 3D?

In that case, if you don’t want to use a geometry shader or instancing, you’ll need to fake instancing in the vertex shader. Anything that can be done with instancing can be done without it, at the cost of increased memory; instancing lets you draw M×N primitives at the cost of O(M+N) memory rather than O(M×N).

Set all four vertex positions to the same position (the sprite’s 3D “origin”), transform the vertex position normally, then adjust the transformed position, using either gl_VertexID % 4 or the vertex’s texture coordinates to determine which of the corners you’re dealing with.

If the icon size can vary, each invocation of the vertex shader needs to be able to figure out the size of the quad to which it belongs. If you only have a small number of sizes, you can use a separate draw call for each size, with the size in a uniform. Otherwise the size can be stored in an additional attribute, or a uniform array or texture (an attribute may be faster but requires the data be repeated for each vertex of a quad).
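A minimal sketch of that vertex shader, assuming four vertices per icon drawn as indexed triangles (so gl_VertexID % 4 identifies the corner) and the size supplied as a per-vertex attribute; all names here are placeholders:

[CODE]
// Vertex shader sketch: all four vertices of an icon carry the same
// world-space origin; the corner offset is applied after the normal
// transform so the quad stays screen-aligned with a fixed pixel size.
// Assumes indexed triangles with 4 vertices per icon, so gl_VertexID % 4
// identifies the corner. Names are placeholders.
#version 330 core
layout(location = 0) in vec3 aOrigin;       // icon origin, repeated 4 times
layout(location = 1) in vec2 aSizePx;       // icon size in pixels

uniform mat4 uModelViewProjection;
uniform vec2 uViewport;                     // viewport size in pixels

const vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                               vec2(-1.0,  1.0), vec2(1.0,  1.0));

out vec2 vTexCoord;

void main()
{
    // Transform the shared origin normally.
    vec4 centre = uModelViewProjection * vec4(aOrigin, 1.0);

    // Then push this vertex out to its corner. Dividing the pixel size by
    // the viewport gives an NDC offset; multiplying by w makes it survive
    // the perspective divide unchanged, so the icon stays a constant pixel size.
    vec2 corner = corners[gl_VertexID % 4];
    vec2 offset = corner * (aSizePx / uViewport) * centre.w;

    gl_Position = centre + vec4(offset, 0.0, 0.0);
    vTexCoord   = corner * 0.5 + 0.5;
}
[/CODE]

If the corner is encoded in the texture coordinates instead, you can drop the gl_VertexID dependency entirely.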

[QUOTE=PickleWorld;1278981]
You mean using GL_POINT_SPRITE? Is that part of the Core Profile? I thought it was deprecated; I didn’t see it on the reference card.[/QUOTE]
In the core profile, points are always rendered as sprites; there is no fixed-function pipeline, so the fragment colour is always determined by the fragment shader.

The main issue with point sprites is that the set of supported sizes is implementation-dependent. Only 1x1 points are guaranteed to be supported. See glGet() with GL_POINT_SIZE_RANGE and GL_POINT_SIZE_GRANULARITY.
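If those limits are acceptable, the shaders are about as simple as it gets; a sketch (uniform names made up), remembering to glEnable(GL_PROGRAM_POINT_SIZE) so the vertex shader controls the size:

[CODE]
// Point-sprite sketch: one vertex per icon. gl_PointSize sets the sprite's
// side length in pixels; gl_PointCoord gives per-fragment texture
// coordinates. Uniform names are made up.

// --- vertex shader ---
#version 330 core
layout(location = 0) in vec3 aOrigin;

uniform mat4  uModelViewProjection;
uniform float uIconSizePx;

void main()
{
    gl_Position  = uModelViewProjection * vec4(aOrigin, 1.0);
    gl_PointSize = uIconSizePx;   // clamped to the implementation's point size range
}

// --- fragment shader ---
#version 330 core
uniform sampler2D uIcon;

out vec4 fragColour;

void main()
{
    // gl_PointCoord's origin is the upper left by default; flip if needed.
    fragColour = texture(uIcon, gl_PointCoord);
}
[/CODE]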

Yes. Sorry, sometimes when you have an issue clear in your head and try to explain it you leave out key details. My bad.

I basically have a bunch of 3D objects in a scene that are drawn with icons. We used to use glBitmap and glRasterPos3d to draw them (via display lists).

[QUOTE=GClements;1278984]
Set all four vertex positions to the same position (the sprite’s 3D “origin”), transform the vertex position normally, then adjust the transformed position, using either gl_VertexID % 4 or the vertex’s texture coordinates to determine which of the corners you’re dealing with.[/QUOTE]

Now that is a very interesting suggestion. I will give this a try :wink: Thank you!