Rendering lots of regularly spaced hexagons

For my app I ideally want to be able to render a 2048x2048 field of single color (no gradients or other snazziness) 2D hexagons quickly. I’ve tried a few different approaches:

-Immediate mode with GL_POLYGON
-Display lists with GL_POLYGON
-Display lists with GL_TRIANGLE_STRIP
-Indexed VBO with GL_TRIANGLE_STRIP

The last method is pretty fast, but I hit the memory barrier on my video card (and the ati linux drivers are too sucky to swap out to system memory when VRAM is full). I can only render 1100x1100 hexagons before I get a GL_OUT_OF_MEMORY error. 1100 * 1100 * 6 vertices * 2 dimensions = 14,520,000 floats and the color data means 1100 * 1100 * 6 * 3 colors = 21,780,000 unsigned bytes, which is just too much data. (The actual amount of data is somewhat less than that because I remove redundant vertices, but 1100x1100 is still the hard limit I hit)

But this can’t be the smartest rendering method. I like the speed VBOs offer, but because the hexagons are predictably spaced, it seems redundant to store the exact position of the vertex of every hexagon on the card. I looked around for some OpenGL extension that would do compression on the vertices but couldn’t find one.

What should I try next? My current idea is to render a single filled black hexagon to a texture, and render a field of quads (by quads I mean two triangles in a triangle strip) using the texture. Changing the color of the quad then should change the color of the hexagon in the texture.

I was thinking I could even share vertices between adjacent same colored hexagon-quads because even though you’d normally want the next quad to have different texture coordinates, having them reversed has no effect since a hexagon is symmetrical.
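
For what it’s worth, here is a hedged C sketch of that textured-quad idea. The texture handle, the colour lookup, the spacing and the pointy-top packing maths are all assumptions, and it is written in immediate mode purely to keep the illustration short; the real version would batch these quads into the VBO/strip layout described above.

#include <GL/gl.h>

/* Hypothetical inputs: hexTexture is a white hexagon on a transparent background,
   get_hex_color() returns the plot colour of cell (i, j). */
extern GLuint hexTexture;
extern void get_hex_color(int i, int j, GLubyte rgb[3]);

void draw_hex_field(int cols, int rows, float spacing)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, hexTexture);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  /* quad colour tints the texture     */
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);                                /* skip the transparent quad corners */

    float hexH = spacing * 1.1547f;   /* a pointy-top hexagon is 2/sqrt(3) times taller than it is wide */
    glBegin(GL_QUADS);
    for (int j = 0; j < rows; ++j) {
        for (int i = 0; i < cols; ++i) {
            GLubyte rgb[3];
            get_hex_color(i, j, rgb);
            glColor3ubv(rgb);

            float x = i * spacing + (j & 1) * spacing * 0.5f;     /* odd rows shifted half a cell */
            float y = j * spacing * 0.8660254f;                   /* rows packed by sqrt(3)/2     */
            glTexCoord2f(0, 0); glVertex2f(x,           y);
            glTexCoord2f(1, 0); glVertex2f(x + spacing, y);
            glTexCoord2f(1, 1); glVertex2f(x + spacing, y + hexH);
            glTexCoord2f(0, 1); glVertex2f(x,           y + hexH);
        }
    }
    glEnd();
}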

Someone on IRC suggested I could use shaders but I’m unsure how and have never used them before.

Ideas?

There are many ways to reduce the vertex size in your case with vertex shaders (off the top of my head I can think of a few that would get it down to 4-8 bytes per vertex).

However, what you are doing seems a little strange, can you perhaps explain:

  • What your app is for/doing?
  • Are these hexagons always evenly spaced and in the same position?
  • How often does your data change, that you need to re-draw the scene?
  • What hardware level are you targeting?

Do you really need to have all 2048x2048 hexagons on the card at once? Try a system like quad trees to subdivide your list of hexagons into easily identified sections. You can then use frustum culling to remove the unwanted leaves of the tree. That way you’ll only render what you can see on the screen (plus a bit extra).

Also, you don’t need to define all 2048x2048 in memory. Rather, you could define a set number and then alter just the colours of the visible hexagons.
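
To illustrate the culling idea, here is a minimal C sketch using a flat grid of fixed-size tiles rather than a full quadtree; the tile size, hexagon spacing, view rectangle and the per-tile draw call are all assumptions for illustration.

#define TILE_HEXES   64      /* hexagons per tile side (assumption)                    */
#define HEX_SPACING  1.0f    /* world-space distance between hex centres (assumption)  */

typedef struct { float minX, minY, maxX, maxY; } Rect;

void draw_hex_tile(int tx, int ty);   /* hypothetical: issues the draw call for one tile */

static int rects_overlap(const Rect *a, const Rect *b)
{
    return a->minX < b->maxX && a->maxX > b->minX &&
           a->minY < b->maxY && a->maxY > b->minY;
}

/* Walk all tiles and draw only the ones that intersect the visible rectangle. */
void draw_visible_tiles(const Rect *view, int tilesX, int tilesY)
{
    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            Rect tile;
            tile.minX = tx * TILE_HEXES * HEX_SPACING;
            tile.minY = ty * TILE_HEXES * HEX_SPACING;
            tile.maxX = tile.minX + TILE_HEXES * HEX_SPACING;
            tile.maxY = tile.minY + TILE_HEXES * HEX_SPACING;
            if (rects_overlap(&tile, view))
                draw_hex_tile(tx, ty);
        }
    }
}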

It’s for a hexagonal plot. All 2048x2048 have to be able to be visible at the same time. Normally for plots I could just render the whole thing to a texture once and reuse that, but I think it’d be cooler for the user to be able to drag and resize the plot in real time.

The hexagons are always tightly packed and the same size. The data doesn’t change often as things are now, but I’m planning to have what’s plotted change as data comes in, so in the future it might be more variable.

Ideally GeForce 3+ hardware is what I’m going for, though GeForce 2 level support would be desirable too.

What vertex shader tricks are you thinking?

All 2048x2048 have to be able to be visible at the same time.
What you’re trying to do just isn’t reasonable.

Normally for plots I could just render the whole thing to a texture once and reuse that
How exactly is that going to work? A GeForce 3 may support 4096x4096 textures, but such a rendering would be virtually indistinguishable from drawing 2x2 blocks of color. Unless you’re really rendering 2048x2048 hexes at a resolution big enough to be able to tell that these are hexes and not just dots on a field, there’s no reason to do it the way you’re doing it.

Something like: create a display list of a patch of, say, 100x100 hexagons, then

for (…)
{
    glPushMatrix();
    glTranslatef( … );
    glCallList(hexagonpatchID);
    glPopMatrix();
}
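
For completeness, a hedged sketch of how that hexagonpatchID display list might be built; the patch size, the hexagon spacing and the corner maths assume unit-radius flat-topped hexagons and are not from the thread.

#include <math.h>
#include <GL/gl.h>

#define PATCH 100                 /* hexagons per patch side (assumption) */

/* corner k of a unit-radius, flat-topped hexagon (60-degree steps) */
static float hex_corner_x(int k) { return cosf(k * 1.0471976f); }
static float hex_corner_y(int k) { return sinf(k * 1.0471976f); }

GLuint hexagonpatchID;

void build_hex_patch(void)
{
    hexagonpatchID = glGenLists(1);
    glNewList(hexagonpatchID, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    for (int j = 0; j < PATCH; ++j) {
        for (int i = 0; i < PATCH; ++i) {
            float cx = i * 1.5f;                                  /* column spacing for unit-radius hexes */
            float cy = j * 1.7320508f + (i & 1) * 0.8660254f;     /* row spacing, odd columns half-offset */
            for (int k = 0; k < 6; ++k) {                         /* fan each hexagon as 6 triangles      */
                glVertex2f(cx, cy);
                glVertex2f(cx + hex_corner_x(k),     cy + hex_corner_y(k));
                glVertex2f(cx + hex_corner_x(k + 1), cy + hex_corner_y(k + 1));
            }
        }
    }
    glEnd();
    glEndList();
}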

For a packed vertex format I had something like this in mind.

Attribute 0
[16-bit x pos, 16-bit y pos]

Attribute 1
[8-bit red, 8-bit green, 8-bit blue, 8-bit edge index (values from 0-5)]

= 8 bytes/vertex

Then in the vertex program multiply the x and y position values by some float value to get the center location of each hexagon.

Then, use the edge index value to lookup into a uniform array that have the x,y offsets for each hexagon edge. Then add the offset to the current x,y central position.

Then output the color value.
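
To make that concrete, here is a hedged C-side sketch of the 8-byte vertex and the attribute setup that would feed it to such a vertex program. The struct and function names are assumptions; attribute 1 is passed normalised, so the shader sees the colour as 0..1 and has to scale the edge byte back up before using it as an index.

#include <stddef.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* 8 bytes per vertex, matching the layout above. */
typedef struct {
    GLushort x, y;       /* attribute 0: hexagon grid position  */
    GLubyte  r, g, b;    /* attribute 1.rgb: per-hexagon colour */
    GLubyte  edge;       /* attribute 1.a: corner index, 0-5    */
} PackedHexVertex;

/* Assumes the VBO holding PackedHexVertex data is already bound to GL_ARRAY_BUFFER. */
void setup_packed_attribs(void)
{
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(PackedHexVertex),
                          (const void *)offsetof(PackedHexVertex, x));

    /* normalised, so the shader sees rgb in 0..1; the edge index arrives as edge/255
       and is scaled back up inside the vertex program */
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(PackedHexVertex),
                          (const void *)offsetof(PackedHexVertex, r));
}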

Pseudo vertex shader code (attrib0.xy = 16-bit grid position, attrib1 = [r, g, b, edge index]):

// Get the central position of this hexagon by scaling the integer grid coordinates
vec2 outPos = attrib0.xy * FLOAT_OFFSET_UNIFORM;

// Look up this corner's offset from the centre and add it
// (hexagonEdgeOffsets is a 6-entry uniform array of x,y corner offsets;
//  the byte is scaled back up here to recover the 0-5 index, assuming a normalised attribute)
vec2 offset = hexagonEdgeOffsets[int(attrib1.a * 255.0 + 0.5)];
outPos = outPos + offset;

// Output the color
outColor.rgb = attrib1.rgb;
outColor.a = 1.0;

You could pack this further if you have a limited range of 256 colors (use a color index offset) and are willing to render in 256x256 blocks, e.g.

Attribute 0
[8-bit xpos, 8-bit ypos, 8-bit color index, 8-bit edge index]

= 4 bytes/vertex
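
Sketched as a struct (the names are assumptions), that tighter layout would be:

#include <GL/gl.h>

typedef struct {
    GLubyte x, y;          /* position within a 256x256 block                  */
    GLubyte colorIndex;    /* index into a 256-entry palette held in uniforms  */
    GLubyte edge;          /* hexagon corner index, 0-5                        */
} TinyHexVertex;           /* = 4 bytes per vertex */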

Originally posted by Korval:
All 2048x2048 have to be able to be visible at the same time.
What you’re trying to do just isn’t reasonable.

Really? I already can do 1100x1100 and the rendering code isn’t even that clever yet.

Unless you’re really rendering 2048x2048 hexes at a resolution big enough to be able to tell that these are hexes and not just dots on a field, there’s no reason to do it the way you’re doing it.
At what threshold should I start rendering it differently? 2048x2048 is the maximum size I need to support, but it is not that size every time. Furthermore, I’d have to deal with maintaining code for two different ways of rendering the graph. Not only that, but the user ideally will be able to arbitrarily resize the graph and zoom, which means adding code to switch between modes.

I’ll probably try the vertex program technique later, and my hexagon texture idea. I’ll report back if I find something that works to quell the naysayers. :)

Korval’s right: 2k x 2k evenly spaced objects on screen at once means only one or a few pixels each at most, and this will lead to horrible aliasing issues.
Post a screenshot of what you have.

Zed - I suspect that Jengu needs to change the colour of the hexagons within the grid - so a DL won’t work. But using a streaming VBO should.

I wonder, are the majority of hexagons going to have different colors?

If not, couldn’t this be done using an RGBA texture with a white hexagon in RGB and transparent alpha for the non-hexagon parts? Combine a few of them into a larger texture and tile as many as are needed. Render with whatever the major color is as vertex color.

Then for the few remaining, just slap single-hexagon quads on top (without depth testing, obviously).

If that fails, perhaps another partitioning scheme could be static vertex and index VBOs with N hexagons, but a dynamic color VBO, and tile-render that as needed?
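
A hedged sketch of that last idea: the position and index VBOs are built once and stay static, while a separate streaming colour VBO is re-uploaded only when the plot colours change. The buffer names, the tile size and the orphaning pattern are assumptions.

#include <GL/gl.h>
#include <GL/glext.h>

#define TILE_VERTS (64 * 64 * 6)           /* assumed: 64x64 hexagons, 6 vertices each */

GLuint colorVBO;                           /* hypothetical handle, created elsewhere   */
GLubyte tileColors[TILE_VERTS * 3];        /* CPU-side RGB colours for one tile        */

/* Re-upload only the colours; the position and index VBOs stay static. */
void update_tile_colors(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
    /* orphan the old storage so the driver doesn't stall, then stream in the new data */
    glBufferData(GL_ARRAY_BUFFER, sizeof(tileColors), NULL, GL_STREAM_DRAW);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tileColors), tileColors, GL_STREAM_DRAW);

    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3, GL_UNSIGNED_BYTE, 0, (const void *)0);
}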

rgpc, yes, I was wrong to suggest that; I missed the part about colors in the original post.
But it still doesn’t change the fact that displaying 4 million meshes of one or a few pixels each on screen at once is going to have terrible aliasing.
Solutions are:
A/ use LOD meshes
B/ draw fewer meshes
C/ use a texture (with mipmapping)
etc.

If not, couldn’t this be done using an RGBA texture with a white hexagon in RGB and transparent alpha for the non-hexagon parts? Combine a few of them into a larger texture and tile as many as are needed. Render with whatever the major color is as vertex color.

That’s my next thing to try, minus the combining part; that would complicate the code a bit, and I’m hoping just the texturing will be sufficient.

How “regularly spaced” are your hexagons? Do they each fit into a square in a huge grid and the squares don’t overlap?
This can be done with one quad, multitexturing and/or a simple fragment shader.
Generate a texture map with each pixel the color of one of your hexagons and use it as the color (NEAREST interpolation), plus a second texture with your hexagon shape as a tiled texture.
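
A hedged sketch of that two-texture setup on fixed-function hardware: unit 0 holds a 2048x2048 colour map sampled with NEAREST (one texel per hexagon), unit 1 holds a small hexagon-shape texture repeated once per cell, and the two are modulated together. The texture handles and the grid size are assumptions.

#include <GL/gl.h>
#include <GL/glext.h>

void setup_hex_multitexture(GLuint colorTex, GLuint hexShapeTex)
{
    /* unit 0: one texel per hexagon, point-sampled so each cell gets a flat colour */
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* unit 1: the hexagon shape, repeated across the quad and modulated with unit 0 */
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, hexShapeTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
}

The single quad would then carry texture coordinates 0..1 on unit 0 and 0..2048 on unit 1 (via glMultiTexCoord2f). Note this assumes each hexagon maps into its own rectangular cell, which is exactly the point questioned further down for a tightly packed grid.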

2048x2048 tightly packed hexagons will be 1 pixel each at 2048x2048 resolution! So clearly you need to do some sort of LOD here. Like the old saying goes, “The best polygon is the one that’s never drawn”.
Forgetting the aliasing issues, there will be a lot of overdraw in your case. Although it will not affect your fragment fill rate, you will be consuming vertex throughput, which is your bottleneck.
In essence you are drawing a regular hexagonal grid. Read below…

Using an extremely simple quad-tree based structure where each leaf represents a draw call that draws a certain number of hexagons can greatly simplify your situation and give you decent performance. Each leaf can be rendered in one draw call. In an old thread on this very forum it was concluded that a batch size of at most 64k vertices per draw call is optimal for both NVIDIA and ATI.

Since your leaves will all be similar you can use geometry instancing (but that might not be available for your target hardware). Furthermore, you need just one vertex buffer the size of your leaf node, and you can re-use it everywhere, translating it to the appropriate place before drawing (your leaf mesh is centered around the origin 0,0 in 2D). The translation can be done entirely in the vertex shader, so you will get optimal batching, since you won’t be modifying the transformation matrix. Assuming no other VBOs in your application, you need to bind the vertex buffer just once.

Since your leaf mesh is in object space and you are transforming it to world space in the vertex shader, you can represent it entirely using unsigned shorts. BEWARE, uchars might be an option as well, but if I remember correctly they gave poor performance.
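
A hedged C sketch of that re-use-one-leaf approach: the leaf size, the index layout and the GLSL-style uniform for the per-leaf translation are assumptions (on the GF2/3-class hardware discussed above the offset would instead go through an ARB vertex program parameter).

#include <GL/gl.h>
#include <GL/glext.h>

#define LEAF_HEXES   64                               /* hexagons per leaf side (assumption)  */
#define LEAF_INDICES (LEAF_HEXES * LEAF_HEXES * 12)   /* 4 triangles per hexagon (assumption) */

/* The leaf mesh is built once in object space with unsigned short coordinates,
   bound once, and drawn repeatedly with only the per-leaf translation changing. */
void draw_leaves(GLuint leafVBO, GLuint leafIBO, GLint offsetUniform,
                 int leavesX, int leavesY, float leafWorldSize)
{
    glBindBuffer(GL_ARRAY_BUFFER, leafVBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, leafIBO);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_UNSIGNED_SHORT, GL_FALSE, 0, (const void *)0);

    for (int j = 0; j < leavesY; ++j) {
        for (int i = 0; i < leavesX; ++i) {
            /* world-space position of this leaf, added to every vertex in the vertex shader */
            glUniform2f(offsetUniform, i * leafWorldSize, j * leafWorldSize);
            glDrawElements(GL_TRIANGLES, LEAF_INDICES, GL_UNSIGNED_SHORT, (const void *)0);
        }
    }
}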
Hope this helps.

Originally posted by def:
How “regularly spaced” are your hexagons? Do they each fit into a square in a huge grid and the squares don’t overlap?
This can be done with one quad, multitexturing and/or a simple fragment shader.
Generate a texture map with each pixel the color of one of your hexagons and use it as the color (NEAREST interpolation), plus a second texture with your hexagon shape as a tiled texture.

I think his application is drawing a hex grid as it is used in many paper-based role-playing games (like this). In that case the hexes are packed tightly, and I don’t see how the detail texture approach would work.

Given that you want to target GF2/3 hardware, fragment shaders are out, but that would probably be the highest-quality and simplest solution, as it would allow you to draw arbitrary-precision hexes in the shader.

Given that you want to target GF2/3 hardware, fragment shaders are out, but that would probably be the highest-quality and simplest solution, as it would allow you to draw arbitrary-precision hexes in the shader.
What’s the minimum hardware for those?