Rendering "wrap-around" (infinite) world

Hello forum!

I am in the process of refreshing my OpenGL knowledge, which is based on the good old fixed-function pipeline from around 2000. As a private project I want to “port” a game for which I wrote a prototype back then, as part of an exercise we had to do for the graphics course that was part of my studies. In the meantime I am somewhere in the middle of the OpenGL SuperBible (which covers OpenGL 4.3), and have also read about the same number of pages in the “OpenGL Programming Guide” (which also covers 4.3).

So much for my introduction.

I want to render my bitmap-height-field-based (rectangular) world such that when the player “leaves” the world at one border he is automagically teleported to the other side of the field. Some of you might remember the good old “Virus” (aka “Zarch” on Archimedes computers) game - that’s exactly what I am trying to achieve :slight_smile:

What I did back then in my “prototype” was simple: for each frame I re-defined the vertices using a simple “modulo calculation”. Given the current viewpoint of the player - the X/Y position in the underlying bitmap height field - I took the pixel values in a certain rectangular area around that viewpoint and made sure that negative values, or values exceeding the width of the pixmap, were “wrapped around” by taking the modulo (making sure that negative values would become positive). Easy. Simple.
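For reference, that “positive modulo” looks like this in C++ (just a sketch - wrap is a name I made up):

    // Wraps any index, including negative ones, into [0, size):
    int wrap(int i, int size) { return ((i % size) + size) % size; }

    // e.g. reading the height field around the viewpoint (vx, vy):
    // height = pixels[wrap(vy + dy, fieldHeight) * fieldWidth
    //                 + wrap(vx + dx, fieldWidth)];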

But I understand of course that glVertex is deprecated and that all the rage these days is vertex buffers (VBOs) and shaders.

So my question is more like “what are the best practices in OpenGL to implement such a ‘wrap-around’ algorithm”, and not so much “how do I implement (algorithmically) such a wrap-around/infinite world”.

Also note that my question purely aims at the “graphical representation part” of such a world. Game logic such as shortest paths, audio source positioning and physics simulation (keyword: “Bullet physics”) is all left as an exercise for later :wink:

I do see several possibilities with an OpenGL 4.x context:

A) I can continue to stream the vertex data, as in my prototype with glVertex, but this time using vertex buffers

I could choose a proper streaming technique (buffer invalidation, updating the data of a fixed-size buffer, …) and determine the visible vertices as before (“simple modulo calculation”).
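One common way to do that streaming is to “orphan” the buffer each frame, so the driver does not have to stall on draws still in flight. A rough C++ sketch (streamPatch and Vertex are names I made up; the patch itself is still built on the CPU with the modulo wrapping from above):

    #include <GL/glew.h>   // or whatever loader is in use
    #include <vector>

    struct Vertex { float x, y, z, r, g, b; };

    void streamPatch(GLuint vbo, const std::vector<Vertex>& patch)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        const GLsizeiptr bytes = patch.size() * sizeof(Vertex);
        // Re-specifying the store with no data "orphans" the old contents:
        glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_STREAM_DRAW);
        // Then upload this frame's visible vertices into the fresh store:
        glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, patch.data());
    }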

Advantages:

  • Only a limited set of vertices (determined by game logic and “tight clipping planes” which give the desired visual effect) needs to be processed by the pipeline
  • Simple modulo calculation

Disadvantages:

  • Streaming (but the data would really be small - we’re talking about a game which was done back in 1987 on 16-bit hardware, after all)
  • Feels like “that is not the way to do it in modern OpenGL these days”

B) I could upload the entire world vertex data (the bitmap is going to be 512 x 512 pixels, maybe 1024 x 1024 - that order of magnitude) as STATIC data (GL_STATIC_DRAW) and do the “vertex wrapping” in my vertex shader

So I would e.g. pass the current viewpoint (and the area to be seen around it in “width” and “height”) as parameters to the vertex shader, and whenever I detect that the “viewable area” would overlap with a world border, I would add the corresponding offset (world width and/or height) to the current vertex coordinates.
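In GLSL that wrap could look like this (a sketch - viewCenter and worldSize are uniform names I invented; rounding to the nearest whole world is equivalent to adding ± world width/height only when a border is crossed):

    #version 430 core

    layout(location = 0) in vec3 inPosition;   // x/y in world units, z = height
    layout(location = 1) in vec3 inColour;

    uniform vec2 viewCenter;   // current player position in world units
    uniform vec2 worldSize;    // world extent (width, height) in world units
    uniform mat4 mvp;

    out vec3 vColour;

    void main()
    {
        // Shift each vertex by whole worlds so it lands in the copy of the
        // world nearest to the viewer; round() yields -1, 0 or +1 here:
        vec2 p = inPosition.xy
               - worldSize * round((inPosition.xy - viewCenter) / worldSize);
        gl_Position = mvp * vec4(p, inPosition.z, 1.0);
        vColour = inColour;
    }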

That would probably mess up the “order of the vertices” (in my previous simple “modulo” approach the vertex coordinates were ordered “from top left to bottom right”, allowing me to draw them e.g. as a triangle strip), but that could probably be sorted out by rendering individual triangles (which I might want to do anyway in the future, because each vertex of a triangle should have exactly the same colour), and we are not talking about many triangles anyway.
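As an aside: if the goal is one colour per triangle, GLSL’s flat interpolation qualifier might avoid duplicating vertices altogether, depending on how the grid is indexed - every fragment then receives the value of the triangle’s provoking vertex:

    flat out vec3 vColour;   // in the vertex shader
    flat in  vec3 vColour;   // matching declaration in the fragment shader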

Advantages:

  • No streaming; the entire vertex data would reside in graphics card memory
  • Shaders are good :slight_smile:

Disadvantages:

  • The order of vertices (as they are defined in the vertex buffer) would (could) be changed by shifting them around in the shader; but that might not be a problem after all when rendering individual triangles anyway (to be elaborated and tested)

C) Instanced drawing

Again I would upload the entire world data into a single vertex buffer, and then draw the world 9 times with “instanced rendering”: once as the “actual playfield”, and around it (shifted by world width and/or height in all directions) “instanced copies” of the world.

That could probably be optimised by taking the player position into account and only enabling the instanced copies where really necessary (3 extra copies at most, in case the player is in one corner of the world).
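Shader-side this would be a small addition on top of option B’s setup - a sketch, with gl_InstanceID mapped onto a 3 x 3 grid of world copies (worldSize as in the sketch above):

    // Derive a (-1|0|+1, -1|0|+1) cell from the instance index, so a single
    // instanced draw covers the world plus its 8 neighbouring copies:
    vec2 offset = vec2(gl_InstanceID % 3 - 1,
                       gl_InstanceID / 3 - 1) * worldSize;
    vec2 p = inPosition.xy + offset;

On the host side, one glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, 9) call would then draw all nine copies.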

Advantages:

  • Very simple

Disadvantages:

  • Probably a bit “too much” to draw up to 9 (or, with the optimisation, 4) times as many vertices, just so the world “wraps around” - but then again, given today’s GPU hardware that might be “peanuts”

So currently I tend towards solution B), “wrapping around” the vertices when they fall into the part of the viewing area that overlaps the actual world border.

But maybe “streaming the vertices” as in solution A) would still be the way to go? For now I only want to be able to change the vertex colours now and then (when the world becomes “infected” by the virus - see again the linked Wikipedia article), but in the future I might also want to alter the z-value of the vertices (“world deformation”) - that would “come for free” when I need to update the vertex data for each frame anyway.

So any thoughts about this? What is the “OpenGL way” to do this?

Thanks,
Oliver

The usual answer to this is to try each and compare the options. Personally I would go with option B, using a texture (which can now be as big as 8K x 8K or more) for the height map. With the number of vertices you are streaming, you may not see that much difference using your old code (i.e. glVertex) on a modern graphics card.

How about sampling the translated height map from within the vertex shader and displacing the vertices vertically? Instead of scrolling the geometry, just use a fixed patch that fills the frustum and translate the height map. You’ll of course need to translate it in discrete steps corresponding to the size of a pixel, and then additionally translate the patch within the range [0, terrain_unit]. So something like:

    // In GLSL, '%' is not defined for floats and int() truncates towards zero
    // (wrong for negative positions), so use floor() and mod() instead:
    vec2 texture_uv_translate = floor(position / terrain_unit) / float(texture_size);
    vec2 patch_translate = mod(position, terrain_unit);

All of this can be done in the shader. This way you could have an enormous wrapping terrain but draw only what’s actually visible. You’ll need to supply and sample a normal map along with the height map if you plan to use lighting.
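Putting that together, the vertex shader might look roughly like this (all the u* names are mine, purely to illustrate the idea; it assumes the height map sampler uses GL_REPEAT on both axes so the coordinates wrap for free):

    #version 430 core

    // The patch itself never moves: x/y in world units, centred on the
    // viewer and large enough to fill the frustum.
    layout(location = 0) in vec2 inGridPos;

    uniform sampler2D uHeightMap;   // wrap mode GL_REPEAT on s and t
    uniform vec2  uViewPos;         // player position in world units
    uniform float uTerrainUnit;     // world-space size of one texel
    uniform float uTextureSize;     // height map is uTextureSize^2 texels
    uniform mat4  uMvp;

    void main()
    {
        // Split the player position into whole texels plus a remainder:
        vec2 snapped   = floor(uViewPos / uTerrainUnit) * uTerrainUnit;
        vec2 remainder = uViewPos - snapped;

        // World position of this vertex, kept aligned to texel centres:
        vec2 worldXY = inGridPos + snapped;

        // Out-of-range coordinates wrap thanks to GL_REPEAT:
        vec2 uv = worldXY / (uTerrainUnit * uTextureSize);
        float h = textureLod(uHeightMap, uv, 0.0).r;

        // Draw camera-relative; shifting by the sub-texel remainder keeps
        // motion smooth between the discrete height-map steps:
        gl_Position = uMvp * vec4(inGridPos - remainder, h, 1.0);
    }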
