Terrain rendering techniques

Hi everyone

I am working on a terrain rendering engine and I have a few questions about rendering techniques. I am self-taught in OpenGL, with only a few years of experience, so please forgive me if my questions are trivial.

My engine is intended for projection on a large surface, for flight-simulation purposes.
My main constraints are:

  1. avoid abrupt LOD transitions or any visible artifacts, because something unnoticeable on a monitor will definitely draw attention on a 3-meter-wide screen
  2. keep a constant 60 fps at all costs

I have no legacy-hardware constraints, so I can use any OpenGL 4.5 feature if necessary.

My terrain LOD algorithm is based on CDLOD. To sum it up, it's a quadtree-based algorithm in which the whole terrain is drawn using a single geometry patch. For each node, the geometry patch can be dynamically morphed in the vertex or tessellation shader to seamlessly match its lower-detailed neighboring nodes.
The result is just perfect: I can render massive scenery with an insane number of triangles and without any popping artifacts.

The downside is having only one piece of geometry with which to render very different materials. A lot of examples just use aerial imagery, but my engine will be based on landclass texture splatting.
To avoid branching, and since the land type is sampled from a mask texture, I end up with a massive fragment shader that computes all potential materials and samples a lot of textures, only to select a single one based on this landclass mask. Of course, it also implies sending a lot of uniforms and binding a lot of textures.
I can do some optimizations, like grouping the tiles that are composed only of water and drawing them separately with a water-only shader, but that doesn't help with silly cases like a 100 km × 100 km terrain tile containing a small lake: the water material will be uselessly computed for all fragments.
In the end, the fragment shader seems to be my bottleneck. I'm also quite sure that the uniform updates and texture switches for each quadtree node come at a significant cost.
I'm searching for ways to improve this, but I lack knowledge of rendering techniques. Do you have any suggestions?

Another problem I face is the rendering of vector data. I want to render roads in my engine, but given the terrain algorithm, roads can't be cut into the geometry…
I also did not try rendering roads as separate geometry following the terrain, because I was worried about intersections and z-fighting.
So I opted for render-to-texture. For each new tile in close vicinity, the roads are rendered into a texture that is then applied to my tile patch. Unfortunately, the cost of binding an FBO (cycling through a batch of FBOs and attaching a new texture) for each tile seems to be too high, and I end up with stutters.
Any suggestions on this? Completely different methods are of course welcome; maybe my hope for massive real-time render-to-texture at 60 FPS was unrealistic…

Thanks for sharing your ideas !

Sylvain

Oops, I wanted to post this topic in the “advanced” section rather than here. Can it be moved, please?

Moved to Advanced as requested.

OK, since no one answered, I guess my question was not clear… sorry about that.
Let me reformulate it:

I) Render to texture FBOs :
Obviously, rendering to texture using a single FBO and switching attachments is a bad idea. On the other hand, having exactly one FBO per RTT operation can quickly fill memory, especially if you want to keep the generated textures over time. Is there an efficient middle ground between these two for doing RTT while keeping the textures?
Example: would it be efficient to request an FBO from a pool, render to texture, use this FBO's color target in the render thread while copying it into a new texture in a worker thread, then release the FBO and use the newly copied texture?

II) Materials :
I have a small set of geometry that is rendered many times, each time with different uniforms and textures. A single material per draw call leads to too much driver overhead (uniform updates, texture switches, etc.), while too many materials per draw call leads to heavy memory occupation and fragment-shading cost. Is there a better option than just trying to find the right compromise between those two?

Several things:

the whole terrain is drawn using a unique geometry patch

That won't help. Issue as few draw calls as you can.

the land type is sampled from a mask texture, I end up having a massive fragment shader computing all potential materials and sampling a lot of textures to finally select only one depending on this landclass mask

So you mean you bind all your textures at once, on many texture units? As above: don't consume resources for things you don't need (here, things you don't see).

Regarding your situation, if you cannot change your rendering algorithm, you might try considering procedural texturing.

I mean a single VBO drawn multiple times (once for each quadtree leaf).

[QUOTE=Silence;1283064]
So you mean you bind all your textures at once in many texture units ? A bit like above, don’t consume for things you don’t need (here, you don’t see).
Regarding your situation, if you cannot change your rendering algorithm, you might try considering procedural texturing.[/QUOTE]

Yes, I bind all potentially used textures before rendering the terrain. Especially with tiles covering a large surface, all kinds of terrain textures will probably be used.
I considered using procedural texturing, but most of my landclass textures have patterns like crop fields, urban areas, swamps… hard to reproduce procedurally.

Do you really have that many different materials and textures? With texture splatting you generally select one texture out of a few, not out of many, and you select it with a quick test (a texture look-up). You might then consider grouping your textures, i.e. having a single image for everything grass-related, including the borders that transition to forest, road or water. Which part of the grass image is used as the texture is chosen from the texture coordinates. A second image would handle another group, for example the trees, and again you would select which tree texture to render from the texture coordinates, rather than with a big computation done for each fragment.
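If I read the grouping suggestion right, a minimal sketch of such an atlas look-up could be the following (the 4×4 layout, `ATLAS_TILES`, `atlasSampler` and `classIndex` are all illustrative assumptions, not part of the original engine):

```glsl
// Fragment-shader sketch: pick one sub-tile of a 4x4 texture atlas
// from a landclass index, then tile it with fract().
const float ATLAS_TILES = 4.0;

vec2 atlasUV(vec2 uv, float classIndex)
{
    vec2 tileOrigin = vec2(mod(classIndex, ATLAS_TILES),
                           floor(classIndex / ATLAS_TILES));
    // fract() makes the sub-texture repeat inside its atlas cell.
    return (tileOrigin + fract(uv)) / ATLAS_TILES;
}

// Usage inside main():
//   vec3 color = texture(atlasSampler, atlasUV(uv, classIndex)).rgb;
```

One caveat with atlases is that mipmapping and filtering bleed across cell borders, so border padding (or, since OpenGL 4.5 is available, array textures) is usually needed.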

You might also consider a different workflow: decide on the material in the vertex shader, not the fragment shader. This, however, implies doing the masking at the vertex level instead of using an image.
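A hedged sketch of that vertex-level workflow, assuming a per-vertex landclass attribute and an array texture (both assumptions, not stated in the thread):

```glsl
// Vertex-shader sketch: the material is decided per vertex and passed
// with 'flat', so the fragment shader does no per-fragment mask fetch.
layout(location = 3) in int materialIndex;  // baked per-vertex landclass
flat out int vMaterial;

void main()
{
    vMaterial = materialIndex;
    // ... usual MVP transform of the patch vertex goes here ...
}

// Matching fragment-shader side: exactly one sample per fragment.
//   uniform sampler2DArray materialTextures;
//   flat in int vMaterial;
//   vec3 color = texture(materialTextures, vec3(uv, float(vMaterial))).rgb;
```

The trade-off is that material boundaries then snap to the patch's vertex grid, which may be too coarse for fine landclass detail.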

Yes I do; you quickly end up with a lot of different materials. Most people use texture splatting with just a few simple materials like grass/rock/dirt, but I need more of them to represent more land classes.

That's the problem! If I'm not wrong, this is not really a selection but a mix: you never select one texture and ignore the others; you always sample all the available textures and mix the results.
Typically, if you want to do texture splatting with 4 materials, you'll have:

vec4 mask = texture(maskSampler, uv);

vec3 mat0 = texture(mat0Sampler, uv).rgb;
vec3 mat1 = texture(mat1Sampler, uv).rgb;
vec3 mat2 = texture(mat2Sampler, uv).rgb;
vec3 mat3 = texture(mat3Sampler, uv).rgb;
// 'defaultMat' is the fallback material ('default' is a reserved word in GLSL)
vec3 final = mix(mix(mix(mix(defaultMat, mat0, mask.r), mat1, mask.g), mat2, mask.b), mat3, mask.a);

Most of the time only one material is used, but you have to do it this way just to handle the rare cases where all the textures are mixed.
This is the bottleneck of my engine: my frame time is divided by 6 if I sample only the default material.
The only way I can see to avoid this is an if test before each sample, but branching might not help with performance…
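For reference, a rough sketch of that branch-before-sampling idea (same illustrative sampler names as the snippet above). Since the mask is not dynamically uniform, divergent pixels still pay for every taken path, but whole single-material regions skip the fetches. One subtlety: texture fetches with implicit derivatives are undefined inside non-uniform control flow, hence textureGrad:

```glsl
// Compute derivatives outside the branches: implicit-derivative texture()
// calls are undefined under non-uniform control flow.
vec2 dx = dFdx(uv);
vec2 dy = dFdy(uv);

vec4 mask  = texture(maskSampler, uv);
vec3 final = defaultMat;

// Each fetch runs only where its mask weight is non-zero; this helps
// when the condition is coherent across neighboring fragments.
if (mask.r > 0.0) final = mix(final, textureGrad(mat0Sampler, uv, dx, dy).rgb, mask.r);
if (mask.g > 0.0) final = mix(final, textureGrad(mat1Sampler, uv, dx, dy).rgb, mask.g);
if (mask.b > 0.0) final = mix(final, textureGrad(mat2Sampler, uv, dx, dy).rgb, mask.b);
if (mask.a > 0.0) final = mix(final, textureGrad(mat3Sampler, uv, dx, dy).rgb, mask.a);
```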

I see what you mean, but I think we have the same problem I described above: to handle the case where a fragment lies at the intersection of grass, road, forest and water, the fragment shader still has to sample all those textures and mix the results.

I might have a solution: this problem was tackled by DICE with their first Frostbite engine, in a paper called “Terrain rendering in Frostbite using procedural shader splatting”.
They faced the problem of having too many materials to mix in their splatting shader. Their solution consists in generating a different shader for each possible combination of materials and assigning the appropriate shader to each tile.
This is quite simple to implement, and I guess it will be very efficient for small tiles, which use only 2 or 3 materials most of the time.
For larger tiles, pregenerated textures might be the better option, since there are fewer of them and they are displayed at longer distances.
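If I understand the per-combination approach correctly, it could be sketched with preprocessor defines: the application prepends the matching #defines before compiling, producing one program variant per material combination (all names are illustrative, not from the paper):

```glsl
// One shader variant per material combination: the application prepends
// lines such as "#define USE_MAT0" / "#define USE_MAT1" at compile time,
// and each tile is drawn with the variant matching its material set.
vec4 mask  = texture(maskSampler, uv);
vec3 final = defaultMat;

#ifdef USE_MAT0
final = mix(final, texture(mat0Sampler, uv).rgb, mask.r);
#endif
#ifdef USE_MAT1
final = mix(final, texture(mat1Sampler, uv).rgb, mask.g);
#endif
#ifdef USE_MAT2
final = mix(final, texture(mat2Sampler, uv).rgb, mask.b);
#endif
#ifdef USE_MAT3
final = mix(final, texture(mat3Sampler, uv).rgb, mask.a);
#endif
// A tile that uses a single material compiles down to exactly one sample.
```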

In a more recent paper, they even improved that technique using virtual texturing. If I understand correctly, instead of running their procedural splatting shaders every frame, they run them once and store the result in a virtual texture that is then used every frame.
I'll consider implementing this as a second step.

Thank you very much, Silence, for your help and proposals. I hope this topic will help people facing the same issues.