I am looking into doing tessellation of my terrain mesh in the GS. I haven't used the GS yet, so I am in new territory here. My first question: after you create new polygons to tessellate the terrain, where do the normal and tangent vectors come from? Do you just interpolate them across the new geometry from the values at the original 4 vertices for quads or 3 for triangles? And what about texture coordinates?
I am looking at eliminating the swim effect in texture coordinates when a polygon spans a large area, e.g. cliff edges. And I would like to add new polygons to get more detailed, rough terrain in certain spots like mountain cliffs or shorelines…
I tried this on a terrain mesh, and I just linearly interpolate everything from the corner vertices. But I'm not performing any smoothing on the output vertices either - if you were, then you could probably use the same curve that you used to smooth the vertex positions, or a separate curve - google for "curved PN triangles". For texture coordinates I would think you just linearly interpolate them regardless of whether you are smoothing or not. I set up my GS to tessellate to an arbitrary level, but there seems to be a hardware limit that I hit for anything over 3 levels of tessellation. I haven't tried mixing tessellation levels on the same piece of geometry, but I think there are methods for doing fractional levels of tessellation that would be good for that.
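For what "linearly interpolate everything from the corner vertices" means concretely, here is a minimal sketch (the helper name is mine, not from the thread) of interpolating a per-vertex attribute at a tessellated vertex using barycentric weights - positions, normals, and texcoords all interpolate the same way:

```python
# Sketch: linearly interpolate a per-vertex attribute (here, a texcoord)
# at a new tessellated vertex given barycentric coordinates (u, v).
def lerp_attr(a, b, c, u, v):
    """Interpolate one attribute (tuple of floats) at barycentric (u, v)."""
    w = 1.0 - u - v
    return tuple(w * ai + u * bi + v * ci for ai, bi, ci in zip(a, b, c))

# Texture coordinates at the three corners of the original triangle.
uv0, uv1, uv2 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)

# The centroid (u = v = 1/3) lands at the average of the corners.
center = lerp_attr(uv0, uv1, uv2, 1.0 / 3.0, 1.0 / 3.0)
```

The same weights applied to the corner normals give you the (unnormalized) interpolated normal; renormalize it before use.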
Here is the shader - for 0, 1, 2, or 3 levels of tessellation, max_geometry_output_vertices is 3, 8, 24, or 80, i.e. (2^n + 2) * 2^n. It just creates a strip for each resulting row of triangles. An improvement would be to hold onto the top vertices from one row and reuse them as the bottom vertices of the next row.
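As a sanity check on that formula: each of the 2^n strip rows emits 2^n + 2 vertices, which reproduces the quoted counts.

```python
# Sketch verifying the max_geometry_output_vertices formula above:
# (2^n + 2) * 2^n for n levels of tessellation.
def max_output_vertices(n):
    rows = 2 ** n            # one triangle strip per row
    per_row = 2 ** n + 2     # vertices emitted per strip
    return rows * per_row

counts = [max_output_vertices(n) for n in range(4)]
# counts -> [3, 8, 24, 80]
```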
Yeah GS = Geometry Shader like FS = Fragment Shader and VS = Vertex Shader
So do you need to send only GL_TRIANGLES to the GS, or does the GS only output triangles when it tessellates the mesh? I haven't gotten around to playing with the GS yet. Thanks
The GS will take as an input points, lines, triangles, and the adjacency versions of lines and triangles (good for doing real subdivision). Triangles and lines handle both lists and strips. Output types are only points, line strips, and triangle strips.
A new question based on this topic: for tessellating the mesh on, let's say, a cliff edge, I am referring to the problem where textures become smeared. Am I correct in saying that the GS isn't going to fix this, since it will not make new texture coordinates that allow less smearing? Thanks
You can output whatever texture coordinates you like from the GS, but if you are just interpolating to generate those new texture coordinates you will have the same problem. So, the GS by itself isn’t enough to fix this problem, you’ll have to come up with some way of hiding the stretching. Perhaps you could use the GS to detect the amount of stretching and blend into a separate texture mapping that has been squashed in the appropriate direction.
What about, in the GS, determining the amount of stretching and then lerping to some tile amount for that area?
e.g.
texCoord *= mix(1.0, 5.0, stretchAmount);
If this is allowable, will the texture coordinates only lerp on the newly created geometry and not affect the rest of the mesh?
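Here's a hedged sketch of that idea in Python rather than GLSL (all names and the max-stretch threshold are my assumptions, not from the thread): estimate stretch as the ratio of world-space edge length to texture-space edge length, normalize it to [0, 1], and blend the tiling factor with the same semantics as GLSL's mix():

```python
import math

def mix(x, y, a):
    """GLSL-style linear blend: x * (1 - a) + y * a."""
    return x * (1.0 - a) + y * a

def stretch_amount(p0, p1, uv0, uv1, max_stretch=8.0):
    """World edge length over texture edge length, mapped to [0, 1]."""
    world = math.dist(p0, p1)
    tex = math.dist(uv0, uv1)
    ratio = world / tex if tex > 1e-6 else max_stretch
    return min(ratio / max_stretch, 1.0)

# A steep cliff edge: 4 world units covered by only 0.1 texture units.
s = stretch_amount((0.0, 0.0, 0.0), (0.0, 4.0, 0.0), (0.0, 0.0), (0.0, 0.1))
tile = mix(1.0, 5.0, s)   # more tiling where the texture is stretched
```

In the actual GS you would compute this per triangle from the input vertices and scale the output texcoords by the blended tile factor.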
I wouldn’t really recommend using geometry shaders for this kind of tessellation - they weren’t designed for producing a large amount of output.
The limit you're hitting is likely the 1024-float output limit. Since you're writing (float4 position, float3 normal, float2 texcoord), you can only write about 113 vertices maximum in a single pass.
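The arithmetic behind that, for reference - each vertex costs 9 floats, and four tessellation levels would already need far more vertices than fit:

```python
# Sketch of the 1024-float output budget mentioned above.
floats_per_vertex = 4 + 3 + 2             # float4 + float3 + float2
max_vertices = 1024 // floats_per_vertex  # -> 113

# Four tessellation levels need (2^4 + 2) * 2^4 vertices, well past
# the budget -- consistent with the failure above 3 levels.
needed_for_four_levels = (2 ** 4 + 2) * 2 ** 4   # -> 288
```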
It may be faster to do the subdivision recursively - i.e. do one level of subdivision per pass.
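One level per pass could look like the following sketch (hypothetical, CPU-side for illustration): split each triangle into four at its edge midpoints, then feed the result back in for the next pass.

```python
# Sketch: one midpoint-subdivision pass, applied recursively.
def midpoint(a, b):
    return tuple((ai + bi) * 0.5 for ai, bi in zip(a, b))

def subdivide_once(tris):
    out = []
    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # One triangle becomes four: three corners plus the center.
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

tris = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
for _ in range(3):          # three passes -> 4^3 = 64 triangles
    tris = subdivide_once(tris)
```

On the GPU each pass would be a GS emitting 4 triangles per input triangle, streamed out and re-submitted, which keeps every pass well under the output limit.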
Calculating smooth normals for geometry generated by a GS can be a problem. One solution is to stream out the vertices to a vertex buffer (using NV_transform_feedback), and then average the face normals in a separate pass.
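That averaging pass amounts to the following (a CPU-side sketch with hypothetical helper names - on the GPU you would do the equivalent over the streamed-out buffer): accumulate each face normal onto its three vertices, then normalize.

```python
import math

def face_normal(a, b, c):
    """Unit normal of triangle (a, b, c) via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / length for x in n]

def smooth_normals(vertices, faces):
    """Average the face normals incident to each vertex, then normalize."""
    acc = [[0.0, 0.0, 0.0] for _ in vertices]
    for i, j, k in faces:
        n = face_normal(vertices[i], vertices[j], vertices[k])
        for idx in (i, j, k):
            for axis in range(3):
                acc[idx][axis] += n[axis]
    result = []
    for n in acc:
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        result.append([x / length for x in n])
    return result

# A flat quad in the xy plane: every vertex should end up with (0, 0, 1).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2), (0, 2, 3)]
normals = smooth_normals(vertices, faces)
```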
I’d recommend the GS for silhouette detection / shadow volume extrusion, shell and fin generation for fur rendering, curves, custom point sprites, and other applications that generate a relatively small amount of data.
Originally there was going to be a separate tessellation engine in DirectX 10 hardware but it was removed. The GS was mainly designed for post-processing the output of the tessellator.
What about single pass cubemap creation and single pass stereo rendering?
I think so. I haven’t really looked at these extensions yet (I’ll be all over them in January), but the hardware is certainly capable.