GS tessellation for terrain meshes

I am looking into tessellating my terrain mesh in the GS. I haven’t used the GS yet, so I am in new territory here. My first question: after you create new polygons to tessellate the terrain, where do the normal and tangent vectors come from - do you just interpolate them across the new geometry from the original values at the 4 corners for quads (or 3 for triangles)? And what about texture coordinates?

I am looking at eliminating the texture-coordinate swim effect you get when a polygon spans a large area, e.g. cliff edges. And I would basically like to add in new polygons to get more detailed, rough terrain in certain spots like mountain cliffs or shorelines…

*bump anyone?

What’s a GS?

He probably means a geometry shader, or maybe a grumpy Sasquatch.

I tried this on a terrain mesh, and I just linearly interpolate everything from the corner vertices. I’m not performing any smoothing on the output vertices either - if you were, then you could probably use the same curve that you used to smooth the vertex positions, or use a separate curve - google for “curved PN triangles”. For texture coordinates I would think you just linearly interpolate them regardless of whether you are smoothing or not. I set up my GS to tessellate to an arbitrary level, but there seems to be a hardware limit that I hit for anything over 3 levels of tessellation. I haven’t tried mixing tessellation levels on the same piece of geometry, but there are methods for doing fractional levels of tessellation that would be good for that.

Here is the shader - for 0, 1, 2, or 3 levels of tessellation, max_geometry_output_vertices is 3, 8, 24, or 80 - i.e. (2^n + 2) * 2^n. It just creates a strip for each resulting row of triangles. An improvement would be to hold onto the top vertices from one row and reuse them as the bottom vertices for the next row.

#version 120
#extension GL_EXT_geometry_shader4 : enable

// input
varying in vec3 normal_vout[];
varying in vec2 tex_coord_vout[];

// output
varying out vec3 normal_gout;
varying out vec2 tex_coord_gout;

uniform int subd_level;

void main()
{
	int strips = 1 << subd_level;      // can't be const in GLSL 1.20 - subd_level is a uniform
	float step = 1.0 / float(strips);

	float w1 = 0;
	float w2 = step;
	for (int i = 0; i < strips; i++)
	{
		float u1 = 0;
		float u2 = 0;
		float v1 = 1.0 - float(i) * step;
		float v2 = 1.0 - float(i + 1) * step;

		int verts_top = strips - i;
		for (int j = 0; j < verts_top; j++)
		{
			// bottom
			gl_Position = u1 * gl_PositionIn[0] + v1 * gl_PositionIn[1] + w1 * gl_PositionIn[2];
			normal_gout = u1 * normal_vout[0] + v1 * normal_vout[1] + w1 * normal_vout[2];
			tex_coord_gout = u1 * tex_coord_vout[0] + v1 * tex_coord_vout[1] + w1 * tex_coord_vout[2];
			EmitVertex();

			// top
			gl_Position = u2 * gl_PositionIn[0] + v2 * gl_PositionIn[1] + w2 * gl_PositionIn[2];
			normal_gout = u2 * normal_vout[0] + v2 * normal_vout[1] + w2 * normal_vout[2];
			tex_coord_gout = u2 * tex_coord_vout[0] + v2 * tex_coord_vout[1] + w2 * tex_coord_vout[2];
			EmitVertex();

			u1 += step;
			u2 += step;
			v1 -= step;
			v2 -= step;
		}
		// bottom
		gl_Position = u1 * gl_PositionIn[0] + v1 * gl_PositionIn[1] + w1 * gl_PositionIn[2];
		normal_gout = u1 * normal_vout[0] + v1 * normal_vout[1] + w1 * normal_vout[2];
		tex_coord_gout = u1 * tex_coord_vout[0] + v1 * tex_coord_vout[1] + w1 * tex_coord_vout[2];
		EmitVertex();

		EndPrimitive();

		w1 += step;
		w2 += step;
	}
}

edit - GLSL geometry shaders don’t support quad input, while the NVASM ones sort of do (by splitting them into triangles), in case that is an issue for you

Yeah, GS = geometry shader, like FS = fragment shader and VS = vertex shader.

So do you need to send only GL_TRIANGLES to the GS, or does the GS only output triangles when it tessellates the mesh? I haven’t gotten around to playing with the GS yet. Thanks

The GS takes as input points, lines, triangles, and the adjacency versions of lines and triangles (good for doing real subdivision). Triangles and lines handle both lists and strips. The output types are only points, line strips, and triangle strips.

A new question based on this topic: for tessellating the mesh on, let’s say, a cliff edge, I am referring to the problem where textures become smeared. Am I correct in saying that the GS isn’t going to fix this, since it will not make new texture coordinates that allow less smearing? Thanks

You can output whatever texture coordinates you like from the GS, but if you are just interpolating to generate those new texture coordinates you will have the same problem. So, the GS by itself isn’t enough to fix this problem, you’ll have to come up with some way of hiding the stretching. Perhaps you could use the GS to detect the amount of stretching and blend into a separate texture mapping that has been squashed in the appropriate direction.

What about determining the amount of stretching in the GS and then lerping to some tile amount for that area?
e.g.
tileAmount = mix(1.0, 5.0, stretchAmount);

If this is allowable, will the texture coordinates only lerp on the newly created geometry, and not affect the rest of the mesh?

I wouldn’t really recommend using geometry shaders for this kind of tessellation - they weren’t designed for producing a large amount of output.

The limit you’re hitting is likely the 1024-float output limit. Since you’re writing (vec4 position, vec3 normal, vec2 texcoord) = 9 floats per vertex, you can only write about 113 vertices maximum in a single pass.

It may be faster to do the subdivision recursively - i.e. do one level of subdivision per pass.

Calculating smooth normals for geometry generated by a GS can be a problem. One solution is to stream the vertices out to a vertex buffer (using NV_transform_feedback) and then average the face normals in a separate pass.

http://developer.download.nvidia.com/opengl/specs/GL_NV_transform_feedback.txt

What would you recommend the GS for then? I was under the assumption that the GS would be somewhat of a tessellation unit… Thanks

I’d recommend the GS for silhouette detection / shadow volume extrusion, shell and fin generation for fur rendering, curves, custom point sprites, and other applications that generate a relatively small amount of data.

Originally there was going to be a separate tessellation engine in DirectX 10 hardware but it was removed. The GS was mainly designed for post-processing the output of the tessellator.

What about single pass cubemap creation and single pass stereo rendering? Is this possible and recommended?

Simon, when you say a small amount of data, do you mean with respect to the input, or absolutely?

Small amount of data with regard to each vertex.

What about single pass cubemap creation and single pass stereo rendering?
I think so. I haven’t really looked at these extensions yet (I’ll be all over them in January), but the hardware is certainly capable.

Small amount of data with respect to the whole geometry program.

You can reduce the output by reducing the number of vertices and by reducing the number of scalar attributes per vertex. Doing both is even better.