Adjacency primitives from tessellation output

Hello,

I wonder if the tessellation primitive generator could output triangles_adjacency and lines_adjacency primitives from triangles/quads and isolines subdivision modes, respectively, to be used in the geometry shader.

One benefit is that it’d be much easier to calculate normals and other attributes from geometry that is displaced in the tessellation evaluation shader.
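To make the idea concrete, here is a hypothetical sketch of the kind of geometry shader the proposal would enable; the variable names and the simple normal-averaging scheme are illustrative assumptions only, not something that currently works:

#version 400 core

// Hypothetical: this is the input layout the proposal would have the
// tessellator feed. Variable names and the averaging scheme are assumed.
layout(triangles_adjacency) in;
layout(triangle_strip, max_vertices = 3) out;

in  vec3 tePosition[];   // displaced positions from the evaluation shader
out vec3 gNormal;

vec3 faceNormal(vec3 a, vec3 b, vec3 c)
{
    return normalize(cross(b - a, c - a));
}

void main()
{
    // With triangles_adjacency, vertices 0, 2, 4 form the triangle and
    // 1, 3, 5 are the adjacent vertices across its edges.
    vec3 nCenter = faceNormal(tePosition[0], tePosition[2], tePosition[4]);
    vec3 nEdge02 = faceNormal(tePosition[0], tePosition[1], tePosition[2]);
    vec3 nEdge24 = faceNormal(tePosition[2], tePosition[3], tePosition[4]);
    vec3 nEdge40 = faceNormal(tePosition[4], tePosition[5], tePosition[0]);

    // Smooth each corner by averaging the faces that share it.
    vec3 n[3] = vec3[3](normalize(nCenter + nEdge02 + nEdge40),
                        normalize(nCenter + nEdge02 + nEdge24),
                        normalize(nCenter + nEdge24 + nEdge40));

    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[2 * i].gl_Position;
        gNormal     = n[i];
        EmitVertex();
    }
    EndPrimitive();
}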

I believe the adjacency information is already available to the geometry shader, regardless of where the triangle or line primitives originate.

Incidentally, it should also be very easy to do what you want within the tessellation evaluation shader. It has all the GL_PATCHES primitive information available to it, which is more than the geometry shader (with or without adjacency information) ever has.
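As a minimal sketch of that point, assuming a simple bilinear quad patch, an evaluation shader can read every control point of the patch through gl_in[] and derive the normal analytically:

#version 400 core

// Minimal sketch, assuming a bilinear quad patch: the evaluation shader
// sees every control point of the patch through gl_in[], which is more
// than a geometry shader (with or without adjacency) ever sees.
layout(quads, equal_spacing, ccw) in;

out vec3 teNormal;

void main()
{
    vec2 uv = gl_TessCoord.xy;

    vec3 p0 = gl_in[0].gl_Position.xyz;   // corner (u=0, v=0)
    vec3 p1 = gl_in[1].gl_Position.xyz;   // corner (u=1, v=0)
    vec3 p2 = gl_in[2].gl_Position.xyz;   // corner (u=1, v=1)
    vec3 p3 = gl_in[3].gl_Position.xyz;   // corner (u=0, v=1)

    // Bilinear position and its analytic tangents.
    vec3 pos = mix(mix(p0, p1, uv.x), mix(p3, p2, uv.x), uv.y);
    vec3 du  = mix(p1 - p0, p2 - p3, uv.y);
    vec3 dv  = mix(p3 - p0, p2 - p1, uv.x);

    teNormal    = normalize(cross(du, dv));
    gl_Position = vec4(pos, 1.0);
}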

I couldn’t make it work: the render fails with GL_INVALID_OPERATION if the geometry shader declares “layout(triangles_adjacency) in” while tessellation shaders are used. It only works when the adjacency primitives come directly from a draw call with the GL_TRIANGLES_ADJACENCY primitive type. The specs also don’t say anything about it.
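For reference, a bare-bones sketch of the two input layouts involved; the working case just passes triangles through, and the adjacency variant is left in a comment because it has only worked for me when the adjacency primitives come straight from the draw call:

#version 400 core

// Rejected at render time (GL_INVALID_OPERATION) when tessellation
// shaders are active:
// layout(triangles_adjacency) in;

// Works with tessellation: the input type matches what the tessellator
// actually emits (triangles here).
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

// The adjacency layout has only worked when drawing adjacency primitives
// directly, e.g. glDrawArrays(GL_TRIANGLES_ADJACENCY, 0, count), with no
// tessellation shaders bound.

void main()
{
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;   // simple pass-through
        EmitVertex();
    }
    EndPrimitive();
}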

The problem is that, in my case, the computation done by the evaluation shader is very expensive, so I’d like to do it only once per vertex. Computing normals in the evaluation shader would require three evaluations per vertex instead of one, and that is exactly what adjacency information in the geometry shader would avoid.

I see your point. That makes sense. As you wrote, the specs don’t say anything about it, which is why I assumed that they already allowed what you proposed, especially given that the tessellation primitive generator already has all the information required to do so.

There is a potential problem with calculating normals in the geometry shader from tessellation-generated primitives, though: since they are derived from the tessellated triangles rather than from the underlying surface, the normals will in general depend on the number of tessellation subdivisions. So, if you dynamically control the tessellation levels, the surface normal at a given point on the surface will vary dynamically as well.

One benefit is that it’d be much easier to calculate normals and other attributes from geometry that is displaced in the tessellation evaluation shader.

Wouldn’t it make more sense to compute those attributes from the tessellation evaluation shader? Isn’t that what that shader stage is for?

As I said above, sometimes it may be expensive to compute the vertex position in the tessellation evaluation shader; in my case it’s based on fluid simulation equations. To get the normal I’d also have to calculate two adjacent vertex positions for each vertex evaluated, which is three times the work compared to using adjacency information in the geometry shader.
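For illustration, here is a sketch of that triple evaluation in the evaluation shader; expensiveDisplace() is a trivial, hypothetical stand-in for the fluid-simulation computation:

#version 400 core

// Sketch of the triple evaluation being avoided: the expensive surface
// evaluation runs once for the vertex itself and twice more at small
// parametric offsets, just to recover the normal by finite differences.
layout(quads, equal_spacing, ccw) in;

out vec3 teNormal;

vec3 expensiveDisplace(vec2 uv)
{
    // Placeholder body; in practice this is the costly part.
    return vec3(uv, 0.1 * sin(20.0 * uv.x) * cos(20.0 * uv.y));
}

void main()
{
    vec2  uv  = gl_TessCoord.xy;
    float eps = 0.001;

    vec3 p  = expensiveDisplace(uv);                   // 1st evaluation
    vec3 pu = expensiveDisplace(uv + vec2(eps, 0.0));  // 2nd, for the u tangent
    vec3 pv = expensiveDisplace(uv + vec2(0.0, eps));  // 3rd, for the v tangent

    teNormal    = normalize(cross(pu - p, pv - p));
    gl_Position = vec4(p, 1.0);
}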