Simple "flat shader" (without geometry shader)

i’m asking myself how to create a program that exactly does what is known as “flat shading”

assuming a triangle:
flat shading means that the resulting color for all pixels that cover the face is the (arithmetic) average color:
color(any pixel) = (color(vertex1) + color(vertex2) + color(vertex3)) / 3

but without a geometry shader, i don’t have access to all three vertices (or the results of all three vertex shader invocations)
when passing the resulting color of 1 vertex shader invocation as “flat out vec3 color”, the outputs of the other 2 invocations are simply discarded
passing “(smooth) out vec3 color” will result in interpolating the 3 colors across the triangle, which is NOT “flat shading”
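
for illustration, a minimal sketch of the two declarations i mean (attribute and variable names are placeholders):

[code]
// vertex shader (sketch)
in vec3 aPosition;
in vec3 aColor;

flat   out vec3 flatColor;   // no interpolation: all fragments get the provoking vertex's value
smooth out vec3 smoothColor; // default: value is interpolated across the triangle

void main()
{
    flatColor   = aColor;
    smoothColor = aColor;
    gl_Position = vec4(aPosition, 1.0);
}
[/code]

(the fragment shader has to declare matching “flat in” / “smooth in” variables)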

questions:
am i missing something?
do i need a geometry shader in my program to do correct flat shading?
and what about “gouraud shading”?

[QUOTE=john_connor;1283967]i’m asking myself how to create a program that exactly does what is known as “flat shading”
[/QUOTE]
Apart from using a geometry shader, the other options are:

  1. For each triangle, set the normal of the last vertex to the face normal. In the vertex shader, use the “flat” interpolation qualifier on the normal (or on any property derived from it, e.g. colour). Sketches of both options are given after this list.

Note that this will reduce your ability to share vertices between triangles, as vertices with the same position will have different face normals. Although only one vertex of each triangle needs a valid normal, large triangle meshes tend to have approximately twice as many triangles (and thus face normals) as vertices, so you’ll end up with roughly double the number of vertices. Also, constructing the topology so as to minimise the number of vertices required isn’t trivial for arbitrary meshes.

  2. Perform flat shading in the fragment shader. If you have an interpolated variable P containing the position, then the face normal can be calculated as cross(dFdx(P), dFdy(P)). The normal will be in the same space as P, e.g. if P is in eye space, so is the calculated normal.

This avoids the need to modify anything else in the rendering pipeline (you’d typically already have the eye-space position available to the fragment shader if you’re performing per-fragment lighting calculations). However, it’s inefficient, as you’re calculating the normal for every fragment even though it’s constant across the surface of the primitive.
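
To make both options concrete, here are minimal sketches. All names (attributes, uniforms, varyings) are placeholders of mine, not anything prescribed by GL. For option 1, only the provoking vertex’s normal needs to be valid:

[code]
// option 1 (vertex shader): face normal stored per vertex,
// valid on the provoking (by default: last) vertex of each triangle
in vec3 aPosition;
in vec3 aFaceNormal;

flat out vec3 vNormal; // one value for the whole triangle

uniform mat4 uModelView;
uniform mat4 uProjection;

void main()
{
    // fine for rotations/translations; use the inverse transpose
    // of the model-view matrix if there is non-uniform scaling
    vNormal     = mat3(uModelView) * aFaceNormal;
    gl_Position = uProjection * uModelView * vec4(aPosition, 1.0);
}
[/code]

For option 2, the normal is derived per fragment from the interpolated position:

[code]
// option 2 (fragment shader): face normal from screen-space derivatives
in vec3 vEyePos; // interpolated eye-space position from the vertex shader

out vec4 fragColor;

void main()
{
    // perpendicular to the triangle, in the same space as vEyePos;
    // the sign may need flipping depending on your winding convention
    vec3 faceNormal = normalize(cross(dFdx(vEyePos), dFdy(vEyePos)));
    // ... lighting calculation using faceNormal goes here ...
    fragColor = vec4(faceNormal * 0.5 + 0.5, 1.0); // placeholder: visualise the normal
}
[/code]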

ok, thanks for your reply!

i think i’ll go for 1.). i thought there could be an issue when applying a “point light” (which has distance-dependent attenuation), but …
that matters only for large faces near the light source

[QUOTE=john_connor;1283979]
i think i’ll go for 1.). i thought there could be an issue when applying a “point light” (which has distance-dependent attenuation),[/QUOTE]
Distance-based attenuation is likely to be less of an issue than the fact that the light direction changes across the surface. That’s an issue if you want actual flat-shading rather than just flat (faceted) surfaces. This is equally applicable to either method.

Using the [var]flat[/var] qualifier on the variable holding the lighting direction will avoid that. But then the light direction will be that for one of the vertices. If you want the light direction calculated for e.g. the face centroid, you’d need to add another attribute to each vertex. As with the normal, this would be [var]flat[/var]-qualified and only need to be valid for the last vertex of each triangle (or optionally the first; see glProvokingVertex).
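
A minimal sketch of that idea (assuming an eye-space point-light position uLightPos and a per-vertex centroid attribute aCentroid; both names are mine, not from the thread):

[code]
// vertex shader (sketch)
in vec3 aPosition;
in vec3 aCentroid; // face centroid; only needs to be valid on the provoking vertex

flat out vec3 vLightDir; // one light direction for the whole triangle

uniform mat4 uModelView;
uniform mat4 uProjection;
uniform vec3 uLightPos; // point-light position in eye space

void main()
{
    vec3 centroidEye = vec3(uModelView * vec4(aCentroid, 1.0));
    vLightDir   = normalize(uLightPos - centroidEye);
    gl_Position = uProjection * uModelView * vec4(aPosition, 1.0);
}
[/code]

By default the provoking vertex is the last one; calling glProvokingVertex(GL_FIRST_VERTEX_CONVENTION) in the application switches it to the first.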
