Compute quad and tria normals automatically

I’m a new member, so sorry if this is the wrong part of the forum, but here is my problem:

I’m an engineer and I write software that is used to display Finite Element (FE) meshes. These are made up of a mixture of trias and quads, arbitrarily connected at their vertices, and there can be several million in a typical mesh.

For engineering purposes rendering works best if one uses “flat” lit primitives, i.e. each quad or tria has a single outward normal that is used for all its vertices. (Rendering speed is everything, and clever graphics are neither helpful nor wanted.)

At present I compute these normals and store them as signed byte triples (an accuracy of one part in +/-127 is good enough); then, in outline, to render a quad I do:

glNormal*(quad normal)
glVertex*(vertex #1)
& so on to
glVertex*(vertex #4)

And it all works fine.
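
For what it’s worth, the computation itself is simple. A stripped-down C++ sketch of the idea (names are made up, and a non-planar quad needs a little more care than this):

#include <cmath>

// Take two edges of the face, cross them, normalize, and quantize the result
// to signed bytes in the range [-127, 127].
struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// v0, v1, v2 are three corners of the tria/quad in winding order;
// the winding decides which side counts as "outward".
static void faceNormalToBytes(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                              signed char out[3])
{
    Vec3 e1 = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
    Vec3 e2 = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
    Vec3 n  = cross(e1, e2);

    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }

    // One part in +/-127 is plenty for flat engineering shading.
    out[0] = static_cast<signed char>(n.x * 127.0f);
    out[1] = static_cast<signed char>(n.y * 127.0f);
    out[2] = static_cast<signed char>(n.z * 127.0f);
}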

However there are two problems with this approach:

#1 I have to compute and store all these normals, then send them down to the graphics card. This costs me a lot of storage and also a lot of wasted bandwidth.

#2 I can’t use Vertex Arrays for the normals since I only have one normal per quad/tria, rather than one normal per vertex. (Am I right that there is no way to use Vertex Arrays with “per quad/tria” rather than “per vertex” normal data?)

As far as I can see there is no built-in OpenGL command that will compute these normals for me (leaving aside Bezier patches and/or NURBS surfaces, which would be major overkill for this) - am I right about this? It’s frustrating, since it is such a simple process and the graphics card could do it so easily.

I’ve also looked at vertex programming, and it’s really frustrating that all the mathematical operations I need (cross products etc) are there, but I can’t see any way of using them on a “per quad” as opposed to “per vertex” basis.

Am I missing something blindingly obvious? Or perhaps someone has already come up with a clever solution to this problem?

Any input would be welcome.

If all you need is per-face lighting, then the best choice so far (if you don’t want to lose any bandwidth) is to compute it in a geometry shader, which sees the whole face as it is. You are flat-shaded, so all vertices are disjoint and not shared between neighbours, and there is no extra computation. But GS speed depends strongly on the output size, so you have to manage it very carefully.
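
Something along these lines is what I mean - a sketch in GLSL 1.50 syntax, held in a C++ string ready for glShaderSource. “eyePos” is assumed to be an eye-space position written by your vertex shader, and a quad arrives here as two triangles:

// Illustrative GLSL 1.50 geometry shader: compute one normal per incoming
// triangle and pass the same value to all three emitted vertices.
// Assumes the vertex shader writes "out vec3 eyePos;" (eye-space position)
// and the fragment shader reads "in vec3 faceNormal;".
static const char* kFlatNormalGS = R"(
    #version 150
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in  vec3 eyePos[];    // eye-space positions from the vertex shader
    out vec3 faceNormal;  // constant across the face, i.e. flat shading

    void main()
    {
        // Face normal = normalized cross product of two edge vectors.
        vec3 n = normalize(cross(eyePos[1] - eyePos[0],
                                 eyePos[2] - eyePos[0]));
        for (int i = 0; i < 3; ++i) {
            faceNormal  = n;
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
)";

The output is capped at three vertices per input triangle, which is exactly the kind of small, fixed output size that keeps a geometry shader reasonably fast.
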
Personally, I would rather pack those CPU-calculated normals into each vertex, save everything into one big VBO, and not think about it again. Today you need a normal, tomorrow you might need a colour - who knows? The more flexible solution always survives :slight_smile:
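
For example, something like this (illustrative C++ using the classic vertex-array calls; struct and function names are made up, and the same interleaved data can just as well be uploaded into one big VBO):

#include <GL/gl.h>

// Illustrative interleaved layout: every vertex carries a copy of its face's
// normal, so plain vertex arrays (or a VBO holding the same data) work even
// though the normal is logically per-quad.
struct FlatVertex {
    GLfloat pos[3];
    GLbyte  nrm[3];   // quantized face normal, duplicated for each corner
};

static void drawQuads(const FlatVertex* verts, GLsizei quadCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);

    glVertexPointer(3, GL_FLOAT, sizeof(FlatVertex), verts->pos);
    glNormalPointer(GL_BYTE, sizeof(FlatVertex), verts->nrm);

    // Four vertices per quad; nothing is shared between neighbouring faces.
    glDrawArrays(GL_QUADS, 0, quadCount * 4);

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}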

Oh man, you don’t really need a geometry shader to do flat shading without normals. There is a faster way which works on SM3.0 hardware.

It’s quite simple:
Pass the position of the vertex from your vertex shader to the pixel shader. Whatever coordinate space you choose there is the space the normal will end up in. To get the normal in the pixel shader, take the cross product of the partial derivatives of the position with respect to screen X and Y. A code sample in Cg/HLSL:

normal = normalize(cross(ddx(pos), ddy(pos)));
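
Roughly the same thing in GLSL, where the derivative functions are called dFdx()/dFdy() - shown as a C++ string constant just to have it ready for glShaderSource. “eyePos” is an assumed eye-space position passed down from the vertex shader, and depending on your conventions you may need to flip the sign of the cross product:

// Sketch of a GLSL fragment shader that reconstructs the flat face normal
// from screen-space derivatives of the interpolated eye-space position.
static const char* kFlatNormalFS = R"(
    #version 120
    varying vec3 eyePos;   // eye-space position from the vertex shader

    void main()
    {
        // The screen-space derivatives of the position both lie in the
        // face's plane, so their cross product points along the face normal.
        vec3 n = normalize(cross(dFdx(eyePos), dFdy(eyePos)));

        // Simple diffuse term with a light at the eye (+Z in eye space).
        float diff = max(dot(n, vec3(0.0, 0.0, 1.0)), 0.0);
        gl_FragColor = vec4(gl_Color.rgb * diff, gl_Color.a);
    }
)";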

Eosie
Your solution won’t work at face edges, where dd* is not well defined - you would get a blocky result.
But generally speaking - yes, you’re right about partial derivatives, I had forgotten about them.

Partial derivatives do work in this case. They are faster than the geometry shader method and even work on older hardware that has no GS. There are no blocky artifacts at polygon edges in the computed normals: the finite differences are taken between positions on the same face (the helper pixels of a 2x2 quad belong to the same primitive), so both derivative vectors lie in the face’s plane and their cross product is the face normal at every pixel.

Thanks skynet, I just hadn’t used dd*() for computing normals before, so I wasn’t sure about the correctness of the derivatives.