View Full Version : gl_NormalMatrix Replacement



ViolentHamster
09-27-2010, 03:20 PM
I'm changing my application to get away from the shader builtins. What's the preferred method to replace gl_NormalMatrix? I'm assuming computing the transpose of the modelview inverse on the CPU is the best approach. Maybe this belongs in the math forum, but can someone point me to fast matrix inversion code?

Thanks.

Dark Photon
09-27-2010, 06:59 PM
Might check out what Mesa (http://www.mesa3d.org) does. Probably some optimization for orthonormal and orthogonal MODELVIEWs with uniform scale (i.e. NORMALIZE, RESCALE_NORMAL -- old fixed-function things).

ViolentHamster
09-28-2010, 07:22 AM
I took a look at Mesa. Unfortunately, it supports only OpenGL 2.1. GLSL 1.3, which was introduced along with OpenGL 3.0, was the first GLSL version to deprecate gl_NormalMatrix.

Right now, I lazily compute the NormalMatrix and call glUniform for the four matrices: NormalMatrix, Projection, ModelViewMatrix, and ModelViewProjection. The compute and upload happen right before draw calls whose matrices are dirty. However, the performance is terrible.

Before I upgraded my application from GLSL 1.2 to 3.3, my frame time for a particular view was 29 ms. After the upgrade, it was 38 ms. If I remove the inverse computation for the NormalMatrix, the frame time improves to 30 or 31 ms.

So maybe I just have an awful matrix inverse routine. Still, I was using a single generic glLoadMatrix before, so I imagine the driver would also have had to use a generic inverse method.

Isn't everyone else in the same situation? What have other people done to minimize the performance impact?

ViolentHamster
09-28-2010, 07:46 AM
The Real-Time Rendering book suggests using the matrix adjoint to compute the normal matrix.

I tried that, and now my new implementation is only about 2 ms slower.

Dark Photon
09-29-2010, 06:02 PM
I took a look at Mesa. Unfortunately, it supports only OpenGL 2.1. GLSL 1.3, which was introduced along with OpenGL 3.0, was the first GLSL version to deprecate gl_NormalMatrix.
Yeah, but that doesn't matter. You wanted to know how to compute the normal matrix fast. Mesa supports a software rendering path, so it has to contain normal matrix computation logic.