Generating Texture Space in the Fragment Shader

I’d like to share a code snippet which generates the texture-space vectors on the fly in the fragment shader. All you need are the vertex position, vertex normal and texture coordinates, passed to the fragment shader as varyings:


mat3 genTextureSpace(in vec2 texcoords, in vec3 vertexpos, in vec3 vertexnormal) {
    // screen-space derivatives of the texture coordinates and the position
    vec2 st0 = dFdx(texcoords);
    vec2 st1 = dFdy(texcoords);
    vec3 q0 = dFdx(vertexpos);
    vec3 q1 = dFdy(vertexpos);

    vec3 N = normalize(vertexnormal);
    vec3 T = (q0*st1.y - q1*st0.y);
    vec3 B = (-q0*st1.x + q1*st0.x);

    // handle mirrored texture spaces
    if (dot(N, cross(T, B)) < 0.0) {
        B = -B;
        T = -T;
    }

    // orthogonalize T and B to N;
    // this way the interpolated vertexnormal "smoothes" the generated
    // texture space across the triangle
    T -= N*dot(T, N);
    T = normalize(T);
    B -= N*dot(B, N);
    B = normalize(B);

    // the resulting matrix should be used to transform the fetched
    // normalmap vector into the same space 'vertexpos' and 'vertexnormal'
    // are given in
    return mat3(T, B, N);
}
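For context, here is a minimal usage sketch. The varying and uniform names (v_position, v_normal, v_texcoord, u_normalmap) are placeholders of mine, and the position and normal are assumed to be in eye space; the function above is assumed to be defined earlier in the same shader:

varying vec3 v_position;   // eye-space vertex position
varying vec3 v_normal;     // eye-space vertex normal
varying vec2 v_texcoord;

uniform sampler2D u_normalmap;

void main() {
    mat3 TBN = genTextureSpace(v_texcoord, v_position, v_normal);
    // expand the normal map sample from [0,1] to [-1,1]
    vec3 n = texture2D(u_normalmap, v_texcoord).xyz * 2.0 - 1.0;
    // rotate it into the space v_position/v_normal live in (eye space here)
    vec3 N = normalize(TBN * n);
    // ... do lighting with N; just visualize it for this sketch:
    gl_FragColor = vec4(N * 0.5 + 0.5, 1.0);
}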

This technique is based on
http://hacksoflife.blogspot.com/2009/11/per-pixel-tangent-space-normal-mapping.html

The way mirrored texture coordinates are handled is somewhat guessed, but it works nicely. If anybody has an idea why the problem exists and why it gets fixed this way, I’d like to know. I was kind of surprised to need this, since my CPU-based code doesn’t do it. On the other hand, the CPU code needs to explicitly handle vertices where mirrored textures meet (“seams”): such vertices must be duplicated. The fragment-shader-based TBN generation doesn’t need this kind of treatment.
Another disadvantage of the fragment-shader-based technique is that it sometimes produces a slightly faceted look on skinned models. This is because the per-triangle T and B vectors can’t be averaged across neighboring triangles as they are in the CPU-based method.

Edit: I just realized that this post should have been posted in the GLSL forum… could somebody move it there?

Shouldn’t this be faster:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=282834#Post282834

And just as mathematically correct?

The basic idea of the algorithm is to compute the inverse of the 2x2 matrix formed by dFdx(st) and dFdy(st). This matrix allows transforming from screen-space to tangent-space coordinates, so if you know how much any value changes in screen-space x and y, you can determine how much it changes relative to s and t.
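Spelled out in the variable names of the code above (my own write-up of that derivation): over a triangle the position varies linearly in (s, t), so by the chain rule

\[
q_0 = st_{0,x}\,T + st_{0,y}\,B, \qquad
q_1 = st_{1,x}\,T + st_{1,y}\,B .
\]

Inverting the 2x2 coefficient matrix gives

\[
T = \frac{st_{1,y}\,q_0 - st_{0,y}\,q_1}{\det}, \qquad
B = \frac{-st_{1,x}\,q_0 + st_{0,x}\,q_1}{\det}, \qquad
\det = st_{0,x}\,st_{1,y} - st_{0,y}\,st_{1,x},
\]

which is exactly the T and B from the code, up to the missing division by \(\det\).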

To compute the correct inverse in your code, you would have to divide the T and B vectors by (st0.x * st1.y - st0.y * st1.x), but since the vectors are normalized later anyway, scaling by a constant does not matter, as long as the constant is positive. If it is negative, you have to negate the T and B vectors.

So I think you can replace the if(dot…) with a multiplication of T and B by (st0.x * st1.y - st0.y * st1.x). This should fix the mirrored cases with fewer instructions.
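Something like this, as an untested sketch of the change (multiplying by det gives the same sign as dividing by it, which is all that survives the later normalize()):

float det = st0.x * st1.y - st0.y * st1.x;
vec3 T = det * (q0*st1.y - q1*st0.y);
vec3 B = det * (-q0*st1.x + q1*st0.x);
// no handedness check needed: the sign of det already
// flips T and B for mirrored texture mappings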

And just as mathematically correct?

The code is almost identical… :wink:

So I think you can replace the if(dot…) with a multiplication of T and B by (st0.x * st1.y - st0.y * st1.x). This should fix the mirrored cases with fewer instructions.

I completely missed the point about the sign of (st0.x * st1.y - st0.y * st1.x). By incorporating it back into the code, mirrored textures “just work” without any if statements.
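For completeness, a consolidated version of the function with that change folded in (my own merge of the thread’s conclusion, not tested in exactly this form):

mat3 genTextureSpace(in vec2 texcoords, in vec3 vertexpos, in vec3 vertexnormal) {
    vec2 st0 = dFdx(texcoords);
    vec2 st1 = dFdy(texcoords);
    vec3 q0 = dFdx(vertexpos);
    vec3 q1 = dFdy(vertexpos);
    vec3 N = normalize(vertexnormal);

    // the sign of det encodes whether the texture mapping is mirrored;
    // multiplying by det (instead of dividing) keeps that sign, and the
    // normalize() below removes the magnitude again
    float det = st0.x * st1.y - st0.y * st1.x;
    vec3 T = det * (q0*st1.y - q1*st0.y);
    vec3 B = det * (-q0*st1.x + q1*st0.x);

    // orthogonalize T and B to the interpolated normal, then normalize
    T = normalize(T - N*dot(T, N));
    B = normalize(B - N*dot(B, N));

    return mat3(T, B, N);
}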