In search of more interpolants....

I have a program that creates an optimized single-pass per-pixel lighting shader (using ARB vertex and fragment programs) upon import of an artist-created object that’s been exported from Max or Maya. These shaders target GeForce 6 (PS 3.0) level hardware.

Unfortunately, I’m now getting some models whose shaders would need to pass more interpolants between the vertex and fragment programs than the hardware allows, and I was hoping someone could help with some ideas to get around this.

Here’s how my current shader system uses the 8 texture coordinate interpolants (roughly as in the vertex program sketch below):

Texcoord units 0-2: reserved for passing diffuse, normal, and lightmap texcoords to the fragment program (these texcoords may be multiplied by a texture matrix, automatically generated, or just passed through)
Texcoord unit 3: reserved for the tangent space eye direction computed in the vertex shader
Texcoord units 4-7: reserved for tangent space light directions computed in the vertex shader (I want to support up to 4 per-pixel lights per object)
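In case it helps, this is roughly what the generated vertex program ends up writing (just a sketch of the layout, not the real generator output; the attribute and program.env bindings are only illustrative, and only one light is shown):

```
!!ARBvp1.0
# Sketch of how the 8 texcoord interpolants are currently allocated.
# Attribute slots and parameter bindings are illustrative only.
ATTRIB iPos = vertex.position;
ATTRIB iNrm = vertex.normal;
ATTRIB iTan = vertex.attrib[6];              # tangent (assumed binding)
ATTRIB iBin = vertex.attrib[7];              # binormal (assumed binding)
PARAM  mvp[4]   = { state.matrix.mvp };
PARAM  eyePos   = program.env[0];            # eye position in object space (assumed)
PARAM  lightPos = program.env[1];            # light 0 position in object space (assumed)
TEMP   v;

# clip-space position
DP4 result.position.x, mvp[0], iPos;
DP4 result.position.y, mvp[1], iPos;
DP4 result.position.z, mvp[2], iPos;
DP4 result.position.w, mvp[3], iPos;

# units 0-2: diffuse, normal, and lightmap texcoords (shown as pass-through)
MOV result.texcoord[0], vertex.texcoord[0];
MOV result.texcoord[1], vertex.texcoord[1];
MOV result.texcoord[2], vertex.texcoord[2];

# unit 3: tangent space eye direction
SUB v, eyePos, iPos;
DP3 result.texcoord[3].x, v, iTan;
DP3 result.texcoord[3].y, v, iBin;
DP3 result.texcoord[3].z, v, iNrm;

# units 4-7: tangent space light directions (light 0 shown; lights 1-3 are analogous)
SUB v, lightPos, iPos;
DP3 result.texcoord[4].x, v, iTan;
DP3 result.texcoord[4].y, v, iBin;
DP3 result.texcoord[4].z, v, iNrm;
END
```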

The artists wish to add diffuse and normal detail maps to the models, but as you can see I am out of generic interpolators :frowning:.

The ideal solution to this problem would allow me to create a shader to render a generic material with any number of texture coordinate sets affected by any number of lights (speed is not so important).

An acceptable solution would be finding some way to free up 2 more slots to cover the immediate need for a base detail and a normal detail texture.

Due to the complexity of the shader creation code (and for the sake of my sanity maintaining it), I really want to avoid any multi-pass solutions.

Thanks in advance :slight_smile:

Hello Zeno,
I think you can pass as many lights as you want, plus the eye, as uniform parameters to the pixel shader (all coordinates in world space), calculate the world-to-tangent-space basis matrix in the vertex shader, pass it to the pixel shader (3 varyings instead of the previous 5), renormalize it there, and save the two texcoords you need.
If the tangent space is regular (geometry well tessellated) you should get the same render quality.
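Roughly like this on the vertex side (only a sketch; the attribute bindings, the model-to-world matrix binding, and packing the world space position into the spare w components so the pixel shader can form per-pixel light/eye vectors are just one way I can imagine setting it up):

```
!!ARBvp1.0
# Sketch: output the world space tangent/binormal/normal as three interpolants
# so the fragment program can rebuild the world-to-tangent basis there.
ATTRIB iPos = vertex.position;
ATTRIB iNrm = vertex.normal;
ATTRIB iTan = vertex.attrib[6];                # tangent (assumed binding)
ATTRIB iBin = vertex.attrib[7];                # binormal (assumed binding)
PARAM  mvp[4]   = { state.matrix.mvp };
PARAM  model[4] = { state.matrix.program[0] }; # model-to-world matrix (assumed binding)
TEMP   wPos, wTan, wBin, wNrm;

DP4 result.position.x, mvp[0], iPos;
DP4 result.position.y, mvp[1], iPos;
DP4 result.position.z, mvp[2], iPos;
DP4 result.position.w, mvp[3], iPos;

DP4 wPos.x, model[0], iPos;                    # world space position
DP4 wPos.y, model[1], iPos;
DP4 wPos.z, model[2], iPos;

DP3 wTan.x, model[0], iTan;                    # rotate the basis vectors into world space
DP3 wTan.y, model[1], iTan;
DP3 wTan.z, model[2], iTan;
DP3 wBin.x, model[0], iBin;
DP3 wBin.y, model[1], iBin;
DP3 wBin.z, model[2], iBin;
DP3 wNrm.x, model[0], iNrm;
DP3 wNrm.y, model[1], iNrm;
DP3 wNrm.z, model[2], iNrm;

MOV result.texcoord[3].xyz, wTan;              # the three basis rows
MOV result.texcoord[4].xyz, wBin;
MOV result.texcoord[5].xyz, wNrm;
MOV result.texcoord[3].w, wPos.x;              # spare w components carry the world position
MOV result.texcoord[4].w, wPos.y;
MOV result.texcoord[5].w, wPos.z;
END
```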

Hi Symmenthical, thanks for the idea :slight_smile:

Some quick thoughts:

The upside of your idea is that I could pass just the tangent and normal to the fragment program (and compute the binormal there, which actually saves instructions: a cross product instead of a normalize). This would use 2 varying parameters instead of 5, a savings of 3 slots, which is more than enough to solve my current problem.
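Something like this on the fragment side (sketch only, and the register layout is just for illustration: I'm assuming the tangent and normal arrive in texcoord[3]/[4], that the world space position gets across somehow (shown here in texcoord[5]), and I've left out the normal map lookup, N.L, and attenuation; one light shown):

```
!!ARBfp1.0
# Sketch: rebuild the tangent basis per fragment and rotate one light direction into it.
# Assumed (illustrative) layout:
#   fragment.texcoord[3].xyz = world space tangent
#   fragment.texcoord[4].xyz = world space normal
#   fragment.texcoord[5].xyz = world space position (however it actually gets passed)
PARAM lightPos = program.env[4];               # world space light position (assumed binding)
TEMP  N, T, B, Ldir, LTS, tmp;

# renormalize the interpolated normal and tangent
DP3 tmp.x, fragment.texcoord[4], fragment.texcoord[4];
RSQ tmp.x, tmp.x;
MUL N, fragment.texcoord[4], tmp.x;
DP3 tmp.x, fragment.texcoord[3], fragment.texcoord[3];
RSQ tmp.x, tmp.x;
MUL T, fragment.texcoord[3], tmp.x;

# binormal from a cross product instead of a third interpolant
# (sign may need flipping depending on the handedness of your tangent basis)
XPD B, N, T;

# world space light direction, then rotate it into tangent space
SUB Ldir, lightPos, fragment.texcoord[5];
DP3 LTS.x, Ldir, T;
DP3 LTS.y, Ldir, B;
DP3 LTS.z, Ldir, N;

# ... normalize LTS, look up the normal map, do the N.L and attenuation as before
MOV result.color, LTS;                         # placeholder write so the sketch is complete
END
```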

Now, thinking about the cost within the fragment program:

Work done before the change: normalize the eye direction and each light direction in the fragment program, and find the distance from light to fragment (only 1 more instruction than the normalize): 3 + 4*n instructions.

Work done after the change: find, transform, and normalize the eye direction; normalize the normal and tangent vectors and take their cross product to get the binormal; do a matrix-vector multiply on each light position; and calculate the light distance for attenuation: 14 + 6*n instructions.

So it would be adding a constant 11 instructions plus 2 instructions per light to the fragment program ((14 + 6n) - (3 + 4n) = 11 + 2n) and removing some instructions from the vertex program. I think this would be acceptable, but it would be nice not to add any more calculation to the fragment program, of course :slight_smile:

Any other thoughts or ideas?

Are you using all four components of the texture units?
For example, texture coordinates 0 to 2 may only be two-dimensional… would it be possible to pack more texture coordinates into one parameter?
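For example, something like this (sketch only; which sets get packed and the texture units are made up):

```
!!ARBfp1.0
# Sketch: two 2D coordinate sets packed into one interpolant by the vertex program,
#   texcoord[0].xy = diffuse UVs, texcoord[0].zw = lightmap UVs.
TEMP base, lmap, uv;
TEX base, fragment.texcoord[0], texture[0], 2D;   # uses .xy
MOV uv, fragment.texcoord[0].zwzw;                # unpack the second set into .xy
TEX lmap, uv, texture[2], 2D;
MUL result.color, base, lmap;                     # placeholder combine
END
```

On the vertex side the packing would just be something like MOV result.texcoord[0].zw, vertex.texcoord[1].xyxy;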

For detail maps: you could reuse the diffuse/normal/whatever texcoord, but simply scale it a bit in the pixel shader.

That means you use the same texcoord to look up the diffuse base texture and the diffuse detail texture, but before looking up the detail texture you scale the texcoord by, say, 10, which shrinks (tiles) the detail texture.
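A minimal fragment program sketch of that (the scale factor and texture units are just examples):

```
!!ARBfp1.0
# Reuse the base UVs for the detail map, scaled up so the detail texture tiles.
PARAM detailScale = { 10.0, 10.0, 1.0, 1.0 };  # example scale factor
TEMP  base, detail, uv;
TEX base, fragment.texcoord[0], texture[0], 2D;
MUL uv, fragment.texcoord[0], detailScale;     # same coords, tiled 10x
TEX detail, uv, texture[4], 2D;                # detail map on unit 4 (example)
MUL result.color, base, detail;                # simple modulate; your combine may differ
END
```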

I did that some time ago, it works very well.

Jan.

Are you using all four components of the texture units?
No, not usually. Normally only 2 are used, but 3 are used for animated textures, and potentially all 4 could be used for projected textures.

This is a good idea with minimal cost, though, and it would work for most cases :slight_smile:.

For detail maps: you could reuse the diffuse/normal/whatever texcoord, but simply scale it a bit in the pixel shader.
I thought of this, but the issue is that this is all artist-created content, so I’d need some way of checking whether the texcoords were related by a simple scale (or, more generally, some texture matrix) and some graceful way to fall back if they weren’t.

I would guess, though, that 90% of the time the texcoords would only differ by a scale…

Thanks for the ideas everyone, I think I’ll be able to use some combination of the above to solve the problem :slight_smile:

Alternatively you could simply put the x and y coordinates of the normalized tangent space light direction in the interpolator. That way you could pack two lights per interpolator. You can calculate the z-component in the fragment shader.
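For example, something like this in the fragment program (just a sketch; the packing and texture unit are made up, and it assumes the light is on the front side of the surface so z >= 0):

```
!!ARBfp1.0
# Sketch: texcoord[4].xy = normalized tangent space direction of light 0,
#         texcoord[4].zw = normalized tangent space direction of light 1
#         (both packed by the vertex program).
# Reconstruct z = sqrt(1 - x*x - y*y), assuming z >= 0.
TEMP L0, t;
MOV L0.xy, fragment.texcoord[4];
MUL t, fragment.texcoord[4], fragment.texcoord[4];
ADD t.x, t.x, t.y;                       # x0^2 + y0^2
SUB t.x, 1.0, t.x;                       # z0^2 = 1 - (x0^2 + y0^2)
RSQ t.y, t.x;                            # 1 / sqrt(z0^2)
MUL L0.z, t.x, t.y;                      # z0 = z0^2 * (1 / z0)
# light 1 does the same with texcoord[4].zw; then do the usual N.L with L0
MOV result.color, L0;                    # placeholder write so the sketch is complete
END
```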