Need help with register combiners

Hi
I’m trying to transform the light vector into the pixel’s tangent space. The tangent space is defined by three signed textures: a normal map, a tangent map and a binormal map. My goal is to store the transformed light vector in the frame buffer for future calculations.

tex0: light vector L, normalized by a cube map.
tex1: normal map N.
tex2: tangent map T.
tex3: binormal map B.
The result should be RGB = (L*T, L*N, L*B), where * means dot product.

I can easily compute all 3 dot products in the first 2 combiners.

that is:
spare0 = (L*T, L*T, L*T);
spare1 = (L*B, L*B, L*B);
primary = (L*N, L*N, L*N);
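
In code, those first two stages look roughly like this (just a sketch, not tested; it assumes the NV_register_combiners entry points are already loaded, and uses GL_SIGNED_IDENTITY_NV because the maps are signed textures; unsigned maps would use GL_EXPAND_NORMAL_NV instead):

glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 2);

/* Stage 0: two dot products in one stage.
   spare0.rgb = tex0 . tex2 = L*T
   spare1.rgb = tex0 . tex3 = L*B */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_TEXTURE2_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV,
                  GL_TEXTURE0_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV,
                  GL_TEXTURE3_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_TRUE, GL_TRUE, GL_FALSE);

/* Stage 1: primary.rgb = tex0 . tex1 = L*N */
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_TEXTURE1_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB,
                   GL_PRIMARY_COLOR_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);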

But I have no idea how to pack them into the single vector (L*T, L*N, L*B) in a single pass.
Is it possible at all?
Thanks.

[This message has been edited by Michail Bespalov (edited 05-03-2001).]

Since you are using 4 textures, I am going to assume you are using a GF3. (You can’t do this on a GF1/2.)

Once you get the three dot products, you essentially want to compute:

(1,0,0)*(L dot T) + (0,1,0)*(L dot N) + (0,0,1)*(L dot B)

This gives you the RGB triple (L dot T, L dot N, L dot B).

So you just need to load up the right constant colors and use another stage or two to build the final RGB triple.

I think you can build this RGB triple in 3 combiner stages and the final combiner, if you do it right…
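
For example, something along these lines might do it (untested sketch; it assumes spare0 = L*T, spare1 = L*B and primary = L*N from the earlier stages, uses the two constant colors as channel masks, and ignores the range compression you would need before signed dot products can survive an unsigned frame buffer):

/* Constant colors act as channel masks. */
const GLfloat red[4]  = { 1.0f, 0.0f, 0.0f, 0.0f };
const GLfloat blue[4] = { 0.0f, 0.0f, 1.0f, 0.0f };
glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, red);
glCombinerParameterfvNV(GL_CONSTANT_COLOR1_NV, blue);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 3);

/* Stage 2: spare0 = (1,0,0)*spare0 + (0,0,1)*spare1 = (L.T, 0, L.B) */
glCombinerInputNV(GL_COMBINER2_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER2_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_SPARE0_NV, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER2_NV, GL_RGB, GL_VARIABLE_C_NV,
                  GL_CONSTANT_COLOR1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER2_NV, GL_RGB, GL_VARIABLE_D_NV,
                  GL_SPARE1_NV, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER2_NV, GL_RGB,
                   GL_DISCARD_NV, GL_DISCARD_NV, GL_SPARE0_NV,
                   GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

/* Final combiner: out = A*B + (1-A)*C + D.
   E*F = (1-(1,0,0)) * (1-(0,0,1)) = (0,1,0), the missing mask.
   Note the final combiner clamps its inputs to [0,1]. */
glFinalCombinerInputNV(GL_VARIABLE_E_NV, GL_CONSTANT_COLOR0_NV,
                       GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_F_NV, GL_CONSTANT_COLOR1_NV,
                       GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_PRIMARY_COLOR_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);     /* L.N     */
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_E_TIMES_F_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);     /* (0,1,0) */
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_SPARE0_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);     /* (L.T, 0, L.B) */

That only spends one extra general stage plus the final combiner, but keep the signed-to-unsigned issue in mind if you write the result out.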

Note that if all you want to do is feed this RGB triple of dot products back in as a texture coordinate, say, into a cubemap, the texture shaders are the way to go. They can also do a 3x3 matrix multiply per pixel; the math is a little different than the way you’re doing it.

  • Matt

Oh yeah. You can do a 3x3 color matrix on a GF1/2, but you can’t use all those textures the way you are using them here, and it’s pretty nasty (I think you need to use one of the texture units for a 1x1 constant color texture).

  • Matt


(1,0,0)*(L dot T) + (0,1,0)*(L dot N) + (0,0,1)*(L dot B)

Thank you.


Note that if all you want to do is feed this RGB triple of dot products back in as a texture coordinate, say, into a cubemap, the texture shaders are the way to go. They can also do a 3x3 matrix multiply per pixel; the math is a little different than the way you’re doing it.

Yes, I want to use this RGB as texture coordinates for a lookup into a cube map. But I don’t understand how to transform the vector in the texture shaders.

I’ll try to explain the whole task.
I’m trying to render a bumpy surface with a separated BRDF. Since the tangent space for a bumpy surface can change per pixel, and the BRDF is a function of vectors in tangent space, I must transform the light vector before performing the lookups into the cube map or 2D map, and I must supply the tangent space for each pixel. Yes, texture shaders can do a 3x3 matrix multiply per pixel, but those dot products are performed on the texture coordinates, while I want to transform the vector by a 3x3 matrix that comes from texture maps (from the normal map, tangent map and binormal map). I can’t do that.

If you have some ideas on how to do the vector transformation into tangent space (where the tangent space changes per pixel) and then do a dependent lookup into a cube map or 2D map, I’d like to hear them.
Thanks.

[This message has been edited by Michail Bespalov (edited 05-04-2001).]

It’s not possible to feed that back in as a texture coordinate.

  • Matt

Hi Michail. I think what you’re trying to do is very interesting, but it’s ahead of the GF3 paradigm. Any attempt to derive texture coordinates from pixels/texels will lead to a loss of accuracy, which is crucial for you, as far as I can see. Therefore you’ll have to use a per-vertex defined tangent space. And I don’t think that’s a bad way. Maybe the GF4 will change things one day?
Alexei.

[This message has been edited by Alexei_Z (edited 05-04-2001).]

There is probably a way to do what you’re trying with texture shaders, especially since register combiners can’t feed their results into a texture unit.

It is possible.

pass1:
Render the model to the frame buffer or a pbuffer.
tex0: light vector L, normalized by a cube map.
tex1: normal map N.
tex2: tangent map T.
tex3: binormal map B.

In the register combiners, compute RGB = the light vector in the pixel’s tangent space.

Read it back into texture T1.
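
Something like this should do for the readback (assuming T1 already exists with a matching size and RGB format; tex_T1, width and height are placeholder names):

glBindTexture(GL_TEXTURE_2D, tex_T1);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,    /* level                     */
                    0, 0,                /* offset inside T1          */
                    0, 0,                /* lower-left of read buffer */
                    width, height);      /* size of the copied region */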

pass2:
Render a screen-aligned quad. Use texture shaders to do a dependent cube map lookup, passing an identity matrix as the texture coordinates for texture units 1, 2 and 3 (a rough setup is sketched after this list).
tex0: 2D lookup into texture T1.
tex1: dot product.
tex2: dot product.
tex3: dot product, cube map lookup.
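
Roughly like this, I think (untested sketch; tex_T1 and brdf_cubemap are placeholder names, the NV_texture_shader entry points are assumed to be available, and T1 should ideally use a signed internal format such as GL_SIGNED_RGB_NV so the dot products see a real signed vector):

/* Unit 0: plain 2D fetch of the packed light vector from T1. */
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, tex_T1);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

/* Units 1 and 2: dot this unit's texcoord with the unit 0 fetch. */
glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

glActiveTextureARB(GL_TEXTURE2_ARB);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

/* Unit 3: third dot product, then cube map lookup with (d1, d2, d3). */
glActiveTextureARB(GL_TEXTURE3_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, brdf_cubemap);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
          GL_DOT_PRODUCT_TEXTURE_CUBE_MAP_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

glEnable(GL_TEXTURE_SHADER_NV);

/* Screen-aligned quad: unit 0 gets screen UVs, units 1-3 get the rows
   of the matrix (identity here), so the cube map is indexed by the
   vector fetched from T1. One corner shown; the others are analogous. */
glBegin(GL_QUADS);
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
  glMultiTexCoord3fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f, 0.0f);
  glMultiTexCoord3fARB(GL_TEXTURE2_ARB, 0.0f, 1.0f, 0.0f);
  glMultiTexCoord3fARB(GL_TEXTURE3_ARB, 0.0f, 0.0f, 1.0f);
  glVertex2f(-1.0f, -1.0f);
  /* ...remaining three vertices... */
glEnd();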

This is not my method.
I think this should work. The problem is that bump mapping plus BRDF becomes a 4-pass method with 2 copies from the frame buffer to a texture, which sounds too expensive.

I’m looking for a better way. If anyone has ideas, I’d love to hear them.

Thanks.

[This message has been edited by Michail Bespalov (edited 05-07-2001).]