MelvinEng

03-01-2002, 06:55 PM

Hi everyone,

I've got quite a few doubts over the use of texture space (i.e. the S, T and N vectors) for per-pixel lighting...

1) If each vertex of a triangle has its own texture space, does that mean each pixel on the triangle should also have its own texture space (obtained by interpolating between the ones at the vertices)?

2) Do we interpolate the L or H vectors between the ones at the vertices in eye space, normalize them, then transform them into model space and finally into the texture space of that particular pixel to be shaded... before performing the lighting calculations?

3) Do we need to use the *inverse* of the texture-space matrix in order to transform the L or H vectors (already defined in model space) into texture space? (I reckoned that since the texture-space basis vectors are defined in model space, the resulting 3x3 matrix will transform vectors from texture space to model space and not vice versa - which means we need its inverse if we want to go from model space to texture space... is this correct?)
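To make (3) concrete, this is my current understanding in code form, assuming the S, T and N vectors form an orthonormal basis (so the inverse of the texture-space-to-model-space matrix is just its transpose, and the transform reduces to three dot products). Names here are mine, purely illustrative:

```c
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

/* Transform a model-space vector v into texture (tangent) space.
   S, T, N are the per-vertex basis vectors expressed in model space,
   assumed orthonormal, so the inverse matrix is the transpose and
   each texture-space component is a dot product with a basis vector. */
vec3 model_to_texture_space(vec3 v, vec3 S, vec3 T, vec3 N)
{
    vec3 out = { dot3(v, S), dot3(v, T), dot3(v, N) };
    return out;
}
```

If the basis isn't orthonormal (e.g. skewed texture coordinates), a true matrix inverse would be needed instead of the transpose - which is partly what I'm trying to confirm.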

I've been reading the nVidia docs on per-pixel shading and I must say the details are sketchy at best... which explains why I ended up with more questions than I started with.

Confused OpenGL coder.
