ARB Fragment Program

Okay, please do not flame this post.

I have been studying the ARB fragment program extensions, but I could not figure out how to do this. I need to interpolate position and normal
for each pixel. The idea is that I need to evaluate
an equation involving position and normal. Basically, I need to “shoot” a ray from each pixel of a model to an invisible plane, and then determine the intersection point. I can then use this in a lookup function to generate a color.
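
For concreteness, the per-pixel computation would be the standard ray–plane intersection. A minimal Cg sketch (the names planeP0, planeN, pos, and dir are illustrative, not part of any API):

// a sketch of the per-pixel math: intersect a ray with a plane
// planeP0 = a point on the plane, planeN = the plane normal
float3 rayPlaneHit(float3 pos, float3 dir, float3 planeP0, float3 planeN)
{
    // solve dot(planeN, pos + t*dir - planeP0) = 0 for t
    // (assumes the ray is not parallel to the plane)
    float t = dot(planeN, planeP0 - pos) / dot(planeN, dir);
    return pos + t * dir;   // the intersection point, fed to the lookup
}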

I can do this easily in a vertex program, but not so in a fragment program. I know I can pass the normal in the color of each vertex, and let the system interpolate that normal. But I need the position prior to rasterization.

Am I asking for the impossible? Thanks in advance.

What position prior to rasterization? Do you mean world-space info? You could pass this in and have it interpolated. Just pass the vertex position to the fragment program by assigning it to an additional interpolated parameter, prior to multiplication by the modelview & projection.

I am confused. How would you do that? The vertex position should be multiplied by the model-view and then interpolated for each pixel. That is, for each pixel, just prior to rasterization, I need the x, y, z.

I fail to see where the problem is. Transform your position and your normal in the vertex shader, and pass them in two of the texture coordinate interpolators. You’ll get the interpolated values per pixel - all that’s left for you is to renormalize the normal if needed (its length can be invalid in the pixel shader, since it’s linearly interpolated) and to sample your lookup texture.

Y.

Well, I was looking for some code like you just described, but I cannot find any. Can you please post a couple of lines showing how to do this?

Thank you very much.

MOV result.texcoord[0], position;
MOV result.texcoord[1], normal;

where position is the position you want to have interpolated across the triangle, and normal is the normal.

To access those in the fragment program, you simply read them back through fragment.texcoord[0] and fragment.texcoord[1] respectively.
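
In Cg, a minimal vertex program doing the same thing might look like this (a sketch, assuming eye-space values are what you want; the struct and uniform names are illustrative):

struct appin
{
    float4 position : POSITION;
    float3 normal   : NORMAL;
};

struct vout
{
    float4 hpos    : POSITION;   // clip-space position for the rasterizer
    float4 eyepos  : TEXCOORD0;  // eye-space position, interpolated per pixel
    float3 eyenorm : TEXCOORD1;  // eye-space normal, interpolated per pixel
};

vout main(appin i,
          uniform float4x4 modelViewProj,
          uniform float4x4 modelView,
          uniform float4x4 modelViewIT)  // inverse transpose, for normals
{
    vout o;
    o.hpos    = mul(modelViewProj, i.position);
    o.eyepos  = mul(modelView, i.position);
    o.eyenorm = mul((float3x3)modelViewIT, i.normal);
    return o;
}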

Now, is the precision really 3 × 32 bits? That is, will each of the texture coordinates s, t, and r be a 32-bit float? From what I can tell, this is not true.

Yes, texture coordinates are interpolated at float precision, though not necessarily 32 bits. Just try it, and you’ll find it works.

I tried a variation on your suggestion. The result was not good. I passed in the texture coordinates like so:

glEnable(GL_TEXTURE_3D);
for (i = 0; i < 3; i++) {
    /* scale and bias the signed normal into [0,1] */
    glTexCoord3f((norm[i][0] + 1.0) / 2.0,
                 (norm[i][1] + 1.0) / 2.0,
                 (norm[i][2] + 1.0) / 2.0);
    glVertex3fv(vert[i]);
}

Then in my fragment program, I read the normal from TEXCOORD0, multiplied it by 2, and subtracted 1.

The results showed the same “quality” as if I had passed the normals into glColor3f instead, i.e.

glColor3f((norm[i][0] + 1.0) / 2.0,
          (norm[i][1] + 1.0) / 2.0,
          (norm[i][2] + 1.0) / 2.0);

and then read COLOR0 instead of TEXCOORD0.

This seems to imply that tex coordinates are 8-bit values. Is this true? If so, then how does your method do any better? Thanks.

I don’t think you have to pass only unsigned values, so the scale and bias should be unnecessary - just submit the signed normal directly, e.g. with glTexCoord3fv(norm[i]).

There is definitely more than 8 bits of precision for tex coords, and they can be signed.

Are you 100% sure your lack of precision comes from passing the normals to the pixel shader? I.e., are you sure you’re not already suffering from a lack of precision in your “norm” array beforehand?

Are you running your color buffer in 32 bits ?

Y.

I think the texture coordinates, like colors, are clamped to [0.0, 1.0], are they not? If this is not the case, I can change my code. However, that would make little difference.

As to the normal precision, I use three 32-bit floats to store each normal. If I pass them using the regular glNormal3fv, then I can “see” that the normals are rendered correctly and I am not losing any precision. However, as I said, if I pass the normals via glColor3f or glTexCoord3f, I seem to lose precision.

Here is the Cg program:

struct IN
{
    float3 normal : TEXCOORD0;
};

struct OUT
{
    float4 color : COLOR;
};

OUT main(IN i)
{
    OUT o;
    float4 viewer = float4(0, 0, 1, 0);
    float4 red    = float4(1, 0, 0, 1);
    float4 blue   = float4(0, 0, 1, 1);
    float def = 1.0;
    float epsilon = 0.0005;  // can be zero for an exact match
    float4 v1, v4;
    float dotP;

    // undo the [0,1] scale and bias applied on the vertex side
    v1.xyz = i.normal * 2;
    v1.xyz = v1.xyz - 1.0;
    v1.w = 0;  // keep w from being undefined
    v4 = v1 * viewer;
    dotP = v4.x + v4.y + v4.z;

    o.color = (abs(def - dotP) < epsilon) ? red : blue;

    return o;
}

I also tested by passing out just dotP, and it seems like there is a lack of precision, i.e. areas with different dotP values are shaded exactly the same.

I did that by doing
o.color = float4(dotP, dotP, dotP, 1);

Thanks for the help

The interpolated values of texture coordinates are not clamped. Clamping may happen when you use the texture coordinates to sample a texture (depending on the texture wrap setting). They work more like positions than colors: the value can be anything, but it may be “clipped” when it’s used.
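
So the scale-and-bias round trip could be dropped entirely. A minimal sketch of what the fragment program might reduce to, assuming the vertex side now submits the raw signed normal (e.g. with glTexCoord3fv(norm[i])):

struct IN
{
    float3 normal : TEXCOORD0;  // arrives signed and unclamped
};

struct OUT
{
    float4 color : COLOR;
};

OUT main(IN i)
{
    OUT o;
    // renormalize, since linear interpolation shortens the vector
    float3 n = normalize(i.normal);
    float dotP = dot(n, float3(0, 0, 1));   // viewer along +z
    o.color = float4(dotP, dotP, dotP, 1);  // visualize the dot product
    return o;
}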