I have been studying the ARB fragment program extensions, but I could not figure out how to do this. I need to interpolate the position and normal for each pixel. The idea is that I need to evaluate an equation involving position + normal. Basically, I need to “shoot” a ray from each pixel of a model to an invisible plane, and then determine the intersection point. I can then use this point in a lookup function to generate a color.
I can do this easily in a vertex program, but not so in a fragment program. I know I can pass the normal in the color of each vertex, and let the system interpolate that normal. But I need the position prior to rasterization.
Am I asking for the impossible? Thanks in advance.
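For illustration, here is a small Python sketch of the ray–plane intersection the poster describes (function name and values are made up, not from any post in this thread):

```python
import numpy as np

def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    # Solve for t such that (origin + t*direction - plane_point) . plane_normal == 0
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the plane
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction

# Example: ray from a surface point along -z, toward the plane z = 0
p = ray_plane_intersect(np.array([1.0, 2.0, 5.0]),
                        np.array([0.0, 0.0, -1.0]),
                        np.array([0.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]))
print(p)  # -> [1. 2. 0.]
```

The resulting intersection point is what would feed the lookup function.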
What position prior to rasterization? Do you mean world-space info? You could pass this in and have it interpolated: just pass the vertex position to the fragment program by assigning it to an additional interpolated parameter before the multiplication by the modelview & projection matrices.
I am confused. How would you do that? The vertex position should be multiplied by the modelview, and then interpolated for each pixel. That is, for the pixel, just prior to rasterization, the x, y, z.
I fail to see where the problem is. Transform your position and your normal in the vertex shader, and pass them in two of the texture-coordinate interpolators. You’ll get the interpolated values per pixel. All that’s left for you is to renormalize the normal if needed (its length can be invalid in the pixel shader, since it’s linearly interpolated) and to sample your lookup texture.
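The renormalization step matters because linear interpolation of unit vectors shortens them. A quick Python illustration (example normals made up):

```python
import numpy as np

# Two unit normals 90 degrees apart; the rasterizer interpolates them linearly
n0 = np.array([1.0, 0.0, 0.0])
n1 = np.array([0.0, 1.0, 0.0])
mid = 0.5 * n0 + 0.5 * n1            # interpolated value at the midpoint
print(np.linalg.norm(mid))           # ~0.707, no longer unit length

renorm = mid / np.linalg.norm(mid)   # what the fragment program should do
print(np.linalg.norm(renorm))        # back to 1.0
```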
There is definitely more than 8 bits of precision for texture coords, and they can be signed.
Are you 100% sure your lack of precision comes from passing the normals to the pixel shader? I.e., you do not suffer from a lack of precision in your “norm” array beforehand?
I think the texture coordinates, like colors, are clamped to [0.0, 1.0], are they not? If this is not the case, I can change it. However, that would make little difference.
As to the normal precision, I used three 32-bit floats to store the normals. If I pass them using the regular glNormal3fv, then I can “see” that the normals are rendered correctly and I am not losing any precision. However, as I said, if I pass the normals via glColor3f or glTexCoord3f, I seem to lose precision.
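For reference, if the normals end up squeezed through an 8-bit-per-channel color path (which can happen with vertex colors on hardware of this era), the quantization error looks like this. A Python sketch, with a made-up example normal:

```python
import numpy as np

n = np.array([0.123456, 0.654321, 0.748010])  # example normal components

# Pack [-1, 1] into an 8-bit channel [0, 255], then unpack again
packed = np.round((n * 0.5 + 0.5) * 255.0)
unpacked = packed / 255.0 * 2.0 - 1.0

err = np.max(np.abs(unpacked - n))
print(err)  # worst case is half a step, i.e. about 1/255 ~ 0.004
```

An error of that size is easily visible when the fragment program thresholds the dot product against a small epsilon.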
Here is the Cg program:
struct IN
{
    float3 normal : TEXCOORD0;
};

struct OUT
{
    float4 color : COLOR;
};

OUT main(IN i)
{
    OUT o;
    float4 viewer = float4(0, 0, 1, 0);
    float4 red    = float4(1, 0, 0, 1);
    float4 blue   = float4(0, 0, 1, 1);
    float def = 1.0;
    float epsilon = 0.0005; // can be zero for an exact match

    float4 v1, v4;
    float dotP;
    v1.xyz = i.normal * 2;     // unpack [0,1] -> [-1,1]
    v1 = v1 - 1.0;
    v4 = v1 * viewer;          // componentwise multiply...
    dotP = v4.x + v4.y + v4.z; // ...then sum: a dot product
    o.color = (abs(def - dotP) < epsilon) ? red : blue;
    return o;                  // return the filled-in output struct
}
I also tested by outputting just dotP, and it seems like there is a lack of precision; i.e., areas with different dotP values are shaded exactly the same.
I did that by doing
o.color = float4( dotP, dotP, dotP, 1);
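For what it’s worth, the shader’s componentwise multiply-and-sum is simply a dot product; since viewer is the +z axis, dotP reduces to the normal’s z component. A Python sketch with a made-up normal:

```python
import numpy as np

viewer = np.array([0.0, 0.0, 1.0])
n = np.array([0.2, 0.3, 0.933])

# componentwise multiply then sum, as the shader does (v4 = v1*viewer; dotP = v4.x+v4.y+v4.z)
dotP = np.sum(n * viewer)
print(dotP)  # -> 0.933, i.e. just n.z for this viewer
```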
The interpolated values of texture coordinates are not clamped. Clamping may happen when you use the texture coordinates to sample a texture (depending on the texture wrap setting). They work more like positions than colors: the value can be anything, but it may be “clipped” when it’s used.