Hyperbolic interpolation with -w coordinate question

I’ve been teaching myself OpenGL by developing my own software implementation of the API. Currently I am stuck on a problem related to non-perspective-correct texture mapping.

The problem occurs in my clipper when I am trying to interpolate texture coordinates along an edge that stretches across the near frustum clip plane and into -w space. What I need from the clipper is a hyperbolic interpolation that determines the texture and color values of the point clipped against the near plane.

I developed the following algorithm to calculate the hyperbolic interpolation factor ftHyper, which can then be used to interpolate texture and color values for non-perspective-correct rendering. It works in all circumstances except for edges crossing into -w space, where ftHyper is no longer correct:

/////////////////////////////////////////////////////////////////////
///  pDest    is the vertex we will be interpolating to
///  pStart   is the vertex we will be interpolating from
///  pEnd     is the vertex at the end point of the edge
///  ftLinear is the linear interpolation factor
///  ftHyper  is the hyperbolic interpolation factor that we must find
void DetermineHyperbolic_t(PVERTEX pDest, const PVERTEX pStart, const PVERTEX pEnd, float ftLinear, float & ftHyper)
{   
    //  This function operates in eye-space coordinates

    float numerator, denominator;
    
    denominator = pDest->position.v[3] * (pEnd->position.v[3] - pStart->position.v[3]);
       
    // If the denominator is 0.0 exactly then the linear factor t and hyperbolic factor t
    // are equal. If the denominator is very near 0.0 then to avoid precision issues we
    // can also approximate the hyperbolic factor t with the linear factor. 
    if (fabsf(denominator) < 0.0001f) {           
        ftHyper = ftLinear;
    }
    else {        
        numerator = pEnd->position.v[3]  * (pDest->position.v[3] - pStart->position.v[3]);                
        ftHyper = numerator / denominator;        
    }
}
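
For reference, if pDest->position.v[3] is itself the linear interpolation of the two endpoint w values with ftLinear (which is how a clipper typically computes the clipped position), the function above can be rearranged so that it needs only the endpoint w values. A sketch of that equivalent form, under that assumption (it is the same math as above, not a fix for the -w case):

#include <cmath>

//  Equivalent to DetermineHyperbolic_t when
//      wDest = wStart + ftLinear * (wEnd - wStart):
//      ftHyper = wEnd * (wDest - wStart) / (wDest * (wEnd - wStart))
//              = wEnd * ftLinear / (wStart + ftLinear * (wEnd - wStart))
float HyperbolicFromLinear(float wStart, float wEnd, float ftLinear)
{
    float denominator = wStart + ftLinear * (wEnd - wStart);

    // Same guard as above: for a near-zero denominator, fall back to the linear factor
    if (fabsf(denominator) < 0.0001f)
        return ftLinear;

    return (wEnd * ftLinear) / denominator;
}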

Unfortunately my mathematical understanding of 3D homogeneous coordinate spaces is severely lacking right now (up until four months ago I had never even heard of homogeneous coordinates). I simply can’t see how to keep this algorithm from breaking when a -w coordinate is involved. I have considered deriving ftHyper from the x, y, and z coordinates of the vertex instead of w, but that solution seems ugly, and I would like to keep the calculation based solely on the w coordinate.

If anyone could provide some insight into why my algorithm is failing, and perhaps suggest a solution, it would be greatly appreciated.

Thank you,
-Toasty

I am now working on the same kind of project as you, and I also have some problems with perspective correction. I’ve read the paper “Mipmap level selection for texture mapping”, and it shows that we can find the corrected u, v values with s = S/Q, t = T/Q, where S, T, and Q are all plane functions.

I think that
S plane : Ax + By + Cs + D = 0 where s = (u / z)
T plane : Ax + By + Ct + D = 0 where t = (v / z)
Q plane : Ax + By + Cq + D = 0 where q = (1 / z)
(Each plane function has its own A, B, C, D)

A triangle has three points, and we can use them to solve for the three plane functions. Then, for every point in the triangle, we can take its (x, y) and get its corrected (u, v) value.
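
In case it helps, here is a rough sketch of that idea in code (my own naming, not taken from the paper), assuming x and y are screen coordinates and value holds the per-vertex s, t, or q:

struct Plane { float A, B, C, D; };   // A*x + B*y + C*value + D = 0

//  Fit a plane through three (x, y, value) samples using the cross product of two edges
Plane FitPlane(float x0, float y0, float v0,
               float x1, float y1, float v1,
               float x2, float y2, float v2)
{
    float e1x = x1 - x0, e1y = y1 - y0, e1v = v1 - v0;
    float e2x = x2 - x0, e2y = y2 - y0, e2v = v2 - v0;

    Plane p;
    p.A = e1y * e2v - e1v * e2y;      // normal = e1 x e2
    p.B = e1v * e2x - e1x * e2v;
    p.C = e1x * e2y - e1y * e2x;
    p.D = -(p.A * x0 + p.B * y0 + p.C * v0);
    return p;
}

//  Solve the plane equation for the value at pixel (x, y);
//  C is non-zero as long as the screen-space triangle is not degenerate
float EvalPlane(const Plane & p, float x, float y)
{
    return -(p.A * x + p.B * y + p.D) / p.C;
}

With the S, T, and Q planes fitted from the three vertices, each pixel’s texture coordinates are then u = EvalPlane(S, x, y) / EvalPlane(Q, x, y) and v = EvalPlane(T, x, y) / EvalPlane(Q, x, y).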

Is what I think right?

But my implementation is wrong… is there anything wrong with this reasoning?

Thanks.

Here’s an excellent paper on the subject
http://portal.acm.org/citation.cfm?id=617770&dl=ACM&coll=portal

All the papers show that you interpolate the three values u/w, v/w, and 1/w, and then u = (u/w) / (1/w). I still have some problems, as follows:

If there are five points on a line (but only the values at the first and last points are known):

u = 2, x, x, x, 10
w = 1, x, x, x, 5

u/w = 2, x, x, x, 2
1/w = 1, x, x, x, 0.2
interpolate u/w and 1/w =>

u/w = 2, 2, 2, 2, 2
1/w = 1, 0.8, 0.6, 0.4, 0.2

So…the new u = (u/w) / (1/w) =>

u = 2, 2.5, 3.3, 5, 10
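
For reference, a tiny loop that reproduces those numbers (nothing more than the arithmetic of the example above):

#include <cstdio>

int main()
{
    //  Endpoint values from the example: u goes 2 -> 10, w goes 1 -> 5
    const float u0 = 2.0f, u1 = 10.0f;
    const float w0 = 1.0f, w1 = 5.0f;

    for (int i = 0; i < 5; ++i)
    {
        float t        = i / 4.0f;                                   // 0, 0.25, 0.5, 0.75, 1
        float uOverW   = u0 / w0 + t * (u1 / w1 - u0 / w0);          // stays at 2
        float oneOverW = 1.0f / w0 + t * (1.0f / w1 - 1.0f / w0);    // 1, 0.8, 0.6, 0.4, 0.2
        printf("u = %g\n", uOverW / oneOverW);                       // 2, 2.5, 3.33, 5, 10
    }
    return 0;
}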

Aren’t these five values of u strange? The first four values are very close together, and then the last one jumps by a much larger amount.

Is my thought wrong?

Well, your w varies that way across the line, which naturally gives that result. If w were a constant 1 over the line (as it would be for regular vertices or non-projected tex coords), you’d get the expected linear ramp of values: with w fixed at 1, 1/w stays at 1 everywhere, so the interpolated u/w runs 2, 4, 6, 8, 10 and u itself is that same linear ramp.

EDIT: To the OP, you will probably find the OpenGL sample implementation source code, the MesaGL source, and swshader.sourceforge.net helpful in building your own software renderer. Also check out http://www.d6.com/users/checker/misctech.htm (specifically the texture mapping articles).

@Poma
You should have an intuition about this. Remember that the w coordinate is essentially the eye-space z. The larger the z, the higher the frequency of tiling. Imagine standing on a checkerboard floor: in pixel space, the tiling frequency increases as the floor extends into the distance, as one would expect.

Thanks, and does this mean that what I think is right? But now I have an example:
There is a triangle, and the data at its three points is:

A (x,y,z) = (-160,80,-200), (u,v) = (0,1)
B (x,y,z) = (-160,-80,-200), (u,v) = (0,0)
C (x,y,z) = (160,-80,-400), (u,v) = (1,0)

The eye coordinates don’t change, and after the perspective projection the three points’ screen coordinates are:

A (x,y) = (56,117)
B (x,y) = (56,202)
C (x,y) = (151,181)

So the w of each point is 200, 200, 400?
and
Q plane:
A (x,y,1/w) = (56,117,1/200)
B (x,y,1/w) = (56,202,1/200)
C (x,y,1/w) = (151,181,1/400)

S plane
A (x,y,u/w) = (56,117,0/200)
B (x,y,u/w) = (56,202,0/200)
C (x,y,u/w) = (151,181,1/400)

T plane
A (x,y,v/w) = (56,117,1/200)
B (x,y,v/w) = (56,202,0/200)
C (x,y,v/w) = (151,181,0/400)

Using these three points, we can calculate the three plane functions.

Then, for every pixel in this triangle, we can plug its (x, y) into the three plane functions and get its (u, v) value. Is that right?

But the implementation I wrote doesn’t give correct results. If the above is right, then maybe my implementation is wrong somewhere.
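
For reference, here is a small stand-alone check of the numbers in that last example (taking 151 as C’s screen x, and taking w as 200, 200, 400 as assumed above). It fits the Q, S, and T planes through the three vertices and recovers (u, v) at vertex C’s pixel:

#include <cstdio>

struct P3 { float x, y, v; };

//  Value of the plane through a, b, c (in (x, y, value) space), evaluated at pixel (x, y)
static float PlaneAt(P3 a, P3 b, P3 c, float x, float y)
{
    float A = (b.y - a.y) * (c.v - a.v) - (b.v - a.v) * (c.y - a.y);
    float B = (b.v - a.v) * (c.x - a.x) - (b.x - a.x) * (c.v - a.v);
    float C = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    float D = -(A * a.x + B * a.y + C * a.v);
    return -(A * x + B * y + D) / C;          // C != 0 for a non-degenerate triangle
}

int main()
{
    //  Q = 1/w, S = u/w, T = v/w at A(56,117), B(56,202), C(151,181)
    P3 qa = {56, 117, 1.0f / 200}, qb = {56, 202, 1.0f / 200}, qc = {151, 181, 1.0f / 400};
    P3 sa = {56, 117, 0.0f      }, sb = {56, 202, 0.0f      }, sc = {151, 181, 1.0f / 400};
    P3 ta = {56, 117, 1.0f / 200}, tb = {56, 202, 0.0f      }, tc = {151, 181, 0.0f      };

    float x = 151.0f, y = 181.0f;             // pixel at vertex C
    float q = PlaneAt(qa, qb, qc, x, y);
    float u = PlaneAt(sa, sb, sc, x, y) / q;  // expect 1
    float v = PlaneAt(ta, tb, tc, x, y) / q;  // expect 0
    printf("u = %g, v = %g\n", u, v);
    return 0;
}

If evaluating at the vertices already gives back the wrong (u, v), the plane setup is probably where the bug is; if the vertices come out right but interior pixels don’t, the per-pixel divide is the place to look.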