B-splines

Does anybody know of a fast way to find the closest point on a B-spline to a given point? I came up with a solution that uses Newton-Raphson, but for some curve setups it diverges and doesn’t compute the closest point. I can’t find any code that does this on the net. The NURBS++ library finds the closest point the same way, and sometimes fails as well. I’d like to avoid tessellating the curve and checking hundreds of points if possible.
Thanks for any ideas you might have,

Craig

  1. Crude tessellation to get a better first guess (sketch just below).

  2. Try Laguerre’s method instead of Newton.
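
On point 1, the coarse scan is only a few lines; here is a sketch, with curve_point() as a hypothetical stand-in for whatever evaluates your spline at parameter t:

typedef struct { double x, y; } Vec2;

Vec2 curve_point(double t);   /* hypothetical B-spline evaluator C(t) */

/* Sample the curve at n+1 parameters across [t0, t1] and return the
 * parameter of the sample nearest p: a cheap first guess that lands
 * the root finder in the right basin. */
double coarse_guess(Vec2 p, double t0, double t1, int n)
{
    double best_t = t0, best_d2 = 1e308;
    for (int i = 0; i <= n; ++i) {
        double t  = t0 + (t1 - t0) * i / n;
        Vec2   c  = curve_point(t);
        double dx = c.x - p.x, dy = c.y - p.y;
        double d2 = dx * dx + dy * dy;
        if (d2 < best_d2) { best_d2 = d2; best_t = t; }
    }
    return best_t;
}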

I’m assuming that the only problem is the root-finding itself, since you say you had a solution that reduces the problem to a root, but Newton-Raphson blows up, presumably because you hit a maximum/minimum and the tangent streaks away to infinity and beyond…
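
Concretely, the closest point sits where the derivative of the squared distance vanishes, g(t) = C'(t) . (C(t) - P) = 0, and the plain Newton iteration on g looks like this (a sketch; curve_point(), curve_d1(), and curve_d2() are hypothetical stand-ins for the B-spline evaluators):

typedef struct { double x, y; } Vec2;

Vec2 curve_point(double t);   /* C(t)   */
Vec2 curve_d1(double t);      /* C'(t)  */
Vec2 curve_d2(double t);      /* C''(t) */

static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

/* Newton on g(t) = C'(t) . (C(t) - P).  When C''(t) . (C(t) - P)
 * cancels C'(t) . C'(t), gp passes through zero and the step g/gp
 * streaks off: the divergence in question. */
double closest_param_newton(Vec2 p, double t, int iters)
{
    for (int i = 0; i < iters; ++i) {
        Vec2 c = curve_point(t), d1 = curve_d1(t), d2 = curve_d2(t);
        Vec2 r = { c.x - p.x, c.y - p.y };
        double g  = dot(d1, r);
        double gp = dot(d2, r) + dot(d1, d1);
        if (gp == 0.0) break;     /* flat tangent: bail out */
        t -= g / gp;
    }
    return t;
}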

The canonical root-finding methods for real roots in the libraries are all variants of Brent-Dekker, which combine bisection with faster secant/inverse-quadratic steps to guarantee finding a root if it’s bracketed on an interval. They’re massively complicated and rather large, but they’re written and working.

See Numerical Recipes for one variant of the algorithm; for that matter, just about any numerical analysis book at least treats the subject.
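
A full Brent-Dekker is indeed a page of careful bookkeeping, but the core safeguard fits in a few lines: keep a bracket [a, b] across which g changes sign, take the Newton step only when it lands inside the bracket, and bisect otherwise. A sketch (g and gp being the function and its derivative):

/* Safeguarded Newton: cannot diverge, because every iterate stays
 * inside a shrinking bracket.  g(a) and g(b) must differ in sign. */
double safe_newton(double (*g)(double), double (*gp)(double),
                   double a, double b, double tol)
{
    double ga = g(a);
    double t  = 0.5 * (a + b);
    while (b - a > tol) {
        double gt = g(t), d = gp(t);
        /* Tentative Newton step; mark invalid if the tangent is flat. */
        double tn = (d != 0.0) ? t - gt / d : a - 1.0;
        /* Shrink the bracket so it still straddles the sign change. */
        if ((gt < 0.0) == (ga < 0.0)) { a = t; ga = gt; }
        else                          { b = t; }
        /* Fall back to bisection whenever Newton leaves the bracket. */
        t = (tn > a && tn < b) ? tn : 0.5 * (a + b);
    }
    return t;
}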

I don’t know if it will guarantee convergence in this case, but an even faster method than Newton-Raphson, and one that converges more generally, is based on the linear fractional transformation (LFT). You fit a rational

y = (a x + b)/(c x + d)

through your three most recent samples. The four coefficients are only determined up to a common scale, so divide through by a and, flipping the sign of b as a convention, write

y = (x - b)/(c x + d)

The payoff is that the fitted rational’s root y = 0 sits at x = b, so b is your next guess.

Clearing the denominator, each sample gives one equation that is linear in b, c, d. So, with three samples (x1,y1), (x2,y2), (x3,y3) of the function y = f(x) you are zeroing:

c x1 y1 + d y1 = x1 - b
c x2 y2 + d y2 = x2 - b
c x3 y3 + d y3 = x3 - b

Subtracting consecutive equations eliminates b. Writing the differences as

dxy1 = x2 y2 - x1 y1    dxy2 = x3 y3 - x2 y2
dy1  = y2 - y1          dy2  = y3 - y2
dx1  = x2 - x1          dx2  = x3 - x2

that leaves a 2x2 linear system in c and d:

c dxy1 + d dy1 = dx1
c dxy2 + d dy2 = dx2

Cramer’s rule then gives

c = det | dx1   dy1 |  /  det | dxy1  dy1 |
        | dx2   dy2 |         | dxy2  dy2 |

d = det | dxy1  dx1 |  /  (same denominator)
        | dxy2  dx2 |

and back-substituting into the first sample’s equation recovers b:

b = x1 - c x1 y1 - d y1

Then slide the sample window along (the new y3 is just f evaluated at the new guess):

x1 <- x2, y1 <- y2
x2 <- x3, y2 <- y3
x3 <- b (your new guess), y3 <- f(x3)

This may seem complicated compared to Newton, but a lot of it boils away or doesn’t change from one iteration to the next. It doesn’t require a derivative. Per iteration it’s perhaps 50% more costly than Newton-Raphson for a typical polynomial, but it converges faster, and under a much wider range of circumstances.
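
In C, one version of the iteration comes out roughly like this (a straight transcription of the formulas above, so treat it as a sketch; f is whatever function you’re zeroing, e.g. the derivative of the squared distance to the curve):

#include <math.h>

/* Fit y = (x - b)/(c x + d) through the last three samples and take
 * x = b, the root of the fitted rational, as the next guess. */
double lft_root(double (*f)(double),
                double x1, double x2, double x3,
                int max_iter, double tol)
{
    double y1 = f(x1), y2 = f(x2), y3 = f(x3);
    for (int i = 0; i < max_iter; ++i) {
        double dxy1 = x2 * y2 - x1 * y1, dxy2 = x3 * y3 - x2 * y2;
        double dy1  = y2 - y1,           dy2  = y3 - y2;
        double dx1  = x2 - x1,           dx2  = x3 - x2;

        /* Cramer's rule on the 2x2 system for c and d. */
        double denom = dxy1 * dy2 - dxy2 * dy1;
        if (denom == 0.0) break;                   /* degenerate fit */
        double c = (dx1  * dy2 - dx2  * dy1) / denom;
        double d = (dxy1 * dx2 - dxy2 * dx1) / denom;
        double b = x1 - c * x1 * y1 - d * y1;      /* fitted root */

        if (fabs(b - x3) < tol) return b;

        /* Slide the three-sample window along. */
        x1 = x2; y1 = y2;
        x2 = x3; y2 = y3;
        x3 = b;  y3 = f(x3);
    }
    return x3;
}

Seed x1, x2, x3 with three nearby parameter values; the best tessellation sample and its two neighbors would be a natural choice.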

This gem is little known, but it was mentioned in passing in a numerical methods course I sat in on, taught by Professor Roger Pinkham at Stevens Institute of Technology. I don’t know where he got it, but his depth of knowledge of numerical dirty tricks is astounding. I’d specifically recommend his buddy Hamming’s book (“Numerical Methods for Scientists and Engineers”, Dover), which has a lot packed into a pretty cheap, tight book.

I think I have this written in C or Matlab and can dig it out if you want to try it; just mail me.

cheers,
Dov