PDA

View Full Version : Fast Distance Algorithm

rilestyle
12-05-2003, 02:32 AM
Hi, I am trying to optimize a particle system that uses several distance equation calculations for each particle.

sqrt(pow(a,2)+pow(b,2))

I am wondering if it is possible to get a reasonably close result without using the square root function. The distance doesn't have to be exactly accurate, only close.

Bob
12-05-2003, 03:15 AM
Use a*a and b*b instead of calling pow(). And use the squared distance instead wherever you can: when sorting by distance, for example, the actual distance is not needed; the squared distance sorts identically.

Christian Schüler
12-05-2003, 04:34 AM
Some people approximate with max( a, b ). This simulates the behaviour that if one of the components is large, the other has little impact.

nutball
12-05-2003, 05:34 AM
There's a quick-and-dirty approximate distance formula in one of the Graphics Gems books. The books are at home and I'm at work, so I can't post it right now. :)

Hampel
12-05-2003, 07:06 AM
http://www.flipcode.com/articles/article_fastdistance.shtml

arekkusu
12-05-2003, 03:09 PM
Some processors (e.g. PPC) have fast reciprocal sqrt estimate instructions. So you can replace:

float factor = a*a + b*b;
float norm = 1.0f / sqrtf(factor);

with:

float factor = a*a + b*b;
float norm = __frsqrte(factor);

which is dramatically faster, at the cost of some precision.

You can improve the precision cheaply by using the Newton-Raphson method:

norm *= (1.5f - (0.5f*factor * norm * norm));

Typically one iteration is enough for on-screen graphics precision.

mds47
12-25-2003, 11:54 AM
You could use the Manhattan (aka city block) distance. It looks like:

dist = abs(a) + abs(b)

which produces iso-distance contours like this around the point x:

3
3 2 3
3 2 1 2 3
3 2 1 x 1 2 3
3 2 1 2 3
3 2 3
3

Michael

deshfrudu
01-06-2004, 11:48 AM
For what it's worth...

I've written just about every kind of sqrt approximation under the sun over the last 10 years or so. My favorite is reinterpreting an already-floating-point value's bits as an integer, which is essentially the same as taking its log to base 2. Multiply by a power, add in a "magic number", reinterpret the bits back as a float, and you've got a cheap approximation of raising to an arbitrary float power. For sqrt, you'd use a power of 0.5.

These days when I need a sqrt, I just use plain ol' sqrt(), being careful that compiler optimizations are enabled so it translates into the FPU's sqrt instruction instead of a library call. The sqrt is relatively expensive as a single instruction, but it's almost always better than 4 or 5 simpler instructions, especially if memory fetches are involved, and it eats up fewer registers. Except for the most rudimentary approximations, the full-precision CPU instruction will be faster. And when you consider that you're just going to be bottlenecked by cache fills anyway, why bother?

So, long story short, sqrt(a*a + b*b) will be about as fast as it gets assuming math optimizations are properly enabled.

As previously mentioned, many times sqrt isn't needed. Gravity, for example, is proportional to distance^2, and sqrt(a*a + b*b)^2 == a*a + b*b.

Correction: gravity is *inversely* proportional to distance^2.

[This message has been edited by deshfrudu (edited 01-06-2004).]

Obli
01-07-2004, 10:34 AM
Originally posted by deshfrudu:
When you consider that you're just going to be bottlenecked by cache fills anyway, why bother.

I am somewhat shocked each time I read it.

Anyway, I would just say that, if I am not wrong, SSE has packed-float square root and packed-float reciprocal square root instructions. Maybe you can get something out of that, provided you want to use SIMD, of course.