Raspberry Pi, quats and OpenGL

Does anybody know whether the Raspberry Pi can be successfully/efficiently programmed with higher-order math such as quaternions, and whether there are libraries like GLFW that could help support it?
Anyone know whether basic GLUT functionality exists stably on the RPi yet?

Re GLFW and GLUT, have you websearched them? (Link,Link,Link).

And quaternions aren’t higher order math. They’re not complex, and don’t require any special hardware support. They’re just not taught to everyone by default. If you want to use quaternions, go for it.
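To make that concrete, here’s a minimal sketch (plain C++, my own illustration rather than anything Pi-specific) of what a quaternion is under the hood: four floats, combined with nothing more exotic than ordinary multiplies and adds.

[code]
// A quaternion is just four floats; no special hardware needed.
struct Quat {
    float w, x, y, z;
};

// Hamilton product: composes two rotations using 16 multiplies and
// 12 adds of plain floating-point math. In the usual q*v*q^-1
// convention, multiply(a, b) applies b's rotation first, then a's.
Quat multiply(const Quat& a, const Quat& b) {
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}
[/code]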

[QUOTE=Dark Photon;1289426]Re GLFW and GLUT, have you websearched them? (Link,Link,Link).

And quaternions aren’t higher order math. They’re not complex, and don’t require any special hardware support. They’re just not taught to everyone by default. If you want to use quaternions, go for it.[/QUOTE]

OMG thank you for your response! Looks like the OpenGL experimental driver has a better frame rate at the moment. Stuff is so cool.

I guess I am guilty of assuming that the ‘mystery’ around quaternions indicated complexity. You’ve renewed my interest in this angle. I’m glad I put that out there now. I’ve seen a lot written going back and forth about Euler angles, creeping inaccuracy, etc. The real question is whether something like the Raspberry Pi can support quats, though, since it’s a ‘mystery’ to me in the first place, and I would not know. I think, though, that the trig approach to an FPS cam, for example, would use less horsepower on the RPi.

[QUOTE]I think, though, that the trig approach to an FPS cam, for example, would use less horsepower on the RPi.[/QUOTE]

The performance of your application is not going to be driven by the performance of the camera management code.

Inefficient C++ code, which may or may not be augmented with OpenCV, results in lower FPS. I’ve seen it myself. I don’t think I understand what you are saying.

If your camera code is anything more complex than a relative handful of floating-point operations per frame, then you’re doing something seriously wrong. Eventually you’re going to need to convert it to a matrix, load it to the GPU, and start doing some matrix multiplications. It doesn’t matter whether your matrix was sourced from Euler angles, quaternions, or random numbers; the cost of those matrix multiplications will be the same. So, in other words, this is not a performance optimization.
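For illustration, here’s a minimal sketch using GLM (a common C++ math library for OpenGL; choosing it here is my assumption, not something from this thread). Both functions end up producing the same kind of 4x4 view matrix, so everything downstream of them costs exactly the same:

[code]
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// View matrix from a quaternion: a handful of float ops per frame.
glm::mat4 viewFromQuat(const glm::quat& orientation, const glm::vec3& position) {
    glm::mat4 rotation = glm::mat4_cast(glm::conjugate(orientation));
    return rotation * glm::translate(glm::mat4(1.0f), -position);
}

// The same matrix built the "trig" way from pitch/yaw angles (radians).
glm::mat4 viewFromEuler(float pitch, float yaw, const glm::vec3& position) {
    glm::mat4 rotation = glm::rotate(glm::mat4(1.0f), -pitch, glm::vec3(1, 0, 0))
                       * glm::rotate(glm::mat4(1.0f), -yaw,   glm::vec3(0, 1, 0));
    return rotation * glm::translate(glm::mat4(1.0f), -position);
}
[/code]

Either way you hand the GPU one mat4 per frame; the rendering that consumes it dwarfs both code paths.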

What I’m saying is that if you have a task that takes 10 seconds to complete followed by a task that takes 50,000 seconds to complete, and you make the 10-second task take 5 seconds, the improvement in your overall efficiency is a rounding error.

Rendering is the 50,000-second task. Computing the camera matrix is the 10-second task. And the relative cost of rendering to camera computation for a frame is probably greater than the 5,000:1 ratio I used here.

Remember the 80:20 rule: 80% of your program’s time will be spent in 20% of its code. The computation of the camera matrix is not in that 20 percent. Even on ARM CPUs, manual floating-point math isn’t that bad. Premature optimization and all that.
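If you want to check this on your own hardware, a rough measurement is only a few lines. This sketch assumes the viewFromQuat function from the earlier example; compare the number it prints against your measured frame time:

[code]
#include <chrono>
#include <cstdio>

void timeCameraMath() {
    auto t0 = std::chrono::steady_clock::now();
    glm::mat4 view(1.0f);
    for (int i = 0; i < 1000; ++i)
        view = viewFromQuat(glm::quat(1.0f, 0.0f, 0.0f, 0.0f),
                            glm::vec3(0.0f, 0.0f, 5.0f));
    auto t1 = std::chrono::steady_clock::now();

    long long ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    // Print an element of the result so the loop isn't optimized away.
    std::printf("camera matrix: ~%lld ns per build (%f)\n", ns / 1000, view[0][0]);
}
[/code]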

Now, I might be concerned about other CPU-based floating-point operations when dealing with a CPU that has to emulate floating-point math. But those would be operations you have to do once for each object you render, not once-per-frame kinds of things like the camera.

After all, if you have to do the 10-second task 100 times, its cost becomes significant.
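To sketch the kind of per-object work that does add up (the Object struct and its fields are hypothetical names, purely for illustration):

[code]
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical per-object state, for illustration only.
struct Object {
    glm::vec3 position;
    float angle;  // rotation about the Y axis, in radians
};

// One translate and one rotate per object, every frame. Cheap in
// isolation, but on a soft-float CPU this scales with object count,
// unlike the single camera matrix.
void buildModelMatrices(const std::vector<Object>& objects,
                        std::vector<glm::mat4>& out) {
    out.clear();
    for (const Object& obj : objects) {
        out.push_back(glm::translate(glm::mat4(1.0f), obj.position)
                    * glm::rotate(glm::mat4(1.0f), obj.angle,
                                  glm::vec3(0.0f, 1.0f, 0.0f)));
    }
}
[/code]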