Does anybody know why the authors of GLU decided to support only doubles in functions such as gluProject, gluUnProject, and gluTessVertex? I find it odd, since GL is type-friendly in almost every other part of the API, and most systems are optimized for floats. Up to this point I have kept my data as doubles, but I have been persuaded to change it to floats to maintain compatibility with some legacy code. I realize I can write floating-point versions myself; I am just curious about the design decision.