Pixel-accurate rendering

For a computer vision application (silhouette-based 3D reconstruction), I want to render a given set of vertices from a certain viewing angle and then inspect the rendered image afterwards. The pixel locations of the rendered vertices therefore need to be highly accurate. But in my tests, the OpenGL driver (or whatever component is responsible) introduces a deviation of approximately one pixel for about 0.4% of all vertices?!

The code of the very simplified test I used (orthogonal projection onto the x-y plane) is attached to this message. It builds on Ubuntu Linux 11.04 with

g++ OpenGlAcurateRenderingTest.cpp -o OpenGlAcurateRenderingTest -lglut -lGLEW -lGLU

The test code also contains a comparison with gluProject(), which in turn produces results that differ from what OpenGL renders.
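Since the attached program is not included inline, here is a minimal sketch of the gluProject() side of such a comparison (the helper name expectedPixel and the final truncation step are my own assumptions, not code from the attachment):

#include <GL/glew.h>
#include <GL/glu.h>

// Project one object-space vertex with gluProject() and derive the pixel
// it should fall into; matrices and viewport are taken from the current
// GL state set up by the test program.
void expectedPixel(double vx, double vy, double vz, int &px, int &py)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    GLdouble winX, winY, winZ;
    gluProject(vx, vy, vz, model, proj, view, &winX, &winY, &winZ);

    // Assuming the rasterizer truncates window coordinates to integer pixels:
    px = (int)winX;
    py = (int)winY;
}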

If anybody has an idea, I would strongly appreciate it!

Best wishes,
Philipp

> But in my tests, the OpenGL driver (or whatever component is responsible) introduces a deviation of approximately one pixel for about 0.4% of all vertices?!

By what means did you determine this error? Did you look at the specification to see how OpenGL positions become pixel values?

Hi Alfonse,

>
> By what means did you determine this error?
>

For a set of randomly generated vertices, I rendered one vertex at a time with OpenGL on the GPU (and read back the framebuffer). Then I calculated where I would expect that vertex to be rendered, and compared the two.
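To make that concrete, a stripped-down version of the per-vertex check (just a sketch, assuming an RGB framebuffer cleared to black and a bright point; the name checkVertex is a placeholder, not from my test program) could look like:

#include <GL/glew.h>
#include <cstdio>
#include <vector>

// Render a single point, read back the framebuffer and report where the
// point actually ended up compared to the expected pixel.
void checkVertex(double vx, double vy, double vz,
                 int width, int height, int expectedX, int expectedY)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_POINTS);
    glVertex3d(vx, vy, vz);
    glEnd();
    glFinish();

    std::vector<unsigned char> pixels(width * height * 3);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (pixels[3 * (y * width + x)] != 0 &&        // pixel was lit
                (x != expectedX || y != expectedY))
                std::printf("deviation: rendered (%d,%d), expected (%d,%d)\n",
                            x, y, expectedX, expectedY);
}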

>
> Did you look at the specification to see how OpenGL
> positions become pixel values?
>

Yes, I did look into the specification (version 3.3, since that is my OpenGL version). There I found many details, e.g. how thick a point will be rendered and that the conversion from floating-point numbers to integers is done by truncation. But I could not find the explicit calculation of pixel locations from vertices. The documentation of gluProject(), on the other hand, gives a formula. Both are integrated into my test program.
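For reference, the window-coordinate formula from the gluProject() documentation, written out by hand (ndcX/ndcY/ndcZ stand for the vertex after the modelview and projection transforms and the division by w; the truncation at the end is my assumption about what the rasterizer does, not something the formula itself states):

// clip = Projection * Modelview * vertex   (4-component clip coordinates)
// ndc  = clip.xyz / clip.w                 (normalized device coordinates in [-1, 1])
double winX = view[0] + view[2] * (ndcX + 1.0) / 2.0;
double winY = view[1] + view[3] * (ndcY + 1.0) / 2.0;
double winZ = (ndcZ + 1.0) / 2.0;

// Expected integer pixel if window coordinates are simply truncated:
int px = (int)winX;
int py = (int)winY;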

By the way, I have refined the test program (see the attached ZIP) and tested it on three other computers (with the same software configuration): the OpenGL/GPU-rendered results differed slightly among them. This leads me to the conclusion that the mapping from floating-point vertices to integer pixels is probably not precisely defined and is subject to hardware-dependent optimizations.

Is there anybody with similar or opposing experiences?