Pixel spacing in Ortho view

How does OpenGL set the pixel spacing in orthographic view? For example, let’s say I am rendering at 2x2 (two rows by two columns) and the extents are -1 to +1. To me the (x, y) render coordinates should be (-0.5, -0.5), (-0.5, 0.5), (0.5, -0.5) and (0.5, 0.5), which is right in the middle of each grid location. But maybe OpenGL doesn’t do it that way; maybe it renders at the corners or something else.
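For concreteness, here is the mapping I have in mind, assuming OpenGL samples each pixel at its center (the function and names below are just my illustration, not anything from OpenGL itself):

```cpp
// Sketch: world-space sample point of pixel (i, j) under an ortho view
// spanning [left, right] x [bottom, top] on a width x height grid,
// assuming each pixel is sampled at its center (window coordinates
// i + 0.5, j + 0.5).
struct Point2 { double x, y; };

Point2 pixelCenterWorld(int i, int j, int width, int height,
                        double left, double right,
                        double bottom, double top)
{
    double wx = i + 0.5;   // window-space pixel center
    double wy = j + 0.5;
    return { left   + (right - left) * wx / width,
             bottom + (top - bottom) * wy / height };
}

// With width = height = 2 and extents [-1, +1]:
// pixelCenterWorld(0, 0, ...) -> (-0.5, -0.5), matching my expectation.
```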

Here is the long version of my question, because maybe I can’t do what I’m trying, or maybe there is a better way. I currently render a scene with millions of facets using my own software renderer, which can be slow. To speed it up I came up with a hybrid approach: I assign each facet a unique color and render the scene in OpenGL in ortho view, then read back the color buffer for each pixel location. From the color information I can determine which facet was hit, at which point I perform my own rendering against that facet. In other words, I am trying to get GL to do the hard part and eliminate facets for me.
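For reference, the encode/read-back step looks roughly like this (a simplified sketch with my own names, not the actual project code):

```cpp
// Pack a facet index into 24 bits of RGB, render flat-shaded with
// lighting, blending and antialiasing off, then read the color buffer
// back and decode each pixel to a facet ID.
#include <GL/gl.h>
#include <cstdint>
#include <vector>

void facetColor(uint32_t id, GLubyte rgb[3])
{
    rgb[0] = (id >> 16) & 0xFF;   // R = high byte
    rgb[1] = (id >> 8)  & 0xFF;   // G = middle byte
    rgb[2] =  id        & 0xFF;   // B = low byte
}

std::vector<uint32_t> readBackFacetIds(int width, int height)
{
    std::vector<GLubyte> buf(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buf.data());

    std::vector<uint32_t> ids(width * height);
    for (size_t p = 0; p < ids.size(); ++p)
        ids[p] = (uint32_t(buf[3*p]) << 16)
               | (uint32_t(buf[3*p + 1]) << 8)
               |  uint32_t(buf[3*p + 2]);
    return ids;
}
```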

My problem is that I sometimes get misses when I render the facet. For example, if the color comes back red and I know that is facet 10, I render facet 10 at what I think is the proper (x, y) point and I miss the facet. So this tells me that either the (x, y) point that GL uses is different from mine, or the results from GL are only “close enough”, in which case I can’t use this approach.

I suppose the right answer is that I would be better off using CUDA, which I probably will do eventually, but that will be a long learning curve. In the meantime, I thought this might be a quick and dirty approach.

Thank you for your help,

Jim

OpenGL doesn’t actually have any such thing as an “ortho view”. The way it works is that it takes an input position and transforms it by some arbitrary matrix, giving an output position. OpenGL doesn’t care or distinguish whether it’s an orthographic or perspective projection (or even “my wacky projection version 27”); it’s just a matrix multiplication, and the mathematics are exactly the same either way.
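To make that concrete, this is the matrix glOrtho(l, r, b, t, n, f) builds (column-major, straight from the spec). The pipeline multiplies vertex positions by it exactly as it would multiply by a perspective matrix or any other:

```cpp
// Orthographic projection matrix as specified for glOrtho, stored
// column-major in m[16]. Nothing downstream of this knows or cares
// that it happens to be "orthographic".
void orthoMatrix(float m[16], float l, float r, float b, float t,
                 float n, float f)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] =  1.0f;
}
```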

Furthermore, the OpenGL specification itself warns that OpenGL is not pixel-exact; that is the first sentence of the spec’s “Invariance” section.

So your second conclusion is the answer: the rendering is “good enough” for a visual representation. It’s important to remember that OpenGL is a graphics API, so as far as it’s concerned that’s mission accomplished.

To make it worse for your requirement, even if it did give accurate results, you’ve no guarantee that they’re going to stay accurate and consistent across different hardware vendors, different hardware generations from the same vendor, or even different driver revisions with everything else being equal.

That’s not to be negative about OpenGL - the “GL” stands for “Graphics Library” and drawing graphics is its job. For more general-purpose computation you really are better off looking at other solutions.

Thank you, mhagain. It was starting to appear that might be the case. You have to admit, it was a cool idea, though…

I just wanted to post a follow-up in case this might be useful to someone else. In the original question I stated that I was trying to do rendering in software but let OpenGL do the heavy lifting for me, and found that it wasn’t always accurate. I had another project for modeling a ladar sensor where some misses were “good enough”, so I decided to go ahead and pursue this route.

I found after doing some studies that OpenGL was providing results that were better than 99% accurate. The results depended on the scene, resolution and viewpoint, but for one test at 256x256 (65,536 pixels) there were 219 errors, an accuracy of 99.7%. For another scene at 1024x1024 resolution there were 1006 errors, which is 99.9% accurate.

During this test I found that there are two types of errors. The first type, which was discussed in the original post, is where OpenGL returns a facet but that facet is not in the line of sight. For example, if the returned pixel color is red and I know that corresponds to facet 10, when I render that facet the ray misses. This accounts for maybe half of the errors. The second type of error is when the facet is actually in the line of sight, but it is still not the correct facet because it sits behind another facet that is closer to the viewer. This type of error is more serious because it can’t be detected at run time. The first type we simply mark as a miss (see the retest sketch below), even if that does generate “holes”. In our case we simply treat these inaccuracies as sensor noise.
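For anyone trying the same thing, the run-time check for the first error type is just a retest of the pixel’s ray against the returned facet. A sketch, assuming triangular facets and using the standard Möller-Trumbore test (the names are mine):

```cpp
// Returns true (and the hit distance t) if the ray (orig, dir) actually
// hits the triangle (v0, v1, v2); if it returns false, we mark the
// pixel as a miss.
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y,
                                             a.z*b.x - a.x*b.z,
                                             a.x*b.y - a.y*b.x }; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool rayHitsFacet(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double& t)
{
    const double eps = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to facet
    double inv = 1.0 / det;
    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t > eps;                           // hit in front of the origin
}
```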

I implemented this by creating a popup window and having GL render the facets into it. I turned off double buffering, and for my purposes I made the window visible in debug mode and invisible in release mode. In the case of a flyby, I created one window and kept rendering to the same window. A better approach probably would have been some sort of memory buffer (a pbuffer, maybe? - see the sketch below), but this way worked.
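If I were doing it again, a framebuffer object would avoid the window business entirely. A rough sketch of the setup, assuming OpenGL 3.0 or the framebuffer_object extension is available (error checking omitted):

```cpp
// Create an off-screen render target: an FBO with RGB color and 24-bit
// depth renderbuffers. Draw the ID-colored facets into it, then call
// glReadPixels as before.
#include <GL/gl.h>   // assumes a loader (e.g. GLEW/GLAD) is initialized

GLuint makeOffscreenTarget(int width, int height)
{
    GLuint fbo, colorRb, depthRb;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenRenderbuffers(1, &colorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRb);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    return fbo;   // leave bound, or rebind before rendering the scene
}
```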

In conclusion, if you don’t need 100% accuracy, this provided a quick and dirty way to greatly speed up rendering in software without having to resort to CUDA. I successfully rendered scenes up to 4096x4096 resolution, but never tried anything higher. So maybe this will help someone.