I have a project where I need to render a huge amount of data that could take weeks of processing, so I got the idea of letting OpenGL do the heavy lifting to speed things up. The problem is that I need the angle of incidence and the range. I posted a similar question here Angle of incidence
a couple of weeks ago, and someone suggested that I assign a unique color to each facet, use glReadPixels to read the color back, and then do my own ray tracing to obtain that information. That was a great idea, and I have that part working.
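For reference, the facet-ID-to-color packing I am using is essentially this (a minimal sketch; the struct and function names are just how I happened to set it up):

```cpp
#include <cassert>
#include <cstdint>

// Pack a facet index (up to 24 bits) into a unique RGB triple, and unpack
// the triple read back by glReadPixels. Lighting, blending, dithering, and
// multisampling must be off when rendering these ID colors, or the value
// read back will not match the one written.
struct IdColor { uint8_t r, g, b; };

IdColor facetToColor(uint32_t facetId) {
    return { uint8_t(facetId & 0xFFu),
             uint8_t((facetId >> 8) & 0xFFu),
             uint8_t((facetId >> 16) & 0xFFu) };
}

uint32_t colorToFacet(IdColor c) {
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}
```

With that, reading one pixel with glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, ...) and running the bytes through colorToFacet gives back the facet index that was drawn there.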
The trouble I am having now is determining the (x, y) position of each grid cell so I can cast the rays. I lay out the
grid locations the way I think makes sense, but I get a fair number of misses. By that I mean I shoot a ray from where I think the (x, y)
position should be toward the facet that OpenGL reported, the ray misses it, and I end up with a lot of holes.
For a simple example, suppose I am rendering 4x4 pixels and the extents of the model are -2 to +2 in both x and y. I cast each
ray from the center of its grid cell. So in this example, for the first and last rows I would use:
   X     Y
-1.5, -1.5
-0.5, -1.5
 0.5, -1.5
 1.5, -1.5
-1.5,  1.5
-0.5,  1.5
 0.5,  1.5
 1.5,  1.5
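Concretely, here is how I am computing those cell centers (a minimal sketch; the function name and parameters are just illustrative of my setup):

```cpp
#include <cassert>
#include <cmath>

// Compute the world-space center of grid cell (col, row) on a grid spanning
// [minX, maxX] x [minY, maxY] over width x height pixels. Cell centers sit at
// half-integer offsets, which matches OpenGL's window-coordinate convention
// that pixel (0, 0) has its center at (0.5, 0.5).
void pixelCenter(int col, int row, int width, int height,
                 double minX, double maxX, double minY, double maxY,
                 double& x, double& y) {
    const double cellW = (maxX - minX) / width;
    const double cellH = (maxY - minY) / height;
    x = minX + (col + 0.5) * cellW;
    y = minY + (row + 0.5) * cellH;
}
```

For the 4x4 example with extents -2 to +2, cell (0, 0) comes out at (-1.5, -1.5) and cell (3, 3) at (1.5, 1.5), matching the table above.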
Does anyone have any idea what might be wrong? Is my assumption about the grid spacing incorrect, or does OpenGL lay the
pixels out differently than I do? I am starting to wonder whether GL isn't all that exact and only comes up with
a facet that is "close enough" but not exact, in which case this method may not work.
One other option I have is to detect the misses and ray trace those pixels myself, but if there are enough of them it will defeat the
whole purpose of doing this.
Thank you for your help, and I hope all of this makes sense.