Obtaining z and angle of incidence

I have a project where I need to render a huge amount of data that could take weeks of processing. I got the idea of letting OpenGL do the heavy lifting for me to speed things up. The problem is that I need the angle of incidence and the range. I posted a similar question here (Angle of incidence) a couple of weeks ago, and someone suggested that I assign a unique color to each facet, use glReadPixels to read the color back, and then do my own ray tracing to obtain the info. That was a great idea, and I have that part working.
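
Here is roughly how that part works, as a minimal sketch rather than my exact code: each facet index is packed into 24 bits of RGB, the scene is drawn with lighting, blending, dithering and multisampling off so the colors come back unmodified, and glReadPixels recovers the index at a pixel. The names here are illustrative.

    #include <GL/gl.h>
    #include <cstdint>

    // Pack a 0-based facet index into an RGB color; 0 is reserved for "no facet".
    inline void facetIdToColor(uint32_t id, GLubyte rgb[3])
    {
        uint32_t v = id + 1;                  // reserve 0 for the background
        rgb[0] = (GLubyte)( v        & 0xFF);
        rgb[1] = (GLubyte)((v >>  8) & 0xFF);
        rgb[2] = (GLubyte)((v >> 16) & 0xFF);
    }

    // Read one pixel back and decode it; returns -1 for the background.
    inline int32_t readFacetId(int x, int y)
    {
        GLubyte rgb[3];
        glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
        uint32_t v = (uint32_t)rgb[0] | ((uint32_t)rgb[1] << 8) | ((uint32_t)rgb[2] << 16);
        return (int32_t)v - 1;
    }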

The trouble I am having now is determining the (x,y) positions on the grid in order to do my ray tracing. I lay out the grid locations the way I think makes sense, but I get a fair number of misses. By that I mean I shoot a ray from where I think the (x,y) position should be toward the facet that OpenGL found, and it misses, so I end up with a lot of holes.

For a simple example, suppose I am rendering 4x4 pixels and the extents of the model are -2 to +2 in x and y. I define my ray from the center of each grid position, so in this example the first and last rows would use the values below (the mapping I am assuming is sketched after the table):

   X     Y
-1.5, -1.5
-0.5, -1.5
 0.5, -1.5
 1.5, -1.5

-1.5,  1.5
-0.5,  1.5
 0.5,  1.5
 1.5,  1.5
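
To make that assumption explicit, this is the pixel-to-world mapping I am using. As far as I know it matches OpenGL's convention of sampling each pixel at its center (integer + 0.5 in window coordinates) with the window origin at the lower-left corner; the names and the orthographic-view assumption are illustrative.

    struct Extents { double xmin, xmax, ymin, ymax; };

    // World-space center of pixel (i, j) for a W x H viewport covering the extents,
    // with j = 0 as the bottom row (OpenGL's window origin is lower-left).
    inline void pixelCenter(int i, int j, int W, int H, const Extents& e,
                            double& x, double& y)
    {
        x = e.xmin + (i + 0.5) * (e.xmax - e.xmin) / W;
        y = e.ymin + (j + 0.5) * (e.ymax - e.ymin) / H;
    }
    // With W = H = 4 and extents of -2 to +2 this gives -1.5, -0.5, 0.5, 1.5, as above.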

Does anyone have any idea what might be wrong? Is my assumption about the grid spacing wrong, or does OpenGL lay it out differently than I do? I am starting to think that maybe GL isn't all that exact and only comes up with a facet that is “close enough” rather than an exact hit, in which case this method may not work.

One other option I have is to find the misses and render those myself, but if I get enough of those it will defeat the
whole purpose of doing this.

Thank you for your help, and I hope all of this makes sense.

I noticed in your other post you said you have a CPU version working. Have you looked at porting that code to OpenCL or CUDA? That would give you more control than OpenGL, which is really just meant for display and is where the z-depth problems come into play.

I have a software version that works. The trouble is that it is slow. I have a scene with over three million facets. To do a 1024x1024 rendering takes 90 seconds, whereas GL does it in an instant.

That’s why I was trying to find a hybrid solution if I can. The idea was to let GL do the rendering and let me know which facets it hit. Once I know the facet then I can do my own software rendering on just that facet. The trouble is that where I think the (x,y) position is seems to be different from what GL thinks it is. Except for the problem with the misses, I have reduced my rendering time from 90 seconds down to about a second - a huge difference.
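
For reference, the per-facet software step is just a single ray/triangle intersection on the one facet GL reported, which gives me both the range and the angle of incidence. A minimal sketch (standard Moller-Trumbore; the Vec3 type and helper names are illustrative, and I measure the incidence angle from the facet normal):

    #include <cmath>

    struct Vec3 { double x, y, z; };
    static Vec3   vsub  (Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3   vcross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
    static double vdot  (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static double vlen  (Vec3 a)         { return std::sqrt(vdot(a, a)); }

    // Returns true on a hit; outputs the range t along the ray and the angle of
    // incidence in radians. dir is assumed to be unit length.
    bool intersectFacet(Vec3 orig, Vec3 dir,
                        Vec3 v0, Vec3 v1, Vec3 v2,
                        double& range, double& incidence)
    {
        const Vec3 e1 = vsub(v1, v0), e2 = vsub(v2, v0);
        const Vec3 p  = vcross(dir, e2);
        const double det = vdot(e1, p);
        if (std::fabs(det) < 1e-12) return false;       // ray is parallel to the facet
        const double inv = 1.0 / det;
        const Vec3 tv = vsub(orig, v0);
        const double u = vdot(tv, p) * inv;
        if (u < 0.0 || u > 1.0) return false;
        const Vec3 q = vcross(tv, e1);
        const double v = vdot(dir, q) * inv;
        if (v < 0.0 || u + v > 1.0) return false;
        const double t = vdot(e2, q) * inv;
        if (t <= 0.0) return false;                      // intersection is behind the ray

        range = t;
        Vec3 n = vcross(e1, e2);                         // facet normal (unnormalized)
        double c = std::fabs(vdot(dir, n)) / vlen(n);    // |cos| between ray and normal
        if (c > 1.0) c = 1.0;                            // guard against rounding
        incidence = std::acos(c);
        return true;
    }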

I’m not sure what CUDA is, so I’ll have to look it up. Thanks.

Oops! I thought you said port it to OpenGL, but you said OpenCL. One concern I have about using the GPU is that I've heard it's tight on memory. A scene with 3 million facets requires 108 MB just to hold the data. But I will look into it. Thanks.
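
(That figure works out if each facet is stored as three single-precision vertices: 3,000,000 facets × 3 vertices × 3 floats × 4 bytes = 108,000,000 bytes.)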

Re memory on GPUs: my GPU has 3 GB, and most have 2 GB.

CUDA is just nVidia’s proprietary version of OpenCL. Both are designed for parallel processing of data. They are ideal for anything that is grid-based where the action on each cell is independent of the others.

> Re memory on GPUs: my GPU has 3 GB, and most have 2 GB.

Correction: most high-end GPUs have 2 GB; $100 GPUs tend to only have 1 GB.