I'm currently working on updating/optimizing an old simulation. The simulation creates a random constellation of points in 2D screen coordinates, then projects those points back into 3D using the depth buffer as the Z dimension. The 3D dots are then rendered in left/right stereo, so the scene is invisible monocularly but visible stereoscopically. The program works fine, but performance drops when we render a lot of points (over 1000). Here is the basic outline of the draw loop:

    Render 3D scene
    for numPoints:
        generate a random screen coordinate
        use glReadPixels to read the depth buffer value at that coordinate
        use gluUnProject to compute the 3D position of the point from the
            screen coordinate and the depth value
    glClear(GL_COLOR_BUFFER_BIT)
    Render the 3D points in stereo
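In C, the per-point readback step looks roughly like this (a simplified sketch, not the actual program; `rand_coord`, `winW`, and `winH` are illustrative names, and the GL calls, which need a current context, are shown in the comment):

```c
#include <stdlib.h>

typedef struct { int x, y; } Pixel;

/* Pick a random screen coordinate inside a winW x winH viewport. */
static Pixel rand_coord(int winW, int winH) {
    Pixel p = { rand() % winW, rand() % winH };
    return p;
}

/* In the draw loop, for each of numPoints:
 *   Pixel p = rand_coord(winW, winH);
 *   GLfloat depth;
 *   glReadPixels(p.x, p.y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
 *   gluUnProject(p.x, p.y, depth, modelview, projection, viewport,
 *                &objX, &objY, &objZ);
 * Each iteration issues its own 1x1 glReadPixels, which stalls the pipeline
 * once per point.
 */
```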
I'm thinking that glReadPixels is the bottleneck. I will try calling it once to read in the entire depth buffer, but I'm not sure how much that will help.
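The single-read idea would look something like this (a sketch under the assumption of a current GL context and a `winW` x `winH` viewport; `depth_at` is an illustrative helper, not a GL function):

```c
/* One bulk read replaces numPoints tiny 1x1 reads. The buffer would be
 * filled once per frame with:
 *   glReadPixels(0, 0, winW, winH, GL_DEPTH_COMPONENT, GL_FLOAT, depthBuf);
 * glReadPixels returns rows bottom-up, with the origin at the lower-left
 * corner, so pixel (x, y) lives at index y * winW + x. */
static float depth_at(const float *depthBuf, int winW, int x, int y) {
    return depthBuf[y * winW + x];
}
```

Each random point would then be a plain array lookup instead of a round trip to the GPU.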
I don't have much experience with vertex/fragment programs and was wondering whether they could be used to gain performance here. From doing a little research, it seems that reading from the depth buffer is not possible within a shader program. Is this correct? Are there any other optimizations that could be done?
Thanks in advance