Why is glCopyPixels of GL_DEPTH slower than GL_COLOR?

Hi,

In trying to calculate the minimum depth value, OGL_BINARY_COPY is being used… to experiment with the idea that it is the transfer of data from graphics memory to host memory that is slow and should be minimized, and that, because of its parallel pixel-processing hardware, the GPU can be very efficient at this type of computation.
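
For reference, the kind of host readback being minimized looks roughly like this (a sketch only; it assumes the depth buffer is readable from the current framebuffer, and minDepthReadback is just a name I picked):

```cpp
// Sketch of the slow baseline: pull the entire depth buffer across the
// bus to host memory, then scan it on the CPU. The glReadPixels call is
// the graphics-memory-to-host transfer that dominates the cost.
#include <GL/gl.h>
#include <algorithm>
#include <vector>

float minDepthReadback(int width, int height)
{
    std::vector<GLfloat> depths(width * height);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT,
                 &depths[0]);
    return *std::min_element(depths.begin(), depths.end());
}
```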

Does anybody have an idea how to optimize retrieving the depth value from the buffer and computing the minimum entirely on the graphics processor?

Any help or guidance is appreciated…

thanks

:confused:

Interesting problem; here's what I think can be done:

Suppose you have a 512x512 depth buffer in a depth texture. Disable all filtering, draw the texture onto a 256x256 quad, and for each destination pixel take the minimum of the four neighbouring source samples and write that to the destination. Repeat the process, halving the texture size each time, until you reach a 1x1 texture, which then holds the minimum depth value of the original 512x512 depth buffer (a concrete shader sketch follows below).

Disclaimer: I have never tested this technique, but I think it should work :slight_smile:.
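
To make it concrete, each reduction pass could use a fragment shader along these lines (a sketch only, written as a GLSL string you would compile from your C++ code; the names depthTex and texelSize are made up):

```cpp
// Sketch of one min-reduction pass. Every destination pixel samples the
// 2x2 block of source texels that maps onto it and keeps the smallest
// depth. depthTex and texelSize are hypothetical names.
const char* kMinReduceFrag =
    "uniform sampler2D depthTex; // source depth texture               \n"
    "uniform vec2 texelSize;     // 1.0 / source texture dimensions    \n"
    "void main()                                                       \n"
    "{                                                                 \n"
    "    vec2 uv = gl_TexCoord[0].st;                                  \n"
    "    float d0 = texture2D(depthTex, uv).r;                         \n"
    "    float d1 = texture2D(depthTex, uv + vec2(texelSize.x, 0.0)).r;\n"
    "    float d2 = texture2D(depthTex, uv + vec2(0.0, texelSize.y)).r;\n"
    "    float d3 = texture2D(depthTex, uv + texelSize).r;             \n"
    "    gl_FragColor = vec4(min(min(d0, d1), min(d2, d3)));           \n"
    "}                                                                 \n";
```

Render the 512x512 texture into a 256x256 target with this shader, swap source and target, and repeat (256 → 128 → … → 1x1); at the end you read back a single pixel instead of the whole buffer.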

skautia: "Does anybody have an idea how to optimize retrieving the depth value from the buffer and computing the minimum entirely on the graphics processor?"
Or you can just find the closest vertex to the viewer. Compute it all on the CPU. Simpler, no?
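
As a sketch of that idea (assuming you can get at a flat array of object-space vertex positions and the current column-major modelview matrix; minEyeDepth is a made-up name):

```cpp
// Sketch: smallest eye-space depth over a raw xyz vertex array. OpenGL's
// eye space looks down -Z, so depth along the view direction is -z_eye.
#include <algorithm>
#include <cfloat>
#include <cstddef>

float minEyeDepth(const float* verts, std::size_t count, const float mv[16])
{
    float best = FLT_MAX;
    for (std::size_t i = 0; i < count; ++i) {
        const float x = verts[3 * i + 0];
        const float y = verts[3 * i + 1];
        const float z = verts[3 * i + 2];
        // Third row of the column-major modelview matrix gives eye-space z.
        const float zEye = mv[2] * x + mv[6] * y + mv[10] * z + mv[14];
        best = std::min(best, -zEye);
    }
    return best;
}
```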

That would definitely be easier for simple geometry, but if you are, let's say, using some object-oriented API and don't have direct access to the vertex data of the different objects, then it might not be very feasible. Obviously skautia would be the best person to decide on that.

Zulfikar Malik: "Suppose you have a 512x512 depth buffer in a depth texture. Disable all filtering, draw the texture onto a 256x256 quad, and for each destination pixel take the minimum of the four neighbouring source samples and write that to the destination."

  • Are you referring to programming the GPU …to take the minimum of 4 neighbouring pixels…? Is there some predefined function or other way to do the same without programming the GPU…? I'm a beginner at this, so I'm finding it a little difficult…

thanks,