Hi. In my large terrain (say 2M points) I want to move the camera at a speed based on the distance to the closest point. That way I can go fast when flying high or outside the bounding box, and slowly when I'm near a detail.
I also want mouse-picking. So I decided to render to an FBO with two renderbuffers (screen and object ID), filled from two outputs of the fragment shader.
I would like to add a third output to the shader for the distance to the camera. The problem is that I can't use a 1x1 FBO for this distance output, because the other two outputs must be viewport-sized.
Am I forced to use a different FBO in a second render-pass? Am I missing some easier solution?
Thanks
All attachments of an FBO must have the same resolution. What you could do is write the distance of every fragment into the third attachment, and later just query the value at the cursor position (or wherever you want to know the distance to the camera).
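A minimal sketch of such a fragment shader, assuming the world-space position and a per-object ID are passed in from earlier stages (the names `vWorldPos`, `vObjectID` and `uCameraPos` are made up for illustration):

```glsl
#version 330 core

in vec3 vWorldPos;       // interpolated world-space position (assumed varying)
flat in uint vObjectID;  // per-object ID for picking (assumed varying)

uniform vec3 uCameraPos;

// One output per color attachment of the FBO:
layout(location = 0) out vec4  outColor;    // GL_COLOR_ATTACHMENT0: screen
layout(location = 1) out uint  outObjectID; // GL_COLOR_ATTACHMENT1: picking
layout(location = 2) out float outDist;     // GL_COLOR_ATTACHMENT2: distance

void main() {
    outColor    = vec4(1.0);  // ...your normal shading here
    outObjectID = vObjectID;
    outDist     = distance(vWorldPos, uCameraPos);
}
```

On the host side you would then select the third attachment with `glReadBuffer(GL_COLOR_ATTACHMENT2)` and read the single pixel under the cursor with `glReadPixels(x, y, 1, 1, GL_RED, GL_FLOAT, &dist)` (assuming a single-channel float format such as GL_R32F for that attachment).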
If you want to use "deferred shading", you'll have to write that information into the "G-buffer" anyway.
Another possibility is to use "transform feedback", which allows you to capture values from the vertex shader: e.g. calculate the vertex position in world space, subtract the camera's position, and write the length of the difference into a "transform feedback buffer". Later you just have to find the minimum of all these values. Or, if you are only interested in the closest vertex, write the minimum of the current (shader storage) buffer value and the calculated distance into the buffer directly (but you'd need some kind of "atomic operation" there to avoid a data race).
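The shader-storage variant of the idea above could be sketched like this (GLSL 4.30+). It relies on the fact that for non-negative floats the IEEE-754 bit pattern, reinterpreted as a uint, preserves ordering, so `atomicMin` on the bits yields the minimum distance race-free; the uniform and buffer names are assumptions:

```glsl
#version 430 core

layout(location = 0) in vec3 inPosition;

uniform mat4 uModel;
uniform vec3 uCameraPos;

// Single uint holding the minimum distance, bit-cast from float.
// The application must reset it each frame, e.g. to floatBitsToUint(1e30).
layout(std430, binding = 0) buffer MinDist {
    uint minDistBits;
};

void main() {
    vec3 worldPos = vec3(uModel * vec4(inPosition, 1.0));
    float d = distance(worldPos, uCameraPos);
    atomicMin(minDistBits, floatBitsToUint(d));
    gl_Position = vec4(worldPos, 1.0); // view/projection omitted in this sketch
}
```

With this you don't need transform feedback at all: after the draw call the application reads back the single uint and converts it with the C-side equivalent of `uintBitsToFloat`.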
Finding the closest vertex can also be done on the CPU; with well-structured data the search can be fast. Finding the closest fragment (i.e. a point on a triangle or on a line) is not so fast on the CPU. Since the fragment shader already knows the depth of each fragment, I would like to take advantage of that fact: the closest "pixel" is the one with the minimum depth.