Yandersen

04-13-2015, 11:02 AM

Gentlemen, I want to share a new (at least to my knowledge) technique with you. :)

As we know, rendering large scenes has always been tricky: if the near clipping plane is set too close, depth fighting becomes noticeable for far objects; if zNear is pushed away, then nearby objects get partially culled. There are ways to work around this, but they all come at some expense.

But now, with ARB_clip_control in the OpenGL 4.5 core, we can finally simulate a camera with truly realistic properties: no far clipping plane (infinite drawing distance) and a very small zNear. This lets us fit the simulated camera into any tiny hole in the scene and capture the whole world around it without artifacts:

[Attachment 1745: screenshot of the demo scene]

In the picture the far mountains are ~60 km away, while the camera sits almost at ground level with the near clipping plane only 1 mm away. The demo project can be downloaded here (https://drive.google.com/open?id=0B5M-5LxIj9ZDcDIzVUNTd19tdTg&authuser=0).

So here is the way to make those drawing properties possible.

We need a floating-point depth buffer, therefore we need an FBO and can no longer render directly to the window (but who does nowadays, right?).
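A minimal sketch of such an FBO, assuming a GL 4.5 context and a function loader are already set up (the texture sizes here are illustrative, not from the demo):

```c
/* Sketch only: assumes a GL 4.5 context and a function loader (e.g. GLAD)
   are already initialized; the 1280x720 size is illustrative. */
GLuint fbo, colorTex, depthTex;

/* Color attachment. */
glCreateTextures(GL_TEXTURE_2D, 1, &colorTex);
glTextureStorage2D(colorTex, 1, GL_RGBA8, 1280, 720);

/* The important part: a 32-bit floating-point depth attachment. */
glCreateTextures(GL_TEXTURE_2D, 1, &depthTex);
glTextureStorage2D(depthTex, 1, GL_DEPTH_COMPONENT32F, 1280, 720);

glCreateFramebuffers(1, &fbo);
glNamedFramebufferTexture(fbo, GL_COLOR_ATTACHMENT0, colorTex, 0);
glNamedFramebufferTexture(fbo, GL_DEPTH_ATTACHMENT, depthTex, 0);

glBindFramebuffer(GL_FRAMEBUFFER, fbo); /* render to the FBO from now on */
```

The direct-state-access calls used here are core in 4.5, the same version that provides glClipControl, so no extra extensions are needed.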

Then we need to change the mapping of depth values from the default [-1...1] to the required [0...1]:

glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

By doing so we prevent the Zndc coordinate from being remapped from the [-1, 1] range to the [0, 1] range in window space. This is crucial, because scaling by 0.5 and adding 0.5 destroys precision for near-zero values: for example, 1e-10 and 2e-10 would both end up as the same value in the depth buffer. As we will see later on, such tiny values form the majority of the depth buffer contents, so we cannot afford to lose them.

Now the core of the idea: the projection matrix. It has to be constructed in some unusual way:

    | f/aspect   0    0     0   |
    |                           |
    |    0       f    0     0   |
P = |                           |
    |    0       0    0   zNear |
    |                           |
    |    0       0   -1     0   |

here:

f = cot(ViewAngleVertical / 2)

aspect = Viewport.x / Viewport.y

zNear = distance to the near clipping plane

Such a projection matrix results in a reversed depth range: the farther the object, the smaller the depth values of its fragments. For most objects in the scene, gl_FragCoord.z will look like XXXe-YYY. But as long as the number fits into the range a float can represent, we still have the full 23 bits of mantissa precision (before underflow). Therefore, the depth test has to be reversed:

glEnable(GL_DEPTH_TEST);

glDepthFunc(GL_GEQUAL);

Accordingly, the depth buffer must now be cleared to 0.0 (glClearDepth(0.0)) instead of the default 1.0, since 0 represents "infinitely far" in this setup.

It should be mentioned, though, that the precision changes with distance, so the depth values of adjacent surfaces may become equal beyond a certain distance. But this error grows linearly with distance, just as level-of-detail for objects ideally should. In any case, this technique has a clear advantage over the conventional camera setup and can compete with DirectX's w-buffer, serving as an OpenGL alternative to it.
