
Thread: Storing normalized linear depth?

  1. #1
    Intern Newbie · Join Date: Feb 2013 · Posts: 39

    Storing normalized linear depth?

    I've written a working deferred renderer. Currently, my g-buffer contains (amongst other things):

    A 24/8 depth/stencil texture, in which is stored the usual post-projective scene depth.
    An R32F eye-space Z value.

    I use the eye-space Z value to reconstruct the original eye-space positions of fragments during
    the lighting pass. It's literally the Z component taken from the eye-space position of the current
    fragment, not normalized or otherwise transformed. I use the stencil buffer to restrict the influence
    of lights (not as an optimization, but as an effect - grouping geometry and only applying lights to
    certain geometry, etc).

    This all works fine, but I sort of dislike that I'm storing two different depth values - it seems like I
    should be able to get away with just one.

    The eye-space depth value seems to be more useful than the post-projective value (as in, I
    almost always want the eye-space value and almost never the post-projective one).
    I keep hearing that some people *only* store (possibly normalized) eye-space depth values
    directly into the actual scene depth buffer (and somehow still have depth testing work correctly),
    but can't seem to find any clear information on how this is achieved. It seems like it'd be far more
    useful to store this value rather than the post-projective value.

    What are my options here? I'm targeting OpenGL 3.1 without extensions, if that makes a difference.

  2. #2
    Dark Photon · Senior Member, OpenGL Guru · Join Date: Oct 2004 · Location: Druidia · Posts: 4,123
    Well if you want hardware depth test to work properly, you really need a fixed-function depth buffer. Then in your lighting pass, you can reconstruct eye-space position from this. It's not hard, and there are lots of blog posts and info on how to do this. Matt Pettineo for one has a number of blog posts on this. Here's a forum post I made years ago which lists a simple GLSL function you can use to do this (see this post -- search down to PositionFromDepth_DarkPhoton; and it simplifies even further if you're using a symmetric perspective view frustum).
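
    For reference, a rough sketch of that style of reconstruction for a symmetric perspective frustum with the default glDepthRange(0, 1). This is not necessarily identical to the function in that post; the uniform names here are just placeholders:

    Code :
    uniform sampler2D u_depthTex;    // fixed-function depth buffer bound as a texture
    uniform vec2  u_viewportSize;    // viewport width/height in pixels
    uniform float u_near;            // perspective near plane
    uniform float u_far;             // perspective far plane
    uniform float u_tanHalfFov;      // tan(vertical field of view / 2)
    uniform float u_aspect;          // viewport width / height

    vec3 eyePositionFromDepth(vec2 fragCoord)   // fragCoord = gl_FragCoord.xy
    {
        // Window-space depth in [0,1] read back from the depth buffer
        float d = texture(u_depthTex, fragCoord / u_viewportSize).r;

        // Invert the perspective depth mapping; eye-space Z is negative in front of the camera
        float zEye = u_near * u_far / (d * (u_far - u_near) - u_far);

        // NDC x/y in [-1,1], then scale by -zEye and the frustum half-extents
        vec2 ndc = (fragCoord / u_viewportSize) * 2.0 - 1.0;
        return vec3(-ndc.x * zEye * u_tanHalfFov * u_aspect,
                    -ndc.y * zEye * u_tanHalfFov,
                    zEye);
    }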

    There are some tricks out there for changing what depth is stored for the purposes of hardware depth testing (websearch logarithmic depth buffer, floating-point depth buffer, a couple notes about z, etc.). But I would try the simple solution first, and then go exploring only if you know you need it.

    Now if you don't want to just save off the fixed function depth, there's nothing stopping you from having a fixed-function depth/stencil buffer used only for G-buffer rasterization and writing out eye-space depth to a G-buffer channel if you want. You'll probably want the fixed-function depth again later for blending on translucents.

  3. #3
    Intern Newbie · Join Date: Feb 2013 · Posts: 39
    Quote Originally Posted by Dark Photon View Post
    Well if you want hardware depth test to work properly, you really need a fixed-function depth buffer.
    OK. This was actually the main issue: can I store something that isn't a post-projective depth in the depth buffer and still expect to get sane results out of it? From what you've said, and from various posts I've seen around the place, the answer is "mostly no".

    Quote Originally Posted by Dark Photon View Post
    Then in your lighting pass, you can reconstruct eye-space position from this. It's not hard, and there are lots of blog posts and info on how to do this. Matt Pettineo for one has a number of blog posts on this. Here's a forum post I made years ago which lists a simple GLSL function you can use to do this (see this post -- search down to PositionFromDepth_DarkPhoton; and it simplifies even further if you're using a symmetric perspective view frustum).
    Nice. I've read a lot on position reconstruction, and was actually doing it originally using an inverse projection matrix and some other things, but storing the eye-space Z seemed easier. That function you've written is far, far simpler than any others that I've seen online. I'm impressed! I'll most likely switch to this.

    Quote Originally Posted by Dark Photon View Post
    There are some tricks out there for changing what depth is stored for the purposes of hardware depth testing (websearch logarithmic depth buffer, floating-point depth buffer, a couple notes about z, etc.). But I would try the simple solution first, and then go exploring only if you know you need it.
    Right, I've read most of the stuff I could find online, but they mostly seemed to be about getting more precision out of the depth buffer (using the space available more intelligently) as opposed to storing terms that were more algebraically convenient.

    Quote Originally Posted by Dark Photon View Post
    Now if you don't want to just save off the fixed function depth, there's nothing stopping you from having a fixed-function depth/stencil buffer used only for G-buffer rasterization and writing out eye-space depth to a G-buffer channel if you want. You'll probably want the fixed-function depth again later for blending on translucents.
    Yep, I use depth testing to prevent overdraw in the geometry pass, then use GL_GREATER depth testing to get the intersections between light volumes and geometry (as Crytek mentioned in one of their papers), and then again for the forward rendering of translucents as you mentioned. So whatever I choose, I'd need to ensure hardware depth testing stays functional.

  4. #4
    Intern Newbie · Join Date: Sep 2014 · Posts: 30
    The only way to properly store eye space depth in the Z buffer is by writing depth in the fragment shader, which disables early Z optimisations. Hardware interpolates Z linearly in screen space (i.e. "noperspective") whereas view space depth is not linear in screen space.
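
    To make that concrete, a minimal sketch of what storing normalized linear depth by hand would look like (u_near, u_far and v_eyePos are placeholder names) -- the mere presence of a gl_FragDepth write is what turns off early Z for the shader:

    Code :
    #version 140
    uniform float u_near;
    uniform float u_far;
    in vec3 v_eyePos;        // eye-space position interpolated from the vertex shader
    out vec4 fragColor;

    void main()
    {
        // Map eye-space -Z linearly onto [0,1]; writing gl_FragDepth disables early Z here
        gl_FragDepth = (-v_eyePos.z - u_near) / (u_far - u_near);
        fragColor = vec4(1.0);
    }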

  5. #5
    Intern Newbie · Join Date: Feb 2013 · Posts: 39
    Quote Originally Posted by Dark Photon View Post
    PositionFromDepth_DarkPhoton
    I've just realized that this only works if using a perspective transform. Is this expected?

    In the renderer I've developed, I allow the use of ordinary glFrustum style perspective matrices, or glOrtho style orthographic projections, so any position reconstruction method I use needs to be able to cope with both types. Think I may be back to using inverse projection matrices.

  6. #6
    Intern Newbie · Join Date: Feb 2013 · Posts: 39
    Hm, just so I'm sure:

    Code :
    eye.z = near * far / ((depth * (far - near)) - far);

    This will reconstruct the eye space Z value regardless of the type of projection, yes?

    I think the answer's yes, but can't prove it.

  7. #7
    Senior Member OpenGL Guru · Join Date: Jun 2013 · Posts: 2,402
    Quote Originally Posted by raga34 View Post
    This will reconstruct the eye space Z value regardless of the type of projection, yes?
    No. Orthographic projections generate a depth value which is proportional to Z. Only perspective projections generate a depth value which is inversely proportional to Z.

    And the projection matrix doesn't have to have been constructed by glOrtho, glFrustum, gluOrtho2D or gluPerspective; it can be any matrix.

    As such, the only robust way to obtain eye coordinates from window coordinates is to first convert to normalized device coordinates using the viewport and depth range, convert to clip coordinates using 1/gl_FragCoord.w (if you don't have that value, you're out of luck), then pre-multiply by the inverse of the projection matrix.
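
    As a sketch, that chain could look something like this in GLSL (uniform names are placeholders). Here the division by the resulting w after multiplying by the inverse projection is the commonly used equivalent of scaling the NDC position by the clip-space w first, and it degenerates to a no-op for an orthographic projection:

    Code :
    uniform sampler2D u_depthTex;
    uniform mat4  u_invProj;      // inverse of the projection matrix
    uniform vec4  u_viewport;     // x, y, width, height (as passed to glViewport)
    uniform vec2  u_depthRange;   // as passed to glDepthRange, usually (0.0, 1.0)

    vec3 eyeFromWindow(vec2 fragCoord)     // fragCoord = gl_FragCoord.xy
    {
        vec2 uv = (fragCoord - u_viewport.xy) / u_viewport.zw;
        float depth = texture(u_depthTex, uv).r;

        // Window coordinates -> normalized device coordinates
        vec3 ndc;
        ndc.xy = uv * 2.0 - 1.0;
        ndc.z  = (2.0 * depth - u_depthRange.x - u_depthRange.y) / (u_depthRange.y - u_depthRange.x);

        // NDC -> eye: multiply by the inverse projection, then divide by w
        vec4 eye = u_invProj * vec4(ndc, 1.0);
        return eye.xyz / eye.w;
    }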

  8. #8
    Dark Photon · Senior Member, OpenGL Guru · Join Date: Oct 2004 · Location: Druidia · Posts: 4,123
    Quote Originally Posted by raga34 View Post
    I've just realized that this [PositionFromDepth_DarkPhoton()] only works if using a perspective transform. Is this expected?
    Yes. I mentioned that in that forum post. If you want one for Orthographic, it's very simple -- no non-linear funny-business going on -- just a bunch of axis-independent scales and offsets really. I know you can whip this up yourself pretty quickly.

  9. #9
    Dark Photon · Senior Member, OpenGL Guru · Join Date: Oct 2004 · Location: Druidia · Posts: 4,123
    Quote Originally Posted by raga34 View Post
    Hm, just so I'm sure:

    Code :
    eye.z = near * far / ((depth * (far - near)) - far);

    This will reconstruct the eye space Z value regardless of the type of projection, yes?

    I think the answer's yes, but can't prove it.
    No, it's for perspective only. For orthographic, it's much simpler -- it's all linear mappings.
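
    For reference, the orthographic counterpart of the formula quoted above, assuming a glOrtho-style near/far pair and the default glDepthRange(0, 1), is just the linear mapping:

    Code :
    // depth = 0 at the near plane (eye z = -near), depth = 1 at the far plane (eye z = -far)
    eye.z = -(near + depth * (far - near));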
