Thread: Freaky gl_FragDepth


  1. #1
     Advanced Member Frequent Contributor (Join Date: Apr 2009; Posts: 590)

    Freaky gl_FragDepth

    This, I admit, is a peculiar-sounding suggestion, but I have use cases for it. The basic idea is this:
    • the depth test is performed with gl_FragCoord.z, i.e. the depth value determined by rasterization
    • BUT the depth value written to the depth buffer is gl_FragDepth


    Using this, one can have the depth test set to GL_LESS and yet, after a draw, have some values in the depth buffer get larger. Again, I admit this is odd, but I do have use cases.
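    A minimal sketch of what that would look like from the shader side (hypothetical; no such enable exists in GL today, and the +0.1 is an arbitrary example):

    Code:
    #version 420 core

    // Hypothetical: the application would turn the proposed behavior on
    // with something like glEnable(GL_FREAKY_DEPTH) -- a made-up token.
    // The depth TEST would then use gl_FragCoord.z (the rasterized
    // depth), while the value WRITTEN on a pass would be gl_FragDepth.

    out vec4 color;

    void main()
    {
        color = vec4(1.0);

        // Tested: gl_FragCoord.z against the buffer with GL_LESS.
        // Stored on pass: a LARGER value, so the stored depth can grow
        // even though the test is GL_LESS.
        gl_FragDepth = min(gl_FragCoord.z + 0.1, 1.0);
    }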

    Emulating this behavior with GL_ARB_shader_image_load_store and excessive use of memoryBarrier() is a bad idea, just as it is when trying to use image load/store to emulate GL_EXT_shader_framebuffer_fetch from GLES land.
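    For reference, the emulation being warned against would look roughly like this (a sketch only; the image name is made up, and even with the barrier the load/test/store sequence is not atomic between overlapping fragments):

    Code:
    #version 420 core

    // Manual depth buffer kept in an r32f image; the fixed-function
    // depth test is assumed disabled for this pass.
    layout(r32f) coherent uniform image2D manualDepth;

    out vec4 color;

    void main()
    {
        ivec2 p = ivec2(gl_FragCoord.xy);

        // Emulated GL_LESS test against the rasterized depth...
        float stored = imageLoad(manualDepth, p).r;
        if (gl_FragCoord.z >= stored)
            discard;

        // ...but store a different (here: larger) value.
        imageStore(manualDepth, p, vec4(min(gl_FragCoord.z + 0.1, 1.0)));

        // The barrier orders this invocation's accesses, but the
        // load/test/store above still races between overlapping
        // fragments. Hence "excessive" and still broken.
        memoryBarrier();

        color = vec4(1.0);
    }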

  2. #2
     Senior Member OpenGL Guru (Join Date: May 2009; Posts: 4,948)
    This has been asked for before: the ability to lie to the depth test by forcing it to test against one value, then write a different value if it passes.

    It's important to note that the explicit early depth test feature the ARB added with shader_image_load_store explicitly forbids this. They could have allowed it very easily; instead they added specific language to say that it doesn't work: the depth value that gets tested will be the depth value that gets written. That language was put in there precisely to stop this exact thing from working.

    There's probably a good reason for that.
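    For reference, the feature in question is the layout qualifier from GLSL 4.20 / ARB_shader_image_load_store; a minimal sketch:

    Code:
    #version 420 core

    // Request the explicit early depth test:
    layout(early_fragment_tests) in;

    out vec4 color;

    void main()
    {
        color = vec4(1.0);

        // Per the spec language referred to above, this write has no
        // effect once early fragment tests are enabled: the value that
        // was tested (gl_FragCoord.z) is the value that gets written.
        gl_FragDepth = 0.25;
    }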

  3. #3
     Advanced Member Frequent Contributor (Join Date: Apr 2009; Posts: 590)
    The blatantly obvious thing is the following:

    1) if a shader has the early z-test on, then all writes to gl_FragDepth are ignored (as is currently the case)
    2) if "freaky depth" is on, then the fragment write happens if and only if the depth value produced by the rasterizer passes the depth test

    In particular, if a shader has the early z-test on, then regardless of whether freaky depth is on, the value written to gl_FragDepth is ignored. That is consistent with the current situation anyway.
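    In model form, those two rules come out to something like this (illustration only; the function is hypothetical and GL_LESS is assumed):

    Code:
    #version 420 core

    // A model of the proposed per-fragment logic with "freaky depth"
    // on. Nothing here is real API; it only pins down the two rules.
    bool freakyDepthModel(float rasterDepth,    // gl_FragCoord.z
                          float shaderDepth,    // shader's gl_FragDepth
                          bool  earlyZ,         // early z-test on?
                          inout float depthBuffer)
    {
        // Rule 2: the test is always against the rasterized depth.
        if (rasterDepth >= depthBuffer)
            return false;                       // fragment discarded

        // Rule 1: with the early z-test on, gl_FragDepth stays ignored
        // (as today); otherwise the STORED value is gl_FragDepth, which
        // may be larger than the value that was just tested.
        depthBuffer = earlyZ ? rasterDepth : shaderDepth;
        return true;
    }

    out vec4 color;
    void main() { color = vec4(1.0); }          // placeholder main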

  4. #4
     Senior Member OpenGL Guru (Join Date: May 2009; Posts: 4,948)
    You're missing the point. Your "freaky depth is on" is nothing more than "early-z + write whatever gl_FragDepth says". That would provide 100% of the functionality you're asking for.

    The ARB had the opportunity to provide exactly this functionality, and yet they explicitly forbade it. That's a pretty good indication that what you want just isn't possible with current hardware, or at the very least is something that all the IHVs agreed was a Bad Thing.

    Like I said, there's probably a good reason why they didn't allow this.

  5. #5
     Advanced Member Frequent Contributor (Join Date: Apr 2009; Posts: 590)
    I am not claiming this is for current GL4 hardware, but for the "next version of GL", which could mean GL5.

    This is not just early z + write whatever to gl_FragDepth. I really want the depth test to be against the value from the rasterizer, but the depth buffer updated to a different value that may or may not pass the depth test.

    I suspect the reason it was not done is simple: in current GL implementations, writing to the Z value forces the full late fragment test, and that behavior is hardwired into current hardware. As to why, I think the reason is barbarically simple: no version of D3D has this feature or anything really like it, so no hardware has it. Almost all features in GL are found in D3D first, OR are extensions to what D3D requires that were quite easy to tack on. The remaining GL features that are not core, ARB, or EXT are particulars of specific hardware (how I love thee, GL_NV_shader_buffer_load/store).

    But now the grapevine says there will be no D3D12, so... it makes me wonder...

  6. #6
     Senior Member OpenGL Guru (Join Date: May 2009; Posts: 4,948)
    Quote:
        I am not claiming this is for current GL4 hardware, but for the "next version of GL", which could mean GL5.
    There's been no suggestion that I'm aware of from the IHVs that there's going to be a new generation of hardware coming out soon. At least, not a new generation that offers any significant functionality differences.

    Quote:
        This is not just early z + write whatever to gl_FragDepth. I really want the depth test to be against the value from the rasterizer, but the depth buffer updated to a different value that may or may not pass the depth test.
    That's exactly what that would be. The early depth test tests against gl_FragCoord.z. If it passes, the fragment shader executes. And if gl_FragDepth is honored after the test, then the value written to the Z buffer will necessarily be the changed value.

    How is that not what you're asking for?

    Quote:
        As to why, I think the reason is barbarically simple: no version of D3D has this feature or anything really like it, so no hardware has it.
    I think you have that backwards. Microsoft doesn't dictate from on high what goes into D3D without consultation with IHVs. They get together, and Microsoft probably pushes for things. But stuff doesn't go into D3D unless the IHVs agree to it. Just look at the D3D10 debacle as evidence of that.

    Originally, a form of tessellation was going to be in D3D10, which is why AMD_vertex_shader_tessellator exists. Note that this is a pre-vertex-shader tessellation stage. But NVIDIA didn't want to do it. Maybe for good reason, maybe not. But because of that, Microsoft couldn't put it in. This is also what led to D3D10.1, which was just D3D10 plus minor additions, additions that NVIDIA notably did not implement until its D3D11 hardware, while virtually all AMD hardware was D3D10.1 capable.

    So I don't think it's that D3D doesn't have the feature. It's more likely that the IHVs don't want the feature, and that's why D3D doesn't have it.

    Quote:
        But now the grapevine says there will be no D3D12, so... it makes me wonder...
    What grapevine is that exactly?

    Also, that's not terribly surprising. With shader_image_load_store, there's really just not very much left to add. Oh sure, you might want blending in shaders, but unless there's some specialized hardware for it, you can cover that with load/store.
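    For what it's worth, "blending in shaders" via load/store would be a sketch along these lines (the image name and the blend op are arbitrary, and overlapping fragments within a draw would still need synchronization this sketch doesn't provide):

    Code:
    #version 420 core

    // Programmable "blending": read the current framebuffer color from
    // an image, combine it with the incoming color, write it back.
    layout(rgba8) coherent uniform image2D framebufferImage;

    in vec4 srcColor;   // assumed: interpolated from the vertex shader

    void main()
    {
        ivec2 p = ivec2(gl_FragCoord.xy);

        vec4 dst = imageLoad(framebufferImage, p);

        // An op fixed-function blending cannot express, e.g. a
        // per-channel max against a darkened destination (arbitrary):
        vec4 blended = max(srcColor, dst * 0.5);

        imageStore(framebufferImage, p, blended);
        memoryBarrier();
    }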

    The most you might get is some form of streaming textures or whatever.
