OpenGL.org (part of the Khronos Group)

Page 2 of 2 (Results 11 to 15 of 15)

Thread: Draw order for a game renderer

  1. #11 (Senior Member, OpenGL Pro; Germany; joined Apr 2010; 1,099 posts)
    I should correct myself: In GL 4.3 (and I'm pretty certain in 4.2 as well), early fragment tests have to be explicitly enabled to really happen before fragment shader execution. This includes per-fragment depth tests. Otherwise the tests will happen after execution of the fragment shader.

  2. #12 (Senior Member, OpenGL Guru; joined May 2009; 4,948 posts)
    Quote: "In GL 4.3 (and I'm pretty certain in 4.2 as well), early fragment tests have to be explicitly enabled to really happen before fragment shader execution."
    No. The implementation is allowed to perform early fragment tests, but only so long as it can detect that it would get the same answer as it would from late fragment tests. That is, if it doesn't matter when the test happens, the implementation is free to do the test first.

    The explicit early fragment test setting is for those times when you need to force OpenGL to do the test first when it might not otherwise do so. Before image load/store, a fragment shader's effects were based purely on its outputs. So the OpenGL spec could say that the depth test happens after the fragment shader, while an implementation could still optimize it to be early in cases where you can't tell the difference. The only time this mattered before was when you wrote to `gl_FragDepth`, and the reason why that forces the test to be late is obvious.

    Once you have arbitrary image and buffer loads/stores, you effectively prevent an implementation from employing the early depth test optimization. At that point, you need an explicit setting, because you need to change observable behavior: you want a depth test to stop the fragment shader from writing things to images/buffers.

    So you only need to explicitly request early fragment tests if you're doing image load/stores.
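For reference, the explicit setting being discussed is the GLSL `layout(early_fragment_tests) in;` qualifier, available from GLSL 4.20 / ARB_shader_image_load_store onward. A minimal fragment shader using it might look like this (the `counters` image binding is a made-up example, not something from this thread):

```glsl
#version 430 core

// Force the depth/stencil tests to run before this shader executes,
// so fragments that fail the depth test never reach the image store below.
layout(early_fragment_tests) in;

// Hypothetical image binding, purely for illustration.
layout(binding = 0, r32ui) uniform uimage2D counters;

out vec4 color;

void main()
{
    // This side effect only happens for fragments that survived the
    // early depth test; without the qualifier above, occluded fragments
    // could still execute this store.
    imageAtomicAdd(counters, ivec2(gl_FragCoord.xy), 1u);
    color = vec4(1.0);
}
```

Note that once this qualifier is present, any write the shader makes to `gl_FragDepth` is ignored, since the depth test has already run.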

  3. #13 (Senior Member, OpenGL Pro; Germany; joined Apr 2010; 1,099 posts)
    Is there any mention of it in the spec? Sections 14.9 and 15.2.4 of the GL 4.3 core spec and the small section 4.4.1.3 of the GLSL spec don't mention anything about that. Those are the places I looked to confirm my argument, because I figured three places would be enough to mention such a detail.

  4. #14 (Senior Member, OpenGL Guru; joined May 2009; 4,948 posts)
    Quote: "Is there any mention of it in the spec?"
    There doesn't need to be. The specification defines apparent behavior. If running the depth test before the fragment shader won't affect anything you see, then it's OK for the implementation to do so. That's how implementations get away with guard-band clipping; the spec says that partially-visible triangles need to be clipped into smaller ones, but as long as you can't tell the difference, it doesn't matter if the implementation uses a short-cut.

    Thus, as long as everything appears as though the depth test came after the fragment shader, the implementation is fine. That's why early testing is turned off when the fragment shader writes to the fragment depth: the appearance still needs to match what the spec says.

  5. #15 (Member, Regular Contributor; USA; joined Jan 2005; 411 posts)
    So back to the top post: how important is the order of sorting things if you are not trying to "push it to the limit"? And what is the ideal order, if there is a consensus?

    I ask because I have a desktop organization that sees the graphics firstly as instances, so each graphic is like a spreadsheet, letting all of its instances be rendered simultaneously. Within a graphic the pieces are sorted by material, but it would be kind of annoying to try to cross-reference them by texture instead. I figure a standard graphics card can keep a few textures ready to go, but I do not know. Sorting the graphics weakly by texture overlap would not be a problem.

    And then I basically have per-material shaders, so shaders are associated with textures, and a change of texture might mean a change of program too. Does any of that sound like something worth wasting time or losing sleep over?
    God have mercy on the soul that wanted hard decimal points and pure ctor conversion in GLSL.
