Thread: Occlusion Culling the CryEngine way: understanding

  1. #1
    Intern Newbie | Joined Nov 2017 | Posts: 30

    Occlusion Culling the CryEngine way: understanding

    Hi all.

    Nice to start a new thread in such a welcoming forum.

    I'm having problems with low FPS in my little project, and it needs some optimization.
    While investigating frustum-culling algorithms, I came across an even more interesting one: https://www.gamedev.net/articles/pro...chnique-r4103/

    The main idea of the method is:
    1 - Get the depth buffer from the previous frame.
    2 - Reproject it to the current frame.
    3 - Software-rasterize the bounding boxes of objects to check whether they can be seen from the camera's perspective or not, and based on this test decide whether to draw them.
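The per-object test in step 3 can be sketched in plain C. This is a minimal sketch, not the article's actual code: it assumes a float depth buffer where smaller values are nearer, and a screen-space bounding rectangle with a conservative nearest depth; `bbox_visible` and its parameters are illustrative names.

```c
#include <stdbool.h>

/* Test a screen-space bounding rectangle against a small software depth
 * ("coverage") buffer.  The object is kept if, at any covered pixel, its
 * conservative nearest depth is closer than the stored occluder depth;
 * only if every covered pixel is already occluded can it be culled. */
static bool bbox_visible(const float *cbuf, int w, int h,
                         int x0, int y0, int x1, int y1, float bbox_min_z)
{
    if (x0 < 0) x0 = 0;
    if (y0 < 0) y0 = 0;
    if (x1 > w - 1) x1 = w - 1;
    if (y1 > h - 1) y1 = h - 1;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (bbox_min_z < cbuf[y * w + x])
                return true;   /* box is nearer than the occluder here */
    return false;              /* fully occluded: safe to cull */
}
```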

    I have some questions about this approach.

    1 - How can new objects be rendered if we always use the depth buffer from the previous frame?
    2 - How do I relate the rasterized C-Buffer to all my mesh objects? That is: how can I accept or reject an object for rendering using the C-Buffer?

    Thank you for your answers.
    Last edited by nimelord; 12-20-2017 at 12:35 PM.

  2. #2
    Member Regular Contributor | Joined Jul 2012 | Posts: 458
    1 - The key idea is that the view changes very little between two frames. There will definitely be artifacts, but they are 'acceptable'.
    2 - Have a look at OpenGL occlusion queries.
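For reference, hardware occlusion queries follow a pattern roughly like this (a sketch, assuming a current GL 3.3+ context; `drawBoundingBox()` and `drawObject()` are hypothetical helpers, not real GL calls):

```c
GLuint query;
glGenQueries(1, &query);

/* Draw the bounding box with color and depth writes disabled;
 * only the depth test runs, counting samples that would pass. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
drawBoundingBox();                      /* hypothetical helper */
glEndQuery(GL_ANY_SAMPLES_PASSED);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

/* Later, read the result and draw the object only if any sample passed. */
GLuint anyVisible = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &anyVisible);
if (anyVisible)
    drawObject();                       /* hypothetical helper */
```

In practice the result is usually read a frame later, since querying it immediately after the draw stalls the pipeline.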

  3. #3
    Intern Newbie | Joined Nov 2017 | Posts: 30
    Silence, thank you!

  4. #4
    Super Moderator OpenGL Guru dorbie's Avatar
    Join Date
    Jul 2000
    Location
    Bay Area, CA, USA
    Posts
    3,967
    Reprojection of the old buffer to the new one would require expensive fragment-level reprojection of depth, and would produce anomalies at silhouettes that you would have to treat conservatively. Of course you can render new stuff. Understand that your occlusion would have to be conservative in any scheme: new stuff might be hidden or visible. The real challenge is what you do when deleting old stuff. Figure out its bounding box and cut a hole in your occlusion buffer?

    The real problem here is the work involved in performing these tests; many schemes take more performance than they save.

    Simple conservative systems like minimally inclusive bounds, portals, etc. can be useful tools for implementing culling.

    Also, for performance issues, an advanced occlusion culling scheme might not be your best first port of call.

    To answer your specific question of how to accept or reject an object: the answer is simple. Objects are drawn by default; only if a conservative test proves an object is not visible can you cull it. This typically involves conservatively testing its bounds against an occlusion buffer/structure. Conservative in this instance means erring on the side of visibility, particularly in z, both when generating the occlusion buffer/structure and when testing the bounds of the tested objects.

  5. #5
    Intern Newbie | Joined Nov 2017 | Posts: 30
    Thank you, Dorbie!

    I have already implemented frustum culling for the scene and for all the shadow maps (CSM, plus shadow mapping for point lights and spot lights).
    It sped up rendering from 35 FPS to 120 FPS, which I think is a very good result.

    Now I want to start implementing the "Coverage Buffer".

    I will do it step by step.

    I have several technical questions about the first step, 'get the depth buffer':

    1) How can I get the previous frame's depth buffer from the GPU to the CPU?
    2) Is there a way to downscale the depth buffer's resolution on the GPU before reading it back to the CPU side?

  6. #6
    Senior Member OpenGL Guru
    Join Date
    Jun 2013
    Posts
    2,595
    Quote Originally Posted by nimelord View Post
    1) How can I get the previous frame's depth buffer from the GPU to the CPU?
    1. glBindBuffer(GL_PIXEL_PACK_BUFFER)
    2. glReadPixels() or glGetTexImage()
    3. Wait
    4. glMapBuffer() or glGetBufferSubData()


    Step 2 copies the data from the framebuffer (or the texture attached to the framebuffer) to GPU memory, step 4 copies the data from GPU memory to CPU memory. Decoupling these steps avoids stalling the CPU until the commands which generate the frame have completed.
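The four steps above might look roughly like this in code (a sketch, assuming a GL 3+ context and pre-existing `width`/`height` variables; error handling and the frame-of-latency bookkeeping are omitted):

```c
/* Once: create a pixel-pack buffer sized for the depth data. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * sizeof(float),
             NULL, GL_STREAM_READ);

/* After rendering frame N: start the GPU-side copy into the pack buffer.
 * With a pack buffer bound, the last argument is an offset, not a pointer,
 * and the call returns without waiting for the data. */
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, (void *)0);

/* ... do other work, e.g. submit frame N+1 (the "Wait" step) ... */

/* A frame later: map the buffer and read the depth values on the CPU. */
float *depth = (float *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (depth) {
    /* ... feed depth[] into the coverage-buffer reprojection ... */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```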

    Quote Originally Posted by nimelord View Post
    2) Is there a way to downscale the depth buffer's resolution on the GPU before reading it back to the CPU side?
    Render a quad with the source depth buffer bound to a texture image unit and the destination depth buffer attached to the framebuffer.
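The fragment shader for that downscale pass might look like this (a sketch; `uDepthTex` and `vTexCoord` are assumed names). It takes the farthest of a 2x2 neighborhood so the low-resolution buffer errs on the side of visibility, per the conservative-in-z advice earlier in the thread:

```glsl
#version 330 core
uniform sampler2D uDepthTex;   // assumed name: full-resolution depth
in vec2 vTexCoord;

void main()
{
    // Take the FARTHEST of the 2x2 source texels so downscaled
    // occluders never hide more than the full-resolution ones did.
    vec2 texel = 1.0 / vec2(textureSize(uDepthTex, 0));
    float d0 = texture(uDepthTex, vTexCoord).r;
    float d1 = texture(uDepthTex, vTexCoord + vec2(texel.x, 0.0)).r;
    float d2 = texture(uDepthTex, vTexCoord + vec2(0.0, texel.y)).r;
    float d3 = texture(uDepthTex, vTexCoord + texel).r;
    gl_FragDepth = max(max(d0, d1), max(d2, d3));
}
```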
