
Thread: How to tell if a 3D point will be hidden

  1. #1 — Newbie (joined Nov 2013, 2 posts)


    I want to place numbers next to vertices in a model, but only if that vertex is visible. I am using outline fonts, and while they are correctly hidden by foreground geometry, part of the text often gets clipped by adjacent objects. I have an image of this but don't see how to include it in this post; it seems I can only insert images from URLs.


    So I thought that if I know a particular vertex is not hidden, I could disable the depth test so the text would always be visible.

  2. #2 — Senior Member, OpenGL Pro (Australia; joined Jan 2012, 1,098 posts)
    I can think of one way but it seems a bit clumsy.

    1) render the view
    2) render the vertices as points, each with an occlusion query (ARB_occlusion_query)
    3) for those vertices not occluded, render your text with the depth test disabled
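    The three steps above could be sketched roughly as follows. This is only a sketch, not a complete program: it assumes an existing core-profile GL context, a VAO already bound over the model's vertex buffer, and a hypothetical draw_label() routine for the text.

    ```c
    /* Sketch only: assumes a live GL context, one query per vertex,
     * and a hypothetical draw_label() helper that renders the text. */
    GLuint queries[NUM_VERTICES];
    glGenQueries(NUM_VERTICES, queries);

    /* 1) Scene already rendered; the depth buffer is populated. */

    /* 2) Test each vertex as a single point against the depth buffer.
     *    Mask color and depth writes so the test points leave no trace. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    for (int i = 0; i < NUM_VERTICES; ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        glDrawArrays(GL_POINTS, i, 1);   /* one vertex from the model's VBO */
        glEndQuery(GL_SAMPLES_PASSED);
    }
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    /* 3) Draw labels only where the test point passed the depth test. */
    glDisable(GL_DEPTH_TEST);
    for (int i = 0; i < NUM_VERTICES; ++i) {
        GLuint samples = 0;
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);
        if (samples > 0)
            draw_label(i);               /* hypothetical text routine */
    }
    glEnable(GL_DEPTH_TEST);
    ```

    Note that glGetQueryObjectuiv with GL_QUERY_RESULT blocks until each query's result is available, which is part of why this feels clumsy; conditional rendering (mentioned below in this thread) avoids the readback.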

  3. #3 — Dark Photon, Senior Member, OpenGL Guru (Druidia; joined Oct 2004, 3,124 posts)
    As always, there are several ways you can do this. tonyo_au's given you one. You can fetch the occlusion query results back and use them in an "if" check in your app code to decide whether to draw the text. An incrementally more efficient approach is to use those same occlusion queries in combination with conditional rendering (glBeginConditionalRender()...glEndConditionalRender()) to move the "if" check down into the GL driver, and possibly onto the GPU.
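    The conditional-rendering variant might look like this, assuming one occlusion query per vertex was issued during step 2 of tonyo_au's list (draw_label() is a hypothetical text routine, not a real GL call):

    ```c
    /* Sketch: queries[i] is the occlusion query issued for vertex i. */
    glDisable(GL_DEPTH_TEST);
    for (int i = 0; i < NUM_VERTICES; ++i) {
        /* The driver/GPU skips the label draw if query i passed no samples,
         * so the result never has to come back to the CPU. */
        glBeginConditionalRender(queries[i], GL_QUERY_WAIT);
        draw_label(i);                   /* hypothetical text routine */
        glEndConditionalRender();
    }
    glEnable(GL_DEPTH_TEST);
    ```

    GL_QUERY_WAIT makes the GPU wait for the query result before deciding; GL_QUERY_NO_WAIT trades guaranteed culling for less stalling.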

    And if you have a lot of vertices/labels to draw and you can pre-upload your text strings to the GPU (in a texture, etc.), a potentially more efficient approach is to do your own occlusion checks on the GPU and generate your own list of "visible points" to render text for. Something like:

    1) render view into FBO (depth buffer = texture)
    2) Rebind the depth texture as an input sampler to the pipeline
    3) Render all of your vertices with transform feedback:
    - Read depth buffer for vertex location.
    - If vertex not occluded
    - Stream out vertex ID to TF output buffer
    4) Bind vertex output buffer as input to pipeline (as texture buffer, etc.)
    5) Render your text strings for the generated vertex IDs

    The key here is whether step 5 can be pushed entirely to the GPU by your text-label implementation. If not, ignore this last suggestion.
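    The per-vertex depth check in step 3 might look something like this in a vertex shader. This is a sketch under assumptions: all names (u_mvp, u_depth, u_viewport_ignored, visible_id) are placeholders, the depth texture from step 1 is assumed bound to u_depth, and the bias value is arbitrary.

    ```glsl
    #version 330 core
    // Sketch of step 3: test each model vertex against the scene's depth
    // texture; visible_id is captured via transform feedback.
    layout(location = 0) in vec3 in_pos;

    uniform mat4      u_mvp;    // model-view-projection (placeholder name)
    uniform sampler2D u_depth;  // depth texture from the FBO pass

    flat out int visible_id;    // gl_VertexID if visible, -1 if occluded

    void main()
    {
        vec4 clip = u_mvp * vec4(in_pos, 1.0);
        vec3 ndc  = clip.xyz / clip.w;          // normalized device coords
        vec2 uv   = ndc.xy * 0.5 + 0.5;         // window coords in [0,1]
        float scene_depth = texture(u_depth, uv).r;
        float vert_depth  = ndc.z * 0.5 + 0.5;  // match [0,1] depth range

        // Small bias so a vertex doesn't occlude itself.
        visible_id = (vert_depth <= scene_depth + 0.001) ? gl_VertexID : -1;
        gl_Position = clip;
    }
    ```

    A vertex shader always emits one output per input, so to actually shrink the stream a geometry shader stage would conditionally EmitVertex() only when visible_id >= 0 before the transform feedback capture; alternatively, keep the -1 entries and skip them in step 5.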
