
Thread: Discontinuous ray casting through 3D texture

  1. #1
    Junior Member Newbie · Join Date: Jun 2017 · Posts: 3

    Discontinuous ray casting through 3D texture

    Hi,

    I am doing volume rendering of a 3D texture using a basic ray casting algorithm.

    Here is how it works so far : the fragment shader iterates along a ray directed from the eye to the scene, taking samples from the 3D texture at each step. Frag color is generated based upon the data collected along the ray (for example, we may want to display the maximal intensity along the ray).

    As long as we just want to scan the whole 3D texture, that's easy : we send a cube of GL_TRIANGLES through the pipeline, and the vertex shader computes ray entrance location (or ray exit but we use face culling to discard back faces). Then ray exit can be computed easily in the fragment shader based on ray entrance.

    But I would like to display only a subset of the whole 3D texture, defined by some arbitrary bounding geometry. The idea is to be able to discard some parts of the 3D texture without having to change the texture itself. So, instead of sending a cube of vertices to the pipeline, one could send, let's say, a pyramid, and the iteration that takes place in the fragment shader would stay into the bounds of that pyramid.

    As long as the 3D texture subset is a convex polyhedron, it's fine : front and back faces locations can be first stored (as colors) in two separate FBOs, and then passed to the fragment shader as 2D textures.

    Where it gets tricky is when the geometry is not a convex polyhedron, because then a single ray can enter and exit the geometry several times :

    illustration: [attached image: ray casting.jpg]

    So my question is: is there a convenient way in OpenGL to do such "piecewise ray casting"?

    There are two major issues I could not overcome:
    • How to compute the - possibly multiple - [ray entrance, ray exit] pairs using the GPU ?
    • How to transmit those pairs to the fragment shader so that, ideally, a single fragment shader execution can process a complete discontinuous ray ?

    And if none of those is possible, is there another approach I could try?

  2. #2
    Member Regular Contributor · Join Date: May 2016 · Posts: 419
    Quote Originally Posted by MrGruk
    There are two major issues I could not overcome:
    • How to compute the - possibly multiple - [ray entrance, ray exit] pairs using the GPU ?
    • How to transmit those pairs to the fragment shader so that, ideally, a single fragment shader execution can process a complete discontinuous ray ?
    you have to capture the "arbitrary geometry" first; this can be done with "transform feedback objects". after you've captured all triangles (into a buffer object on the GPU), check each of them for intersection with the ray in another shader execution; that way you can determine the points where the ray enters / exits the geometry. i've no clue how you can determine whether a certain part of the ray is within / outside the geometry .. perhaps by sorting the intersection points by their depth and assuming that "alternating" parts are ...

    outside [intersection1] within [intersection2] outside [intersection3] within .. etc
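    on the CPU that sorting / alternating idea would look like this (a minimal C++ sketch; function name is made up, and it assumes the captured mesh is closed so the crossings pair up front-to-back):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Given the depths at which a ray crosses a closed surface, sort them
// front-to-back and pair them up: with a watertight mesh the ray is
// inside the geometry exactly between crossings 0-1, 2-3, 4-5, ...
std::vector<std::pair<float, float>> insideSegments(std::vector<float> hits)
{
    std::sort(hits.begin(), hits.end());
    std::vector<std::pair<float, float>> segments;
    for (std::size_t i = 0; i + 1 < hits.size(); i += 2)
        segments.push_back({hits[i], hits[i + 1]});
    return segments;
}
```

    e.g. crossings at depths {5, 1, 2, 6} give the two inside segments [1, 2] and [5, 6].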

  3. #3
    Junior Member Newbie · Join Date: Jun 2017 · Posts: 3
    It took me quite a while, but thanks to your help I finally managed to make something that works. So thank you very much !

    It is far from perfect. There are still some issues I would like to fix (see below). But it gets the job done, so here are the main steps :
    • Put the bounding geometry in some VBO. It will be used as vertex shader input for both passes
    • 1st pass : storing GL coords of bounding geometry using a feedback buffer
      • Disable face culling and depth testing so every triangle can be processed
      • Enable GL_RASTERIZER_DISCARD in order to stop the pipeline before fragment shader execution
      • create an OpenGL buffer to store transform feedback
      • bind it using glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, bufferId);
      • in vertex shader, just transform vertices to GL coordinates
      • in geometry shader, compute the orientation of each triangle given the
        current camera direction
      • (there is no fragment shader)
      • execute shader program between glBeginTransformFeedback(GL_TRIANGLES) and
        glEndTransformFeedback() calls in order to write on feedback buffer instead of doing a regular rendering. Use bounding geometry as vertex shader input.
        In the feedback buffer object, store, for each triangle :
        • GL coordinates and corresponding 3D texture coordinates
        • triangle orientation (in GL coordinates, and considering current camera)

    • 2nd pass : actual rendering
      • Enable face culling and depth testing so that, for each ray, only the first entrance point is processed
      • Disable GL_RASTERIZER_DISCARD
      • attach the data of the feedback buffer to a texture buffer object (TBO) using glTexBuffer() so it can be used as array input by the fragment shader
      • in vertex shader, just transform vertices to GL coordinates
      • in fragment shader :
        • use "uniform samplerBuffer yourTboName;" to get access to the TBO
        • parse the triangle list contained in the TBO using texelFetch(), and make the list of intersection points between the ray and the triangles. I don't use perspective, so this is purely 2D calculations for me
        • sort that list based on z coordinates of intersection points (from near to far)
        • parse the list and perform the ray casting algorithm for each [ray entrance, ray exit] pair. To know whether an intersection is a ray entrance or a ray exit, just read it from the TBO
        • check for consecutive ray entrance or consecutive ray exit points and make sure to handle those properly. It may happen quite often due to float imprecision
      • execute shader program with bounding geometry as input
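    For reference, the triangle orientation computed in the 1st pass's geometry shader boils down to a winding test on the projected vertices. Here is the same computation as a C++ sketch (the struct and function names are mine):

```cpp
#include <cassert>

struct Vec2 { float x, y; };  // a vertex projected to screen / NDC space

// Twice the signed area of the triangle: the z component of the cross
// product of two edges, which encodes the winding order.
float signedArea2(Vec2 a, Vec2 b, Vec2 c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// With counter-clockwise front faces (the OpenGL default), a positive
// signed area means the triangle faces the camera (a ray entrance);
// negative means it faces away (a ray exit).
bool isFrontFacing(Vec2 a, Vec2 b, Vec2 c)
{
    return signedArea2(a, b, c) > 0.0f;
}
```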


    The clumsy part is the fragment shader of the 2nd pass. There are many issues :
    • Each fragment individually needs to parse all triangles, so performance will drop as the bounding geometry becomes more complex.
    • It feels wrong to check for ray-triangle intersections in the fragment shader because, if my understanding of the OpenGL pipeline is correct, OpenGL already computes those intersection points internally before running the fragment shader (or discarding fragments). And I assume it does so far faster than I do. Would there be a way to avoid recalculating those intersection points in the fragment shader?
    • GLSL is not well suited to building and sorting lists. The only way I found to store intersection points is a fixed-size array in my fragment shader. That means that if there are more intersection points than the array can hold, it doesn't work. But if the array size is too big (say 256), performance drops dramatically from the mere presence of the array, even though most of it is never used (and of course never involved in the sorting). Is there a better way I could handle this in GLSL?
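    To make the last two points concrete, here is the logic of that 2nd-pass fragment loop expressed as a C++ sketch (names are illustrative; the `sample` callback stands in for the 3D texture lookup, and `std::sort` stands in for the hand-rolled GLSL sort):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

struct Hit { float z; bool entrance; };  // one ray / triangle intersection

// Sort the hits front-to-back, then ray-march only inside
// [entrance, exit] pairs, keeping the maximum sampled intensity.
// Consecutive entrance points (caused by float imprecision) are merged
// by scanning forward to the first exit; stray exits are skipped.
float maxIntensity(std::vector<Hit> hits, float step,
                   const std::function<float(float)>& sample)
{
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.z < b.z; });
    float best = 0.0f;
    std::size_t i = 0;
    while (i < hits.size()) {
        if (!hits[i].entrance) { ++i; continue; }   // stray exit: skip it
        std::size_t j = i + 1;                      // find the matching exit
        while (j < hits.size() && hits[j].entrance) ++j;
        if (j == hits.size()) break;
        for (float z = hits[i].z; z <= hits[j].z; z += step)
            best = std::max(best, sample(z));
        i = j + 1;
    }
    return best;
}
```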

  4. #4
    Member Regular Contributor · Join Date: May 2016 · Posts: 419
    regarding the first pass:
    -- you don't need to disable depth testing because it takes place after primitive assembly & rasterization

    regarding the second pass:
    -- you can use a "shader storage buffer" instead of a "texture buffer" (if your system supports GL 4.3)

    regarding sorting intersection points by depth:
    -- you can do that with a "compute shader", for example in a "1.5th pass": your input can be a shader storage buffer containing the intersection points (vec4), and an output (shader storage) buffer into which you write the sorted points. the problem with the "static array" size doesn't exist for shader storage buffers because you can query their size at runtime (besides reading / writing to these buffers)

    now it depends on what you want to do with these points (didn't fully understand what you're trying to render). calculations are usually better done on a "per-vertex" level because there are very likely fewer vertices than fragments, so there are many fragment shader invocations.

    EDIT:
    ok, your fragment color is the sum of several points on the ray within the "arbitrary geometry". maybe you've heard about "order-independent transparency" (OIT), there is an example program for that in the book "OpenGL Programming Guide 8th Edition" (Chapter 11 "Memory"). it may help you with your problem ..

    how it works:
    for each pixel on screen you create a "linked list" of fragments belonging to this pixel. in a 2nd pass, you collect all the "hits" for each pixel, calculate the resulting fragment color and write it to the framebuffer.

    therefore you need:
    -- a "big buffer" containing ALL fragment hits
    -- an integer texture of the screen size
    ----> contains the location (array index) of the first hit at this pixel in "big buffer"
    -- an atomic counter buffer (for indexing)

    a "fragment hit" looks like this:
    --> vec4 fragment color
    --> vec4 fragment position / depth
    --> uint nexthit (array index)



    APPROACH #2:

    step 1: render "arbitrary geometry", capture triangles, discard fragments

    step 2: calculate ray x triangle intersection points, capture those into another buffer

    step 3: generate line segments that are within the "arbitrary geometry"

    step 4: render line segments, subdivide these in the geometry shader into several point primitives, capture the generated fragments into the "big buffer" for hits

    step 5: for each screen pixel, blend the captured fragments together to the "final color"

    maybe the need for the "ray casting" disappears with an OIT approach (?)
    Last edited by john_connor; 06-27-2017 at 09:04 AM.

  5. #5
    Junior Member Newbie · Join Date: Jun 2017 · Posts: 3
    regarding the first pass:
    -- you don't need to disable depth testing because it takes place after primitive assembly & rasterization
    Indeed. Thanks for pointing that out.

    ok, your fragment color is the sum of several points on the ray within the "arbitrary geometry".
    The way ray points are processed depends on the volume rendering technique used, so it is not always as simple as a sum. For example, I want to be able to :
    • find the maximal intensity point along the ray (my 3D texture is monochromatic), and use it as fragment color. This makes a basic volume rendering, like here (I already do this kind of rendering, but now I want to be able to skip some parts of the 3D texture)
    • use additional rendering tricks, such as adding some intensity attenuation as the ray goes deeper into the 3D texture
    • find the nearest point that has its intensity over a given threshold, in order to make surface rendering
    • etc
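    For instance, the threshold-based surface rendering in the third point reduces to a first-hit search along the ray. As a C++ sketch (the `sample` callback stands in for the 3D texture lookup):

```cpp
#include <cassert>
#include <functional>
#include <optional>

// Return the depth of the first sample along the ray whose intensity
// exceeds the threshold, or std::nullopt if the ray never crosses the
// iso-surface. Marching runs front-to-back from zNear to zFar.
std::optional<float> firstAboveThreshold(float zNear, float zFar, float step,
                                         float threshold,
                                         const std::function<float(float)>& sample)
{
    for (float z = zNear; z <= zFar; z += step)
        if (sample(z) > threshold)
            return z;
    return std::nullopt;
}
```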


    maybe you've heard about "order-independent transparency" (OIT), there is an example program for that in the book "OpenGL Programming Guide 8th Edition" (Chapter 11 "Memory"). it may help you with your problem ..

    how it works:
    for each pixel on screen you create a "linked list" of fragments belonging to this pixel. in a 2nd pass, you collect all the "hits" for each pixel, calculate the resulting fragment color and write it to the framebuffer.

    therefore you need:
    -- a "big buffer" containing ALL fragment hits
    -- an integer texture of the screen size
    ----> contains the location (array index) of the first hit at this pixel in "big buffer"
    -- an atomic counter buffer (for indexing)

    a "fragment hit" looks like this:
    --> vec4 fragment color
    --> vec4 fragment position / depth
    --> uint nexthit (array index)
    This looks extremely promising. I think this is exactly what I was looking for. Thanks a lot ! I will definitely give it a try and provide some feedback.

    APPROACH #2:

    step 1: render "arbitrary geometry", capture triangles, discard fragments

    step 2: calculate ray x triangle intersection points, capture those into another buffer

    step 3: generate line segments that are within the "arbitrary geometry"

    step 4: render line segments, subdivide these in the geometry shader into several point primitives, capture the generated fragments into the "big buffer" for hits

    step 5: for each screen pixel, blend the captured fragments together to the "final color"

    maybe the need for the "ray casting" disappears with an OIT approach (?)
    This sounds great too, but I need to keep fine control of what I do with the points sampled along the ray, and I'm afraid blending will not provide enough flexibility. But the geometry shader trick is very interesting. I will probably give it a try too, to see how performance compares with the "ray casting loop inside fragment shader" version.
