Thread: Ray intersection with GLSL

  1. #1
    romulogcerqueira (Junior Member Newbie, Join Date: Dec 2017, Posts: 26)

    Ray intersection with GLSL

    I'm looking to simulate a reflection effect using ray tracing in GLSL, but I could not find good references, examples or tutorials on this topic. The material I did find is limited to specific analytic surfaces (e.g. sphere, box, cylinder...) and requires knowing the object's position and radius beforehand, which is not my case. I also know that GLSL does not support recursive functions, but as far as I know ray tracing can be done iteratively.

    My goal is to simulate the reverberation process for an acoustic sensor as follows: primary reflections by rasterization, and secondary reflections by ray tracing. When a ray hits an object's surface, the distance and normal values are measured.

    My scene, with all its objects, is modeled in OpenSceneGraph, and I use GLSL to retrieve the normal and position data from the viewpoint on the GPU.

    My current GLSL code follows below. At the moment I am able to compute the ray parameters (world-space position and direction vector for each pixel), but I do not know how to compute the hit data when a ray strikes a surface.

    Thanks in advance. Any help is very much welcome.

    Vertex shader:

    Code :
    #version 130
     
    uniform mat4 osg_ViewMatrixInverse;
    uniform vec3 cameraPos;
     
    out vec3 positionEyeSpace;
    out vec3 normalEyeSpace;
    out vec3 positionWorldSpace;    // origin of the secondary (reflected) ray
    out vec3 reflectedDirection;    // direction of the secondary (reflected) ray
     
    // ray definition, with an origin point and a direction vector (not used yet)
    struct Ray {
        vec3 origin;
        vec3 direction;
    };
     
    void main() {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
     
        // world space
        mat4 modelWorld = osg_ViewMatrixInverse * gl_ModelViewMatrix;
        positionWorldSpace = vec3(modelWorld * gl_Vertex);
        vec3 normalWorldSpace = mat3(modelWorld) * gl_Normal;
     
        // eye space
        positionEyeSpace = vec3(gl_ModelViewMatrix * gl_Vertex);
        normalEyeSpace = gl_NormalMatrix * gl_Normal;
     
        // reflection of the incident (camera-to-surface) vector about the surface
        // normal; together with positionWorldSpace this defines the secondary ray
        vec3 I = normalize(positionWorldSpace - cameraPos);
        vec3 N = normalize(normalWorldSpace);
        reflectedDirection = normalize(reflect(I, N));
    }

    Fragment shader:

    Code :
    #version 130
     
    in vec3 positionEyeSpace;
    in vec3 normalEyeSpace;
     
    uniform float farPlane;
    uniform bool drawNormal;
    uniform bool drawDepth;
     
    out vec4 out_data;
     
    void main() {
        vec3 nNormalEyeSpace = normalize(normalEyeSpace);
     
        // unit vector from the fragment towards the camera (eye space)
        vec3 nPositionEyeSpace = normalize(-positionEyeSpace);
     
        // distance from the camera, normalized by the far plane
        float linearDepth = length(positionEyeSpace) / farPlane;
     
        // output the normal and depth data as a matrix
        out_data = vec4(0.0, 0.0, 0.0, 1.0);
        if (linearDepth <= 1.0) {
            if (drawNormal) out_data.z = abs(dot(nPositionEyeSpace, nNormalEyeSpace));
            if (drawDepth)  out_data.y = linearDepth;
        }
     
        gl_FragDepth = linearDepth;
    }

  2. #2
    GClements (Senior Member OpenGL Guru, Join Date: Jun 2013, Posts: 2,925)
    The first surface is easy; as you note, you can just rasterise the surface, calculating the incident vector by subtracting the eye position from the fragment position, then the reflected vector from the incident vector and surface normal. At that point, you have a ray defined by starting position and direction.

    That's where it gets complicated: given a ray, you need to find the first surface which that ray intersects. This problem is central to any form of ray tracing, whether on the CPU or GPU. If the number of surfaces is small, you can just test each one. Otherwise, you need some form of spatial index (3D array, octree, BSP tree, bounding volume hierarchy, etc.). Most techniques can be adapted for use in GLSL, provided that they don't rely on specific libraries.
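
    If the scene is small enough for the brute-force approach, the GLSL is roughly the sketch below. Here rayTriangle(i, origin, dir) and numTriangles are placeholders for however the triangles end up being stored and tested; rayTriangle() is assumed to return the ray parameter t of the hit, or a negative value on a miss.

    Code :
    // Sketch only: test the ray against every triangle and keep the nearest hit.
    // rayTriangle() and numTriangles are placeholders (see note above).
    uniform int numTriangles;
     
    struct HitInfo {
        float t;      // ray parameter of the nearest hit (1e30 means "no hit yet")
        int   index;  // index of the triangle that was hit (-1 means "no hit")
    };
     
    HitInfo traceNearest(vec3 origin, vec3 dir)
    {
        HitInfo best;
        best.t = 1e30;
        best.index = -1;
        for (int i = 0; i < numTriangles; ++i) {
            float t = rayTriangle(i, origin, dir);
            if (t > 0.0 && t < best.t) {
                best.t = t;
                best.index = i;
            }
        }
        return best;
    }

    A spatial index essentially replaces this linear loop with a traversal that only visits candidate triangles.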

    So I would suggest just looking at articles on raytracing in general, finding one which fits your particular scene structure and use case, then figuring out how to implement that in GLSL.

  3. #3
    romulogcerqueira (Junior Member Newbie, Join Date: Dec 2017, Posts: 26)
    Hi GClements,

    thanks for the quick reply. I agree that the first surface is easy with rasterization, and that the ray properties (starting position and direction) can be calculated at the reflecting surface. My idea is to trace a single ray for each reflected point; this approach should save computation time. A priori, I have no information about the objects that compose the 3D scene (position, shape...).

    A spatial index for the objects in the scene is a good suggestion. I will also look in that direction.

    By the way, I still have a doubt: if I use ray tracing for the first surface, and I know the objects' positions a priori, how can I calculate the intersection between a ray and an object's surface?

    Thanks a lot.
    Last edited by romulogcerqueira; 02-28-2018 at 03:27 PM.

  4. #4
    romulogcerqueira (Junior Member Newbie, Join Date: Dec 2017, Posts: 26)
    GClements,

    do you have any example of ray tracing with GLSL?

  5. #5
    GClements (Senior Member OpenGL Guru, Join Date: Jun 2013, Posts: 2,925)
    Quote Originally Posted by romulogcerqueira
    By the way, I still have a doubt: If I use ray tracing for the first surface, and I know a priori the objects' positions, how can I calculate the intersection between the ray and any object surface?
    That depends upon the nature of the surface. For a triangle, you'd treat the vertex positions as a 3x3 matrix which transforms barycentric coordinates to spatial coordinates:
    Code :
    [x]   [x1 x2 x3] [a]
    [y] = [y1 y2 y3].[b]
    [z]   [z1 z2 z3] [c]
    Inverting the matrix gives a matrix M which transforms spatial coordinates to barycentric coordinates.
    Code :
    [a]     [x]
    [b] = M.[y]
    [c]     [z]
    Note that this only needs to be done once, when the geometry is defined, not every frame.

    For each ray p=s+t*d (in world space), transform the start position and direction of the ray into barycentric coordinates:
    b=M.s+t*M.d

    Finding the intersection with the triangle's plane is just solving a+b+c=1 for t, i.e. u+t*v=1 => t=(1-u)/v, where u and v are the sums of the coordinates of M.s and M.d respectively. Note that if v=0 then the ray is parallel to the plane of the triangle and there is no intersection. Substituting t back into [a b c]^T = M.s + t*M.d gives the barycentric coordinates of the intersection point; if all three are positive, the intersection point lies inside the triangle, otherwise it's outside.
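
    In GLSL that works out to roughly the sketch below; the function name and the use of -1.0 as a "no hit" value are just for illustration, and M is the inverse of the 3x3 matrix whose columns are the triangle's vertices, computed once on the CPU and uploaded with the geometry.

    Code :
    // Sketch: ray-triangle intersection using a precomputed inverse matrix M
    // that maps world-space positions to barycentric coordinates.
    // Returns the ray parameter t of the hit, or -1.0 if there is no hit.
    float intersectTriangleBary(mat3 M, vec3 s, vec3 d)
    {
        vec3 Ms = M * s;                    // ray origin in barycentric coordinates
        vec3 Md = M * d;                    // ray direction in barycentric coordinates
     
        float u = Ms.x + Ms.y + Ms.z;
        float v = Md.x + Md.y + Md.z;
        if (abs(v) < 1e-6) return -1.0;     // ray parallel to the triangle's plane
     
        float t = (1.0 - u) / v;            // solve a+b+c = 1 for t
        if (t < 0.0) return -1.0;           // intersection is behind the ray origin
     
        vec3 bc = Ms + t * Md;              // barycentric coordinates of the hit point
        if (any(lessThan(bc, vec3(0.0))))
            return -1.0;                    // hit point lies outside the triangle
     
        return t;
    }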

    Quote Originally Posted by romulogcerqueira
    do you have any example with RayTracing and GLSL?
    Only this example of ray-sphere intersection. But it's only the first surface; it doesn't handle reflection. And it's only one object; the spatial index (so you aren't testing every ray against every surface) tends to be the hard part.

  6. #6
    romulogcerqueira (Junior Member Newbie, Join Date: Dec 2017, Posts: 26)
    Hi GClements,

    I'm going to take that approach: store all the objects as triangles and pass that information to the shader. I will post my progress here.

  7. #7
    romulogcerqueira (Junior Member Newbie, Join Date: Dec 2017, Posts: 26)
    Hi GClements,

    I am now able to store all the vertices of the 3D models as triangles, pass them to the shader as a texture, and perform the ray-triangle intersection using classic algorithms (e.g. the Möller–Trumbore ray-triangle intersection).
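
    Fetching one triangle from such a texture looks roughly like the sketch below; the sampler name and the packing (three RGB32F texels per triangle in a single row) are just illustrative assumptions.

    Code :
    // Sketch only: read the three vertices of triangle i from a float texture,
    // assuming they are stored in texels 3*i, 3*i+1 and 3*i+2 of row 0.
    uniform sampler2D triangleTex;
     
    void fetchTriangle(int i, out vec3 v0, out vec3 v1, out vec3 v2)
    {
        v0 = texelFetch(triangleTex, ivec2(3 * i + 0, 0), 0).xyz;
        v1 = texelFetch(triangleTex, ivec2(3 * i + 1, 0), 0).xyz;
        v2 = texelFetch(triangleTex, ivec2(3 * i + 2, 0), 0).xyz;
    }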

    My idea was to organize these triangles in a k-d tree, and I got excellent query performance with it in C++ (it is a really good approach). However, my problem came when I tried to port the nearest-neighbor search to GLSL, since shaders do not allow recursive calls. Do you know how I can work around this?

    My current code in C++ is:

    Code :
    // Find the node of the k-d tree nearest to nd (recursive version).
    // KDnode and dist() (squared distance between two nodes) are defined elsewhere.
    void nearest(KDnode *root, KDnode *nd, int i, int dim, KDnode **best, double *best_dist, int *visited)
    {
        if (!root)
            return;
     
        double d = dist(root, nd);
        double dx = root->data[3][i] - nd->data[3][i];
        double dx2 = dx * dx;
     
        (*visited)++;
     
        if (!*best || d < *best_dist)
        {
            *best_dist = d;
            *best = root;
        }
     
        // If chance of exact match is high
        if (!*best_dist)
            return;
     
        // advance to the next splitting dimension
        i = (i + 1) % dim;
     
        // search the subtree on the near side of the splitting plane first
        nearest(dx > 0 ? root->left : root->right, nd, i, dim, best, best_dist, visited);
     
        // search the far side only if it could contain a closer point
        if (dx2 < *best_dist) {
            nearest(dx > 0 ? root->right : root->left, nd, i, dim, best, best_dist, visited);
        }
    }
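
    (A recursive search like this is usually converted for GLSL by replacing the recursion with an explicit, fixed-size stack. The sketch below illustrates the idea, assuming the tree has been flattened into arrays or textures, with kdPoint(), kdAxis(), kdLeft() and kdRight() as placeholder fetch functions returning a node's point, splitting axis and child indices, and -1 meaning "no child".)

    Code :
    // Sketch only: iterative nearest-neighbor search with an explicit stack.
    // kdPoint(), kdAxis(), kdLeft(), kdRight() are placeholder fetch functions
    // over a flattened tree (e.g. stored in textures); -1 marks a missing child.
    const int MAX_STACK = 32;
     
    int nearestNode(vec3 query)
    {
        int stack[MAX_STACK];
        int top = 0;
        stack[top++] = 0;                      // start at the root (index 0)
     
        int best = -1;
        float bestDist = 1e30;                 // squared distance to the best node so far
     
        while (top > 0) {
            int node = stack[--top];
            if (node < 0) continue;            // skip missing children
     
            vec3 p = kdPoint(node);
            float d = dot(query - p, query - p);
            if (d < bestDist) { bestDist = d; best = node; }
     
            int axis = kdAxis(node);           // splitting axis of this node
            float dx = p[axis] - query[axis];  // signed distance to the splitting plane
     
            int nearChild = (dx > 0.0) ? kdLeft(node)  : kdRight(node);
            int farChild  = (dx > 0.0) ? kdRight(node) : kdLeft(node);
     
            // the far side can only contain a closer point if the splitting
            // plane is closer than the best (squared) distance found so far
            if (dx * dx < bestDist && top < MAX_STACK) stack[top++] = farChild;
            if (top < MAX_STACK) stack[top++] = nearChild;
        }
        return best;
    }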

    Thanks in advance,

    Rômulo.
    Last edited by romulogcerqueira; 07-23-2018 at 06:28 PM.
