
View Full Version : Ray Cast / Ray Tracing



Asmodeus
07-13-2015, 06:15 AM
Hello, I would like to ask some theoretical questions here, combined with some pseudo code to explain what I need or mean. Recently I implemented traditional ray casting from the camera's position into the scene. I will divide my post in two parts:

PART ONE:
Ray Unprojecting from far to near plane


float mouse_x = (float)InputState::getMOUSE_X();
float mouse_y = WINDOW_HEIGHT - (float)InputState::getMOUSE_Y();
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, WINDOW_WIDTH, WINDOW_HEIGHT);
this->ray_start = glm::unProject(glm::vec3(mouse_x, mouse_y, 0.0f), camera->getViewMatrix(), Projection, viewport);
this->ray_end = glm::unProject(glm::vec3(mouse_x, mouse_y, 1.0f), camera->getViewMatrix(), Projection, viewport);


Then I implemented a simple function to detect whether the ray has collided with a bounding sphere.


glm::vec3 vSphereCenter = Sphere->getCenter();
glm::vec3 vA = this->ray_start;
glm::vec3 vB = this->ray_end;
float fSphereRadius = Sphere->getRadius();

glm::vec3 vDirToSphere = vSphereCenter - vA;
glm::vec3 vLineDir = glm::normalize(vB - vA);
float fLineLength = glm::distance(vA, vB);
float t = glm::dot(vDirToSphere, vLineDir);
glm::vec3 vClosestPoint;

if (t <= 0.0f)
vClosestPoint = vA;
else if (t >= fLineLength)
vClosestPoint = vB;
else
vClosestPoint = vA + vLineDir*t;

return glm::distance(vSphereCenter, vClosestPoint) <= fSphereRadius;


So far so good. The code above works. Here start my broken "theory" / questions. Let's say we have a scene that consists of models, each with an auto-generated bounding sphere. We want to check which object the player has clicked on, i.e. which object the ray intersects with. At first I didn't consider that several models can lie along the ray (in each axis); that's why you should loop over all the models that have been hit and find the one closest to the camera. Look at the pseudo code mix below, with some comments added.


//Here i present you the Pseudo C++ mix :D
std::vector<Model*> scene_models;    //All models in the scene
std::vector<Model*> selected_models; //All models that have been hit by the ray

//Loop over every model and collect the ones the ray intersects
for (size_t i = 0; i < scene_models.size(); ++i)
{
    if (RayIntersectsWith(scene_models[i]->getBoundSphere()))
    {
        selected_models.push_back(scene_models[i]);
    }
}
//Loop over all intersected models and find the one closest to the camera
for (size_t i = 0; i < selected_models.size(); ++i)
{
    //distance(camera->pos, selected_models[i]->getBoundSphere()->getCenter());
    //Track the smallest distance -> closest_model_to_the_camera
}
//Do something with closest_model_to_the_camera

My question is as follows: Is this the right way to go? Would it work? Is there a more efficient way to do this?
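A minimal sketch of the second loop above (with a hypothetical Vec3 struct standing in for glm::vec3): collect the hit sphere centers, then keep the one with the smallest camera distance. Note that comparing center distances is an approximation; spheres with very different radii could order differently by surface distance.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

float dist(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Second pass from the pseudo code above: among the sphere centers the ray
// hit, return the index of the one closest to the camera (-1 if none).
int closestToCamera(const Vec3& cameraPos, const std::vector<Vec3>& hitCenters)
{
    int best = -1;
    float bestDist = 0.0f;
    for (std::size_t i = 0; i < hitCenters.size(); ++i)
    {
        float d = dist(cameraPos, hitCenters[i]);
        if (best < 0 || d < bestDist)
        {
            best = static_cast<int>(i);
            bestDist = d;
        }
    }
    return best;
}
```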


PART TWO
This part is not connected with part one. Here I wanted to ask how one would implement a terrain-responsive ray: a ray that can return the world coordinates on a 3D terrain. Currently I have something that works, but not that well. At the moment I take the ray and perform a binary search to find the point on the terrain.


//What i mean by binary search:
Take the starting point of the ray. Set the ray length.
1. Take the middle point of the ray.
2. Determine whether that point is above or below the terrain.
3. If the point is below the terrain, take the upper half; else take the lower half of the ray.
4. Repeat steps 1-3 N times (recursively).
//This is not a very accurate or efficient way, but it does find the world position on the terrain; the more times you repeat the process (the bigger the N), the more accurate the result is.


It does work; the only problem is that I have to use a large camera angle (near 90 degrees). The terrain I am using is generated from a height map.
My question is: Is there a stable, reliable way to return the world coordinates on the terrain from the ray?
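The binary-search refinement described above can be sketched as follows (hypothetical Vec3 and terrainHeight stand-ins; a real version would sample the height map). As pointed out later in the thread, this only finds *an* intersection straddled by the two endpoints, not necessarily the one nearest the viewpoint.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical stand-in for a height-map lookup: a flat terrain at y = 0.
float terrainHeight(float /*x*/, float /*z*/) { return 0.0f; }

Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t};
}

// Steps 1-4 from the post: repeatedly halve the segment, keeping the half
// that straddles the surface. Assumes the start is above and the end below.
Vec3 binarySearchTerrain(Vec3 start, Vec3 end, int iterations)
{
    for (int i = 0; i < iterations; ++i)
    {
        Vec3 mid = lerp(start, end, 0.5f);
        if (mid.y > terrainHeight(mid.x, mid.z))
            start = mid; // midpoint is above: intersection is in the far half
        else
            end = mid;   // midpoint is below: intersection is in the near half
    }
    return lerp(start, end, 0.5f);
}
```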

CodcULaval
07-13-2015, 07:42 AM
//Here i present you the Pseudo C++ mix :D
std::vector<Model*> scene_models;    //All models in the scene
std::vector<Model*> selected_models; //All models that have been hit by the ray

//Loop over every model and collect the ones the ray intersects
for (size_t i = 0; i < scene_models.size(); ++i)
{
    if (RayIntersectsWith(scene_models[i]->getBoundSphere()))
    {
        selected_models.push_back(scene_models[i]);
    }
}
//Loop over all intersected models and find the one closest to the camera
for (size_t i = 0; i < selected_models.size(); ++i)
{
    //distance(camera->pos, selected_models[i]->getBoundSphere()->getCenter());
    //Track the smallest distance -> closest_model_to_the_camera
}
//Do something with closest_model_to_the_camera

My question is as follows: Is this the right way to go? Would it work? Is there a more efficient way to do this?

I'm not entirely sure whether it would be more efficient in your case, but couldn't you create a 2D array (full of 0's) representing what the camera is looking at and put a unique index at the locations where there's supposed to be a model? The index could then be used in a hash/lookup table to find a pointer to the model(s). That way, all you'd have to do is find the 2D position of the mouse, transform that position into indices, look up whether there's something in the array at those indices, and run your 'closest to camera' function only on the models represented by the index. In cases where you have loads of models, you'd at least save the time you'd waste iterating over non-useful models.

Asmodeus
07-13-2015, 08:58 AM
Yeah, it's possible. Also keep in mind that the vector of models won't represent all models in the world, only those closest to the camera / those visible to it, so there won't be that many. Also, a lot of the scene is usually static. I was just asking whether my approach is correct. Am I on the right path?

CodcULaval
07-13-2015, 11:11 AM
Yeah, it's possible. Also keep in mind that the vector of models won't represent all models in the world, only those closest to the camera / those visible to it, so there won't be that many. Also, a lot of the scene is usually static. I was just asking whether my approach is correct. Am I on the right path?

If you only need to know if it works, then yes, I do believe that your method would work.

Asmodeus
07-13-2015, 12:12 PM
One question though: I'm having a hard time calculating the bounding sphere. Here is a sample snippet. The sphere does seem correct, but I don't think the center is calculated correctly. For example, with a simple tree the sphere tends to be around the stem of the tree; likewise, for a human-type mesh the sphere's center is around the feet/knees, while I would expect it to be above the waist.


void BoundingSphere::calculateBoundingSphere(const std::vector<glm::vec3>& vertices)
{
    vec3 _center = vec3(0.0f, 0.0f, 0.0f);
    for (size_t i = 0; i < vertices.size(); i++) {
        _center += vertices[i];
    }
    _center /= (float)vertices.size();

    float _radiusSq = 0.0f;
    for (size_t i = 0; i < vertices.size(); i++) {
        vec3 v = vertices[i] - _center;
        float distSq = glm::dot(v, v); // squared distance; glm::length would already take the root
        if (distSq > _radiusSq)
            _radiusSq = distSq;
    }
    this->center = _center;
    this->radius = sqrtf(_radiusSq);
}


EDIT: Also, since my meshes are mostly uneven, I should probably move to an OBB or AABB.

GClements
07-13-2015, 02:02 PM
This part is not connected with part one. Here I wanted to ask how one would implement a terrain-responsive ray: a ray that can return the world coordinates on a 3D terrain. Currently I have something that works, but not that well. At the moment I take the ray and perform a binary search to find the point on the terrain.
Unless I'm misunderstanding your approach, it's incorrect. If the ray intersects the terrain in multiple locations, it won't necessarily find the intersection which is closest to the viewpoint.

You can't do this by divide-and-conquer unless you have quadtrees (i.e. mipmaps) containing the minimum and maximum heights for each node. Even then, you may need to recurse into the nearest node first before determining that there's no collision there then recursing into the farther nodes. IOW, in the worst case (for a ray tangential to the surface) it's O(n) not O(log(n)).


My question is: Is there some stable, reliable way to return the world coordinates on terrain from the ray.
You know (or presumably can easily find out) the start and end points of the ray in world coordinates, which can be interpolated to find any point on the ray.

GClements
07-13-2015, 02:11 PM
One question though: I'm having a hard time calculating the bounding sphere. Here is a sample snippet. The sphere does seem correct, but I don't think the center is calculated correctly. For example, with a simple tree the sphere tends to be around the stem of the tree; likewise, for a human-type mesh the sphere's center is around the feet/knees, while I would expect it to be above the waist.

Your calculation will find a point which is biased toward whichever side of the model has more vertices. There are various algorithms for finding the minimum bounding sphere or a close approximation to it (Link (https://en.wikipedia.org/wiki/Bounding_sphere#Algorithms)), but they're significantly more complex. A simple alternative is to use the midpoint of the bounding box.

From your descriptions of the behaviour with a human and a tree, it sounds as if the vertical component is inverted. Are you neglecting to apply any relevant transformations to the centroid?
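The bounding-box-midpoint alternative mentioned above can be sketched like this (hypothetical Vec3 stand-in; real code would use glm::vec3). The center no longer drifts toward vertex-dense regions such as the tree stem:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Center the sphere on the midpoint of the axis-aligned bounding box
// (immune to vertex-density bias), then take the radius as the largest
// distance from that center to any vertex.
void sphereFromBoundingBox(const std::vector<Vec3>& verts, Vec3& center, float& radius)
{
    Vec3 mn = verts[0], mx = verts[0];
    for (const Vec3& v : verts)
    {
        mn.x = std::fmin(mn.x, v.x); mx.x = std::fmax(mx.x, v.x);
        mn.y = std::fmin(mn.y, v.y); mx.y = std::fmax(mx.y, v.y);
        mn.z = std::fmin(mn.z, v.z); mx.z = std::fmax(mx.z, v.z);
    }
    center = {(mn.x + mx.x) * 0.5f, (mn.y + mx.y) * 0.5f, (mn.z + mx.z) * 0.5f};
    radius = 0.0f;
    for (const Vec3& v : verts)
    {
        float dx = v.x - center.x, dy = v.y - center.y, dz = v.z - center.z;
        radius = std::fmax(radius, std::sqrt(dx * dx + dy * dy + dz * dz));
    }
}
```

With a "tree-like" vertex set (many vertices near the ground, one at the top), the centroid method would sit near the ground, while this center lands halfway up.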

Asmodeus
07-13-2015, 02:33 PM
Well, I apply scaling to the sphere and transform its position, if that's what you mean. While I was waiting for new responses here I managed to create simple AABB generation for meshes, and then found a very useful (custom) algorithm that finds the intersection between a ray and an OBB. The nice thing is that you can pass an AABB plus a model matrix, and the intersection-check function will convert it into an OBB, check for intersection, and return the result. It's handy and pretty accurate. One thing I'm missing: I found out that if I pass only Translation*Rotation, it all works smoothly (pixel-perfect detection), but if I pass Translation*Rotation*Scaling as the model matrix, the intersection is detected as if the OBB were downscaled instead of the opposite.
Also, I forgot to mention that I scale the AABB max and min by the mesh's scaling factor. (I assume this is correct, since if I don't do this the box keeps the mesh's base size.)

GClements
07-13-2015, 02:50 PM
One thing I'm missing: I found out that if I pass only Translation*Rotation, it all works smoothly (pixel-perfect detection), but if I pass Translation*Rotation*Scaling as the model matrix, the intersection is detected as if the OBB were downscaled instead of the opposite.
If you're transforming the ray into the coordinate system of the AABB, you need to use the inverse matrix.

If you're composing the matrix from primitive transformations, you can avoid a generalised matrix inverse by using (A*B)-1=B-1*A-1 and inverting the individual transformations. Rotation can be inverted by negating the angle, translation by negating the vector, scaling by using the reciprocal of the scale factors.
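The point about composing individual inverses can be checked numerically. A sketch with hypothetical toWorld/toObject helpers (rotation about the Y axis only): applying the forward Translate*Rotate*Scale and then the composed inverse Scale⁻¹*Rotate⁻¹*Translate⁻¹ returns the original point.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 scale(const Vec3& p, const Vec3& s)     { return {p.x * s.x, p.y * s.y, p.z * s.z}; }
Vec3 translate(const Vec3& p, const Vec3& t) { return {p.x + t.x, p.y + t.y, p.z + t.z}; }
Vec3 rotateY(const Vec3& p, float a)         // rotate about the Y axis by angle a
{
    float c = std::cos(a), s = std::sin(a);
    return {c * p.x + s * p.z, p.y, -s * p.x + c * p.z};
}

// Forward: world = Translate * Rotate * Scale * object
Vec3 toWorld(const Vec3& p, const Vec3& t, float angle, const Vec3& s)
{
    return translate(rotateY(scale(p, s), angle), t);
}

// Inverse via (A*B*C)^-1 = C^-1 * B^-1 * A^-1: undo translation (negated
// vector), then rotation (negated angle), then scale (reciprocal factors).
Vec3 toObject(const Vec3& p, const Vec3& t, float angle, const Vec3& s)
{
    Vec3 q = translate(p, {-t.x, -t.y, -t.z});
    q = rotateY(q, -angle);
    return scale(q, {1.0f / s.x, 1.0f / s.y, 1.0f / s.z});
}
```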

Asmodeus
07-13-2015, 03:05 PM
As far as I can see, the function does not take the scaling into account. Here is a sample snippet from the code; it shows the extraction of position and rotation from the matrix.


glm::vec3 OBBposition_worldspace(ModelMatrix[3].x, ModelMatrix[3].y, ModelMatrix[3].z);
glm::vec3 delta = OBBposition_worldspace - ray_origin;
{

glm::vec3 xaxis(ModelMatrix[0].x, ModelMatrix[0].y, ModelMatrix[0].z);
float e = glm::dot(xaxis, delta);
float f = glm::dot(ray_direction, xaxis);

// ... the same calculations follow for the y and z axes


Regarding my Part 2 question about the terrain, could you elaborate?

"You know (or presumably can easily find out) the start and end points of the ray in world coordinates, which can be interpolated to find any point on the ray."
I have the start and end points of the ray. But what do you mean by interpolating?

GClements
07-13-2015, 06:41 PM
As far as I can see, the function does not take the scaling into account. Here is a sample snippet from the code; it shows the extraction of position and rotation from the matrix.
Any scale factor will be reflected in the magnitudes of e and f (the dot product of two vectors is equal to the product of their magnitudes multiplied by the cosine of the angle between them).

If the code is written on the assumption that the axes are unit-length vectors, scaling will interfere with that. Or the scale factors may end up cancelling out.



Regarding my Part 2 question about the terrain, could you elaborate?

"You know (or presumably can easily find out) the start and end points of the ray in world coordinates, which can be interpolated to find any point on the ray."
I have the start and end points of the ray. But what do you mean by interpolating?

For instance, when you find the midpoint, if you have the start and end points in world coordinates, then the average of those two values will be the midpoint in world coordinates.

Asmodeus
07-14-2015, 04:15 AM
Well, is there any nice terrain-ray intersection algorithm available out there? Mine is just for testing and not accurate enough.
I will probably use both sphere and box intersections: a slightly "loose" sphere around the mesh, and if the ray intersects it, I pass on to the more accurate box intersection. That should save significant time when testing more objects.

EDIT: I was wondering, since I have generated an AABB, how I could convert that to an OBB. (This is done in the intersection-check function, as I pass the matrices, but I was wondering what it would look like outside of that.)

GClements
07-14-2015, 07:05 AM
Well, is there any nice terrain-ray intersection algorithm available out there? Mine is just for testing and not accurate enough.
Project the ray onto the ground plane and trace the resulting line through the height map (as if you were drawing a line on a bitmap surface, e.g. Bresenham's algorithm or DDA) starting at the edge nearest the viewpoint.

If you know the minimum and maximum height values, find the intersection between the ray and the corresponding planes. Then you only need to trace a line between those two points rather than across the entire height map.

If you want better average-case performance at the expense of complexity, generate two mipmaps for the minimum and maximum heights. You effectively then have a series of successively finer grids of AABBs bounding the terrain. You only need to examine the higher-resolution data when the lower-resolution data indicates a potential intersection.
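A simplified version of the line-trace idea (fixed steps along the dominant horizontal axis rather than exact Bresenham/DDA cell stepping), with a hypothetical toy 4x4 height map:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical 4x4 height map, one height per cell.
const int W = 4, H = 4;
float heightMap[H][W] = {
    {0, 0, 0, 0},
    {0, 0, 0, 0},
    {0, 0, 2, 0},
    {0, 0, 0, 0},
};

// March the ray cell by cell along its projection onto the ground plane
// (one step per unit of the dominant horizontal axis) and report the first
// cell where the ray dips below the stored terrain height.
bool traceHeightMap(float ox, float oy, float oz,
                    float dx, float dy, float dz,
                    int& cellX, int& cellZ)
{
    float steps = std::fmax(std::fabs(dx), std::fabs(dz));
    if (steps < 1e-6f)
        return false; // ray is vertical; would need a single-cell test instead
    float sx = dx / steps, sy = dy / steps, sz = dz / steps;
    float x = ox, y = oy, z = oz;
    for (int i = 0; i <= (int)steps; ++i)
    {
        int cx = (int)x, cz = (int)z;
        if (cx >= 0 && cx < W && cz >= 0 && cz < H && y <= heightMap[cz][cx])
        {
            cellX = cx;
            cellZ = cz;
            return true;
        }
        x += sx; y += sy; z += sz;
    }
    return false;
}
```

Because the march starts at the ray origin and moves forward, the first hit it reports is the nearest one, which the midpoint binary search cannot guarantee.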

Asmodeus
07-14-2015, 09:31 AM
I will leave the terrain out for now, because I have something that works to some extent. But I was looking around and couldn't find any meaningful tutorial on how to generate OBBs.

Asmodeus
07-15-2015, 06:10 AM
As in my last post, I am still searching for a good OBB explanation: how it's done, with more in-depth information. Can it be generated from an AABB? If possible, some samples. Thanks.

GClements
07-15-2015, 06:46 AM
Well, an OBB (oriented bounding box) is just a bounding box which isn't necessarily axis-aligned.

An OBB is typically an AABB plus a transformation. Testing for intersection with a ray can be performed by transforming the ray by the inverse transformation then using an AABB test.
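The AABB half of that test is the classic "slab" method; a sketch (plain float arrays rather than glm types). To test an OBB, you would first transform the ray by the box's inverse matrix, then call this:

```cpp
#include <cassert>
#include <cmath>

// Slab test: the ray hits the box iff the parameter intervals where it is
// between each pair of axis-aligned planes all overlap.
bool rayIntersectsAABB(const float origin[3], const float dir[3],
                       const float boxMin[3], const float boxMax[3])
{
    float tNear = -1e30f, tFar = 1e30f;
    for (int axis = 0; axis < 3; ++axis)
    {
        if (std::fabs(dir[axis]) < 1e-8f)
        {
            // Ray parallel to this slab: it must already lie inside it.
            if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis])
                return false;
            continue;
        }
        float t1 = (boxMin[axis] - origin[axis]) / dir[axis];
        float t2 = (boxMax[axis] - origin[axis]) / dir[axis];
        if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
        if (t1 > tNear) tNear = t1;
        if (t2 < tFar)  tFar = t2;
        if (tNear > tFar) return false; // intervals no longer overlap
    }
    return tFar >= 0.0f; // otherwise the box is entirely behind the origin
}
```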

Both of these are special cases of a convex hull. The specialisation just allows some of the calculations to be simplified. E.g. for an AABB, the calculation of the dot product with a face normal involves extracting one of the three coordinates (in essence, you're multiplying by values which are zero or one, then adding three values of which two are known to be zero).

The convex hull of a set of points can be found deterministically, using e.g. Quickhull (https://en.wikipedia.org/wiki/Quickhull) or other methods (https://en.wikipedia.org/wiki/Convex_hull_algorithms). This can be used as the basis for finding an OBB or bounding sphere (you only need to consider the vertices of the convex hull rather than all vertices).

Asmodeus
07-15-2015, 07:22 AM
Aha, but wait. This is what I've got so far (from reading around): an AABB is axis-aligned, and you can apply translation or scaling; the same applies to an OBB, but you can add rotation? My question was more about how to actually compute one. I found various examples. Below is a simple (at first glance) code snippet I found. Once I started tracing the methods invoked in this constructor, things got real: the covariance calculation here invokes several other complex methods.


void OBB::Set( const Vector3* points, unsigned int nPoints )
{
    // ASSERT( points );

    Vector3 centroid;

    // compute covariance matrix
    Matrix33 C;
    ComputeCovarianceMatrix( C, centroid, points, nPoints );

    // get basis vectors
    Vector3 basis[3];
    GetRealSymmetricEigenvectors( basis[0], basis[1], basis[2], C );
    mRotation.SetColumns( basis[0], basis[1], basis[2] );

    Vector3 min(FLT_MAX, FLT_MAX, FLT_MAX);
    Vector3 max(-FLT_MAX, -FLT_MAX, -FLT_MAX); // note: -FLT_MAX, not FLT_MIN (the smallest positive float)

    // compute min, max projections of box on axes
    // for each point do
    unsigned int i;
    for ( i = 0; i < nPoints; ++i )
    {
        Vector3 diff = points[i] - centroid;
        for ( int j = 0; j < 3; ++j )
        {
            float length = diff.Dot(basis[j]);
            if ( length > max[j] )
            {
                max[j] = length;
            }
            else if ( length < min[j] )
            {
                min[j] = length;
            }
        }
    }

    // compute center, extents
    mCenter = centroid;
    for ( i = 0; i < 3; ++i )
    {
        mCenter += 0.5f*(min[i]+max[i])*basis[i];
        mExtents[i] = 0.5f*(max[i]-min[i]);
    }

} // End of OBB::Set()



Then I found another example on Stack Overflow: http://stackoverflow.com/questions/26530219/obb-rotation-calculation, where the OP uses a completely different approach to construct the box.
The problem is that I can't find ONE definitive way of doing this.

GClements
07-15-2015, 11:07 AM
The problem is that I can't find ONE definitive way of doing this.
The reason for that is that computing an optimal OBB (for some definition of "optimal"; there's more than one) would be insanely slow for any non-trivial mesh, so you're left with a choice between various algorithms, some of which are closer to optimal, some are faster, some are simpler.

OTOH, finding the minimum AABB is trivial. While finding the convex hull is more complex, you at least have the advantage that there's only one correct answer; the different algorithms just strike a different balance between performance and simplicity.

BTW, the algorithm you quote appears to be principal component analysis (PCA) (https://en.wikipedia.org/wiki/Principal_component_analysis), which could equally be applied to finding an oriented bounding ellipsoid. Roughly speaking, it's a multi-dimensional equivalent of least-squares line fitting.

Asmodeus
07-15-2015, 11:59 AM
Which method would you recommend? And could you give some examples, if possible? I'm new to this myself and not entirely sure I understand all the math involved, but I've started to clear some things up. Maybe I'd be better off with a simpler method, just for starters. Thanks :)

GClements
07-15-2015, 10:22 PM
Which method would you recommend?
Whichever one works best for your data.

E.g. PCA is normally used with data which follows something approximating a Gaussian distribution. The vertex coordinates of a typical mesh are probably a long way from that, so I'm not sure how well it will work in practice.

A fairly simple option for OBBs is to just test multiple orientations and use whichever one produces the smallest box. You don't need to try very many to find an orientation that's not too far off (or at least avoid the worst case, where all of your vertices are in a line which happens to be the box's diagonal).
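That "try several orientations" idea can be sketched on toy data (hypothetical helpers; rotation about the Y axis only, measuring the XZ footprint of the axis-aligned box after each candidate rotation):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 rotateY(const Vec3& p, float a)
{
    float c = std::cos(a), s = std::sin(a);
    return {c * p.x + s * p.z, p.y, -s * p.x + c * p.z};
}

// Area of the axis-aligned box around the points after rotating them by
// angle a about Y (XZ footprint only, since the toy data is flat in Y).
float footprintArea(const std::vector<Vec3>& pts, float a)
{
    float minX = 1e30f, maxX = -1e30f, minZ = 1e30f, maxZ = -1e30f;
    for (const Vec3& p : pts)
    {
        Vec3 q = rotateY(p, a);
        if (q.x < minX) minX = q.x;
        if (q.x > maxX) maxX = q.x;
        if (q.z < minZ) minZ = q.z;
        if (q.z > maxZ) maxZ = q.z;
    }
    return (maxX - minX) * (maxZ - minZ);
}

// Try a handful of candidate orientations (0..90 degrees) and keep the
// one that produces the tightest box.
float bestAngle(const std::vector<Vec3>& pts, int candidates)
{
    float best = 0.0f, bestArea = footprintArea(pts, 0.0f);
    for (int i = 1; i < candidates; ++i)
    {
        float a = 3.14159265f * 0.5f * (float)i / (float)candidates;
        float area = footprintArea(pts, a);
        if (area < bestArea) { bestArea = area; best = a; }
    }
    return best;
}
```

For points along a diagonal (the worst case mentioned above), the axis-aligned box at angle 0 is far larger than the best candidate box.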

Asmodeus
07-18-2015, 07:23 AM
Huh, okay. I have implemented the OBB / AABB intersection etc. I even had time to import Bullet Physics into my engine, and it's very neat and cool. I am now back at the problem of how (generally speaking) one would register a hit from a ray on a mesh in world coordinates: for example, shoot a ray at a mesh and determine the world coordinates on that mesh where the ray intersects it.
Bullet has RayTest (which I am using), but I am not aware whether Bullet has something similar to what I want; probably not, since it is not directly connected to physics but more to the graphics part.
Thanks!

GClements
07-19-2015, 04:41 AM
I am now back at the problem of how (generally speaking) one would register a hit from a ray on a mesh in world coordinates: for example, shoot a ray at a mesh and determine the world coordinates on that mesh where the ray intersects it.
Option #1: Transform the ray from eye space to object space. Calculate the intersection in object space. Transform the intersection point back to world space.
Option #2: Transform the ray from eye space to world space. Transform it from world space to object space. Calculate the interpolant of the intersection. Rather than using it to interpolate the ends of the ray in object space (yielding the intersection point in object space), use it to interpolate the ends of the ray in world space (yielding the intersection point in world space).

Or is the issue that you don't know how to determine the intersection in any coordinate system? Calculating the intersection of a mesh with a ray boils down to calculating the intersection of each triangle with the ray and taking the closest intersection. You would typically use some kind of spatial index (e.g. bounding-box hierarchy, octree, etc) to avoid testing each triangle individually. In the case of a height-map, you'd use the fact that the vertices lie on a regular grid to optimise the process.

Asmodeus
07-19-2015, 05:04 AM
Very informative, thanks. Well, I tried using the glm ray-triangle intersect function, which returns barycentric coordinates. Maybe my mistake was that I wasn't performing the test in the same space: the ray was in eye space and the vertices were in object space. As far as I know I can get the desired position from the barycentric coordinates with something like:
vec3 result = bary.x*v1 + bary.y*v2 + bary.z*v3, where v1, v2, v3 are the triangle's vertices? I may be totally wrong, though.

EDIT: I am using glm::unProject, so I believe the ray must already be in world space.

EDIT2: Sample code, just for testing:



glm::vec3 v1, v2, v3; // triangle vertices (declaration was missing)
glm::vec3 out_start = Rayz->ray_start;
glm::vec3 out_end = Rayz->ray_end;
glm::vec3 out_direction = normalize(out_end - out_start);
glm::vec3 result;

for (size_t i = 0; i < Terrain1->getTerrainData()->getIndices().size(); i += 3)
{
    v1 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 0]].position;
    v2 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 1]].position;
    v3 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 2]].position;

    if (glm::intersectRayTriangle(out_start, out_direction, v1, v2, v3, result)) {
        glm::vec3 fresult = result.x*v1 + result.y*v2 + result.z*v3;
        cout << fresult.x << " " << fresult.y << " " << fresult.z << endl;
        //cout << result.x + result.y + result.z << endl;
    }
}

GClements
07-19-2015, 08:57 AM
As far as I know I can get the desired position from the barycentric coordinates with something like:
vec3 result = bary.x*v1 + bary.y*v2 + bary.z*v3, where v1, v2, v3 are the triangle's vertices?

Yes. The result will be in the same coordinate system as v1,v2,v3.



EDIT: I am using glm::unProject, so I believe the ray must already be in world space.

It will be in object space. Or more accurately, whatever space proj*model transforms from.

Asmodeus
07-19-2015, 09:03 AM
Yeah, well, that is what I am using for the ray cast. I am passing the view and projection matrices to the unproject; does this mean that the resulting ray will be in view-projection space? Is that what you mean?


float mouse_x = (float)InputState::getMOUSE_X();
float mouse_y = WINDOW_HEIGHT - (float)InputState::getMOUSE_Y();
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, WINDOW_WIDTH, WINDOW_HEIGHT);
this->ray_start = glm::unProject(glm::vec3(mouse_x, mouse_y, 0.0f), camera->getViewMatrix(), Projection, viewport);
this->ray_end = glm::unProject(glm::vec3(mouse_x, mouse_y, 1.0f), camera->getViewMatrix(), Projection, viewport);


Worth mentioning: if I use the code I posted in my previous post, I get somewhat correct coordinates, but not usable ones. Obviously I am missing something.

GClements
07-19-2015, 09:31 AM
Yeah, well, that is what I am using for the ray cast. I am passing the view and projection matrices to the unproject; does this mean that the resulting ray will be in view-projection space? Is that what you mean?

The transformation pipeline conventionally looks like:

Object coordinates
[model-view matrix]
Eye coordinates
[projection matrix]
Clip coordinates
[homogeneous normalisation]
Normalised device coordinates.
[viewport transformation]
Window coordinates

glm::unProject reverses the process, resulting in object coordinates. Whichever space the combination of the supplied model-view and projection matrices transforms from, glm::unProject will transform to.

If you're dealing with mouse input, bear in mind that mouse coordinates normally have the origin in the top-left corner with Y increasing downward, while OpenGL window coordinates have the origin in the bottom-left corner with Y increasing upward.

Asmodeus
07-19-2015, 09:42 AM
Then the code should be working. The terrain vertices are in object space, as is the ray, so the code should be working fine. Still, I am getting weird results: close, but not exact.

Asmodeus
07-21-2015, 04:46 AM
I found that the glm ray-triangle intersection function that returns barycentric coordinates does not always return correct results. Maybe I am feeding it wrong information, but AFAIK the sum of the barycentric coordinates should always be 1.0f. The function does return correct results MOST of the time; the code works 60-70% of the time and the return value is correct, so what's going on here? The value of fresult in the code below is returned in object space; since I am not transforming the terrain's vertices, I haven't transformed it to world space.
The strange thing is that the fresult value is mostly correct, but sometimes the returned coords are offset from the correct position by +/-20.0f - 30.0f on all three axes (sometimes at the same time).


if (InputState::getMOUSE_LEFT() == 1)
{
    glm::vec3 v1, v2, v3;
    glm::vec3 out_start = Rayz->ray_start;
    glm::vec3 out_end = Rayz->ray_end;
    glm::vec3 out_direction = normalize(out_end - out_start) * 10000.0f;
    glm::vec3 result;

    for (size_t i = 0; i < Terrain1->getTerrainData()->getIndices().size(); i += 3)
    {
        v1 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 0]].position;
        v2 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 1]].position;
        v3 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 2]].position;

        if (glm::intersectRayTriangle(out_start, out_direction, v1, v2, v3, result)) {
            glm::vec3 fresult = result.x*v1 + result.y*v2 + result.z*v3;
            cout << fresult.x << " " << fresult.y << " " << fresult.z << endl;
            break;
        }
    }
}


EDIT: I have solved the problem. For everyone who has problems using the glm::intersect functions, keep this in mind: the function does indeed return barycentric coordinates, but the format is as follows:


glm::vec3 fresult = result.x*v1 + result.y*v2 + (1.0f - result.x - result.y)*v3;

where v1, v2, v3 are the triangle vertices, and result.x and result.y are the actual barycentric coordinates; to calculate barycentric.z you have to subtract: barycentric.z = 1.0f - result.x - result.y.
In other words, result.z straight out of glm::intersect is actually the parameter t below:


ray_origin + t*ray_direction = result.x*v1 + result.y*v2 + (1.0f - result.x - result.y)*v3;


Both sides of that equation are equal; it is up to you which one you use. Both should return the intersection point of the ray and the triangle.
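The two forms can be checked against each other on a hand-picked triangle (hypothetical Vec3 helpers; with glm you would use glm::vec3 directly):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(const Vec3& a, float s)       { return {a.x * s, a.y * s, a.z * s}; }

// bx/by are the barycentric weights for v1/v2; the weight for v3 is
// 1 - bx - by, as described in the post above.
Vec3 pointFromBarycentric(const Vec3& v1, const Vec3& v2, const Vec3& v3,
                          float bx, float by)
{
    return add(add(mul(v1, bx), mul(v2, by)), mul(v3, 1.0f - bx - by));
}

// The equivalent form using the ray parameter t.
Vec3 pointFromRay(const Vec3& origin, const Vec3& dir, float t)
{
    return add(origin, mul(dir, t));
}
```

For the triangle (0,0,0)-(1,0,0)-(0,1,0) and a ray from (0.25, 0.25, 1) pointing down the Z axis, both forms produce the same intersection point.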
