I am implementing the ray casting algorithm, but I am having a lot of trouble.
My basic idea was:
On the CPU:
1. Compute the nearest and farthest vertices of the bounding box to determine the maximum distance for the ray traversal.
2. Send the min and max values to the shader.
On the GPU:
1. In the shader, I recover the position of the camera, then calculate the ray direction as
vec3 dir = camera.xyz - gl_TexCoord[0].xyz;
and advance a position along the ray.
2. Composite the final color.
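The two GPU steps above can be sketched as a fragment shader, in old-style GLSL (1.20) to match the gl_TexCoord usage in this thread. The uniforms `camera`, `volume`, `stepSize`, and `numSteps` are my assumptions, not names from the original setup:

```glsl
// Minimal sketch of the GPU steps: build the ray, then march and composite.
uniform vec3 camera;        // camera position sent from the CPU (assumed)
uniform sampler3D volume;   // the volume data (assumed)
uniform float stepSize;     // distance advanced per sample (assumed)
uniform int numSteps;       // maximum number of samples (assumed)

void main()
{
    // Step 1: ray through the interpolated entry position on the cube.
    vec3 pos = gl_TexCoord[0].xyz;
    vec3 dir = normalize(pos - camera.xyz);

    // Step 2: march along the ray, compositing front-to-back.
    vec4 col = vec4(0.0);
    for (int i = 0; i < numSteps; ++i) {
        vec4 s = texture3D(volume, pos);
        col.rgb += (1.0 - col.a) * s.a * s.rgb;
        col.a   += (1.0 - col.a) * s.a;
        if (col.a >= 0.95) break;   // early ray termination
        pos += dir * stepSize;
    }
    gl_FragColor = col;
}
```

This assumes front-to-back compositing with opacity-weighted colors; a back-to-front loop with the standard over operator would work just as well.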
My problem is how to render the image plane; I proceed as follows.
One option is not to use texture coordinates. If you just draw a quad over the entire screen, that will get you into the fragment program. There you need a camera position set from the CPU. You can then use gl_FragCoord, which tells you what pixel you are at, to determine the direction of the ray. Then you just intersect the box inside your fragment shader.
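A rough sketch of that full-screen-quad idea: reconstruct the ray direction from gl_FragCoord. The uniforms `invViewProj` (inverse of the view-projection matrix), `viewport`, and `camera` are hypothetical names I am introducing here:

```glsl
// Sketch: derive the per-pixel ray direction from gl_FragCoord alone.
uniform mat4 invViewProj;   // inverse view-projection matrix (assumed)
uniform vec2 viewport;      // viewport size in pixels (assumed)
uniform vec3 camera;        // camera position (assumed)

void main()
{
    // Map the pixel to normalized device coordinates in [-1, 1].
    vec2 ndc = (gl_FragCoord.xy / viewport) * 2.0 - 1.0;

    // Unproject a point on the far plane and build the ray from the camera.
    vec4 farPt = invViewProj * vec4(ndc, 1.0, 1.0);
    vec3 dir = normalize(farPt.xyz / farPt.w - camera);

    // ... intersect the bounding box with (camera, dir) and march as usual.
}
```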
I read Peter's tutorial and tried to follow the process. I am working with GLSL and create a texture2D as a buffer for the final rendered image.
I estimate the first sample with gl_TexCoord[0].xyz and calculate the ray direction as
direction = normalize(gl_TexCoord[0].xyz-camera.xyz);
The algorithm is similar to the one in Real-Time Volume Graphics.
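For the bounding-box step, the traversal interval can be clamped on the GPU with a standard slab-method ray/box intersection. This is a hedged sketch, not the book's exact code; `boxMin` and `boxMax` are the box corners passed in however you prefer:

```glsl
// Slab-method ray/box intersection: returns (tNear, tFar) along the ray.
// The ray misses the box if tNear > tFar. Zero direction components
// produce infinities that min/max handle correctly on most hardware.
vec2 intersectBox(vec3 origin, vec3 dir, vec3 boxMin, vec3 boxMax)
{
    vec3 invDir = 1.0 / dir;
    vec3 t0 = (boxMin - origin) * invDir;
    vec3 t1 = (boxMax - origin) * invDir;
    vec3 tmin = min(t0, t1);
    vec3 tmax = max(t0, t1);
    float tNear = max(max(tmin.x, tmin.y), tmin.z);
    float tFar  = min(min(tmax.x, tmax.y), tmax.z);
    return vec2(tNear, tFar);
}
```

The marching loop then samples from `origin + tNear * dir` to `origin + tFar * dir`, which matches the min/max distances computed on the CPU.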
Next, I call a function
render_buffer_to_screen(); // with a square and texture2D coordinates
to show the result. I see one square (red and blurred); when I move the camera position, it behaves like a single slice.
I would expect the texture2D (frame buffer) to behave like an image2D that is updated as the camera moves.
Consider rendering only the back faces of the cube. When only the front faces are rendered, no fragments (and thus no rays) are generated when the viewer enters the cube.
I found I didn't need to render the front faces separately, either. I simply applied the raycasting shader to the front of the cube, passing a varying with the interpolated screen-space coordinates from the vertex to the fragment shader, and using the previously rendered back-faces texture as the ray end positions.
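That two-pass setup could look roughly like this: the back faces were rendered into a texture in a first pass, and this fragment shader runs on the front faces. The names `backFaces`, `volume`, `numSteps`, and the varying `screenPos` are my own assumptions:

```glsl
// Sketch of the second pass: front faces give ray entry points,
// the back-faces texture from pass 1 gives ray exit points.
uniform sampler2D backFaces;   // ray exit positions from the first pass (assumed)
uniform sampler3D volume;      // the volume data (assumed)
uniform int numSteps;          // samples per ray (assumed)
varying vec4 screenPos;        // clip-space position from the vertex shader (assumed)

void main()
{
    // Interpolated screen-space position -> [0,1] lookup into backFaces.
    vec2 tc = 0.5 * (screenPos.xy / screenPos.w) + 0.5;
    vec3 exitPos  = texture2D(backFaces, tc).xyz;
    vec3 entryPos = gl_TexCoord[0].xyz;   // front-face (entry) position

    // March from entry to exit, compositing front-to-back.
    vec3 stepVec = (exitPos - entryPos) / float(numSteps);
    vec3 pos = entryPos;
    vec4 col = vec4(0.0);
    for (int i = 0; i < numSteps; ++i) {
        vec4 s = texture3D(volume, pos);
        col.rgb += (1.0 - col.a) * s.a * s.rgb;
        col.a   += (1.0 - col.a) * s.a;
        pos += stepVec;
    }
    gl_FragColor = col;
}
```

Because the ray is defined entirely by the entry and exit positions, this variant needs no explicit camera uniform in the second pass, which also fixes the case where the viewer is inside the cube (only back faces still generate fragments).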