Hi,
I am doing volume rendering of a 3D texture using a basic ray casting algorithm.
Here is how it works so far: the fragment shader iterates along a ray cast from the eye into the scene, sampling the 3D texture at each step. The fragment color is generated from the data collected along the ray (for example, we may want to display the maximum intensity along the ray).
As long as we just want to scan the whole 3D texture, that's easy: we send a cube of GL_TRIANGLES through the pipeline, and the vertex shader computes the ray entrance location (or the exit location, but we use face culling to discard back faces). The ray exit can then be computed easily in the fragment shader from the entrance.
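For reference, the fragment-shader loop for this basic case could look something like the sketch below (hedged: the uniform names uVolume and uEyePos, the fixed step count, and the maximum-intensity compositing are my own assumptions, and texture space is taken to be the unit cube):

```glsl
// Fragment shader sketch: maximum-intensity projection through a unit cube.
// vEntry is the ray entrance point in texture space, interpolated from the
// front-face vertices by the rasterizer.
uniform sampler3D uVolume;
uniform vec3 uEyePos;   // eye position in the same texture space
in vec3 vEntry;
out vec4 fragColor;

void main() {
    vec3 dir = normalize(vEntry - uEyePos);

    // Ray exit: intersect the ray with the cube [0,1]^3 (slab method).
    // For each axis, take the farther of the two slab intersections,
    // then the nearest of those across the three axes.
    vec3 tFar = max((vec3(0.0) - vEntry) / dir, (vec3(1.0) - vEntry) / dir);
    float tExit = min(min(tFar.x, tFar.y), tFar.z);

    const int STEPS = 256;
    float maxIntensity = 0.0;
    for (int i = 0; i < STEPS; ++i) {
        vec3 p = vEntry + dir * (tExit * float(i) / float(STEPS - 1));
        maxIntensity = max(maxIntensity, texture(uVolume, p).r);
    }
    fragColor = vec4(vec3(maxIntensity), 1.0);
}
```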
But I would like to display only a subset of the whole 3D texture, defined by some arbitrary bounding geometry. The idea is to be able to discard parts of the 3D texture without modifying the texture itself. So instead of sending a cube of vertices through the pipeline, one could send, say, a pyramid, and the iteration in the fragment shader would stay within the bounds of that pyramid.
As long as the 3D texture subset is a convex polyhedron, it's fine: the front-face and back-face locations can first be stored (as colors) in two separate FBOs, and then passed to the fragment shader as 2D textures.
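The final pass of that two-pass scheme could be sketched like this (again hedged: the texture names uFrontFaces/uBackFaces, the uViewportSize uniform, and the lookup via gl_FragCoord are my own assumptions; the two position FBOs are assumed to have been filled in earlier passes with back- and front-face culling respectively, writing object-space positions as colors):

```glsl
// Final pass sketch: fetch this pixel's [entry, exit] pair from the two
// position textures, then march the segment between them.
uniform sampler2D uFrontFaces;  // entrance points (front faces)
uniform sampler2D uBackFaces;   // exit points (back faces)
uniform sampler3D uVolume;
uniform vec2 uViewportSize;
out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy / uViewportSize;
    vec3 entry = texture(uFrontFaces, uv).xyz;
    vec3 exitP = texture(uBackFaces, uv).xyz;

    const int STEPS = 256;
    float maxIntensity = 0.0;
    for (int i = 0; i < STEPS; ++i) {
        // Walk from entrance to exit in uniform steps.
        vec3 p = mix(entry, exitP, float(i) / float(STEPS - 1));
        maxIntensity = max(maxIntensity, texture(uVolume, p).r);
    }
    fragColor = vec4(vec3(maxIntensity), 1.0);
}
```

This works because a ray intersects a convex polyhedron in at most one segment, so a single [entry, exit] pair per pixel is enough.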
Where it gets tricky is when the geometry is not a convex polyhedron, because then a single ray can enter and exit the geometry several times:
Illustration: [ATTACH=CONFIG]1477[/ATTACH]
So my question is: is there a convenient way in OpenGL to do such a "piecewise ray casting"?
There are two major issues I could not overcome:
[ul]
[li]How to compute the (possibly multiple) [ray entrance, ray exit] pairs on the GPU?
[/li][li]How to transmit those pairs to the fragment shader so that, ideally, a single fragment shader invocation can process a complete discontinuous ray?
[/li][/ul]
And if neither of those is possible, is there another approach I could try?