Hi,
I’m trying to do volume raycasting on the GPU by rendering the front faces of a bounding box with 3D texture coordinates mapped onto its vertices. In the fragment shader I retrieve the interpolated texture coordinate and use it as the starting point for sampling the data along the viewing ray. Once I step beyond the ray length, raycasting is finished for that fragment.
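To make clear what I mean, here is a simplified sketch of what my raycasting fragment shader does (the uniform names like volumeTex, rayDirTex, stepSize and invViewport are illustrative, not my actual code):

```glsl
// Sketch of the raycasting pass, rendered over the cube's front faces.
uniform sampler3D volumeTex;   // the volume data
uniform sampler2D rayDirTex;   // rgb = normalized ray dir, a = ray length
uniform vec2 invViewport;      // 1 / viewport size
uniform float stepSize;

void main()
{
    vec3 pos = gl_TexCoord[0].xyz;   // entry point on the front face
    vec4 dirLen = texture2D(rayDirTex, gl_FragCoord.xy * invViewport);
    vec3 dir = dirLen.rgb;
    float rayLength = dirLen.a;

    vec4 dst = vec4(0.0);
    float t = 0.0;
    while (t < rayLength) {          // stop once we step beyond the ray length
        vec4 src = texture3D(volumeTex, pos);
        dst.rgb += (1.0 - dst.a) * src.a * src.rgb;  // front-to-back compositing
        dst.a   += (1.0 - dst.a) * src.a;
        pos += dir * stepSize;
        t   += stepSize;
    }
    gl_FragColor = dst;
}
```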
To compute the ray direction and length I use Krüger & Westermann’s method: first I render the back faces of a color cube to an offscreen unsigned-byte texture (using an FBO), then the front faces, and subtract the two color values, which gives me the ray vector. I store the normalized vector in another offscreen texture, in this case a floating-point one, with the unnormalized vector length in the alpha channel. Because the vector length can be greater than 1 I (think I) need a floating-point texture; a regular ubyte texture (GL_RGBA8) clamps the values to [0,1].
Now comes my problem. If I use only ubyte textures (even for storing the ray directions and lengths), the visualization is of course wrong because the stored ray length never exceeds 1, but at least rotating the dataset performs fine. As soon as I mix ubyte and float textures, however, performance drops significantly.
I use two framebuffer objects and alternately bind one of them before rendering. First I use the ubyte FBO to render the back faces of the color cube, then I switch to the float FBO to render the front faces of the same cube and compute the ray directions and lengths. Then comes the actual raycasting pass, which is again rendered to an offscreen ubyte texture using the ubyte FBO. The final image I display on screen.
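The second pass (front faces into the float FBO) is essentially this; again the names (backFaceTex, invViewport) are illustrative placeholders, not my actual code:

```glsl
// Sketch of the ray-setup pass, rendered over the cube's front faces,
// into the floating-point FBO attachment.
uniform sampler2D backFaceTex;  // back-face colors from the first pass
uniform vec2 invViewport;       // 1 / viewport size

void main()
{
    vec3 front = gl_TexCoord[0].xyz;  // front-face color == entry texcoord
    vec3 back  = texture2D(backFaceTex, gl_FragCoord.xy * invViewport).rgb;

    vec3 ray  = back - front;         // unnormalized ray vector
    float len = length(ray);          // can be up to sqrt(3) > 1

    // Direction in rgb, length in alpha; guard against len == 0 at edges.
    gl_FragColor = vec4(ray / max(len, 1e-6), len);
}
```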
My question: why does the performance drop so much when I alternate between ubyte and float FBOs?
I didn’t attach all of my code, but if someone can help I’ll gladly mail it to them.
Thanx!
Rlp