PDA

View Full Version : volume rendering, ray casting



lobbel
06-12-2009, 07:04 AM
Hello,
I found two example shader codes for volume rendering using
ray casting. I have a question about getting the ray start position.
example A:


// find the right place to lookup in the backside buffer
float2 texc = ((IN.Pos.xy / IN.Pos.w) + 1) / 2;
// the start position of the ray is
float4 start = IN.TexCoord;


example B:


vec3 rayStart = texture2D(RayStart, gl_TexCoord[0].st).xyz;
vec3 rayEnd = texture2D(RayEnd, gl_TexCoord[0].st).xyz;


My question is: why do they calculate
((IN.Pos.xy / IN.Pos.w) + 1) / 2
in example A but not in example B?

regards,
lobbel

lobbel
06-15-2009, 12:48 PM
solved

toneburst
06-18-2009, 04:02 AM
Hi lobbel,

I had some fun with the same code, but rendering a surface equation dynamically, rather than a static volume texture.

There are some clips and more screenshots on my blog (http://machinesdontcare.wordpress.com/2009/06/15/gpu-raycasting-now-working/), if you're interested.

http://machinesdontcare.files.wordpress.com/2009/06/tb_trier_borg_150609_03.png

http://machinesdontcare.files.wordpress.com/2009/06/tb_trier_borg_150609_08.png

http://machinesdontcare.files.wordpress.com/2009/06/tb_trier_borg_150609_10.png


I wonder if you've put any thought into optimisation of the basic algorithm?
I have some ideas, but I'm not sure how practical they are.

Cheers,

a|x

dletozeun
06-18-2009, 06:52 AM

In the first code, the texture coordinates are computed from the fragment position, which needs to be perspective-divided and then remapped from the [-1, 1] interval to the [0, 1] interval.
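
In GLSL terms, that remapping would look roughly like this (a sketch only; `vPos` is an assumed name for the interpolated clip-space position passed down from the vertex shader):

```glsl
// Fragment shader sketch: derive [0, 1] texture coordinates
// from the interpolated clip-space position.
varying vec4 vPos;   // clip-space position from the vertex shader (assumed name)

void main()
{
    vec2 ndc  = vPos.xy / vPos.w;    // perspective divide -> NDC in [-1, 1]
    vec2 texc = (ndc + 1.0) * 0.5;   // remap to [0, 1] for the backside-buffer lookup
    // ... use texc to fetch the ray exit point from the backside buffer ...
}
```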

In the second code, he seems to fetch a 3D world position (I'm not sure here, since I don't know the implementation details). I assume a 16-bit or 32-bit floating-point texture is used, so there is no need to transform the fetched data.
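
Assuming the RayStart/RayEnd textures hold the volume-space entry and exit points directly, the ray setup might look like this (a sketch under that assumption):

```glsl
uniform sampler2D RayStart;   // front-face positions rendered to a float texture
uniform sampler2D RayEnd;     // back-face positions rendered to a float texture

void main()
{
    // Positions are stored as-is in a floating-point texture,
    // so no [-1, 1] -> [0, 1] remapping is needed.
    vec3 rayStart  = texture2D(RayStart, gl_TexCoord[0].st).xyz;
    vec3 rayEnd    = texture2D(RayEnd,   gl_TexCoord[0].st).xyz;
    vec3 rayDir    = normalize(rayEnd - rayStart);
    float rayLength = length(rayEnd - rayStart);
    // ... march along rayDir from rayStart for rayLength units ...
}
```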

toneburst
06-18-2009, 02:44 PM
Have a look at this thread on raycasting, lobbel:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=258916&page=all

It doesn't talk about the OpenGL setup, but the last lot of vertex/fragment shader code worked for me, in the end. You'd need to swap the surface code for lookups into a 3D texture, but the rest of the code should work without too much tweaking.
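
To give an idea of that swap, the inner loop would change from evaluating a surface equation to sampling the 3D texture at each step, something like this (a sketch; the uniform names are made up):

```glsl
uniform sampler3D VolumeTex;   // hypothetical 3D volume texture
uniform float StepSize;        // hypothetical march step length

vec4 rayMarch(vec3 rayStart, vec3 rayDir, float rayLength)
{
    vec4 accum = vec4(0.0);
    for (float t = 0.0; t < rayLength; t += StepSize) {
        vec3 pos   = rayStart + rayDir * t;        // sample position in [0, 1]^3
        vec4 voxel = texture3D(VolumeTex, pos);    // lookup replaces the surface equation
        // simple front-to-back compositing
        accum.rgb += (1.0 - accum.a) * voxel.a * voxel.rgb;
        accum.a   += (1.0 - accum.a) * voxel.a;
        if (accum.a >= 0.95) break;                // early ray termination
    }
    return accum;
}
```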

I should add that the shader runs on the cube with its back faces culled, taking a texture of the cube's back faces as the RayEnd input.

Hope that helps a bit.

a|x