Hi,
I’m having some difficulty with this scenario. I have a unit cube whose vertex positions, colours, and texture coordinates are all equal. I render the front faces of the cube to an FBO texture and pass that texture to a shader. Then I render the front faces of the cube again using the following shaders:
Vertex shader:

varying vec4 pos;
void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
    pos = gl_Position;
}
Fragment shader:

uniform sampler2D frontFace;
varying vec4 pos;
void main()
{
    // perspective divide to NDC, then remap [-1,1] -> [0,1] to find
    // the right place to look up in the front-face buffer
    vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
    vec3 coord1 = texture2D(frontFace, texc).xyz;
    vec3 coord2 = gl_TexCoord[0].stp;
    // amplify the difference for visualisation; use a float literal,
    // since some GLSL compilers reject vec3 * int
    gl_FragColor = vec4((coord1 - coord2) * 1000.0, 1.0);
}
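One likely culprit, beyond filtering: if the front-face FBO attachment is a standard 8-bit RGBA texture, the stored coordinates are quantized to 1/255 steps, and the shader's ×1000 amplification makes even that tiny error clearly visible. Switching the attachment to a floating-point format (e.g. GL_RGBA32F) and GL_NEAREST filtering usually removes it. A rough sketch of the size of the error, assuming an RGBA8 target (the coordinate value is made up for illustration):

```python
def quantize8(x):
    # round to the nearest value representable in an 8-bit channel,
    # which is what an RGBA8 render target stores
    return round(x * 255.0) / 255.0

coord = 0.3475               # hypothetical interpolated texcoord
stored = quantize8(coord)    # what comes back from the FBO texture
err = abs(stored - coord)

# worst-case quantization error is half a step, i.e. 0.5/255
assert err <= 0.5 / 255.0
# amplified by the shader's (coord1 - coord2) * 1000.0, even this
# sub-0.002 error exceeds a full colour unit in the output
assert err * 1000.0 > 1.0
```

With a 32-bit float attachment the stored and interpolated coordinates match to within interpolation/rasterization differences only.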
What I notice is that there’s a small difference between the texture coordinates that OpenGL interpolates and the ones I look up in the texture rendered earlier. I guess it can be attributed to linear interpolation in the 2D texture, or I may be making a mistake in my implementation. Either way, the rendering quality ends up worse when I look the coordinates up in a texture than when I use the cube’s own texture coordinates. Any help on how I can look up the right coordinates in the pre-rendered texture?
Background on what I’m trying to do:
I’m trying to speed up volume rendering by skipping empty spaces. I’m basing it on techniques described in this paper:
http://www.cescg.org/CESCG-2005/papers/VRVis-Scharsach-Henning.pdf
where basically the front/back faces you render form a tighter bounding geometry around the volume. So I do 3 passes: the first two render the front and back faces, and the last pass renders the front faces of an overall bounding box and runs the raycasting shader.
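For reference, the arithmetic the raycasting pass performs with the two face textures can be sketched as follows (a minimal sketch, assuming the front/back passes store the ray’s entry and exit points in [0,1]³ volume texture space; the function name is hypothetical):

```python
import numpy as np

def ray_from_faces(front, back):
    # front/back: entry and exit points of the viewing ray in
    # [0,1]^3 volume texture space, as looked up from the
    # front-face and back-face render passes
    front = np.asarray(front, dtype=float)
    back = np.asarray(back, dtype=float)
    direction = back - front
    length = np.linalg.norm(direction)
    if length == 0.0:
        # degenerate: the ray only grazes the volume
        return front, np.zeros(3), 0.0
    return front, direction / length, length

# a ray entering the front at (0.2, 0.5, 0) and exiting at the back
start, d, t = ray_from_faces((0.2, 0.5, 0.0), (0.2, 0.5, 1.0))
```

Any coordinate error in the looked-up entry point shifts the whole ray, which is why the lookup mismatch above degrades the final rendering.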
Thank you for any help.