Hi all - I hope you can help me again.
I am trying to manually compute gl_FragCoord to do a lookup in a depth texture. I want to do this manually since I later need to alter gl_Position in the vertex shader but still want to perform the depth lookup at the old fragment position. (It's a GPGPU-combined-with-normal-pipeline thing...)
Ok, here is what I do in the VS:
varying vec4 myPos;
void main( void )
{
gl_Position = ftransform(); // changed later
myPos = ftransform(); // kept at the original transform for the depth lookup
}
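Later the plan is that gl_Position gets altered while myPos keeps the original transform, i.e. something along these lines (the displacement is just a made-up placeholder, not my real code):
varying vec4 myPos;
uniform vec3 displacement; // placeholder for whatever will move the vertex later
void main( void )
{
myPos = ftransform(); // unaltered clip-space position, used for the depth lookup
gl_Position = gl_ModelViewProjectionMatrix * (gl_Vertex + vec4(displacement, 0.0)); // altered position
}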
and in the FS:
uniform vec2 viewportSize; // size of viewport in pixels
varying vec4 myPos;
void main( void ) {
vec4 myFragCoord;
myFragCoord.xy = (0.5 + (0.5 * myPos.xy / myPos.w)) * viewportSize;
myFragCoord.z = 0.5 + (0.5 * myPos.z / myPos.w);
myFragCoord.w = 1.0 / myPos.w;
}
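For completeness, the lookup I want to do with myFragCoord afterwards is roughly the following (the sampler name depthTex is only illustrative, and it assumes the depth texture covers exactly the viewport):
uniform sampler2D depthTex; // depth texture from the earlier pass (name only illustrative)
uniform vec2 viewportSize;
varying vec4 myPos;
void main( void ) {
vec4 myFragCoord;
myFragCoord.xy = (0.5 + (0.5 * myPos.xy / myPos.w)) * viewportSize;
myFragCoord.z = 0.5 + (0.5 * myPos.z / myPos.w);
vec2 texCoord = myFragCoord.xy / viewportSize; // window coords -> [0,1] texture coords
float storedDepth = texture2D( depthTex, texCoord ).r; // depth written by the earlier pass
gl_FragColor = vec4( myFragCoord.z <= storedDepth ? 1.0 : 0.0 ); // the comparison that breaks with the deviations below
}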
Unfortunately my computed myFragCoord slightly differs from the true one. For example, when rendering a plane slightly inclined in the z direction (viewing direction), I get for some fragment:
gl_FragCoord = [ 8.5, 8.5, 0.999943077564239501953125, 0.005894298665225505828857421875]
myFragCoord = [ 8.50000762939453125, 8.50000762939453125, 0.9999430179595947265625, 0.005894298665225505828857421875]
The w component is OK, but x, y and z are not. Why is that? (This has been driving me crazy all day, since I cannot perform a depth comparison with these deviations...)
Note: I also read this post, but I don't understand the part about "interpolating w_clip after division, e.g. gl_FragCoord.w = (1/w_c)_i" -> this doesn't seem like a good idea to me and doesn't work for me either - or am I missing something?
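In case it helps (or in case I misread it), this is how I interpreted that suggestion - divide by w already per vertex and let the rasterizer interpolate the result:
// VS: divide by w per vertex
varying float invW;
void main( void )
{
gl_Position = ftransform();
invW = 1.0 / gl_Position.w;
}
// FS: use the interpolated value instead of computing 1.0 / myPos.w per fragment
varying float invW;
void main( void )
{
gl_FragColor = vec4( invW ); // in my FS above this would replace myFragCoord.w = 1.0 / myPos.w
}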