Hi,
How can I convert, in the fragment shader, each pixel's gl_FragCoord x, y, z to my 3D world-space coordinates x, y, z?
Thanks a lot in advance
gl_FragCoord is an input variable that contains the window relative coordinate (x, y, z, 1/w) values for the fragment. This value is the result of fixed functionality that interpolates primitives after vertex processing to generate fragments.
In the vertex shader, we typically compute:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
The last thing that happens to gl_Position after the vertex shader stage is a divide by w (gl_Position.w, to be precise).
So, we could say that:
gl_FragCoord.xyz = gl_Position.xyz/gl_Position.w;
and
gl_FragCoord.w = 1/gl_Position.w;
So dividing gl_FragCoord.xyz by gl_FragCoord.w yields the original gl_Position.xyz.
Multiplying gl_Position.xyz by the inverse projection matrix should give you back the eye-space position,
or multiplying gl_Position.xyz by the inverse ModelViewProjection matrix should give you the world/model-space position, e.g. gl_Vertex.
Give that a try and let me know if my maths is wrong!
Thanks!
But my problem is that I can't use the vertex coordinates, since they're at a different resolution.
I'm trying to do a per-pixel shader, for each pixel.
I’m trying to use Alfonse’s tutorial (which is great) code:
vec3 CalcCameraSpacePosition()
{
    vec4 ndcPos;
    ndcPos.xy = ((gl_FragCoord.xy / windowSize.xy) * 2.0) - 1.0;
    ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
        (gl_DepthRange.far - gl_DepthRange.near);
    ndcPos.w = 1.0;

    vec4 clipPos = ndcPos / gl_FragCoord.w;
    return vec3(clipToCameraMatrix * clipPos);
}
But I have some questions about calculation here:
1. How can we assume that w = 1?
2. Are we assuming here that our world space is [-1,1]?
ndcPos.xy = ((gl_FragCoord.xy / windowSize.xy) * 2.0) - 1.0;
Thanks
Having just read back my own post, I think I’ve missed out a step in the process.
gl_FragCoord is in window-space coordinates, which means the gl_Position values have been transformed by the viewport to put them in the range 0…window width, 0…window height.
That viewport transform maps NDC coordinates [-1…1] to window coordinates, and it must be undone if we want to recover the original gl_Position from gl_FragCoord.
The following code should give you something close.
vec3 PositionFromDepth_DarkPhoton(in float depth)
{
    vec2 ndc;   // Reconstructed NDC-space position
    vec3 eye;   // Reconstructed EYE-space position

    eye.z = near * far / ((depth * (far - near)) - far);

    ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
    ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;

    eye.x = (-ndc.x * eye.z) * (right - left) / (2.0 * near)
            - eye.z * (right + left) / (2.0 * near);
    eye.y = (-ndc.y * eye.z) * (top - bottom) / (2.0 * near)
            - eye.z * (top + bottom) / (2.0 * near);

    return eye;
}
which simplifies, for a symmetric frustum (left = -right and bottom = -top), to…
    eye.x = (-ndc.x * eye.z) * right / near;
    eye.y = (-ndc.y * eye.z) * top / near;
And note that typically you don't store Z_ndc (NDC-space depth, -1…1) in a depth texture. You usually store Z_viewport, that is, viewport-space depth (0…1, or whatever you set glDepthRange to). But undoing that mapping to get back to Z_ndc is easy.
Referring to the projection matrix, for a perspective projection you have:
z_ndc = z_clip / w_clip
z_ndc = [ z_eye*gl_ProjectionMatrix[2].z + gl_ProjectionMatrix[3].z ] / -z_eye
The 2nd step presumes w_eye = 1. Solve the above for z_eye, and you get:
float z_eye = gl_ProjectionMatrix[3].z/(-z_ndc - gl_ProjectionMatrix[2].z);
Typically your glDepthRange is 0…1, so z_ndc = z_viewport * 2 - 1, so plugging that in…
float z_eye = gl_ProjectionMatrix[3].z/(z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);
That’ll get you from viewport-space Z to eye-space Z, for a perspective projection.
But I have some questions about calculation here:
1. How can we assume that w = 1?
2. Are we assuming here that our world space is [-1,1]?
ndcPos.xy = ((gl_FragCoord.xy / windowSize.xy) * 2.0) - 1.0;
Which .w are you talking about?
In NDC space, w will be 1 when you 'unproject'.
world space is [-1,1]?
Clip space is [-1,1] (after the divide by w); world space is not.
Since our projected scene is converted to clip space and divided by w, then yes, the entire visible world eventually gets reduced down to [-1,1].
Thanks a lot!
It will take me a while now to “digest” it and fix and test my code.
1.How can we assume that w = 1?
Because NDC-space W is clip-space W divided by clip-space W.
X/X == 1 for any non-zero X.
[quote=Alfonse Reinheart]
X/X == 1 for any non-zero X.
[/quote]
Please help me to understand this explanation about gl_FragCoord
opengl.org/sdk/docs/manglsl/xhtml/gl_FragCoord.xml
This article doesn't even state what the gl_FragCoord range is. I thought it was [0,1], but then how can it be that
“…the (x, y) location (0.5, 0.5) is returned for the lower-left-most pixel in a window”
Thanks in advance
This article doesn't even state what the gl_FragCoord range is.
It says, “window relative coordinate”. That means window space. OpenGL window space is in pixel coordinates, with a lower-left origin, and gl_FragCoord is sampled at pixel centers. So (0.5, 0.5) is the center of the bottom-left-most pixel, and (2.5, 4.5) is two pixels to the right and four pixels up from that.
This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.