vec3 worldCoord(vec3); window-to-world FragCoord

vec3 worldCoord(vec3);
Takes gl_FragCoord and returns a vec3 with world coordinates, as determined by the interpolated outputs of the vertex shader.

The fragment shader doesn’t have access to the viewport dimensions needed to calculate this value (even though GL uses them to produce gl_FragCoord), making it an extra step to pass in the viewport variables via an ‘in’ variable.

It would be nice to have for fragment-level calculations based on world-space positions interpolated from the vertex shader.

Takes gl_FragCoord and returns a vec3 with world coordinates, as determined by the interpolated outputs of the vertex shader.

That’s not possible.

First, even if you’re using the standard OpenGL matrices, there is no “world space”. There’s object space, which is what the vertex attributes store. There’s eye space, which is the space of the vertex data after transformation by the modelview matrix. And there’s clip space, which is the space of the vertex data after transformation by both the modelview and projection matrices.

So OpenGL does not have the necessary information to go to world space. But let’s assume you meant eye space, since that’s typically where lighting happens.

Second, you do not need to use those matrices. GLSL has exactly one function that is dependent on the standard, fixed-function matrices: ftransform(). And that is only necessary if you need the GLSL vertex shader to be invariant with a fixed-function vertex processor. Outside of that, you can just do the matrix multiplications yourself.
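For example, a minimal vertex shader that does the transform itself; the modelview and projection uniform names here are illustrative user-supplied matrices, not built-ins:


uniform mat4 modelview;   // supplied by the application
uniform mat4 projection;  // supplied by the application
in vec4 vertex;

void main(){
	// same result as ftransform() when these match the
	// fixed-function matrices, minus its invariance guarantee
	gl_Position = projection * (modelview * vertex);
}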

And what if you’re using your own matrices? Without the standard matrices, the closest that OpenGL could give you is clip space. That is, the values you wrote from the vertex shader.

Lastly, why would you need it? OK, there are impostors. But outside of that, or a completely artificial example like this, what practical use is there for reverse-transforming gl_FragCoord? There’s a practical use for reverse-transforming a user-defined vec4 whose values are defined as though it were gl_FragCoord. But the input vec4 would not have to be specifically gl_FragCoord.

And even that would require uniform resources.

OpenGL, and D3D for that matter, has been slowly discarding fixed-function stuff in favor of flexible shader code. So why backtrack with something like this? It’s not going to be any faster than what you could do on your own.

making it an extra step to pass in the viewport variables via an ‘in’ variable.

The viewport values do not change per-fragment, so there’s no point in making them vertex shader outputs/fragment shader inputs. Those should be uniforms. You could even have a simple uniform buffer set up for that, one that is common to every shader you use. It could also store other common things like the projection matrix and so forth.
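A minimal sketch of such a shared block, assuming std140 layout (the block and member names are illustrative):


layout(std140) uniform PerView {
	mat4 projection;  // common projection matrix
	vec4 viewport;    // x, y, width, height, as passed to glViewport
};


Bind one buffer object to this block’s binding point in every program, and all your shaders see the same values.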

I think it would need to be a simpler getNDC() (normalized device coordinates) function, which would convert gl_FragCoord.xy to the [-1,1] range. It’d be up to you to pipe that through the inverse transform to get a world position, as core GL 3.2 and up no longer has built-in projection and modelview matrices.
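A sketch of that getNDC() idea, assuming the viewport rectangle arrives as a vec4 uniform (x, y, width, height):


uniform vec4 viewport;  // x, y, width, height

vec2 getNDC(){
	// map window-space gl_FragCoord.xy into the [-1,1] NDC range
	return (gl_FragCoord.xy - viewport.xy) / viewport.zw * 2.0 - 1.0;
}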

Sorry, still working on definitions.

Meant clip space, the final gl_Position value the vertex shader is required to output. It’s easy to reverse the transformation from that point. What happens between the gl_Position output and gl_FragCoord is the tricky part. (I realize that gl_FragCoord values are interpolated from gl_Position.)

Being able to reverse this easily would make dynamic lighting calculations and linear traces at the fragment level much easier.

Being able to reverse this easily would make dynamic lighting calculations and linear traces at the fragment level much easier.

And in what way is it not easy currently? It’s not automatic, but it’s far from difficult. It’s just math. And quite simple math at that.
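To make that concrete, here is a sketch of the reverse transform back to eye space; the viewport, depthRange, and invProjection uniforms are assumptions of this example, fed from glViewport, glDepthRange, and a CPU-side matrix inverse:


uniform vec4 viewport;       // x, y, width, height
uniform vec2 depthRange;     // near, far from glDepthRange (default 0, 1)
uniform mat4 invProjection;  // inverse projection, computed on the CPU

vec3 eyeFromFragCoord(vec4 fragCoord){
	// window space -> NDC
	vec3 ndc;
	ndc.xy = (fragCoord.xy - viewport.xy) / viewport.zw * 2.0 - 1.0;
	ndc.z  = (2.0 * fragCoord.z - depthRange.x - depthRange.y)
	         / (depthRange.y - depthRange.x);
	// NDC -> clip space: gl_FragCoord.w holds 1/clip.w
	float clipW = 1.0 / fragCoord.w;
	vec4 clip = vec4(ndc * clipW, clipW);
	// clip space -> eye space
	return (invProjection * clip).xyz;
}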

Also, I don’t see how this helps lighting at the fragment level. Deferred rendering recomputes the clip-space position by using information from a variety of sources. The window-Z comes from reading the depth buffer. The window XY comes from the current gl_FragCoord. The window W has to come from a texture read. A function that only computes the clip-space position from the current gl_FragCoord would not be helpful here.

Fair enough, I understand the spec standards.

I went through the GLSL 1.50 spec and the formula used to generate gl_FragCoord isn’t listed in there. The best source seems to be the documentation for gluUnProject; that’s what turns up in almost all searches regarding the reverse conversion. Just a note.

Habit of working math both ways to check results and all that. :slight_smile:

I went through the GLSL 1.50 spec and the formula used to generate gl_FragCoord isn’t listed in there.

Of course not. Similarly, you will not find the functions for how blending works. Or viewports. Or how to define attribute values input to vertex shaders. Or anything that does not have to do with the shading language itself.

gl_FragCoord is generated by the OpenGL rasterizer. Its definition therefore comes from the OpenGL rasterizer. So if you’re interested in knowing what the specific formula for gl_FragCoord is, you have to look in the OpenGL specification.
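For reference, the viewport and depth-range transform the rasterizer applies, as given in the OpenGL spec, written out here as comments:


// given ndc = clip.xyz / clip.w, viewport (x, y, w, h) and depth range (n, f):
//   gl_FragCoord.x = (w / 2) * ndc.x + x + w / 2
//   gl_FragCoord.y = (h / 2) * ndc.y + y + h / 2
//   gl_FragCoord.z = ((f - n) / 2) * ndc.z + (n + f) / 2
//   gl_FragCoord.w = 1 / clip.w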

More good info, thank you Alfonse. I know I ask a lot of questions; I just want you to know I appreciate the time you put into your answers.

There’s this thing called “varying”. Vtx shaders pass info to the frag shader via it. Use it.
Search this forum for “gl_FragCoord” to find its formula. (But you don’t need gl_FragCoord for your idea.)

Yep, I realize your points on this; in my last two posts I documented it again, along with the conversions between the spaces. I realize this isn’t something that could ever be implemented, due to the input required to define gl_Position.w if you wanted to go beyond normalized coords.

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=293687#Post293687

Vtx:


uniform mat4 mvp;
in vec4 glVertex;
out vec4 clippy;

void main(){
	gl_Position = mvp * glVertex;
	clippy = gl_Position; // divide it by .W if you want
}

Frag:


in vec4 clippy;
out vec4 glFragColor;

void main(){
	vec4 tmp = clippy; // divide it by .W if you want
	// do stuff with tmp or clippy
	glFragColor = tmp;
}

If I wanted “object-space coord access” in the frag shader, then:

Vtx:


uniform mat4 mvp;
in vec4 glVertex;
out vec3 clippy;

void main(){
	gl_Position = mvp * glVertex;
	clippy = glVertex.xyz;
}

If I wanted “view-space coord access”, then:
Vtx:


uniform mat4 mv,mp;
in vec4 glVertex;
out vec3 clippy;

void main(){
	vec4 vs = mv * glVertex;
	gl_Position = mp  * vs;
	clippy = vs.xyz;
}
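
And a minimal matching fragment shader for either variant (a sketch only; substitute whatever per-fragment math you actually need):

Frag:


in vec3 clippy;  // interpolated object-space or view-space position
out vec4 glFragColor;

void main(){
	// the interpolated position is directly usable for lighting, traces, etc.
	glFragColor = vec4(clippy, 1.0);
}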

Yeah, of course that works if you derive the coords from your vertex shader. The situation to which I was referring is one in which you access the screen coords as a texture, so the calculated vertex data isn’t there.

For instance, if you’re picking and just have the screen coords and depth value. (Which, of course, isn’t a GLSL matter, but I’m using it as an example.)

Yeah, it isn’t a GLSL matter, as it’s a one-time thing per frame, and needs to be calculated only for one fragment. So: glReadPixels, multiply by inverse(mvp), divide by w.

Reconstructing a position (in whichever space) from depth read from a texture doesn’t require any special GLSL built-in variables, because you will only ever be doing it while drawing a flat fullscreen quad/triangle (deferred rendering). There, the GPU simply does not know how to reconstruct any position. You do it “manually”, by multiplying by a precomputed (on the CPU) inverse(mvp) and dividing by .w. Or even better, by multiplying the depth by a varying vec3, which you calculate in either the vertex shader or on the CPU (for the 3 or 4 verts).


uniform sampler2D texDepth;
in vec3 vsCameraDir;	// eye-space ray, interpolated across the quad
in vec2 texCoord;

void main(){
	float depth = texture(texDepth, texCoord).x;
	vec3 viewSpacePosition = vsCameraDir * depth;
}

---------------------------------
// or, if it's an arbitrary space other than view space:

uniform sampler2D texDepth;
in vec3 wsWorldDir;	// world-space ray, interpolated across the quad
in vec3 wsWorldPos;	// world-space camera position
in vec2 texCoord;

void main(){
	float depth = texture(texDepth, texCoord).x;
	vec3 worldSpacePosition = wsWorldDir * depth + wsWorldPos;
}
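
For completeness, one way the vsCameraDir varying of the view-space example could be generated in the fullscreen pass’s vertex shader; the fullscreen-triangle setup and the invProjection uniform are assumptions of this sketch, and the sampled depth must be linearized (eye-space Z as a fraction of the far-plane distance) for the multiply to reconstruct the position:

Vtx:


in vec2 position;            // fullscreen triangle/quad corner in NDC
uniform mat4 invProjection;  // inverse projection, precomputed on the CPU
out vec3 vsCameraDir;
out vec2 texCoord;

void main(){
	gl_Position = vec4(position, 0.0, 1.0);
	texCoord = position * 0.5 + 0.5;
	// eye-space ray through this corner of the far plane;
	// interpolation hands each fragment its own ray
	vec4 farCorner = invProjection * vec4(position, 1.0, 1.0);
	vsCameraDir = farCorner.xyz / farCorner.w;
}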