Faking gl_FragCoord?

Since gl_FragCoord does not work properly on AMD hardware I need to fake it. I am having a little trouble visualizing this.

Let’s say texcoord is the varying I want to write out in place of gl_FragCoord. The upper-left corner of the screen would be (0,0,z), and the bottom-right corner would be (1,1,z).

Here is my light volume vertex shader:

varying vec3 texcoord;

void main(void) {
	gl_Position = ftransform();
	gl_FrontColor = gl_Color;
	texcoord = ftransform();
}

Now how do I turn the screen position into the gl_FragCoord value?

I’d check out the “Shader Inputs” heading in section 3.11.2 of the 2.1 spec, and section 2.11 for the actual transformations. Seems fairly straightforward, but a bit tedious to repeat here.
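For reference, a rough sketch of what those spec sections describe, assuming a viewport anchored at (0,0) and the default depth range [0,1]; the helper name toWindowCoords and the viewportSize parameter are just for illustration:

// Clip space -> NDC -> window coordinates, per sections 2.11 and 3.11.2
// of the GL 2.1 spec (assumes viewport origin (0,0) and depth range [0,1]).
vec3 toWindowCoords(vec4 clipPos, vec2 viewportSize)
{
    vec3 ndc = clipPos.xyz / clipPos.w;              // perspective division
    vec2 xy  = (ndc.xy * 0.5 + 0.5) * viewportSize;  // viewport transform
    float z  = ndc.z * 0.5 + 0.5;                    // depth range transform
    return vec3(xy, z);                              // roughly what gl_FragCoord holds
}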


varying vec4 texcoord; // notice it's vec4, fix the vtxshader
void main(){
     // half the backbuffer size (assuming a 1280x720 target here)
     vec2 tmp_scr1 = vec2(1280.0/2.0, 720.0/2.0);
     // half size plus the half-pixel offset that gl_FragCoord carries
     vec2 tmp_scr2 = vec2(1280.0/2.0 + 0.5, 720.0/2.0 + 0.5);
     // perspective division, then NDC [-1,1] -> window coordinates
     vec2 fake_FragCoord = (texcoord.xy/texcoord.w)*tmp_scr1 + tmp_scr2;
}

btw gah this board’s “cannot resolve host, as you’re behind a firewall. Your whole post is gone now haha”. A dozen times already >_>

I had this, but because I used http://opengl.org/… instead of the more correct http://www.opengl.org/…

Nah, I access via http://www.opengl.org/… , but forget to turn off the Sunbelt/Kerio firewall before posting here. If only some JavaScript or whatever didn’t force-delete the text on history.back()…

I tried this for the vertex shader, and it is very close to being correct…not sure why it isn’t completely correct though. There are some lighting offsets and glitches. It’s hard to explain. This is the same as your code, just a little simpler.

It looks like the interpolation between the vertices is making the image “wavy”. Here’s a video:
http://www.leadwerks.com/post/fragcoord.wmv

uniform vec2 buffersize;

varying vec2 fake_FragCoord;

void main(void) {
	gl_FrontColor = gl_Color;
	vec4 pos = ftransform();
	// perspective division, then NDC [-1,1] -> [0,1] -> window coordinates
	fake_FragCoord = ((pos.xy/pos.w)*0.5+0.5)*buffersize;
	gl_Position = pos;
}

Actually, I think this code is probably completely correct, but it just won’t work as a varying without distorting the values.

I didn’t experience the same issues, but when I got closer to the light volumes they distorted pretty hard. In the Nvidia XMas-Tree demo the following code was used (at least the same principle), which worked for me too (on my 8600 GT; I don’t know if it works on AMD cards too).


uniform vec2 buffersize;
varying vec2 fake_FragCoord;

void main( ) {
   gl_Position = ftransform( );
   // do the perspective division per vertex instead of per fragment
   gl_Position /= gl_Position.w;
   fake_FragCoord = (gl_Position.xy * 0.5 + 0.5) * buffersize;
}

In fact it’s the same, but the distortions only appear if I don’t do the perspective division in the vertex shader. If I do it myself everything works fine (except when you are inside the light volumes, but I didn’t test whether those problems still occur if I render only the backfaces). I don’t know why that works and I would be happy if someone could explain it^^.

EDIT: Forgot the .w and some brackets…
EDIT: I watched the video again… I did have the same issues.

So the verdict is your code does not work, right?

You should never read from output variables like that, BTW. Once you write to gl_Position, just assume you can never read back that value. You will get some really unpredictable glitches if you read from write-only variables.

It can be written at any time during shader execution. It may also be read back
by a vertex shader after being written.

According to the GLSL 1.20.8 specification. And this code works (at least for me). As long as you do the perspective division in the vertex shader yourself, it works. Otherwise it doesn’t. I don’t know why…


void main( ) {
	gl_Position = ftransform( );
	gl_Position /= gl_Position.w;
	// map the already-divided NDC position from [-1,1] to [0,1] for lookups
	gl_TexCoord[0] = gl_Position * 0.5 + 0.5;
}

This is the code I use in my deferred rendering experiments for accessing the g-buffer data.
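For completeness, the matching fragment-shader lookup would look roughly like this (the sampler name gbuffer_diffuse is made up for the example):

uniform sampler2D gbuffer_diffuse; // hypothetical g-buffer color attachment

void main( ) {
	// gl_TexCoord[0].xy is already in [0,1] screen space after the
	// per-vertex division above, so it can be used for the lookup directly.
	gl_FragColor = texture2D(gbuffer_diffuse, gl_TexCoord[0].xy);
}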

uhm, varyings get interpolated in a perspective-correct way!
So you must do the perspective division in the frag shader ;). Or target only GF8+ and specify linear interpolation of the varying vector, as sketched below.
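A rough sketch of that second option, assuming a GLSL 1.30-capable driver (the noperspective qualifier is what gives screen-space linear interpolation; buffersize is the same uniform as above):

#version 130
uniform vec2 buffersize;
noperspective out vec2 fake_FragCoord; // interpolated linearly in screen space

void main( ) {
	vec4 pos = ftransform( );
	gl_Position = pos;
	// Dividing per vertex is fine here, because a noperspective varying
	// is not perspective-corrected during interpolation.
	fake_FragCoord = ((pos.xy / pos.w) * 0.5 + 0.5) * buffersize;
}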

I have experienced errors in pixel shaders that only arose in certain shaders and only went away when I eliminated the reads from output variables. So it might be something you won’t notice until you are far into development.

Thanks for the advice, this works:

lightvolume.vert:

uniform vec2 buffersize;

varying vec4 fake_FragCoord1;

void main(void) {
	gl_FrontColor = gl_Color;
	vec4 pos = ftransform();
	// pass the undivided clip-space position; the fragment shader divides by w
	fake_FragCoord1 = pos;
	gl_Position = pos;
}

And then at the beginning of the pixel shader:

	vec3 fake_FragCoord;
	// per-fragment perspective division, then NDC [-1,1] -> window coordinates
	fake_FragCoord.x = ((fake_FragCoord1.x/fake_FragCoord1.w)*0.5+0.5)*buffersize.x;
	fake_FragCoord.y = ((fake_FragCoord1.y/fake_FragCoord1.w)*0.5+0.5)*buffersize.y;
	// depth maps to [0,1], matching gl_FragCoord.z
	fake_FragCoord.z = 0.5*fake_FragCoord1.z/fake_FragCoord1.w+0.5;

So now my code works on AMD hardware. Never use gl_FragCoord unless you only target NVidia! The y component will be flipped on SOME AMD cards, with some drivers, sometimes only when rendering to an FBO. So even if it works on your test setup, you will have users complaining about upside-down images and other bad stuff.
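If you do rely on gl_FragCoord anyway, a flip compensation along these lines might work (the flipY uniform is hypothetical; you would set it from the application depending on whether you are rendering into an FBO):

uniform vec2 buffersize;
uniform bool flipY; // hypothetical: set by the app when the target's y origin is flipped

void main( ) {
	vec2 coord = gl_FragCoord.xy;
	if (flipY)
		coord.y = buffersize.y - coord.y; // undo the flipped y origin
	// visualize the result as a screen-space gradient
	gl_FragColor = vec4(coord / buffersize, 0.0, 1.0);
}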

if (hardware can’t read from output registers and shader reads from previously written output)
then (shader compiler writes output to temp, reads from temp, moves temp to output)
else (shader compiler is broken)

shader compiler is broken

Happens all the time.

while (driver has bug)
developer files bug

while (vendor has active bugs)
vendor fixes bug

iirc, the missing gl_FragCoord has been a bug in ATi’s drivers for many moons already.

while (driver has bug)
developer files bug

while (vendor has active bugs)
vendor fixes bug

Say whatever you like, but ATi has no interest in fixing bugs that aren’t in major OpenGL game products. So until Id or Blizzard start using glslang combined with FBOs and gl_FragCoord, I wouldn’t hold my breath no matter how many times you file a bug report.

Actually, AMD told me this would be fixed in Catalyst 8.10, due out this month, but I wanted to go ahead and write a workaround for now.

Looks like this was indeed fixed in AMD drivers 8.10.

But still not fixed for FireGL mobility cards; at least not for the current driver for the IBM W500 ThinkPad… :frowning:
