Since gl_FragCoord does not work properly on AMD hardware I need to fake it. I am having a little trouble visualizing this.
Let’s say texcoord is the vec2 I want to write out in place of gl_FragCoord. The upper-left corner of the screen would be (0,0), and the bottom-right corner would be (1,1).
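A minimal vertex-shader sketch of that idea (the variable names and the old fixed-function built-ins are my assumptions, not code from this thread); note that the per-vertex divide here is exactly the choice debated later in the thread:

```glsl
// Vertex shader (GLSL 1.20-era built-ins assumed).
// texcoord approximates a normalized gl_FragCoord.xy in [0,1].
varying vec2 texcoord;

void main()
{
    vec4 clipPos = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = clipPos;

    // Map NDC [-1,1] to [0,1]. Dividing by w per vertex is the naive
    // approach; the varying then gets perspective-correct interpolation
    // on top of it, which can distort the result (see later posts).
    texcoord = (clipPos.xy / clipPos.w) * 0.5 + 0.5;

    // Caveat: gl_FragCoord's origin is the lower-left corner, so (0,0)
    // is actually the bottom-left of the screen, not the upper-left.
}
```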
I’d check out the “Shader Inputs” heading in section 3.11.2, and section 2.11 for the actual transformations, in the 2.1 spec. It seems fairly straightforward, but a bit tedious to repeat here.
Nah, I access it via http://www.opengl.org/…, but forgot to turn off the Sunbelt/Kerio firewall before posting here. It would have helped if some JavaScript or whatever hadn’t force-deleted my text on history.back()…
I tried this for the vertex shader, and it is very close to correct… not sure why it isn’t completely correct, though. There are some lighting offsets and glitches that are hard to explain. It’s the same as your code, just a little simpler.
I didn’t experience the same issues, but when I got closer to the light volumes they distorted pretty badly. The Nvidia-XMas-Tree-Demo used the following code (or at least the same principle), which worked for me too, on my 8600 GT; I don’t know whether it works on AMD cards as well.
In fact it’s the same, but the distortions only appear if I don’t do the perspective division in the vertex shader. If I do it myself, everything works fine, except when you are inside the light volumes (though I didn’t test whether those problems occur if I render only the back faces). I don’t know why that works, and I’d be happy if someone could explain it^^.
EDIT: Forgot the .w and some brackets…
EDIT: I watched the video again… I did have the same issues.
You should never read from output variables like that, BTW. Once you write to gl_Position, just assume you can never read back that value. You will get some really unpredictable glitches if you read from write-only variables.
“It can be written at any time during shader execution. It may also be read back by a vertex shader after being written.”
That’s according to the GLSL 1.20.8 specification. And this code works, at least for me, as long as I do the perspective division in the vertex shader myself. Otherwise it doesn’t. I don’t know why…
Uhm, varyings get interpolated in a perspective-correct way!
So you must do the perspective division in the fragment shader ;). Or target only GF8+ and specify linear interpolation for the varying vector.
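A sketch of that fix (variable names are mine, and this is only the principle, not code from the demo): pass the raw clip-space position through as a varying and divide per fragment, after the perspective-correct interpolation has already happened.

```glsl
// Vertex shader
varying vec4 clipPos;

void main()
{
    vec4 p = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = p;
    clipPos = p;  // write the same value instead of reading gl_Position back
}
```

```glsl
// Fragment shader
varying vec4 clipPos;

void main()
{
    // Divide per fragment: clipPos was interpolated perspective-correctly,
    // so xy/w lands on the right screen position, mapped from [-1,1] to [0,1].
    vec2 screenCoord = (clipPos.xy / clipPos.w) * 0.5 + 0.5;
    gl_FragColor = vec4(screenCoord, 0.0, 1.0); // visualize for debugging
}
```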
I have experienced errors in pixel shaders that only went away when I eliminated reads from output variables. The problems only arose in certain shaders, so it might be something you won’t notice until you are far into development.
So now my code works on AMD hardware. Never use gl_FragCoord unless you only target NVidia! The y component will be flipped on SOME AMD cards, with some drivers, sometimes only when rendering to an FBO. So even if it works on your test setup, you will have users complaining about upside-down images and other bad stuff.
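If you do keep gl_FragCoord anyway, one defensive workaround is to normalize it yourself and let the application flip y when it knows the render target came out inverted. A sketch, with uniform names of my own invention:

```glsl
// Fragment shader
uniform vec2 screenSize; // viewport size in pixels (hypothetical uniform)
uniform float flipY;     // 1.0 normally, -1.0 when the FBO output is inverted

void main()
{
    vec2 coord = gl_FragCoord.xy / screenSize; // normalize to [0,1]
    if (flipY < 0.0)
        coord.y = 1.0 - coord.y;               // undo the driver's flip
    gl_FragColor = vec4(coord, 0.0, 1.0);
}
```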
if (hardware can’t read from output registers and the shader reads from a previously written output)
then (the shader compiler writes the output to a temp, reads from the temp, and moves the temp to the output at the end)
else (the shader compiler is broken)
Say whatever you like, but ATi has no interest in fixing bugs that aren’t in major OpenGL game products. So until Id or Blizzard start using glslang combined with FBOs and gl_FragCoord, I wouldn’t hold my breath no matter how many times you file a bug report.