Computing Fragment Coordinates in a Vertex Shader

Hi there,

I am doing a fluid simulation and need a texture lookup from a vertex shader, so I need to compute the fragment coordinate of a point in the vertex shader. I am doing this (the modelview matrix is identity):

// transform to clip space (modelview is identity, so only projection applies)
gl_Position = gl_ProjectionMatrix * vec4(VertexPosition, 1.0);
// perspective divide to NDC, then remap [-1,1] to [0,1] texture space
Position = (gl_Position.xy / gl_Position.w) * 0.5 + 0.5;
...
vec4 color = texture(mySampler, Position);
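
For completeness, a self-contained version of that shader, with the declarations filled in (the #version line and the in/out declarations are reconstructed, and I use the explicit-LOD lookup; the math is unchanged):

#version 400 compatibility

in vec3 VertexPosition;
out vec2 Position;
uniform sampler2D mySampler;

void main()
{
    // modelview is identity, so only the projection matrix is applied
    gl_Position = gl_ProjectionMatrix * vec4(VertexPosition, 1.0);
    // perspective divide to NDC, then remap [-1,1] to [0,1]
    Position = (gl_Position.xy / gl_Position.w) * 0.5 + 0.5;
    // explicit LOD is the safe form in a vertex stage (no implicit derivatives)
    vec4 color = textureLod(mySampler, Position, 0.0);
}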

This works great on an ATI card (HD 5870) but fails on an NVIDIA card (GTX 580); drivers are up to date.
The shaders compile, but there is a precision error on the NVIDIA card. To debug, I compare my computed fragment position in the fragment shader with the real one, and when points land at certain coordinates my computed position is one pixel away from the real one.
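
A minimal sketch of that comparison fragment shader, assuming a hypothetical viewportSize uniform that holds the render-target resolution:

#version 400 compatibility

in vec2 Position;            // computed in the vertex shader
uniform vec2 viewportSize;   // assumed uniform: render-target size in pixels
out vec4 FragColor;

void main()
{
    // gl_FragCoord.xy is in window space with pixel centers at .5,
    // so dividing by the viewport size gives the "real" [0,1] coordinate
    vec2 real = gl_FragCoord.xy / viewportSize;
    // white where both positions agree to within half a texel, black otherwise
    bool match = all(lessThan(abs(Position - real), 0.5 / viewportSize));
    FragColor = vec4(vec3(match ? 1.0 : 0.0), 1.0);
}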
Does anyone know the exact computation done by the graphics card?
I just want to do the same ^^

Thanks

Take a look at this.

“The precise keyword has much more influence on the optimizations done by NVIDIA’s GLSL compiler than by AMD’s. Why such a difference with Radeon boards? Is there a bug somewhere?”

Nope, I think you just got lucky. We do honor the precise keyword. However, it just tells us to not optimize the expression. It’s possible that we missed a potential optimization that the NVIDIA compiler was able to do, or that we optimized in a way that didn’t affect the result. However, it’s totally possible that this could have affected our compiler, or that we would implement more aggressive optimization in the future and that it would have broken your application.
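
For reference, precise (core since GLSL 4.00, or via GL_ARB_gpu_shader5) is attached at a declaration, or retroactively to an already-declared variable:

// at the declaration: the expression must be evaluated exactly as written
precise vec2 Position = (gl_Position.xy / gl_Position.w) * 0.5 + 0.5;

// or retroactively, e.g. on a built-in output
precise gl_Position;

In particular it forbids reassociation and automatic MAD/fma fusion on the qualified expressions, which is the class of optimization suspected here.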

Thank you very much.
When I saw your link I thought it was the solution to my problem, but unfortunately it is not :frowning:
I modified my code like this:

gl_Position = gl_ProjectionMatrix * vec4(VertexPosition, 1.0);
// precise must qualify a declaration, so a type is required here
precise vec2 Position = (gl_Position.xy / gl_Position.w) * 0.5 + 0.5;

vec4 color = texture(mySampler, Position);

The shaders compiled, but nothing changed.

To check whether my lookup is done in the right place, I draw a grid of points into a texture (with an FBO) in a first pass. In a second pass I draw the points in exactly the same configuration into a second texture; in my vertex shader (I can’t do it in the fragment shader for some reasons) I do a texture lookup into the first texture and draw a green point if I find data and a red point if not. The result is a grid with mostly green points, but in some specific cases (when I slowly move the grid) I get lines/columns of red points, because I think the computation is not exactly the same and my computed point ends up one pixel away from the rasterized point.
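
The second-pass vertex shader looks roughly like this (firstPassTex and Color are placeholder names):

#version 400 compatibility

in vec3 VertexPosition;
out vec4 Color;
uniform sampler2D firstPassTex;

void main()
{
    gl_Position = gl_ProjectionMatrix * vec4(VertexPosition, 1.0);
    vec2 Position = (gl_Position.xy / gl_Position.w) * 0.5 + 0.5;
    // sample where the point should have been rasterized in the first pass
    vec4 data = textureLod(firstPassTex, Position, 0.0);
    // green if the first pass left data at that texel, red if the lookup missed
    Color = (data.a > 0.0) ? vec4(0.0, 1.0, 0.0, 1.0) : vec4(1.0, 0.0, 0.0, 1.0);
}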
And I only have this problem on the NVIDIA card :frowning:

It seems that when I compute:

precise vec2 Position = (gl_Position.xy / gl_Position.w) * 0.5 + 0.5;

The result is X + epsilon when my point is very close to the “edge” between two pixels, while the graphics card computes X - epsilon (or vice versa) :frowning:

I think the way I compute Position is correct and precise, but I guess the way NVIDIA computes the 2D point is optimized: not exact, but close enough for rasterization. So my workaround is to force the rasterizer to draw the point where I want it (using my own computation), to be sure I will find the point where it has to be, and of course this works.
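
A minimal sketch of that workaround, assuming a hypothetical texSize uniform holding the render-target resolution:

vec2 Position = (gl_Position.xy / gl_Position.w) * 0.5 + 0.5;
// snap the computed coordinate to the nearest texel center...
Position = (floor(Position * texSize) + 0.5) / texSize;
// ...then feed it back into clip space, so the rasterizer places the point
// exactly where the texture lookup will read it
gl_Position.xy = (Position * 2.0 - 1.0) * gl_Position.w;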

Thanks for your help, McLeary.
