I have an application which calculates Ambient Occlusion and the bent normal. I set these values as a vec4 (bent normal x, y, z and w is accessibility) into texture coordinate set 0 with glMultiTexCoord4f(GL_TEXTURE0_ARB, …). The vertex program just sends the values to the fragment program:
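The original shader source isn't reproduced in the post. As a language-neutral illustration (a Python sketch with made-up vertex values, not the actual application code), "just passing the values through" means each component of the vec4, including the accessibility in w, is interpolated independently across the triangle like any other varying:

```python
# Illustrative sketch only (not the original shaders): per-vertex vec4
# attributes (bent normal x, y, z and accessibility in w) are
# interpolated component-wise across a triangle, exactly like any
# other varying.  The vertex values below are made up for the example.
def interpolate(v0, v1, v2, b0, b1, b2):
    """Barycentric interpolation of equal-length tuples."""
    return tuple(b0 * a + b1 * b + b2 * c for a, b, c in zip(v0, v1, v2))

# Hypothetical bent-normal/accessibility values at the three vertices.
v0 = (0.0, 0.0, 1.0, 1.0)   # facing up, fully accessible
v1 = (0.0, 1.0, 0.0, 0.5)
v2 = (1.0, 0.0, 0.0, 0.0)   # fully occluded

# At the triangle's center every component, including w, is plainly averaged.
center = interpolate(v0, v1, v2, 1/3, 1/3, 1/3)
print(center)
```

If the fragment program sees something else in w, the value was altered before or outside this interpolation, which is what the rest of the thread tries to pin down.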
On GeForce (and probably Radeon), texture coordinates are “normalized” when passed to the fragment shader. So, if your vertex shader produces texture coordinates like these:
{1.5, 3.5}, {2.2, 3.8}
Your fragment shader can get, for example:
{-0.5, 0.5}, {0.2, 0.8}
I’m not sure if the R coordinate is modified this way.
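One way to see why such an offset could go unnoticed in ordinary texturing: with GL_REPEAT wrapping, only the fractional part of a coordinate selects the texel, so shifting all vertices of a polygon by the same integer changes nothing visible. A small Python sketch (illustrative only) using the numbers from the example above:

```python
# Sketch of why a per-polygon integer offset on texture coordinates can
# go unnoticed with GL_REPEAT wrapping: only the fractional part of the
# coordinate selects the texel, so {1.5, 3.5} and {-0.5, 0.5} sample
# the same location.  The pairs below mirror the example in the post:
# the same integer offsets (-2, -3) are applied to both vertices.
import math

def repeat_wrap(coord):
    """GL_REPEAT: keep only the fractional part, mapped into [0, 1)."""
    return tuple(c - math.floor(c) for c in coord)

original = [(1.5, 3.5), (2.2, 3.8)]
shifted  = [(-0.5, 0.5), (0.2, 0.8)]

for a, b in zip(original, shifted):
    # Equal up to floating-point rounding: the wrap hides the offset.
    print(repeat_wrap(a), repeat_wrap(b))
```

The offset only becomes observable when the raw coordinate values are used directly, as the original poster does with the accessibility term.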
The suggestion to use a varying variable was a good one - you don’t have to change the application code, only the shader will change:
Originally posted by k_szczech: On GeForce (and probably Radeon), texture coordinates are “normalized” when passed to fragment shader.
What? I’ve never seen that happen…
Originally posted by k_szczech: I’m not sure if R coordinate is modified this way
Well, actually he’s talking about the Q coordinate.
Originally posted by k_szczech: [quote]What? I’ve never seen it to be happened so…
I did. Moving texcoords closer to (0, 0) gives better precision.[/QUOTE]This is obvious - floats are more precise near zero. I mean, if you set some value to TEXCOORD0 (100, 1000 - doesn’t matter), in the fragment program you would see this value, interpolated across the triangle, and it wouldn’t be clamped or normalized the way COLOR is.
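The precision point is easy to demonstrate: the spacing between adjacent representable floats grows with magnitude. A short Python sketch, shown for 32-bit floats (the precision typically used for interpolation on this class of hardware):

```python
# Why texcoords near zero are more precise: the gap between adjacent
# representable 32-bit floats (one ULP) grows with the magnitude of
# the value.  We step to the next representable float32 by bumping the
# raw bit pattern by one.
import struct

def next_float32_up(x):
    """Smallest float32 strictly greater than x (x > 0 assumed)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits + 1))[0]

ulp_small = next_float32_up(0.5) - 0.5        # 2**-24, about 6e-8
ulp_large = next_float32_up(1000.0) - 1000.0  # 2**-14, about 6e-5
print(ulp_small, ulp_large)
```

So an interpolated coordinate near 1000 carries roughly a thousand times less absolute precision than one near zero, which is a plausible reason for a driver to re-center coordinates.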
No, it won’t be clamped. But it may happen that an integer offset will be introduced. This offset is identical for all vertices of the current polygon.
You can see that in feedback mode. Although feedback mode is usually (always?) implemented by the driver, it suggests that the GPU may (but not necessarily will) do a similar thing.
Originally posted by k_szczech:
Lodder, If you haven’t solved it yet then please provide more details.
It works now when I put the values into a color instead of a texture coordinate.
For now this is not a problem, since I can switch the position of the values. However, I am interested in where this behaviour (I don’t want to call it an “error” yet, because I am not sure that it is one) might come from.
Texture coordinates are sent with glMultiTexCoord4f(GL_TEXTURE0_ARB, …) and colors with glColor4f(). I am not sure which information you need besides this?
I tried this now on my x86 Linux box with the 100.xx drivers, and the behaviour is the same there.
Originally posted by k_szczech: No, it won’t be clamped. But it may happen that an integer offset will be introduced. This offset is identical for all vertices of current polygon.
I can imagine this happening via the fixed pipeline, but with the programmable pipeline everything must be OK, and such behaviour is not correct according to the vertex_program/fragment_program specifications.
You said that you do not have access to the underlying renderer. Is it possible that it modifies the texture coordinates for some reason (e.g. without backup, or with an incorrect restore using the 3-component variant)?
You can use GLIntercept to check which commands are really sent to OpenGL.
Originally posted by Komat:
[b] You said that you do not have access to the underlying renderer. Is it possible that it modifies the texture coordinates for some reason (e.g. without backup, or with an incorrect restore using the 3-component variant)?
You can use GLIntercept to check which commands are really sent to OpenGL. [/b]
I contacted the author with the GLIntercept log and he found an error in his code.
However, the AO code is working. Now I can add the environment map (hopefully this works better :rolleyes: )
The original code ought to work. With the programmable pipe, texture coordinates are just a handy varying variable that OpenGL provides. There is nothing special that happens between the vertex and fragment shaders other than interpolation. The projection division that has been hinted at only happens when actually looking up a texture, and only when using tex2DProj and similar.
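To make that distinction concrete, here is a hedged Python sketch (texture2D below is a hypothetical stand-in for a texture fetch, not a real API): the interpolated vec4 reaches the fragment stage untouched, and the divide by q happens only inside a projective lookup such as tex2DProj:

```python
# Sketch of the distinction above: interpolation passes a vec4 through
# unchanged; the divide by q happens only inside a projective texture
# lookup.  texture2D is a hypothetical placeholder, not a real API.
def texture2D(s, t):
    """Placeholder for a plain texture fetch at (s, t)."""
    return (s, t)

def texture2DProj(coord):
    """Projective lookup: divide s and t by q before fetching."""
    s, t, r, q = coord
    return texture2D(s / q, t / q)

varying = (4.0, 2.0, 0.0, 2.0)            # arrives as-is; q is untouched
plain = texture2D(varying[0], varying[1])  # plain fetch: no division
proj = texture2DProj(varying)              # projective fetch: (s/q, t/q)
print(plain, proj)
```

A shader that only reads the varying's components directly never triggers that division, which is why the original poster's .w value should have arrived intact.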
I believe the options are: a bug in the program, a bug in the driver, or the rendering framework is doing some magic behind the scenes that breaks things.
I can imagine this happening via the fixed pipeline, but with the programmable pipeline everything must be OK, and such behaviour is not correct according to the vertex_program/fragment_program specifications.
That’s true. It’s also true that drivers have bugs, so it could happen under some circumstances. Sometimes it’s worth checking whether everything works as expected before looking for an error elsewhere.
Anyway, this is not the problem here, as these shaders work with the .x coordinate but not with .w.
I think Komat pointed out the right direction.