Meaning of texture coordinates in "texture2D"

Hello!

I’m trying to understand the following texture mapping shaders:


/* vertex shader */
varying vec4 v_color;
varying vec2 v_texCoord;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    v_color = gl_Color;
    v_texCoord = vec2(gl_MultiTexCoord0);
}

and


/* fragment shader */
varying vec4 v_color;
varying vec2 v_texCoord;
uniform sampler2D tex0;

void main()
{
    gl_FragColor = texture2D(tex0, v_texCoord) * v_color;
}

I don’t understand the meaning of the v_texCoord vector. When we send data to OpenGL we associate a texture coordinate with each vertex using glTexCoord. This means that during rasterisation, texture mapping depends on the three texture coordinates associated with the three vertices. So how can the fragment shader access the texture with only one coordinate coming from the vertex shader?

It seems that I don’t really understand how shaders work… To me, the following code:


    v_texCoord = vec2(gl_MultiTexCoord0);

sends to the fragment shader the texture coordinate of the last vertex that passed through the vertex shader…

It’s simple: for each vertex you can have zero, one, or multiple texture coordinates. For each primitive, e.g. a triangle made up of three vertices, fragments are generated that cover the approximate area the primitive occupies after projection. Values at the vertices are simply interpolated and the result of the interpolation is passed to the fragment shader. For instance, the following code


    v_texCoord = vec2(gl_MultiTexCoord0);

simply takes the first of the possible texture coordinates and assigns it to the varying variable v_texCoord in the vertex shader, i.e. the value that will be interpolated and passed on to the next stage (e.g. the fragment shader). When the fragment shader is invoked for a fragment belonging to the primitive in question, v_texCoord in the fragment shader will hold the texture coordinate that has been interpolated between the three vertices. Nowadays there are multiple interpolation modes, with the default being perspective-correct interpolation across the primitive being rendered.
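To make the interpolation concrete, here is a rough sketch in GLSL syntax of what the rasterizer conceptually does for a single fragment - this is just an illustration, not actual driver code, and the barycentric weights b are something the hardware computes for you:


// Illustration only: conceptual interpolation of a per-vertex value.
// t0, t1, t2 are the values written to v_texCoord at the three vertices,
// b holds the fragment's barycentric weights with b.x + b.y + b.z == 1.0.
vec2 interpolateVarying(vec2 t0, vec2 t1, vec2 t2, vec3 b)
{
    return b.x * t0 + b.y * t1 + b.z * t2;
}

With perspective-correct interpolation (the default), the weights are additionally corrected by the vertices’ clip-space w before the weighted sum, but the idea is the same.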

Note that interpolation happens to all kinds of values written in stage n which are still accessible in stage n+m - not just texture coordinates. For instance, it’s common to also determine a vector from a vertex to a light source, interpolate this direction and the vertex normals across the primitive, and do per-fragment lighting in the fragment shader using the interpolated light vector, as sketched below.
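Here is a hedged sketch of that idea in the same old-style GLSL as above (the uniform name u_lightPos and the exact lighting model are my own choices, not something from this thread):


/* vertex shader - per-fragment diffuse lighting sketch */
varying vec3 v_normal;
varying vec3 v_lightDir;
uniform vec3 u_lightPos;   // light position in eye space (assumed)

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vec3 eyePos = vec3(gl_ModelViewMatrix * gl_Vertex);
    v_normal   = gl_NormalMatrix * gl_Normal;   // interpolated across the primitive
    v_lightDir = u_lightPos - eyePos;           // interpolated across the primitive
}

/* fragment shader */
varying vec3 v_normal;
varying vec3 v_lightDir;

void main()
{
    // interpolation does not preserve vector length, so re-normalize per fragment
    vec3 N = normalize(v_normal);
    vec3 L = normalize(v_lightDir);
    float diffuse = max(dot(N, L), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}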

Thanks !

If I understand correctly, varying variables have multiple instances in the rendering pipeline:
-> one instance for each vertex of a primitive, so three assignments are performed by the vertex shader for a triangle
-> one instance for the fragment shader, where the value is interpolated from the values assigned at each vertex

Am I right?

Correct, although I don’t know about the term instance here. Having the same name for a variable in multiple shader stages is merely necessary to have a matching interface. Otherwise you’d need some kind of mapping between the output declaration and the input declaration, which would make the language and compilers more complicated and probably less efficient. Given that, and the knowledge that what’s assigned to the same variable at the vertices will be interpolated, you know that you’re going to have some value to work with in subsequent stages. How GPUs actually handle the storage and fetching of values for each invocation of each shader is, to my knowledge, not specified and is left to GL implementors. All the GLSL spec says is that input and output variables are “copied in” or “copied out”, so values are presumably moved to a shader core’s local memory where possible.

BTW, from GLSL 1.30 up to the current 430 you don’t use varying variables anymore (deprecated in 1.30, removed from the core profile in 1.40). Instead the same concept is expressed with in/out qualifiers:


//vertex shader
...
out vec2 v_texCoord;
...

// fragment shader
...
in vec2 v_texCoord;
...

This gives you a visually clearer intuition of what’s coming in and what’s going out.
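For comparison, the shader pair from the beginning of the thread rewritten against GLSL 330 might look roughly like this - note that the names u_mvp, a_position, a_color and a_texCoord are made up, since the built-ins gl_ModelViewProjectionMatrix, gl_Vertex, gl_Color and gl_MultiTexCoord0 are gone in the core profile and you have to supply your own uniforms and attributes:


// vertex shader (GLSL 330 core)
#version 330 core
uniform mat4 u_mvp;        // replaces gl_ModelViewProjectionMatrix
in  vec4 a_position;       // replaces gl_Vertex
in  vec4 a_color;          // replaces gl_Color
in  vec2 a_texCoord;       // replaces gl_MultiTexCoord0
out vec4 v_color;
out vec2 v_texCoord;

void main()
{
    gl_Position = u_mvp * a_position;
    v_color    = a_color;
    v_texCoord = a_texCoord;
}

// fragment shader (GLSL 330 core)
#version 330 core
uniform sampler2D tex0;
in  vec4 v_color;
in  vec2 v_texCoord;
out vec4 fragColor;        // replaces gl_FragColor

void main()
{
    fragColor = texture(tex0, v_texCoord) * v_color;  // texture() replaces texture2D()
}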

That’s perfect ! Thanks !

I will try to read the GLSL reference more carefully to understand the differences between the GLSL versions and how to handle them…

If you’re programming new code, drop everything you wrote and start at least with GL 3.3 core and GLSL 330. If you need to understand old code, that’s fine - if you don’t, don’t bother with legacy stuff. BTW, we’re at GL 4.3 and GLSL 430 now. Can you imagine how you’ll feel in a few months when you realize that GL 2.1 and GLSL 120 don’t cut it? :wink:

Check out Alfonse’s tutorial.

You mean OpenGL 4.3 core is supported by almost all 2.1 drivers with their extensions? GLSL 430 <-> GLSL 120 too?

It’s the other way around: every GL 4.3 capable driver, both from AMD and NVIDIA, fully supports every function from GL 1.2 onwards - from the stone ages of OpenGL up until today. If you want to write code against GL 2.1, it’s technically ok to do so. However, imagine some new feature you want to use that isn’t specified against earlier versions of the GL and relies on functions only available in newer revisions. At that point you’d have to employ techniques from different GL revisions to make your code work, probably adding shaders written in GLSL 1.20 and GLSL 330 or 430 or whatever. As your code grows, you find yourself constantly mixing old, obsolete, less performant, less flexible concepts with new ones that were designed to replace them.

It goes even further. For instance, you can make shading programs with a version higher than 1.30 behave compatibly towards GLSL 1.20 and 1.30 (the latter deprecated but didn’t remove features; that happened with GLSL 1.40), and you can use removed built-ins like gl_ModelViewProjectionMatrix. Now as soon as you think, “how about using a core context”, none of your compatibility shader stuff will compile anymore and you’ll have to change every single shader using removed features. The same goes for the GL: as soon as you use a core context, none of the compatibility stuff will work.

Now you might be wondering, why would I use a core context? Well, the thing is that you actually gain nothing out of using a core context except for enforced core profile compliance - which is nice since it forces you to do it right. But gains in terms of performance, like one might hope for, have yet to be proven.
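To illustrate the shader side of that: with a compatibility profile you can declare a modern version and still use the removed built-ins, but the very same source will be rejected in a core context. A minimal example (essentially the vertex shader from the top of the thread):


// Compiles under "#version 330 compatibility", fails under "#version 330 core"
// because gl_ModelViewProjectionMatrix, gl_Vertex and gl_MultiTexCoord0 were removed.
#version 330 compatibility
out vec2 v_texCoord;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    v_texCoord  = gl_MultiTexCoord0.xy;
}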

If you didn’t get all that, rest assured: most of us here don’t either! It’s a frickin’ mess. All this could have been avoided if the ARB had settled for real deprecation and actual, mandatory removal, breaking all GL 2.1 backwards compatibility. But business interests of several companies, i.e. members of the ARB, prevented this:

While all major vendors phase out older hardware at some point, i.e. they provide legacy drivers which still support some old GPU but the following drivers don’t, they still carry everything that made up GL revisions from GL 1.2 up until today in their drivers. Their promise is basically: “You buy new hardware and we make sure everything works as expected.” This makes the whole deprecation mechanism a rusty, dull sword. You might ask, “how far can this go? why would they do this to themselves? isn’t driver maintenance and development becoming harder and harder with each revision?”. Probably yes, but obviously the benefits outweigh the drawbacks.

The “solution” for this was to introduce the compatibility and core profiles. The profile is selected at context creation by passing a flag designating a compat or core context. With a compat context, everything stays as it is. With a core context, however, you’re forced to use GL 3.1+ and GLSL core functionality, i.e. what’s specified in the GL 3.1+ core specifications and the core parts of the GLSL 1.40+ specifications.

To avoid all this: just try to be as current as possible. If you’ve got GL 4 capable hardware at your disposal, learn and use GL 3.3 to 4.3 and GLSL 330 to 430; if you’ve got only GL 3 hardware, learn GL 3.3 and GLSL 330. GL 3 is forward compatible, so everything you write in GL 3 is still well and good in GL 4. The same goes for minor versions: GL 3.1 stuff is still valid in GL 3.3, GL 4.0 stuff is still valid in GL 4.2.

Only if you’ve got no way to write GL3+ code should you fall back to GL 2.1.

Although this is not really reassuring… :slight_smile: many thanks for the clarification! I will try to use GL 3.1 as my main OpenGL version and provide compatibility for older GL versions as far as I can find the motivation.

Thanks again for your great replies !

If you go for GL3, use GL 3.3 instead of 3.1 (and GLSL 330 instead of 1.40). After all, why would you want to ignore functionality (i.e. stuff that came with 3.2 and 3.3) that your hardware fully supports anyway with any current driver?

I have just two OpenGL implementations on my system:
-> my GeForce 6600 driver, which supports GL 2.1 and GLSL 1.20
-> the Mesa driver, which emulates GL 3.0 and GLSL 1.30

So in reality I’m limited to GL 3.0 for debugging and GL 2.1 for performance analysis…

FYI: you can buy GL 4.3 hardware for like $40-50; it’ll even be faster than your GeForce 6600.

Also, there are a lot of additions from higher GL versions that aren’t hardware-specific. You can use them via core extensions, so that when you upgrade your hardware, you won’t even have to change your code.

Thanks! But the GeForce 6600 GT is an AGP card, not a PCI Express one… I would need to upgrade my whole configuration :frowning:
