How does the fragment shader determine the value for each pixel?

I’ve been trying to figure this out for days. This is the code in my fragment shader’s main function:


vec3 specular = vec3(distance(Pos, lightpos) / 8.0f); // brightness grows with distance to the light
vec3 result = specular * Color;
outColor = vec4(result, 1.0f);

Pos is the vertex position, which is output from the vertex shader:

Pos = vec3(model * vec4(position, 1.0f));

lightpos is just the position of the light.
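
For reference, the vertex shader side is roughly this (trimmed-down sketch; the color attribute and the view/projection uniforms are assumptions about my setup, only the Pos line above is exact):

layout(location = 0) in vec3 position;   // per-vertex attribute
layout(location = 1) in vec3 color;      // per-vertex attribute (assumed)

uniform mat4 model;
uniform mat4 view;        // assumed
uniform mat4 projection;  // assumed

out vec3 Pos;    // world-space position, picked up by the fragment shader
out vec3 Color;

void main()
{
    Pos = vec3(model * vec4(position, 1.0f));   // the line quoted above
    Color = color;
    gl_Position = projection * view * model * vec4(position, 1.0f);
}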

This is the effect I get. The closer the light is to the plane, the darker those pixels get:
[attached screenshot]

But why do I get this effect? All I do is pass the vertex position, so why do I get a gradient? It looks as if the fragment shader measures the distance from each pixel rather than from the vertex, but Pos is the vertex position. I just don’t understand how the fragment shader does this. Can someone please explain it to me?

Here’s the question stated a little more clearly:
If Pos refers to the vertex position, and the brightness of the plane is proportional to the distance of each vertex (Pos) from the light position, why does it look as if it’s proportional to the fragment position instead? And how would my fragment shader code have to change if I want the brightness to depend on the distance of the vertex rather than of the fragment? Basically, what if I want this effect instead (the closer the light, the brighter those vertices become):
[attached screenshot]

Thanks

When you pass values from the vertex shader to the fragment shader, these values do indeed get interpolated.
You can suppress this interpolation by declaring Pos as flat:
flat out vec3 Pos;
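
On the fragment shader side the matching input has to carry the same qualifier. A minimal sketch of that shader, reusing the names from your post (Pos, Color, lightpos, outColor):

flat in vec3 Pos;        // matches the flat out in the vertex shader
in vec3 Color;
uniform vec3 lightpos;
out vec4 outColor;

void main()
{
    // with flat there is no interpolation: every fragment of a triangle
    // receives Pos from the provoking vertex (by default the last vertex),
    // so the whole triangle gets one distance value instead of a gradient
    vec3 brightness = vec3(distance(Pos, lightpos) / 8.0f);
    outColor = vec4(brightness * Color, 1.0f);
}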

But how exactly does that work? Say the vertex shader has finished processing vert1 and now it’s the fragment shader’s turn. Does the value of Pos change on each run of the fragment shader? I just don’t understand this.

Yes, the value changes for every fragment. It is interpolated from the per-vertex data and the location of the fragment within the triangle, roughly as in the sketch below.

Maybe take a look at this:
http://www.geeks3d.com/20130514/opengl-interpolation-qualifiers-glsl-tutorial/
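
To make the “interpolated from the per-vertex data” part concrete: conceptually, for each fragment the hardware computes something like the function below. This is only a conceptual sketch (ignoring perspective correction), not code you write yourself:

// bary holds the fragment's barycentric coordinates inside the triangle,
// i.e. three weights >= 0 with bary.x + bary.y + bary.z == 1.0
vec3 interpolateAcrossTriangle(vec3 v0, vec3 v1, vec3 v2, vec3 bary)
{
    // the closer the fragment is to a vertex, the bigger that vertex's weight
    return bary.x * v0 + bary.y * v1 + bary.z * v2;
}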

Hmm, ok. So just to make sure: for every three vertices forming a triangle, when the fragment shader runs for each pixel inside that triangle, Pos takes on the interpolated position of that pixel? And then the same repeats for the next three vertices?

It’s really not that complicated.
You provide data per vertex: positional data, texture coordinates, normal vectors and maybe more values.
By default this data is simply interpolated across the primitive in a perspective-correct manner (smooth). This holds for all kinds of data!

Take texture coordinates, for example. You only supply values for every vertex, but what we actually need in the fragment shader are texture coordinates for that specific fragment. So OpenGL interpolates the coordinates for every fragment from the values of the three vertices that make up the triangle.
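
A minimal sketch of that (the identifiers uv, TexCoord and tex are just assumptions):

// vertex shader side: one texture coordinate per vertex
layout(location = 0) in vec3 position;
layout(location = 2) in vec2 uv;
out vec2 TexCoord;

void main()
{
    TexCoord = uv;                        // just handed through per vertex
    gl_Position = vec4(position, 1.0f);   // transform as usual (omitted here)
}

// fragment shader side: TexCoord arrives already interpolated for this fragment
in vec2 TexCoord;
uniform sampler2D tex;
out vec4 outColor;

void main()
{
    outColor = texture(tex, TexCoord);    // sampled with the per-fragment coordinate
}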

The same of course also holds for normal vectors. This is how we actually get per-pixel lighting nowadays: we supply values per vertex, and OpenGL interpolates that data for each fragment.
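
For instance, a simple per-pixel diffuse term could look like this (a sketch; Normal is assumed to be a world-space normal passed from the vertex shader):

// fragment shader sketch: per-pixel diffuse lighting with interpolated inputs
in vec3 Normal;          // interpolated per fragment
in vec3 Pos;             // interpolated world-space position
in vec3 Color;
uniform vec3 lightpos;
out vec4 outColor;

void main()
{
    // interpolation does not preserve length, so renormalize per fragment
    vec3 N = normalize(Normal);
    vec3 L = normalize(lightpos - Pos);
    float diff = max(dot(N, L), 0.0f);
    outColor = vec4(diff * Color, 1.0f);
}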

Hmm, ok. I think I get it now. Thanks a lot.