In which way does the light illuminate the scene?

Suppose there is a spotlight and three objects in a scene.
How do the diffuse light and the specular highlight of the light illuminate these objects?
Or, to put it another way: when the spotlight is on, how does the lighting API calculate the lighting intensity on the surface of the objects? (dot by dot, or pixel by pixel?)

The fixed function lighting calculates a colour for each vertex, which is then interpolated over the surface of the primitive to obtain the colour for each fragment.

Any result which can be achieved with fixed-function lighting can also be achieved by performing lighting calculations in the application and specifying the resulting colours with glColor().

One consequence of the use of interpolation is that any point in the interior of a primitive cannot be any brighter than the brightest vertex. It doesn’t work well with point light sources which are close to the primitive (relative to the primitive’s size) or with specular reflection with a high shininess value.

Code which uses shaders typically interpolates vectors (normal, light direction, eye direction) and evaluates the intensity (using e.g. Phong or Blinn-Phong models) for each fragment.
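
For illustration, here is a minimal sketch in C of roughly the kind of Blinn-Phong evaluation involved, for a single point (a vertex in the fixed-function case, a fragment in the shader case). It ignores light colour, attenuation and the spotlight cone, and all names (vec3, blinn_phong, kd, ks) are hypothetical helpers, not OpenGL API:

[CODE]
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(dot3(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Blinn-Phong intensity at one point.
 * N = surface normal, L = direction towards the light, V = direction towards
 * the eye (all unit length); kd, ks, shininess = material parameters. */
static float blinn_phong(vec3 N, vec3 L, vec3 V, float kd, float ks, float shininess)
{
    vec3  H    = normalize3((vec3){ L.x + V.x, L.y + V.y, L.z + V.z }); /* half vector */
    float diff = fmaxf(dot3(N, L), 0.0f);                    /* Lambertian diffuse term */
    float spec = (diff > 0.0f) ? powf(fmaxf(dot3(N, H), 0.0f), shininess) : 0.0f;
    return kd * diff + ks * spec;
}
[/CODE]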

[QUOTE=GClements;1271522]The fixed function lighting calculates a colour for each vertex, which is then interpolated over the surface of the primitive to obtain the colour for each fragment.

Any result which can be achieved with fixed-function lighting can also be achieved by performing lighting calculations in the application and specifying the resulting colours with glColor().

One consequence of the use of interpolation is that any point in the interior of a primitive cannot be any brighter than the brightest vertex. It doesn’t work well with point light sources which are close to the primitive (relative to the primitive’s size) or with specular reflection with a high shininess value.

Code which uses shaders typically interpolates vectors (normal, light direction, eye direction) and evaluates the intensity (using e.g. Phong or Blinn-Phong models) for each fragment.[/QUOTE]

Thanks. Now consider no interpolation, just a single pixel or dot on the surface of the objects.
How does the API calculate the intensity for each object respectively?
Pseudocode:

GLfloat blue[] = { 0, 0, 1, 1 };
GLfloat pos[]  = { 1, 1, 0, 1 };              /* w = 1: positional, as a spotlight requires */
glLightfv(GL_LIGHT0, GL_DIFFUSE,  blue);      /* blue diffuse */
glLightfv(GL_LIGHT0, GL_SPECULAR, blue);      /* blue highlight */
glLightfv(GL_LIGHT0, GL_POSITION, pos);
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 60.0f);   /* 60 degree cone */
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);

/* three objects at positions (1,0,0,1), (2,0,0,1), (3,0,0,1) */

[QUOTE=reader1;1271535]Thanks. Now consider no interpolation, just a single pixel or dot on the surface of the objects.
How does the API calculate the intensity for each object respectively?
[/QUOTE]
If you use glShadeModel(GL_FLAT) or a “flat”-qualified shader variable, the value for all fragments is taken from one of the vertices (by default, the last one, although this can be changed with glProvokingVertex() in OpenGL 3.2 and later).
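
For example (a tiny sketch of the flat-shading path, in a compatibility context, since glShadeModel() is not available in core profiles):

[CODE]
glShadeModel(GL_FLAT);                          /* one colour per primitive, no interpolation */
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);  /* OpenGL 3.2+: use the first vertex instead of the last */
[/CODE]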

Otherwise, there is always some form of interpolation. A specific point within a pixel is inverse-projected to the triangle’s barycentric coordinate system, and the vertex attributes are blended accordingly.
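
A much-simplified sketch of that blending (2D, ignoring perspective correction; all names are hypothetical, and this is not what the driver literally runs):

[CODE]
/* Signed double area of triangle (a, b, p); the building block of barycentric weights. */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* Interpolate a per-vertex attribute (v0, v1, v2) at point (px, py)
 * inside the triangle (a, b, c) using barycentric weights. */
static float interpolate(float v0, float v1, float v2,
                         float ax, float ay, float bx, float by,
                         float cx, float cy, float px, float py)
{
    float area = edge(ax, ay, bx, by, cx, cy);
    float w0 = edge(bx, by, cx, cy, px, py) / area;   /* weight of vertex a */
    float w1 = edge(cx, cy, ax, ay, px, py) / area;   /* weight of vertex b */
    float w2 = edge(ax, ay, bx, by, px, py) / area;   /* weight of vertex c */
    return w0 * v0 + w1 * v1 + w2 * v2;
}
[/CODE]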

If you want more details, I suggest that you (attempt to) read the OpenGL specification, then follow up with specific queries in the event that some part of it is unclear.

fixed-function

What is fixed function? Do commands like glLightfv() and glEnable() belong to the fixed function?
Can they be used with shaders?

[QUOTE=reader1;1271538]What is fixed function? Do commands like glLightfv() and glEnable() belong to the fixed function?
Can they be used with shaders?[/QUOTE]
The “fixed function pipeline” refers to the behaviour of OpenGL without shaders, which includes the behaviour of older versions which didn’t support shaders.

It’s “fixed” in the sense that it’s not “programmable”. You can change parameters and you can enable or disable specific features, but you can’t otherwise change the nature of the calculations; those are set in stone (well, in silicon).

Roughly, if a function is listed in the OpenGL 2 reference but isn’t listed in the OpenGL 3 reference, it’s probably related to the fixed-function pipeline.
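
To make the contrast concrete (a hedged sketch; the uniform name "light_diffuse" and the program object are assumptions, and an OpenGL 2.0+ header/loader such as GLEW is presumed for the glUniform* declarations):

[CODE]
/* Fixed-function: hand OpenGL a parameter; the built-in lighting formula uses it. */
static void set_light_fixed_function(void)
{
    const GLfloat diffuse[] = { 0.0f, 0.0f, 1.0f, 1.0f };   /* blue */
    glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
}

/* Programmable: pass the same value to your own shader as a uniform; what is
 * done with it is entirely up to the GLSL code you wrote.
 * Assumes prog is currently bound with glUseProgram(). */
static void set_light_uniform(GLuint prog)
{
    const GLfloat diffuse[] = { 0.0f, 0.0f, 1.0f, 1.0f };
    glUniform4fv(glGetUniformLocation(prog, "light_diffuse"), 1, diffuse);
}
[/CODE]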

Thanks, I don’t need interpolation at present; I know its mathematical implementation. I just want to know how the ray tracing is done.
Or, what is the relationship between glLight and the so-called Phong or Blinn-Phong method (model)?
Or does the glLight function implement Phong-model lighting?
[ATTACH=CONFIG]1124[/ATTACH]

Roughly, if a function is listed in the OpenGL 2 reference but isn’t listed in the OpenGL 3 reference, it’s probably related to the fixed-function pipeline.

Good answer, very simple and clear.

The “fixed function pipeline” refers to the behaviour of OpenGL without shaders, which includes the behaviour of older versions which didn’t support shaders.

I know about the fixed pipeline and the shaders that extend it.

glLight() and glMaterial() set the constant parameters used in the Blinn-Phong shading model. The other parameters are the position and the normal, which are specified separately for each vertex (some of the material colours can be changed from constant parameters to per-vertex parameters using glEnable(GL_COLOR_MATERIAL) and glColorMaterial()).

When lighting is enabled, the fixed-function pipeline uses the Blinn-Phong shading model and the specified parameters to calculate a colour for each vertex.
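
For completeness, a small sketch of supplying those constant parameters (the values are chosen arbitrarily for illustration):

[CODE]
/* Constant material parameters consumed by the fixed-function Blinn-Phong model. */
const GLfloat mat_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialf (GL_FRONT, GL_SHININESS, 60.0f);

/* Optionally let glColor() supply the ambient and diffuse material colours
 * per vertex instead of a constant. */
glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
glEnable(GL_COLOR_MATERIAL);
[/CODE]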

[QUOTE=GClements;1271543]glLight() and glMaterial() set the constant parameters used in the Blinn-Phong shading model.

When lighting is enabled, the fixed-function pipeline uses the Blinn-Phong shading model and the specified parameters to calculate a colour for each vertex.[/QUOTE]

Great, we are in agreement.
When lighting is enabled, does the FFP (fixed-function pipeline) automatically take over the instruction, find the intersections of the rays with the objects, and execute the Phong-model algorithm (i.e. calculate the lighting intensity of each pixel according to this ray-tracing model)? And is the result output to the framebuffer and then shown on the screen?

Just like some FFT hardware: when you issue the FFT command, it automatically calculates the result.

If we don’t want to use the Phong model, do we have to code it in shaders? And if we have an idea for tracing the emitted rays in a different way, do we have to write that code in shaders and ignore glLight() and the other fixed-function lighting functions?

The fixed-function pipeline performs the calculation for each vertex. Fragment colours are obtained by interpolation.

If you use shaders, you can perform the calculation for each fragment, which gives more accurate results. Or you can use any other shading model.

Per-vertex lighting can be performed by the application; just calculate vertex colours using whichever shading model you wish and pass the resulting colours via glColor() or glColorPointer().

Per-fragment lighting realistically requires the use of a fragment shader. You can use textures as light maps, but if the lighting is dynamic, regenerating textures each frame is expensive.
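
A minimal sketch of the per-vertex, application-side route described above (compute_vertex_colour() stands in for whatever shading model the application chooses and is not a GL function; NUM_VERTS and the position data are assumed to exist):

[CODE]
enum { NUM_VERTS = 36 };                       /* hypothetical vertex count */
GLfloat positions[NUM_VERTS * 3];              /* filled elsewhere by the application */
GLfloat colours[NUM_VERTS * 3];

/* Light each vertex with any model you like, on the CPU. */
for (int i = 0; i < NUM_VERTS; ++i)
    compute_vertex_colour(&positions[i * 3], &colours[i * 3]);

/* Hand the precomputed colours to OpenGL; they are interpolated as usual. */
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);
glColorPointer(3, GL_FLOAT, 0, colours);
glDrawArrays(GL_TRIANGLES, 0, NUM_VERTS);
[/CODE]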

[QUOTE=GClements;1271545]The fixed-function pipeline performs the calculation for each vertex. Fragment colours are obtained by interpolation.

If you use shaders, you can perform the calculation for each fragment, which gives more accurate results. Or you can use any other shading model.

Per-vertex lighting can be performed by the application; just calculate vertex colours using whichever shading model you wish and pass the resulting colours via glColor() or glColorPointer().

Per-fragment lighting realistically requires the use of a fragment shader. You can use textures as light maps, but if the lighting is dynamic, regenerating textures each frame is expensive.[/QUOTE]

Yes, yes: it calculates each vertex in the FFP, and interpolates every pixel automatically in the fragment stage.
And if we use programmable shaders, we can achieve more flexibility.
Conclusion: the calculation of the intersection of the rays emitted from the light with the surfaces of the objects is finished in the FFP unit, and the light intensity of each pixel is obtained inside the FFP (including the fragment stage). All of this processing is done automatically by the display card’s driver.
We can also use shaders to extend this function.
Thank you very much.

Next I shall ask some questions about texture mapping.