per pixel lighting 😕

Can anyone explain per-pixel lighting? And if I want a sphere to be the source of light shining on the scene, how do I do that?

Thanks

I don't know much about per-pixel lighting, but here goes:
Imagine you have a low-resolution sphere. If you point a light at that sphere, every face will receive a different amount of light depending on whether it faces the light source or not. Now subdivide the sphere a little more and it will look better lit, and so on, until you reach the point where each face of the sphere is so small it occupies the space of a single pixel. You have per-pixel lighting!
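
To make that idea concrete, here is a minimal sketch (not from the posts above, just an illustration) of the diffuse term that per-pixel lighting evaluates at every covered pixel instead of only at the vertices. The names are made up for the example.

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3  vsub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float vdot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  vnormalize(vec3 v)
{
    float len = (float)sqrt(vdot(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Diffuse (Lambert) term at one surface point.
 * Per-vertex lighting calls this once per vertex and interpolates the
 * resulting colour across the face; per-pixel lighting evaluates it for
 * every pixel the surface covers, using interpolated position and normal. */
float diffuse_at_point(vec3 surface_pos, vec3 surface_normal, vec3 light_pos)
{
    vec3 L = vnormalize(vsub(light_pos, surface_pos)); /* direction to the light */
    float ndotl = vdot(vnormalize(surface_normal), L);
    return ndotl > 0.0f ? ndotl : 0.0f;                /* clamp back-facing points */
}
```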

To start with, it would help to have a GeForce card of some type (not essential, but it does make things faster).

Having a good understanding of buffers and OpenGL extensions is also necessary.

Real-time per-pixel lighting, as far as I've seen, is a bit of a blag. There is no simple way to just say "calculate each pixel's colour and draw it". As previously mentioned, you want every pixel to have its own lighting calculation, and doing that with geometry alone is not practical for speed reasons. Instead, you can use textures to encode normal vectors.

Think about it for a second. With an RGB texture, each texel gives you three values in the range 0-255. Subtract 128 and you have -128 to 127; scale that and it runs from roughly -1 to +1. Now you have three signed values, which means it is possible to encode a normal vector into a texture for use with bump mapping etc.
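
A small sketch of that packing, following the subtract-128-and-rescale idea described above (hardware extensions may use a slightly different mapping, so treat the exact constants as an assumption):

```c
typedef struct { unsigned char r, g, b; } rgb8;

/* Pack a unit normal (components assumed to be in [-1, +1]) into an RGB texel. */
rgb8 pack_normal(float nx, float ny, float nz)
{
    rgb8 t;
    t.r = (unsigned char)(nx * 127.0f + 128.0f);
    t.g = (unsigned char)(ny * 127.0f + 128.0f);
    t.b = (unsigned char)(nz * 127.0f + 128.0f);
    return t;
}

/* Recover an approximate signed component from a byte: subtract 128, rescale. */
float unpack_component(unsigned char c)
{
    return ((float)c - 128.0f) / 127.0f;
}
```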

It's quite hectic to try and explain, but check out the examples at www.nvidia.com from people who know more about it than I do.

A number of generalisations can be made:

  1. Using NVIDIA's register combiners to perform the lighting and fog calculations will help speed up the application, since plain GL_ARB_multitexture is fairly limited. It will also give you extra control.

  2. Using multiple rendering passes lets you perform the lighting calculations as a series of additive equations. Heavy use of textures will be needed, bearing in mind you may have bump maps, gloss maps, env maps, and surface colour. These need to be combined as you perform the rendering passes to give you your final surface (see the sketch after this list).

  3. Results from the equations can be stored in unused buffers, e.g. auxiliary buffers, alpha buffers, etc.
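
Here is a rough sketch of the additive multi-pass structure from point 2. It is only an outline: bind_textures_for_pass() and draw_scene_pass() are placeholders for your own texture setup and geometry submission, and the per-pass lighting maths (register combiners or otherwise) is assumed to happen inside them.

```c
#include <GL/gl.h>

void bind_textures_for_pass(int pass);  /* placeholder: bind bump/gloss/env/colour maps */
void draw_scene_pass(int pass);         /* placeholder: submit the scene geometry */

void render_lit_scene(int num_passes)
{
    int i;

    /* First pass lays down depth and the first lighting term as normal. */
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);
    bind_textures_for_pass(0);
    draw_scene_pass(0);

    /* Remaining passes add their contribution on top of the framebuffer.
     * GL_EQUAL re-uses the depth laid down by the first pass, and
     * GL_ONE/GL_ONE blending sums the lighting terms per pixel. */
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    for (i = 1; i < num_passes; ++i) {
        bind_textures_for_pass(i);
        draw_scene_pass(i);
    }

    /* Restore common state. */
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);
}
```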

I only know a limited amount about this subject; per-pixel wasn't quite what I wanted, so I didn't go that far. Try to find as many examples as you can, see what's going on, and ask more questions…