Specular reflections

Hi guys,

  • Aim: Here is what I am trying to do: imagine you have a print in your hands, and you view it directly under a light source. As you move and rotate it in all directions, the specular component affects the colours of the image. I am trying to model this specular component very accurately.

At the moment, I have written code that displays an RGB image read from a file and allows the user to view it from all sorts of angles. The way I am doing this is by using texture mapping. More specifically, as textures can only be 2^m x 2^m squares (128x128 for my graphics card), I divide the original image into several quadrants, create a quad for each of them, and apply the corresponding texture (i.e. the corresponding area of the original image) to each of them. This is all fast and good, but I am now moving on to the next stage.
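For reference, the tiling loop looks roughly like this (a simplified sketch: image_rgb, img_w and img_h are placeholders, it assumes the image dimensions are multiples of 128, and in the real code the textures are of course created once up front, not every frame):

#include <string.h>
#include <GL/gl.h>

#define TILE 128

/* Upload one 128x128 texture per tile and draw a textured quad for it.
   Assumes GL_TEXTURE_2D is already enabled. */
void draw_tiled_image(const unsigned char *image_rgb, int img_w, int img_h)
{
    unsigned char tile[TILE * TILE * 3];
    int tx, ty, y;

    for (ty = 0; ty < img_h; ty += TILE) {
        for (tx = 0; tx < img_w; tx += TILE) {
            GLuint tex;

            /* copy the 128x128 sub-block into a contiguous buffer */
            for (y = 0; y < TILE; ++y)
                memcpy(tile + y * TILE * 3,
                       image_rgb + ((ty + y) * img_w + tx) * 3,
                       TILE * 3);

            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TILE, TILE, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, tile);

            /* one quad per tile, placed in image coordinates */
            glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2i(tx,        ty);
            glTexCoord2f(1.0f, 0.0f); glVertex2i(tx + TILE, ty);
            glTexCoord2f(1.0f, 1.0f); glVertex2i(tx + TILE, ty + TILE);
            glTexCoord2f(0.0f, 1.0f); glVertex2i(tx,        ty + TILE);
            glEnd();
        }
    }
}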

  • Question 1: I am now trying to implement the specular reflection. I know that OpenGL provides pre-defined lighting models, but that is not good enough as I am aiming for high accuracy. So my question is: what are the choices for implementation, and what is the best method in your opinion? [Please note this is an open question by design; I have already started to look into this problem, but I phrased the question this way so as not to impose any restrictions on the possible answers.]
    More specifically, I would need something that:
    • allows me to use N light sources (placed at different locations, and possibly also of different colours)
    • is flexible, as several reflection models could be employed (Phong and co.)

Question 2: The research I’ve done on the topic made me think that shaders might be the solution I am looking for. Is this so?
However, I am by no means an expert, so I am rather confused as to how this could be applied. From what I gathered, there are two kinds of shaders, vertex and pixel, so which one would be more suitable for the lighting operations I am trying to implement? And can they meet the requirements I specified above, i.e. N light sources? Last but not least, does somebody know of tutorials or examples that illustrate this kind of operation?

Many thanks

Alexis

Shaders (vertex + pixel) are the best answer. Very flexible, and you can do as many lights as you want (well, almost: in practice there are restrictions on the number of instructions, loops, etc.).
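For instance, in GLSL the per-light part of a fragment shader is basically just a loop. A rough, untested sketch (it assumes point lights, uses the built-in gl_LightSource / gl_FrontMaterial state, and the varyings normal / ecPos would have to be written by a matching vertex shader):

// fragment shader sketch: accumulate Blinn-Phong diffuse + specular over N lights
#define NUM_LIGHTS 2

varying vec3 normal;   // eye-space normal, from the vertex shader
varying vec3 ecPos;    // eye-space position, from the vertex shader

void main()
{
    vec3 n = normalize(normal);
    vec3 v = normalize(-ecPos);            // direction towards the viewer
    vec4 color = vec4(0.0);

    for (int i = 0; i < NUM_LIGHTS; i++) {
        vec3 l = normalize(gl_LightSource[i].position.xyz - ecPos);
        vec3 h = normalize(l + v);         // half vector
        float diff = max(dot(n, l), 0.0);
        float spec = pow(max(dot(n, h), 0.0), gl_FrontMaterial.shininess);
        color += gl_LightSource[i].diffuse  * diff
               + gl_LightSource[i].specular * spec;
    }
    gl_FragColor = color;
}

Swapping in another reflection model just means changing the body of that loop.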

Then it depends on your hardware support…
If you really can’t use more than 128x128 textures on your graphics card … I guess you are out of luck :smiley: (is that a 3dfx Voodoo 1???).

GLint texSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &texSize); /* largest texture dimension the driver supports */

There are all sorts of shader types and versions; the modern choice is GLSL (high level, OpenGL 2.0).
Older, assembly-like ones:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_program.txt
http://oss.sgi.com/projects/ogl-sample/registry/ARB/fragment_program.txt
Then you have vendor-specific extensions etc.

For tutorials and examples :
http://www.lighthouse3d.com/opengl/glsl/index.php?shaders

Basically, search google for “opengl per-pixel lighting”

Many thanks for the reply.

Originally posted by ZbuffeR:
Shaders (vertex + pixel) are the best answer. Very flexible, and you can do as many lights as you want (well, almost: in practice there are restrictions on the number of instructions, loops, etc.).

I’d like to ask some more specific questions about how the program’s architecture is going to be affected if I use shaders. Is the following approach correct: define the quads, the textures on them, the various lights, their colours and their positions in the standard OpenGL way, and then implement a pixel shader (overload the standard method for fragment operations?) to perform the per-pixel lighting computation?

If so, in the pixel shader implementation, does OpenGL provide access to values computed from the previous stage, such as light vectors, viewing direction vectors, light colours, and the fragment colour (I believe a pixel is called a fragment at this stage), from which I could perform my own set of operations to determine the final pixel colour?

If you really can’t use more than 128x128 textures on your graphics card … I guess you are out of luck :smiley: (is that a 3dfx Voodoo 1???).
No it is not, it’s just that I was not fully awake when I tested it :stuck_out_tongue:

Is the following approach correct: define the quads, the textures on them, the various lights, their colours and their positions in the standard OpenGL way, and then implement a pixel shader (overload the standard method for fragment operations?) to perform the per-pixel lighting computation?
You have to do more than just the lighting, i.e. sample the texture too.
Basically, all fixed-function fragment operations must be replaced by the fragment shader (and the same goes for the vertex shader).
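As a rough sketch (again untested; ‘tex’ is whatever sampler uniform you bind from the application, and the varying names are made up), a minimal pair doing texturing plus per-pixel lighting could look like this:

// vertex shader: replaces the fixed-function transform and passes along
// what the fragment shader needs
varying vec3 normal;   // eye-space normal
varying vec3 ecPos;    // eye-space position

void main()
{
    normal = gl_NormalMatrix * gl_Normal;
    ecPos  = vec3(gl_ModelViewMatrix * gl_Vertex);
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: replaces fixed-function texturing and lighting
uniform sampler2D tex;
varying vec3 normal;
varying vec3 ecPos;

void main()
{
    vec4 base = texture2D(tex, gl_TexCoord[0].st);   // sample the texture yourself
    vec3 n = normalize(normal);
    vec3 v = normalize(-ecPos);
    vec3 l = normalize(gl_LightSource[0].position.xyz - ecPos);
    float diff = max(dot(n, l), 0.0);
    float spec = pow(max(dot(normalize(l + v), n), 0.0), gl_FrontMaterial.shininess);
    // a per-light loop (as in the earlier post) would replace the single light here
    gl_FragColor = base * (gl_LightSource[0].ambient + gl_LightSource[0].diffuse * diff)
                 + gl_LightSource[0].specular * spec;
}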

does OpenGL provide access to values computed from the previous stage, such as light vectors, viewing direction vectors, light colours, and fragment colour
This is the way, but note that there is no such thing as a ‘fragment color’ as an input to the fragment shader (it is the output). You do have the interpolated vertex color, though.

I am not really an expert at this; read the tutorial at lighthouse3d, it is very detailed:
http://www.lighthouse3d.com/opengl/glsl/

If you need things such as a light vector or a normal vector in your fragment shader, it’s best to compute them in a vertex shader, as they are not available from the standard OpenGL fixed-function pipeline. Of course you can use a fragment shader only, but the inputs you’d have available then are very limited and only useful for complex texture combining, not for per-pixel lighting calculations.

Basically the vertex shader replaces the default implementation of transform and lighting, and the fragment shader replaces the default per-fragment operations like (multi)texturing. “Overload” is perhaps the wrong word; “overwrite” is more accurate, because you really replace everything, not only alter it…

In the vertex shader you can output arbitrary values, and these values are interpolated across the polygon and available as input to the fragment shader.

The program itself won’t really change, except that you have to load the shader. You have access to the material properties and per-vertex attributes in your shader, so you can leave the part of the program that draws things the same and just “plug in” your shaders into the setup code.
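For example, loading and activating a GLSL program is only a handful of calls in the setup code (a sketch without error checking; loadTextFile and the file names are placeholders for however you read the shader sources):

/* compile the two shaders */
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
const char *vsrc = loadTextFile("lighting.vert");
const char *fsrc = loadTextFile("lighting.frag");
glShaderSource(vs, 1, &vsrc, NULL);
glShaderSource(fs, 1, &fsrc, NULL);
glCompileShader(vs);
glCompileShader(fs);

/* link them into a program */
GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);

/* activate before drawing; the existing drawing code stays as it is */
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "tex"), 0);  /* point the sampler at texture unit 0 */

In real code you would also check the compile and link status with glGetShaderiv / glGetProgramiv and read the info logs.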