Specular Lighting with Pre-transformed Data

I am adding OpenGL acceleration to a mature software rendering platform. The easiest way to do this was to use our existing transform, clipping, and sorting, and just pass 2D primitives to the OpenGL pipeline. I set up an orthographic projection for the viewport. I’m passing projected xyz coordinates, but I’m passing world-coordinate lighting and normal information (we only use directional lights, so this works fine).
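
(For anyone following along, the setup looks roughly like this; the viewport size, depth range, and function names below are just placeholders, not our actual code.)

```c
/* Rough sketch of the setup described above.  The projection is a plain
 * ortho mapping of the already-projected coordinates; normals stay in
 * world space. */
#include <GL/gl.h>

void begin_frame(int width, int height)
{
    glViewport(0, 0, width, height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, height, 0.0, -1.0, 1.0);   /* screen-space x/y, z kept for depth */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                              /* vertices are already transformed */
}

void emit_vertex(float sx, float sy, float sz,     /* projected screen coordinates */
                 float nx, float ny, float nz)     /* world-space normal */
{
    glNormal3f(nx, ny, nz);
    glVertex3f(sx, sy, sz);
}
```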

Everything works great, except that specular lighting is wrong, because the GL pipeline has no idea where my eyepoint is, so it cannot correctly compute the Blinn halfway vectors.
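
To be concrete about where the eye point enters: the Blinn term needs the halfway vector H = normalize(L + V), and V is the direction from the surface toward the eye, so without an eye position there is no V and no H. A quick sketch in plain C (all names are made up):

```c
/* Sketch of the Blinn specular term.  light_dir points from the surface
 * toward the light (directional light, so it is constant per frame). */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3  add(vec3 a, vec3 b)  { vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static vec3  sub(vec3 a, vec3 b)  { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float dot(vec3 a, vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3  norm(vec3 a)         { float l = sqrtf(dot(a, a)); vec3 r = { a.x / l, a.y / l, a.z / l }; return r; }

float blinn_specular(vec3 n, vec3 light_dir, vec3 surface_pos, vec3 eye_pos, float shininess)
{
    vec3 v = norm(sub(eye_pos, surface_pos));   /* view vector: surface -> eye */
    vec3 h = norm(add(norm(light_dir), v));     /* halfway vector H = (L + V) / |L + V| */
    float ndoth = dot(norm(n), h);
    return ndoth > 0.0f ? powf(ndoth, shininess) : 0.0f;
}
```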

Can anyone here suggest a work-around to get specular highlights into the right places without having to incur the trouble of setting up OpenGL to do the perspective projection identically to our existing software?

Also, our software rendering engine uses a real Phong shader for lighting interpolation, and everything I’ve found about OpenGL suggests that it only supports Gouraud interpolation. Is there any extension that asks the hardware to better interpolate normals?

Thanks in advance for any clues!

-Joshua

For the specular lighting, I am pretty sure that OpenGL transforms light positions and directions by the modelview matrix that is current when the lighting values are set, so what you could (possibly) do is set up your normal perspective and view matrices, set up the lighting, set your ortho matrix, and then render. (You may want to look this up in the Red Book under lighting.)
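
Something along these lines, maybe (completely untested, and every name and value below is a placeholder); the point is only the order of operations:

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* Set the light under the "real" view matrix, since glLight* captures the
 * position/direction in eye space using the current modelview matrix, then
 * switch back to the ortho setup before rendering the pre-transformed
 * primitives. */
void set_light_then_ortho(const GLfloat view_matrix[16],
                          double aspect, int width, int height)
{
    const GLfloat light_dir[4] = { 0.3f, 0.7f, 0.65f, 0.0f };   /* w = 0: directional */

    /* 1. Real perspective + view matrices, just while the light is specified. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, aspect, 1.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(view_matrix);
    glLightfv(GL_LIGHT0, GL_POSITION, light_dir);   /* direction captured here */

    /* 2. Back to the ortho setup for the pre-transformed geometry. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
```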

As for interpolation, OpenGL calculates the specular per-vertex and just interpolates the color. (Also look up the separate specular color setting.)
If you want better than this, you will have to use a shader (either ARB VP/FP or GLSL).
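
The separate specular setting is a one-liner if your headers are OpenGL 1.2 or newer (older drivers expose it as GL_EXT_separate_specular_color); roughly:

```c
#include <GL/gl.h>

/* Keep the per-vertex specular in a secondary color that is added after
 * texturing, instead of being folded into the primary color and washed
 * out by the texture modulate. */
void enable_separate_specular(void)
{
    glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR);
}
```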

Yuck, what a mess. I think your division of labor is in the wrong place.

However, there may be a solution.

Just pass through all vertex data in a vertex shader. Perform the Phong shading in the fragment shader using supplied vectors (attributes -> varyings).

You’ll still need to do something about your view vector: your software must know what this is per vertex after transformations. Pass it in as a per-vertex attribute so that it reaches your fragment shader, and it should be workable.
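
Very roughly, something like this (untested; the viewVec attribute name and the exact lighting terms are invented, and it assumes the light direction was specified in world space under an identity modelview, to match your world-space normals). You’d compile and link these with the usual glCreateShader / glShaderSource / glCompileShader / glLinkProgram calls and feed viewVec in through a generic vertex attribute (glGetAttribLocation plus glVertexAttrib3f or a vertex attribute array):

```c
/* Pass-through vertex shader: forwards the world-space normal and the
 * per-vertex view vector; the fragment shader does Blinn/Phong per pixel. */
static const char *phong_vertex_src =
    "attribute vec3 viewVec;       /* per-vertex view vector supplied by the app */\n"
    "varying vec3 vNormal;\n"
    "varying vec3 vView;\n"
    "void main() {\n"
    "    vNormal = gl_Normal;      /* already in world space */\n"
    "    vView   = viewVec;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;   /* ortho pass-through */\n"
    "}\n";

static const char *phong_fragment_src =
    "varying vec3 vNormal;\n"
    "varying vec3 vView;\n"
    "void main() {\n"
    "    vec3 N = normalize(vNormal);\n"
    "    vec3 V = normalize(vView);\n"
    "    vec3 L = normalize(gl_LightSource[0].position.xyz);   /* directional light */\n"
    "    vec3 H = normalize(L + V);\n"
    "    float d = max(dot(N, L), 0.0);\n"
    "    float s = pow(max(dot(N, H), 0.0), gl_FrontMaterial.shininess);\n"
    "    gl_FragColor = gl_FrontMaterial.ambient\n"
    "                 + d * gl_FrontMaterial.diffuse\n"
    "                 + s * gl_FrontMaterial.specular;\n"
    "}\n";
```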

Ideally you should work to at least get your projection transformation done in hardware; then you’d have an implicit viewpoint at (0, 0, 0) in eye space for OpenGL to use, either in the fixed-function light model or in a shader.

OK, I solved that problem. I’ll spare you the details because they would totally turn your stomach, but suffice it to say that I was able to do it by tricking out the projection matrix.

For now I’ve rigged automatic geometry subdivision to compensate for the lack of GL_PHONG (vs GL_SMOOTH). Not a great solution, but certainly better than having to use pixel shaders!

On to my next question: perspective-correct texturing! I’ll start a new post with that one…