About lighting calculation and back-face culling

Hi, everyone. I'm implementing my own software renderer. Maybe I'm missing something, but I found a problem: OpenGL supports two-sided materials, so before calculating the lighting for each vertex, it has to decide whether the polygon is front-facing or back-facing, and only then can it choose the right material.

I have read some books that say lighting is calculated in world coordinates and back-face culling is done in viewing coordinates. Well, in OpenGL, I think these two steps could be moved into the same stage, after the modelview transform. That would solve my problem, because the facing could be determined and the polygon culled before lighting. But determining a polygon's facing requires more computation in viewing coordinates, because the sight vector is not constant for every vertex.

Then I read the OpenGL specification. It says that facing determination is done in window coordinates (after the perspective and viewport transforms?), so the sight vector is constant for every vertex, namely (0, 0, -1), and computing the z component is enough to determine the facing. But here is another problem: lighting is done in world/viewing coordinates, so how does it know the polygon's facing when facing determination happens in a later stage?

Sorry for my bad English; I hope it's clear enough. Thanks!

Lighting is (typically) based on vertex or fragment normals. With shader programs you can do lighting any way you want.

Back-face culling is based on polygon winding order: the vertices are given in a specific order (v0, v1, v2). Depending on the order in which the vertices appear in screen space (clockwise or counter-clockwise), the visible side of the polygon can be determined. So back-face culling only needs vertex positions; it has nothing to do with normals.

Edit: See OpenGL 4.1 Core Profile Specification 3.6.1 on page 168.

Thanks for the reply, but sorry, I haven't made my question clear enough.

My question is: if lighting is done in viewing coordinates and back-face culling (which determines the polygon's facing) is done in screen coordinates, how can the lighting stage work at all? It doesn't yet know the polygon's facing, so it doesn't know which material (front or back) to use.

Or is lighting not done in viewing coordinates in OpenGL? If so, how can I apply the perspective transformation to the normal vectors?

Sorry, I asked a silly question. I found the solution in the OpenGL 1.1 spec (I'm implementing a fixed-function pipeline): if two-sided lighting is enabled, OpenGL computes two colors per vertex, one with the front material and one with the back material (using the negated normal), and the correct one is selected later, when the facing is determined during back-face culling.