Phong shading

Alright, so I want to add Phong shading to a GL-rendered scene or object. I suppose I'm just verifying my idea here. I should be able to do this quite simply by wrapping certain GL functions (glVertex*, glBegin, glEnd, glLight*, glNormal*, glColor*), and based on these calls apply the shading model at many points on the polygon, using GL to draw points of the desired intensity. Clearly this will be slow and an approximation (as the points aren't mapped to pixels), but I should be able to get a good approximation this way, no?

What I am planning on doing in the end is implementing most of the graphics pipeline in software (at least for Phong-shaded polygons), which, again, is going to be slow, but that's okay. I'm just trying to learn about Phong shading as well as what kind of calculations are involved in the graphics pipeline. In this case, I would clip, rasterize, and shade the polygons myself, then pass points (in normalized device coordinates or window coordinates) to GL for rendering.
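To make the idea concrete, here's a minimal sketch in C of evaluating the Phong model at one surface point and drawing it as a GL point; the vec3 helpers and the material coefficients are made up for illustration, only the GL calls are the real API:

#include <GL/gl.h>
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 normalize(vec3 a) {
    float len = sqrtf(dot(a, a));
    vec3 r = { a.x/len, a.y/len, a.z/len };
    return r;
}

/* Phong intensity at point p with interpolated unit normal n, for a
   white light at light_pos and the eye at eye_pos; ka/kd/ks/shine
   are placeholder material values. */
static float phong(vec3 p, vec3 n, vec3 light_pos, vec3 eye_pos)
{
    const float ka = 0.1f, kd = 0.7f, ks = 0.4f, shine = 32.0f;
    vec3 l = normalize(sub(light_pos, p));   /* point -> light */
    vec3 v = normalize(sub(eye_pos, p));     /* point -> eye   */
    float ndotl = dot(n, l);
    float diff = ndotl > 0.0f ? ndotl : 0.0f;
    /* reflection of l about n: r = 2(n.l)n - l (unit if n, l are unit) */
    vec3 r = { 2*ndotl*n.x - l.x, 2*ndotl*n.y - l.y, 2*ndotl*n.z - l.z };
    float rdotv = dot(r, v);
    float spec = (diff > 0.0f && rdotv > 0.0f) ? powf(rdotv, shine) : 0.0f;
    float i = ka + kd*diff + ks*spec;
    return i > 1.0f ? 1.0f : i;
}

/* draw one shaded point; batching many points inside a single
   glBegin(GL_POINTS)/glEnd pair would be faster */
static void shaded_point(vec3 p, vec3 n, vec3 light_pos, vec3 eye_pos)
{
    float i = phong(p, normalize(n), light_pos, eye_pos);
    glBegin(GL_POINTS);
    glColor3f(i, i, i);
    glVertex3f(p.x, p.y, p.z);
    glEnd();
}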

Does this sound feasible? What are people's ideas, suggestions, etc.?

Yes, it will work: you are drawing directly to the framebuffer with GL primitives, so it will surely work.

I would do it this way: rasterize the Phong-shaded triangle into sysram, then blit it to the screen with the glDrawPixels function.

This way you can do any kind of optimization you need, since you're writing to sysram.
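A minimal sketch of what I mean, assuming a fixed 640x480 window; the buffer layout and helper names are just an example:

#include <GL/gl.h>
#include <stdlib.h>

#define WIDTH  640
#define HEIGHT 480

static GLubyte *pixels;   /* WIDTH*HEIGHT RGB triplets in sysram */

void init_buffer(void)
{
    pixels = calloc(WIDTH * HEIGHT * 3, sizeof(GLubyte));
}

void put_pixel(int x, int y, GLubyte r, GLubyte g, GLubyte b)
{
    GLubyte *p = pixels + 3 * (y * WIDTH + x);
    p[0] = r; p[1] = g; p[2] = b;
}

void blit_buffer(void)
{
    /* rows are tightly packed, so don't assume 4-byte row alignment */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    /* with identity modelview/projection, (-1,-1) is the window's
       lower-left corner */
    glRasterPos2i(-1, -1);
    glDrawPixels(WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}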

If you'd like to have texturing with Phong shading… well, that could be a problem. You'll probably have to duplicate data, I think.

Dolo//\ightY


No, texturing isn't something I have time to look into, at least not right now. I have a week to get this working.

So what you're saying I should do is allocate my own "framebuffer", then use glDrawPixels to copy it to the true framebuffer? Isn't glDrawPixels quite slow? Is it quicker than having the points I pass to GL go through the entire pipeline again (though no lighting calculations or polygon rasterizing would be needed)?

J

Yes, glDrawPixels is slow, but it'll be faster than issuing a separate point primitive for every pixel - if I understand you correctly, that's what you're suggesting.

Yes, that is what I'm suggesting. I will use glDrawPixels instead (at the cost of a big chunk of memory).

Right now I'm drowning in projection transformations and trying to figure out how to do the z-buffering. Any suggestions? I know that the z-buffer holds some kind of normalized value for the distance. However, since I'm not using a hardware z-buffer, I can clear it to whatever I want, so I don't have to use zero to represent infinity. It doesn't seem like a good idea to store the actual z-coordinate in the z-buffer. I suppose this all ties into what the perspective projection matrices in the back of the Red Book do for you. I can't quite figure out the perspective projection, but the orthographic one gives you a z-coordinate between -1 and 1.
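As far as I can tell, the perspective matrix in the appendix (the one glFrustum builds, with n/f the near/far distances and l/r/b/t the frustum sides) is:

2n/(r-l)   0          (r+l)/(r-l)    0
0          2n/(t-b)   (t+b)/(t-b)    0
0          0          -(f+n)/(f-n)   -2fn/(f-n)
0          0          -1             0

After the perspective division by w = -z_eye, the depth comes out as

z_ndc = (f+n)/(f-n) + 2fn/((f-n)*z_eye)

which seems to map z_eye = -n to -1 and z_eye = -f to +1, nonlinearly. So I suppose I could store that [-1,1] value (or remap it to [0,1]) per pixel, and clear my z-buffer to the far value of +1?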

Another thing I'm confused about is whether I can just throw out/ignore the w-coordinate and use 3x3 matrices instead of 4x4 if I assume the w value is 1.0 for all points. Is that a safe thing to do? It seems it would be, if it's the same for all points, but I may be missing something.

If anyone could shed some insight, I'd be greatly appreciative.

J

What I know is that a 4x4 matrix can hold any kind of transformation: scaling, shearing, rotation, translation.

Moreover, it is the most efficient way to do an important thing: coordinate frame transformation.

You can think of a matrix as the current state of a rigid body:

- you rotate
- you translate
- you rotate again

…and you get a logical way to handle apparently complex transformations.
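A quick sketch of what I mean, using column-major 4x4 matrices like OpenGL does; the helper names are made up:

#include <math.h>
#include <string.h>

typedef struct { float m[16]; } mat4;   /* column-major, like OpenGL */

static mat4 mat4_identity(void)
{
    mat4 r;
    memset(r.m, 0, sizeof r.m);
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}

static mat4 mat4_mul(mat4 a, mat4 b)   /* r = a * b */
{
    mat4 r;
    for (int c = 0; c < 4; c++)
        for (int rw = 0; rw < 4; rw++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a.m[k*4 + rw] * b.m[c*4 + k];
            r.m[c*4 + rw] = s;
        }
    return r;
}

static mat4 mat4_rotation_z(float radians)
{
    mat4 r = mat4_identity();
    r.m[0] =  cosf(radians); r.m[4] = -sinf(radians);
    r.m[1] =  sinf(radians); r.m[5] =  cosf(radians);
    return r;
}

static mat4 mat4_translation(float x, float y, float z)
{
    mat4 r = mat4_identity();
    r.m[12] = x; r.m[13] = y; r.m[14] = z;
    return r;
}

/* rotate, then translate, then rotate again: post-multiplying like
   glRotatef/glTranslatef do, so each step happens in the body's
   current local frame; one matrix holds the whole state */
mat4 rigid_body_state(void)
{
    mat4 m = mat4_identity();
    m = mat4_mul(m, mat4_rotation_z(0.5f));
    m = mat4_mul(m, mat4_translation(1.0f, 0.0f, 0.0f));
    m = mat4_mul(m, mat4_rotation_z(-0.25f));
    return m;
}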

Again, the w coordinate is needed when you want a general solution for orthogonal and perspective projections.

Yes, you could throw out the w coordinate, but you don't have to worry about it: OpenGL handles it as needed! …Well, it depends on the implementation, but what I'm sure about is that when you tell the FPU to multiply or divide a number by the unit, far fewer cycles are involved, so you can think of it as a low-level optimization.
At least, it worked this way with my older P133.

Also, if you used 3x3 matrices you'd need a separate three-component vector which holds the translation part… this will lead to complications in the code, probably more interfaces, and if a mathematician should see that… only God knows!

Dolo//\ightY

Originally posted by dmy:

Yes, you could throw out the w coordinate, but you don't have to worry about it: OpenGL handles it as needed! …Well, it depends on the implementation, but what I'm sure about is that when you tell the FPU to multiply or divide a number by the unit, far fewer cycles are involved, so you can think of it as a low-level optimization.
At least, it worked this way with my older P133.

What is the "unit" you speak of? And, regardless of whether OpenGL does as it needs, I'm trying to figure this out for myself (doing all transformations, projections, etc.), so I'm trying to figure out how best to simplify things.

One thing I did figure out is that the matrices in the back of the Red Book not only project the points, but set up the w coordinate for the perspective division as well. So after dividing by w, the resulting points are in normalized device coordinates.

J

By "unit" I mean the floating-point value 1.0.

Yes, that's it: the GL_PROJECTION matrix does exactly this.

If you don't need a general solution, go for the 3x3 rotoscaling matrix and a translation vector; it will work fine, but I think it will also complicate things.
I would use the same approach OpenGL uses for modelview transformations (you'll have to write the code to do them), and maybe avoid the projection matrix and do the projection from world to screen in the simplest way:

screen.x=world.x/world.z
screen.y=world.y/world.z
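A sketch of that, assuming the camera sits at the coordinate origin looking down +z (so z is positive in front of it); the WIDTH/HEIGHT constants are hypothetical window dimensions:

#define WIDTH  640
#define HEIGHT 480

typedef struct { float x, y, z; } vec3;

/* camera space -> window (pixel) coordinates, focal length 1 assumed */
void project(vec3 world, float *win_x, float *win_y)
{
    float sx = world.x / world.z;   /* the perspective divide above */
    float sy = world.y / world.z;

    /* then scale/bias from [-1,1] screen space to the viewport */
    *win_x = (sx * 0.5f + 0.5f) * WIDTH;
    *win_y = (sy * 0.5f + 0.5f) * HEIGHT;
}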

Dolo//\ightY


Why would you divide by world.z? If anything, I believe you'd divide by (world.z/near), where near is the distance to the near clipping plane. If you do world.x/world.z and world.y/world.z, objects further away with the same x,y coordinates would be closer together, as they should be; however, they will be closer to the left as well, no? I suppose dividing by (world.z/near) would do the same. Hrm…

It all depends on where your coordinate frame origin is: I supposed it to be located at the viewport center, and if you use gluPerspective to do the rest of the work, you will probably put it there.

I'm talking about world-to-screen coordinates, not world-to-device coordinates…

Once you have screen space, you scale/bias x and y to map to your viewport.

Dolo//\ightY


Well… no. ;) The near clipping plane represents the focal length of the camera (i.e., the distance from the image plane to the focal point). If you think about the so-called "classic" 3x4 projection matrix:

1 0 0 0
0 1 0 0
0 0 f 0

then clearly v' = [X, Y, fZ], and thus x = X/(fZ) and y = Y/(fZ),

and since X/(fZ) != X/(Z/f)… ;>
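Spelling out the homogeneous step (the third component is the w you divide by):

[1 0 0 0]   [X]   [ X]
[0 1 0 0] x [Y] = [ Y]
[0 0 f 0]   [Z]   [fZ]
            [1]

so the image point is x = X/(fZ), y = Y/(fZ).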

Excuse me, those were not "world" coordinates: they were "eye" coordinates.

Dolo//\ightY