Phong shading

Is it possible today to do Phong shading (Blinn or Phong lighting model) without big fuss and without massive use of texturing? I've not followed the latest hardware & driver trends…

-Lev

current trends:
programmable shaders

=>

you have to code it yourself

now…

today's hardware:
geforce2 (mx etc.): simple diffuse Phong shading with normalized parameters (one vector normalized with a cubemap, one with in-combiner normalization), or an unnormalized approximate Phong (is Phong the one with the half_angle? if so, this one)… both in 1 pass

geforce3: a more precise way, either with or without normalization, but not many lights per pass possible because of the lack of 'interpolators'

radeon8500: per-pixel specular possible for the first time (with a per-pixel exponent), very precise values because of the lookup functions (calculated in float precision) and higher-precision combiners overall…
4 diffuse lights in one pass…

The key to this is the ability to perform vector dot product operations per pixel. There are various extensions for this DOT3 shading. The underlying idea is you perform dot products between fragment RGB color triplets as if they were signed vector components. When you can do this you are able to send in light vectors, surface normals and view vectors as various color components. Texturing the surface normal allows you to bump map a surface but that is pretty complex because you must transform the vectors to tangent space. Fortunately simple Phong shading doesn’t require this.
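
As a rough illustration of that idea (the helper names here are hypothetical, not from any extension spec): a unit vector is packed into an unsigned RGB triplet with the usual scale and bias, and a DOT3-style operation undoes the bias before taking the dot product.

#include <cstdio>

// Pack a signed vector component [-1,1] into an unsigned color [0,1], and back.
static float toColor(float v)  { return v * 0.5f + 0.5f; }
static float toVector(float c) { return c * 2.0f - 1.0f; }

// Emulate a DOT3 stage: treat two RGB triplets as signed vectors and return
// their dot product (e.g. N . L for the diffuse term).
static float dot3(const float a[3], const float b[3]) {
    float d = 0.0f;
    for (int i = 0; i < 3; ++i)
        d += toVector(a[i]) * toVector(b[i]);
    return d;
}

int main() {
    // the "straight up" normal (0,0,1) stored as the light-blue color (.5,.5,1)
    float normal[3] = { toColor(0.0f), toColor(0.0f), toColor(1.0f) };
    float light[3]  = { toColor(0.0f), toColor(0.6f), toColor(0.8f) };
    std::printf("N.L = %f\n", dot3(normal, light));   // prints ~0.8
    return 0;
}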

This is all fairly tricky to do correctly; it's much more complex to implement than typical OpenGL lighting, so the easiest way is to take an existing example and modify it. Once you understand the concepts I outlined above, it becomes quite simple to follow what the example code is doing.

hmm, dorbie, you confused me a bit

To do phong shading we need:

-per-pixel normals. Since we only have per-vertex normals supplied, I guess one has to interpolate those. How can this be done?

-evaluate the lighting equation on a per-pixel basis. The ambient term should pose no problems, but as you mentioned the diffuse term involves a dot product between the light vector and the normal vector. The specular term is even more complicated (exponentiation) (esp. Phong's, but I could live with Blinn's). Now how is this step done? How do we feed the card the input values and get the right output value?

-attenuation. Most examples I've looked at use a texture for attenuation, but this makes it impossible to change the attenuation on the fly without preloading all possible attenuation types, and the precision is not very good (very high-res attenuation textures are needed for true per-pixel lighting). How about this? Any new techniques in this field?

Thanks in advance,
-Lev

That’s unfortunate, my intent was to do the opposite of confuse you.

A color representing a vector, when interpolated across a surface supplies a normal at each pixel.

The other approach is to treat everything in tangent space and transform the other vectors through the 'coordinate frame' at each vertex, in which case all normals point straight up, i.e. the normal is always (0, 0, 1) (in RGB terms (.5, .5, 1)). The advantage here is that you can then supply a texture for the color of the normal, and you have a bump map without having to transform every texture fragment. If you're not bump mapping then at the very least you get to use a constant color as the normal vector.
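
A small sketch of that per-vertex transform (the names are illustrative only, not from the examples linked below): moving a light or view vector into the tangent-space frame is just three dot products against the tangent, bitangent and normal.

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Rotate an object-space vector into the tangent-space frame of a vertex.
// In that frame the surface normal is always (0,0,1), i.e. the color (.5,.5,1).
static Vec3 toTangentSpace(const Vec3& v, const Vec3& tangent,
                           const Vec3& bitangent, const Vec3& normal) {
    return { dot(v, tangent), dot(v, bitangent), dot(v, normal) };
}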

Texture can be used for attenuation, or you could use another independent color calculation; you can appreciate that the range attenuation for local lights is not as difficult as the rest, and it is not required at all for distant lights.

Raising the specular dot product to the power of shininess is more complex and less portable. It can be done, for example, with dependent texture reads. I was just trying to give you a few pointers on the most important part: using dot product fragment calculations. For the gory details there are examples out there.

This is not something that can be effectively covered in a quick opengl.org post. Ultimately it'll take some study, and there are several approaches of varying quality and portability. None of them is simple to implement and there is no truly standard way; different applications craft their lighting equations from the available OpenGL 'box of tricks' and often take a few shortcuts, skip a few details, or selectively enable certain features depending on available hardware & extensions:

Look at these URLs, if this doesn’t clear up any confusion I don’t know what will:
http://www.ati.com/na/pages/resource_cen…SimpleDOT3.html
http://www.ati.com/na/pages/resource_cen…ne3SpecMap.html

P.S.

These are particularly effective examples IMHO because they illustrate the color components which are in fact the signed vector representations prior to the scale & bias (0 -> 1) to (-1 -> +1) transformation.

The light blue color is the (0,0,1) vector stored as a (.5,.5,1) RGB triplet I mentioned earlier.


okay… let's say we do it in tangent space without bump mapping.

the ambient is constant
the diffuse is simply clamp(normalize(light).z, [0,1])

to get the normalized light you should use a normalization cubemap for now. possibly future hardware will support directly normalized interpolators, we'll see (we get higher precision per pixel currently, the next thing will be 64bit colors and then floating point, yeah! and then the texture lookup will be too imprecise)
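
As a plain sanity check of that diffuse term (assuming the light vector is already in tangent space and normalized by the cubemap):

#include <algorithm>

// Diffuse term in tangent space: N is (0,0,1), so N.L collapses to L.z,
// clamped to [0,1] just like the combiners clamp it.
static float tangentSpaceDiffuse(float light_z) {
    return std::max(0.0f, std::min(1.0f, light_z));
}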

okay, so far:

1 cubemap for normalize(point_to_light);
no dp3 yet

specular:

we need point_to_eye normalized, and then we reflect it (forget the half-angle)

reflecting a vector works like this:

r = v - 2*(v.n)*n

now our n is simply (0,0,1), so this gets much simpler:

r = v - 2*(v.z)*(0,0,1)

or

r = v;
r.z = v.z - 2*v.z = -v.z

means we need point_to_eye reflected.
idea:

transform point_to_eye into tangent space, then negate z and normalize.

this is a second cubemap lookup, fed with the reflected point_to_eye.
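
A quick sketch of that shortcut in plain vector math: with n = (0,0,1) the general reflection formula collapses to flipping the z component.

struct Vec3 { float x, y, z; };

// general reflection about a unit normal: r = v - 2*(v.n)*n
static Vec3 reflect(const Vec3& v, const Vec3& n) {
    float vdotn = v.x * n.x + v.y * n.y + v.z * n.z;
    return { v.x - 2.0f * vdotn * n.x,
             v.y - 2.0f * vdotn * n.y,
             v.z - 2.0f * vdotn * n.z };
}

// tangent-space special case: n = (0,0,1), so only z flips sign
static Vec3 reflectTangentSpace(const Vec3& v) {
    return { v.x, v.y, -v.z };
}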

now you simply dp3 this with the point_to_light you already have and raise the result to the specular power…

on an ati you can do the powering per pixel with different exponents, because after those calculations you can simply set up texture coords like this: r = (point_to_light dot refl_point_to_eye), s = exponent, and have a 2d texture which then returns the value you want.
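
For illustration, such a lookup texture could be filled on the CPU roughly like this (the size, exponent range and axis assignment are my assumptions, not taken from the ATI examples):

#include <cmath>
#include <vector>

// Build a 2D lookup table: one axis is the clamped dot product (L . R),
// the other is the specular exponent; a dependent read then returns r^exponent.
std::vector<unsigned char> buildSpecularLookup(int size = 256,
                                               float maxExponent = 128.0f) {
    std::vector<unsigned char> tex(size * size);
    for (int t = 0; t < size; ++t) {
        float exponent = 1.0f + maxExponent * t / float(size - 1);
        for (int s = 0; s < size; ++s) {
            float r = s / float(size - 1);        // clamped dot product in [0,1]
            float v = std::pow(r, exponent);      // computed in float precision
            tex[t * size + s] = (unsigned char)(v * 255.0f + 0.5f);
        }
    }
    return tex;
}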

costs: 3 textures. 3 left on the radeon, one left on the gf3.

now distance attenuation…

as you possibly know, distance attenuation depends on point_to_light, too… divide by the radius of the light and you get d = point_to_light / radius; the length of this vector d is 0 at the light and 1 at the radius.

we can get its squared length very simply with a dp3, and then we can choose what we want to do with it.

simple att:
1-d^2

nicer (smoother) att, near to exp(-d^2):
(1-d^2)^2

or a texture-lookup on the ati.
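
The two attenuation curves written out as a sketch (d2 is the squared, radius-relative distance that a single dp3 gives you):

#include <algorithm>

// d2 = dp3(point_to_light / radius, point_to_light / radius):
// 0 at the light, 1 at the radius.
static float attenuationSimple(float d2) {        // 1 - d^2
    return std::max(0.0f, 1.0f - d2);
}

static float attenuationSmooth(float d2) {        // (1 - d^2)^2, close to exp(-d^2)
    float a = std::max(0.0f, 1.0f - d2);
    return a * a;
}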

result:

4 textures used: 2 normalization cubemaps, 1 specular lookup, 1 pass-through of the texcoords

or 5 textures, if we have an attenuation lookup, too…

the ati radeon 8500 provides 6 textures, so one is left for the colormap. if you don't do a lookup for the attenuation but use one of the directly implementable functions, you have 2 left, meaning one colormap (with gloss in alpha, for example) and one bumpmap (with transparency in alpha, for example)

on the gf3 you can't do the per-pixel powering, so you don't need the lookup; you simply have a predefined power, for example ^32 or something, which you create by

b = b*b
b = b*b
b = b*b
b = b*b
b = b*b

in the register combiners (five successive squarings give b^32)…
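
In scalar terms those combiner stages amount to repeated squaring, roughly this:

// five successive squarings raise b to the 32nd power: b^2, b^4, b^8, b^16, b^32
static float pow32(float b) {
    for (int i = 0; i < 5; ++i)
        b *= b;
    return b;
}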

phongshading on a gf3:

tex0: [colormap, transparency] texture_2d
tex1: [point_to_light_normalized, 0] cube
tex2: [refl_p_to_eye_normalized, 0] cube
tex3: [pass_through_p_to_light, 0] pass_through

and with these you solve the equation in the combiners

phongshading on a radeon8500:

tex0: [colormap, transparency] texture_2d
tex1: [point_to_light_normalized, 0] cube
tex2: [refl_p_to_eye_normalized, 0] cube
tex3: [pass_through_p_to_light, 0] pass_through
tex4: [lookup_r^s, 0] dependent texture read (second lookup)
tex5: [lookup_dst_att, 0] dependent texture read (second lookup)

that's the phong you can get today…
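
Putting those inputs together, the per-fragment equation being solved looks roughly like this software reference sketch (my own naming, not the actual combiner setup; NdotL is just L.z in tangent space, RdotL is the dp3 of tex1 and tex2):

#include <algorithm>
#include <cmath>

// One color channel of the lighting equation, per fragment. base/gloss come
// from the colormap, att from the distance term, exponent from the lookup or
// from repeated squaring.
static float phongFragment(float base, float gloss, float NdotL, float RdotL,
                           float att, float ambient, float exponent) {
    float diffuse  = std::max(0.0f, NdotL);
    float specular = gloss * std::pow(std::max(0.0f, RdotL), exponent);
    return ambient * base + att * (diffuse * base + specular);
}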

hm, tangent-space lighting isn't very attractive to me because I'd like to use Phong shading with or without textures (also without having texture coordinates), and without texture coordinates I'll have difficulties getting the tangent and the binormal, if I recall correctly.

I have a short question:

So one gets per-pixel normals by assigning them to colors and then using the interpolated value with a cubemap to get the normalized vector. Correct?

How do I get per-pixel light vector?

Cheers,
-Lev

P.S. What do you think about NVIDIA's effort to jump to OpenGL 1.4? I think it would be sad to have 1.4 without vertex programmability that works with both NVIDIA and ATI cards.

Hi.

Lev, to do Phong shading, you must interpolate several per-vertex values across the polygon.

imagine one edge of a polygon, with normal N1 and light vector L1 at one vertex and N2 and L2 at the other.

now you must set it up to interpolate the light vector from L1 to L2 and the normal from N1 to N2, normalize each of them for every pixel (by cubemaps), and use them to calculate the light…
this is the way phong shading works; there is no normal map involved.

the demos with normal maps work by only interpolating L1-L2, and taking the normal from the texturemap instead.
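
A software sketch of that interpolation for one span of pixels (illustrative only; the per-pixel renormalization is what the cubemap does in hardware):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Phong-shade one edge: interpolate N1->N2 and L1->L2, renormalize every pixel,
// then evaluate the diffuse term (pixels must be >= 2).
void shadeEdge(Vec3 N1, Vec3 N2, Vec3 L1, Vec3 L2, int pixels) {
    for (int i = 0; i < pixels; ++i) {
        float t = i / float(pixels - 1);
        Vec3 N = normalize(lerp(N1, N2, t));
        Vec3 L = normalize(lerp(L1, L2, t));
        std::printf("pixel %d: diffuse = %f\n", i, std::fmax(0.0f, dot(N, L)));
    }
}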

Now, I would think this would be doable by using the secondary color, but that would limit you to doing only diffuse or specular on a per-pixel basis…

unless you can tell the card to interpolate values other than the two colors?

Or maybe the pixel shaders / texture shaders will allow you to do such a thing?

Jonas

you can use other values to interpolate:
TEXTURE COORDINATES

and that's how cubemaps work for normalizing… you put in the unnormalized vectors as texture coordinates for the cubemap. in a cubemap only the direction matters, and for the specific direction of the texcoords a compressed, normalized version of this vector is stored in the rgb colors.

that's how we can correctly interpolate per-pixel normalized vectors.
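
For concreteness, one face of such a normalization cubemap could be filled on the CPU roughly like this (only the +Z face, ignoring the exact GL face-orientation conventions):

#include <cmath>
#include <vector>

// Each texel's direction is normalized and packed into RGB with the usual
// scale & bias (v * 0.5 + 0.5), so a lookup returns the normalized vector.
std::vector<unsigned char> buildPosZFace(int size = 64) {
    std::vector<unsigned char> face(size * size * 3);
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            float vx = 2.0f * (x + 0.5f) / size - 1.0f;
            float vy = 2.0f * (y + 0.5f) / size - 1.0f;
            float vz = 1.0f;
            float len = std::sqrt(vx * vx + vy * vy + vz * vz);
            unsigned char* p = &face[(y * size + x) * 3];
            p[0] = (unsigned char)((vx / len * 0.5f + 0.5f) * 255.0f);
            p[1] = (unsigned char)((vy / len * 0.5f + 0.5f) * 255.0f);
            p[2] = (unsigned char)((vz / len * 0.5f + 0.5f) * 255.0f);
        }
    }
    return face;
}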

don't use primary and secondary color, because if you first normalize and then interpolate, the interpolated vectors point in a different direction => the lighting suffers (unless you have reaaaallly small triangles)

if you don't use textures, you can light in object space and so you don't need tangent space… it's only for bump mapping. if you use tangent space without bump mapping you get some simplifications, because the normal is always z=1 and that way you can save some dot products… that was the idea… but if you don't use textures at all you don't need tangent space, and therefore you can do it in object space as well…

The only problem with using a cube map normalizer is that it's only 24bit (8 bits per channel, or per axis), so there's some loss there (which is noticeable when doing specular lighting).

don't use primary and secondary color, because if you first normalize and then interpolate, the interpolated vectors point in a different direction => the lighting suffers (unless you have reaaaallly small triangles)

It looks 100% correct if your surface is flat. And it doesn’t look particularly bad if your surface is not flat. It depends on whether or not you have the texture unit to spare, and the fillrate that the texture access will take.

The only problem with using a cube map normalizer is that it's only 24bit (8 bits per channel, or per axis), so there's some loss there (which is noticeable when doing specular lighting).

Well, that’s the best you could hope for with any nVidia card. They don’t have more than 8 bits of precision in the register combiners (according to the RC spec) anyway. ATI’s fragment shaders have greater precision, as they have to be able to work with texture coordinates as well as colors.

Originally posted by Korval:
Well, that’s the best you could hope for with any nVidia card. They don’t have more than 8 bits of precision in the register combiners (according to the RC spec) anyway. ATI’s fragment shaders have greater precision, as they have to be able to work with texture coordinates as well as colors.

Yes, you can use the texture coords, but then the model would need fairly high tessellation in order to get correct results.

Keep in mind that texture shaders work at full (32-bit) floating-point precision. 16-bit HILO textures can get you pretty good precision.

Also, register combiners have 9 bits of precision, not 8, because their range is [-1,1].

  • Matt