PDA

View Full Version : new shading mode



MrMADdood
08-15-2000, 01:20 AM
well... Gouraud is great and all... but I think current processors are ready for something new...

Gouraud is a linear interpolation, so artifacts show up in several zones... but what if we used a non-linear interpolation? we just need a few more normals... and... um... help here :)

john
08-15-2000, 09:21 PM
you could, or you could just bump up the resolution of your mesh. Think about it. If you're not happy with the error from approximating the surface function with vertices spaced 1 unit apart, approximate it with vertices spaced 0.25 units apart. The error is less, and all without making a new shader model.
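
To put a number on it: for a twice-differentiable intensity curve I(x) sampled every h units, piecewise-linear interpolation is off by at most (h^2 / 8) * max|I''(x)|, so going from 1 unit down to 0.25 units cuts the worst-case error by a factor of 16.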

cheers
John

MrMADdood
08-15-2000, 10:26 PM
well... that is exactly what we want to avoid
as far as i know, the light fading model uses a 1/r^2 curve, which isn't THAT complex... if we approximate it, we can use just 1 quad for that big floor scene, and use the extra bandwidth somewhere else
plus, Gouraud is a linear interpolation, so it uses 2 intensities in each calculation. if we use 3 intensities we get a quadratic interpolation, etc.
i tried to figure it out myself, but got caught in some redundancy... the fragment's brightest point may be needed... dunno
also, there might be some precision issues with really huge fragments.
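
something like this is what i'm picturing, btw (just a sketch with made-up numbers; the real thing would run per scanline in the rasterizer):

    #include <stdio.h>

    /* plain Gouraud: linear interpolation between two endpoint
     * intensities along an edge/scanline */
    float lerp_intensity(float i0, float i1, float t)
    {
        return i0 + t * (i1 - i0);
    }

    /* the "3 intensities" idea: a quadratic (Lagrange) curve through
     * the two endpoints plus one extra sample at the midpoint, so a
     * light sitting over the middle of a big polygon can bow the
     * curve upward */
    float quad_intensity(float i0, float imid, float i1, float t)
    {
        return i0   * 2.0f * (t - 0.5f) * (t - 1.0f)
             - imid * 4.0f *  t         * (t - 1.0f)
             + i1   * 2.0f *  t         * (t - 0.5f);
    }

    int main(void)
    {
        /* equal endpoints, brighter midpoint: linear stays flat,
         * the quadratic picks up the bump in the middle */
        float i0 = 0.3f, imid = 0.9f, i1 = 0.3f;
        float t;
        for (t = 0.0f; t <= 1.001f; t += 0.25f)
            printf("t=%.2f linear=%.3f quadratic=%.3f\n",
                   t, lerp_intensity(i0, i1, t),
                   quad_intensity(i0, imid, i1, t));
        return 0;
    }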

john
08-15-2000, 11:14 PM
Yer... it's all a trade-off. Do you want to spend money on silicon for the more complex shader model, or money on improving the bandwidth between the graphics card and processor memory? For my money, I'd vote for the bandwidth: everything gets to use it, regardless of whether it uses the funky new shade model or not.

Bob
08-16-2000, 07:13 AM
Approximate the floor with one huge quad, huh?

Well, I can't see how this would work, even with the kind of interpolation you suggested. Say you've got a room, and a light in the middle of the floor (well, a bit above the floor, that is :P ). If it's placed in the middle, then the intensity in each corner will be the same, and the floor will be flat-shaded if you use Gouraud. But isn't this going to happen even with some kind of quadratic interpolation? Doesn't it have to know that the intensity in the middle is stronger (which is easily solved by boosting the resolution of the floor to several quads, so you get more points where you can calculate the intensity)?
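
To put some rough numbers on it (all invented for illustration): with 1/r^2 attenuation and a light 1 unit above the centre of a 10x10 floor quad, the four corners all see the same tiny intensity, so Gouraud shades the whole quad flat and the bright spot under the light is lost. Note the edge midpoints are barely brighter than the corners, so midpoint samples along the edges alone wouldn't recover the peak either:

    #include <stdio.h>

    /* 1/r^2 falloff from a light at (5, 1, 5), i.e. 1 unit above the
     * centre of a 10x10 floor quad in the y=0 plane */
    static float attenuation(float x, float z)
    {
        float dx = x - 5.0f, dy = 1.0f, dz = z - 5.0f;
        return 1.0f / (dx * dx + dy * dy + dz * dz);
    }

    int main(void)
    {
        printf("corner        : %f\n", attenuation(0.0f, 0.0f)); /* ~0.02 */
        printf("edge midpoint : %f\n", attenuation(5.0f, 0.0f)); /* ~0.04 */
        printf("centre        : %f\n", attenuation(5.0f, 5.0f)); /* 1.0   */
        return 0;
    }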

MrMADdood
08-16-2000, 07:39 PM
Quadratic interpolations need 3 values instead of two. that means you would probably have 1 extra intensity in the middle between each pair of vertices. That should pick up the light in the middle and generate a nice light curve while keeping the even light at the extremes.

[This message has been edited by MrMADdood (edited 08-16-2000).]

Humus
08-17-2000, 08:00 AM
Hey guys!
What you all need is per-pixel lighting or lightmapping. However, in vertex shading mode I think that we should replace Gouraud shading with something similar which is perspective correct (I think that's called Phong shading ...).
I surely agree with MrMADdood that we shouldn't need more geometry than necessary to get it realistic; a cube should be 12 triangles, not tessellated into 1024 triangles just to avoid artifacts. I vote in favor of silicon over more bandwidth.

Bob
08-17-2000, 10:09 AM
Hey, I vote for more bandwidth, and if I want phong shading, I use WIN_phong_shading (http://oss.sgi.com/projects/ogl-sample/registry/WIN/phong_shading.txt) ... And if it's not available, I use the higher bandwidth to pass more faces :P
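
Using it is just another glShadeModel token once you've found the extension in the extension string; a minimal sketch (the GL_PHONG_WIN value is taken from the spec linked above - double-check it against your own headers):

    #include <string.h>
    #include <GL/gl.h>

    /* from the WIN_phong_shading spec; define it ourselves in case the
     * local headers predate the extension */
    #ifndef GL_PHONG_WIN
    #define GL_PHONG_WIN 0x80FD
    #endif

    /* fall back to plain Gouraud when the extension is missing */
    void choose_shade_model(void)
    {
        const char *ext = (const char *) glGetString(GL_EXTENSIONS);
        if (ext && strstr(ext, "GL_WIN_phong_shading"))
            glShadeModel(GL_PHONG_WIN);
        else
            glShadeModel(GL_SMOOTH);
    }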

Humus
08-17-2000, 02:44 PM
Well ... higher bandwidth is good too. But Gouraud sux, it'll always produce artifacts, they just get smaller with higher tessellation. It's OK to use a lot of triangles to make a sphere, but needing to use a lot of triangles to get a triangle really stings in my programmer heart ...

john
08-17-2000, 03:47 PM
why does it sting your programmer's heart? it is no different from approximating the integral of a function by Simpson's rule, for example, and THAT'S a computer/maths science thing.

Dividing something into smaller versions of the same thing to minimise the approximation error across the bounds of it is not a hack; it is done time and time again. Terrain height fields work on the same principle, for example.

cheers,
John

MrMADdood
08-17-2000, 09:06 PM
cool!!! i didn't know there was Phong shading available!! (but i guess it's slow as hell)

i gotta try it, anyway. (is it an extension? doesn't look like one)

about tessellation to improve detail... yes, that is the current method... but c'mon, we're trying to improve things here, right?

well... i'll dig in the archives :) to find fast Phong algorithms... if i find anything worthy, i'll post again

Bob
08-18-2000, 02:36 AM
Keep in mind that if you implement a "better" shading model because you want to reduce the resolution (OK OK, maybe not reduce, but prevent higher resolution), another problem might occur. The actual model itself might look better with this new shading model, but the silhouette might look bad. A rounded object (a sphere, for example) can look great compared to a Gouraud-shaded one, but the silhouette might still look like an icosahedron or some other Platonic solid. If you instead increase the resolution and use Gouraud, you can achieve the same effect, but without this nasty looks-like-a-Platonic-solid look.

Yes, we are trying to improve things here, but we better improve the things we REALLY need :P

And yeah, it's an extension...

[This message has been edited by Bob (edited 08-18-2000).]

Humus
08-18-2000, 05:54 AM
Well ... i know what you all mean and you're not "wrong" in any way ... but you're missing the point. I'm not saying that we should approximate spheres with fewer triangles; what I'm trying to explain is that we shouldn't need to use complex geometry for objects that don't have complex geometry just to get appropriate shading.
An example:
Say I want to draw a simple room. That's 6 quads or 12 triangles. The geometry isn't more complex than that, and thus I shouldn't need to use more geometry than that. But with Gouraud it'll look bad as I move around in it. Splitting every wall into 32 triangles might solve the problem, BUT it also makes things slower, especially on cards with no T&L. With Phong shading this problem won't occur; it's not more complex than texturing and could come with no performance penalty on hardware supporting it. It doesn't need higher memory bandwidth either.

Q: Why do almost all games use lightmapping?
A: Because vertex shading sux.

I tried vertex shading previously but was never happy with it; now I use lightmapping and it works a lot better. But if the hardware took over the work of doing the lighting correctly, it would be better still.
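
For reference, the lightmapping setup itself is cheap: one multitextured pass with the base texture modulated by the lightmap. A rough sketch against ARB_multitexture (the texture objects are assumed created elsewhere, the tokens come from the extension header, and fetching the glActiveTextureARB entry point is elided):

    #include <GL/gl.h>
    #include <GL/glext.h>   /* GL_TEXTURE0_ARB etc. */

    /* base_tex and light_tex are texture object names filled in
     * elsewhere; unit 1 modulates (multiplies) the result of unit 0 */
    void bind_lightmapped_surface(GLuint base_tex, GLuint light_tex)
    {
        glActiveTextureARB(GL_TEXTURE0_ARB);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, base_tex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

        glActiveTextureARB(GL_TEXTURE1_ARB);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, light_tex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }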

john
08-20-2000, 04:09 AM
But your argument is almost self-defeating: "phong shading won't be much slower on hardware that supports it" isn't much of an argument against "high resolution meshes _aren't_ much slower on the T&L h/w that supports it". What you're advocating is more complicated shader models (more transistors, more silicon, yadda yadda) just so you don't have to divide your mesh and use silicon that _already exists_.

john
08-20-2000, 04:12 AM
but i should clarify: new shader models are certainly a proverbial Good Thing. Don't get me wrong on that. But, I argue, one shouldn't be adding new features just because people are too slack to cleverly use _existing_ features.

Humus
08-20-2000, 05:16 AM
You're still missing the point!
Ok, I can use existing features to get around the problem, BUT I can implement a new feature which SOLVES the problem, which is better. And this "more transistors, more silicon" etc stuff ... well, today we add one, we add two, we add three texturing units ... we add one, we add two, we add four pixel pipelines ... we add, add ...
A texturing unit is more complex than a phong shading unit!! Why can't we add a phong shading unit??
Phong shading is superior to Gouraud shading in all aspects; it does not need more memory bandwidth and, if smartly implemented, need not be slower!

>But, I argue, one shouldn't be adding new features just because people are to slack to cleverly use _existing_ features.

Isn't this holding back the developer's productivity? I have to do a lot more work to get the job done, and the GPU has to do more work too!

Gouraud shading was good back when we had software renderers, since it's faster for software to handle ... but today it's obsolete technology.

john
08-20-2000, 04:03 PM
I agree with you that Phong is better than Gouraud, but this is a debate and it _doesn't_ have a correct answer.

You have to appreciate that introducing more complicated hardware will trade features for "something else". What that something else is, is up to the hardware designers.

This is a crude argument, but think about the philosophy behind a lot of the RISC chips: they're designed to be lean and mean, and thus run at a faster clock rate. True, Intel and AMD have been able to crank up the clock rate of their complicated processors, but at the expense of adding a tonne of remarkably clever hardware tricks. Alpha, on the other hand, has long had (or, USED to have) a very lean chip that ran at blazing clock speeds. Which is better? A more complicated chip that runs at a slower clock speed, or a more complicated chip that exhausts much of its transistor budget trying to hide its complexity (and thus becomes more expensive to build), or a simpler chip that runs at blazing clock rates and is much cheaper, thus opening the possibility of having MANY simpler chips? It's all a trade-off...

The VAX architecture used to add features at the drop of a hat, including (if I recall correctly) a single instruction to calculate the CRC of a word. That follows the philosophy of adding more complicated hardware to solve problems that could be solved by a collection of simpler hardware. The VAX ended up with a tonne of instructions that were too specialised and were hardly used.

True, phong shading might not be considered "too specialised", but its expense MIGHT be better used elsewhere. Chips typically have transistor budgets, and the engineers have to work out how best to utilise them. Do you spend a considerable part of your transistor count on a new shade model in hardware, or do you use it to add more texturing units? For an application that wants a better shade model but also requires several texture layers, more texture units might be better. Sure, it'll need a higher resolution mesh than what you're suggesting, but on the other hand it might be able to draw the scene in only one pass. Which is the better trade? Higher resolution mesh, or multiple passes? And the budget isn't just limited to transistors: the cards also have to be sold. If you have a cheaper chip that doesn't have phong shading but has a kick-ass polygon count, then you might be able to have more than one chip on a card. Alternatively, you might be able to sell consumer level cards with more video memory, thus limiting the amount of texture thrashing in a game.

My argument is: sure, phong shading is great, and it's a fine argument to say we should all be striding boldly forward to new technological heights. But, I argue, needlessly adding features is NOT THE BEST APPROACH. The shade model CAN be improved by increasing the resolution of the mesh. Indeed, as Bob points out, this is better for the object's silhouette. If phong shading _isn't_ implemented in hardware, then this allows scope at a given price point to add other features like more graphics pipes and video memory. Which is better? Just because phong shading is a technologically superior shade model to Gouraud doesn't NECESSARILY mean it should be implemented, and just because it explicitly SOLVES the problem doesn't necessarily make it a better solution, either.

cheers,
John


[This message has been edited by john (edited 08-20-2000).]

MrMADdood
08-20-2000, 09:09 PM
after reading everything in john's (huge :) ) post, i noticed you are seriously considering FULL phong shading. That won't do, since full Phong is far too complicated.
I tried to figure out a way to approximate Phong, then i thought "what the hell am i doing", i just headed for www.google.com (http://www.google.com) and searched for Fast Phong Shading.
I highly recommend you read it before posting a reply. Phong can be substantially simplified without losing the looks.

check this siggraph paper: http://www.cs.unc.edu/~gb/siggraph86.html
fast Phong takes only twice as much CPU as Gouraud....
think about it...
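
the core trick there: approximate the shading value across a scanline with a quadratic in x and evaluate it by forward differences, so after per-scanline setup each pixel costs two adds. a sketch of just the inner loop (computing the coefficients a, b, c per scanline from the interpolated normals is the part the paper spends its math on):

    /* evaluate I(x) ~ a*x^2 + b*x + c across a scanline by forward
     * differencing: i starts at I(0), di at I(1)-I(0), and the second
     * difference d2 = 2a is constant, so the loop is two adds/pixel */
    void shade_span(unsigned char *dst, int width, float a, float b, float c)
    {
        float i  = c;
        float di = a + b;
        float d2 = 2.0f * a;
        int   x;

        for (x = 0; x < width; x++) {
            float v = i < 0.0f ? 0.0f : (i > 1.0f ? 1.0f : i);
            dst[x] = (unsigned char)(v * 255.0f);
            i  += di;
            di += d2;
        }
    }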

Humus
08-21-2000, 01:59 AM
John, you're right in many ways, and I know that using more hardware means higher price, lower yields and more heat. But, as said before, a phong shading unit doesn't need to be very complex. Going from 20 million transistors to 20.2 million transistors for phong shading isn't a high price for that kind of valuable feature.
And phong shading has a lot in common with dot3 bumpmapping (it's like bumpmapping without bumps ...) already implemented in GeForce/GTS/Radeon etc; you could probably use the same hardware for both with only minor changes.
After reading a little at the link provided by MrMADdood I'm quite confident that phong shading could be implemented in hardware at a very small cost.

Furthermore, using phong shading might in many cases reduce the need for more texture layers. With phong shading we can drop lightmapping, which saves fillrate, bus bandwidth and CPU time (in the case of dynamic lightmapping).

And, as said before, I don't argue for reducing the polygon count on shapes which are rounded, but I just don't feel good about drawing cubes with a lot of triangles since a cube is only 6 quads in the real world.

skippyj777
08-21-2000, 04:45 AM
I don't really feel good about having more than two triangles for a simple square either. Wouldn't having more triangles complicate and slow down collision detection?

Humus
08-21-2000, 02:06 PM
Well, that depends on how it's implemented. You can have one set of geometry for drawing and one set for collision detection ... but that'll of course take up more memory space.

MrMADdood
08-21-2000, 03:53 PM
humus is right, phong could be implemented under DirectX 8 using a custom shader (but then again, we don't care)

Under OpenGL, if the card supports dot3, phong should be almost the same thing.
but dot3 is awfully slow (looks great, tho)
and dot3 can be approximated too, if you consider the light source to be at an infinite distance.... anyway... i think a phong approximation is better.

foobar
08-21-2000, 06:18 PM
Since when does adding a feature to OpenGL mean we have to add more silicon to our graphics cards? Phong shading doesn't have to be supported in hardware: having it supported in OpenGL means that those who want to use it don't have to write their own proprietary routine using some irritating extension like NV_sub_transistor_combiners! They can just trust that if their graphics card supports it they will have an efficient implementation.

PS RISC evangelists should note that OpenGL is not a RISC machine! If you want your OpenGL accelerator to be a RISC machine then that is a very different issue.

PPS Cards with dot3 bump mapping are nearly there anyway. All we need is procedural bump maps (in the case of Phong shading the procedure would be interpolation of vertex normals). These would have many more benefits beyond Phong shading. Anyway, the point is that the hardware is not as far from supporting Phong shading as OpenGL 1.3 is.

john
08-21-2000, 08:51 PM
OpenGL is not a machine, let alone a RISC machine. It was an analogy; nothing more.
Oh, and another thing: OpenGL is an abstraction of the *graphics hardware*.

[This message has been edited by john (edited 08-21-2000).]

Bob
08-22-2000, 12:01 AM
MrMADdood: >>Under OpenGL, if the card supports dot3, phong should be almost the same thing. but dot3 is awfully slow (looks great, tho) and dot3 can be approximated too, if you consider the light source to be at an infinite distance

So, we can approximate phong with dot3, and consider the light at infinity?

Oh boy, a light at infinity will flatshade a cube's face. And this was exactly the thing you DIDN'T want... or? :P

Marc
08-22-2000, 04:42 AM
If you have phong shading you can throw away almost all lightmaps, so you don't need as many multitexture units (which gives you back some transistors). Another solution would be to not interpolate the normals of a triangle for every pixel, but only for some points in the triangle, and do Gouraud between them (a hardware tessellator?). If you are able to control how many points get 'phong'-interpolated, you can trade off speed against accuracy.


[This message has been edited by Marc (edited 08-22-2000).]

newt
08-23-2000, 10:09 AM
It may interest you to know that the SGI V6 and V8 Vpro graphics sets on Octane2 implement SGIX_fragment_light. Sort of phong but not quite.

So someone's doing something about it.

MrMADdood
08-24-2000, 08:26 PM
Originally posted by Bob:
So, we can approxiate phong with dot3, and consider the light at infinity?

-that's silly... it's not what i meant
i meant dot3 looks good that way... alone


Originally posted by Bob:
Oh boy, a light at infinity will flatshade a cube's face. And this was exactly the thing you DIDN'T want... or? :P

depends.... if you are using phong to show the light fade-off in a fragment, yes, it would look flat-shaded. if you use it to make it look like the fragment is bent, then the answer is no. the fact that the light is considered to be at infinity doesn't take away its direction. (example: sunlight on a sphere)
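
in other words: with a directional light the direction vector L is constant, but the diffuse term I = max(0, dot(N, L)) still changes wherever the normal N does, so a sphere shades just fine; the only thing you really lose is the 1/r^2 fade-off.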

foobar
08-30-2000, 10:07 AM
Originally posted by john:
OpenGL is not a machine, let alone a RISC machine. It was an analogy; nothing more.
Oh, and another thing: OpenGL is an abstraction of the *graphics hardware*.



OpenGL IS a STATE machine. That state machine has instructions which change the state, and they have to be considered CISC instructions. If you thought your analogy was irrelevant, why did you use it? :)

And yes, OpenGL is an abstraction of graphics hardware, and phong shading 'should' be implemented in hardware, but for many years we didn't have hardware transformation on a lot of cards: we can't accelerate the whole API right here right now, but if we ever want to have phong shading in hardware you have to include it in the API. Is phong shading really such a high-level feature? It certainly can't be done within the framework we have now, so the only conclusion is that it is missing. If you don't want phong shading in the API there really is no argument for adding any other features to OpenGL at all, since there aren't many that are lower level than this.

john
08-30-2000, 04:01 PM
OpenGL is a state machine, yes, but not a CPU. So drawing parallels between instruction sets is meaningless, because OpenGL does _not decode instructions_. CISC and RISC refer to the ISA. OpenGL does not HAVE an ISA, so how can you say whether it is a CISC (ISA) or a RISC (ISA)?? You can't. An FSM != CPU.

My analogy was in the same vein as RISC v CISC. More complicated chips are difficult to make faster.

My argument wasn't about NOT adding Phong shading. My argument was: it can be synthesised anyway by using a higher resolution mesh, thereby making the linear interpolation have a smaller error term along a scan line. Why add THIS particular feature, over all others, just because people can't be bothered to refine a mesh? There are OTHER shader models out there, like Torrance-Sparrow, for example. Phong isn't magical. Why leap onto the Phong bandwagon---which you can emulate, anyway, with existing hardware---when there are alternatives? That is my argument. Not that Phong is bad and shouldn't be implemented, but just to keep it in perspective.

Another thing: comparing Gouraud shading and Phong shading on CPUs doesn't necessarily correlate with the same comparison on a graphics chip (which is what that SIGGRAPH paper was about). Graphics chips have fewer resources, and in different places, than a CPU.

Don't get me wrong. Phong is good. But just make sure it's implemented for the right reasons.

cheers,
John

[This message has been edited by john (edited 08-30-2000).]

Humus
08-31-2000, 03:26 PM
Why leap onto the Phong bandwagon---which you can emulate, anyway, with existing hardware---when there are alternatives?


Why not implement it when current hardware almost does it already in the case of DOT3?

foobar
08-31-2000, 06:59 PM
Suppose we have so many triangles in a scene that they are all 1 pixel in size. Then we are processing three normals for each pixel - if we had phong shading we would use fewer triangles and get the same effect of a normal per pixel.

Phong shading fundamentally adds per-pixel normals. It is a method for interpolating shading, not an illumination model. The Phong illumination model is a different thing. All the more complex illumination models could be implemented with Gouraud shading if you really wanted, but the illumination model can only be calculated where the normals are! The question is: where do we stick our normals?

an FSM != cpu but a cpu == FSM :)

[This message has been edited by foobar (edited 08-31-2000).]

Bob
09-01-2000, 12:18 AM
If all polygons are one pixel in size, we will get the same effect as phong shading even if we are using face normals and flat shading. :rolleyes:

inet
09-01-2000, 01:57 AM
The huge amount of pixel-sized triangles will kill you. It's not an efficient way.

I think the right method is to do the light computation at the fragment stage, just like the pixel shader in DX8. Right now it can't do a complex illumination computation, but future versions will be able to. This programmability will give us great flexibility.

Ludo
09-06-2000, 12:34 AM
It seems that everybody wants more CONSTANTs for the glShadeModel() function... Then whether it's implemented in h/w or not is the vendor's choice

MrMADdood
09-24-2000, 02:17 AM
oh this brought a tear to my eyes....

there is light at the end of the tunnel.... and it takes 6 texturing passes

put some ice on your texturing units and go check out: http://www.nvidia.com/Marketing/Developer/DevRel.nsf/pages/A29998E29896BE8E8825695C0004D163

be sure to press F6 to check out the number of triangles in the scene.
seems dot-3 bumpmapping interpolates the light normal just like phong shading...

[This message has been edited by MrMADdood (edited 09-24-2000).]

LordKronos
09-25-2000, 09:52 AM
I guess I should reply, seeing that the above link points to the per pixel lighting demo I wrote:

My take on this is to NOT add phong shading. Instead we need more texture units and more flexible per-pixel computations. My demo does very realistic lighting in 6 passes. This lighting includes diffuse and specular lighting/bumpmapping, gloss mapping, and distance attenuation.

By upping the texture units to 4, this lighting model (plus 1 or 2 additional effects tossed in) could be done in 2 passes.
The benefit of implementing several texture units and flexible per-pixel computations is that the hardware that enables me to create this lighting model can be used to create something completely different for someone else (like quickly generating dynamic textures). If this lighting model were implemented directly in hardware, that's about all you could use it for.

NVIDIA is on the right track with their design. Per pixel lighting is definitely the way to go. Given enough general hardware, you can implement just about any complex lighting model your heart desires while leaving the door wide open for using the hardware in other creative ways.

MrMADdood said:
seems dot-3 bumpmapping interpolates the light normal just like phong shading

Well, the vector interpolation isn't done through any special dot-3 feature. It's done using a generic feature: a texture unit with a cubemap texture.

The other key to this is the flexible register combiner scheme that allows you to perform arbitrary calculations per pixel. Some of the stuff I did probably couldn't be done purely with dot-3 hardware. But by making the hardware that does the dot-3 calculations flexible (which is kinda what the register combiners do, and then some), almost the same silicon can have so many more possibilities.
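
for the curious, the diffuse dot product looks roughly like this through the NV_register_combiners API (a sketch, not the full demo: entry point fetching is elided, texture unit 0 is assumed to hold the normal map, and the interpolated light vector is assumed to arrive range-compressed in the primary color):

    /* one general combiner stage computes
     *   spare0 = expand(tex0) . expand(col0)
     * i.e. N.L per pixel from a normal map and an interpolated,
     * range-compressed light vector */
    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

    /* final combiner: output = A*B + (1-A)*C + D, so set A = spare0,
     * B = 1 (ZERO inverted), C = D = 0 to pass the dot product through */
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                           GL_UNSIGNED_INVERT_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);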

[This message has been edited by LordKronos (edited 09-25-2000).]

MrMADdood
09-25-2000, 07:32 PM
MrMADdood bows to LordKronos

"yes sire, thou art thee man"
:)

foobar
09-26-2000, 09:43 AM
I am guessing that adding more texturing units is more expensive than adding phong shading - yes they are more flexible, but as soon as you decide to do complex per-pixel lighting you lose all your texture units!!

If phong shading was added, and then we had something like register combiners on the lighting pipeline, you could save all your texturing units for other effects. And since video-memory bandwidth is currently the performance limiting factor, adding more texturing units may not be the answer: eventually you have to stop reading/writing to/from memory and instead do calculations on the GPU to go faster. Phong shading is the perfect place to do this and will lead to dramatic increases in image quality without wasting texturing bandwidth.

LordKronos
09-26-2000, 02:34 PM
OK, this is a bit long, so bear with me...


Originally posted by foobar:
I am guessing that adding more texturing units is more expensive than adding phong shading - yes they are more flexible, but as soon as you decide to do complex per-pixel lighting you lose all your texture units!!


The problem is, what will phong shading get you? Smooth interpolation of the surface normal. If you want realistic rendering, you are usually going to want textured surfaces. Bump mapping is going to make surfaces look a lot better than simple phong lighting. In order to do bump mapping, you need to use a texture. From here, it's only a small leap to doing the whole equation using texture units. Now, if the hardware SPECIFICALLY does phong shading, how are you going to integrate that into your bump mapping? The answer is that you can't (not if you want something realistic looking), because the result of the phong lighting doesn't take into account the local (pixel-level) surface irregularities. So if you want to do bump mapping, you are going to ignore the result of the phong shading unit, which equates to wasted silicon.

Another thing to consider is that phong shading would be quite expensive, requiring 2 square roots per pixel. Not sure if you know or not, but a square root is quite expensive. Doing it via texture units requires no square roots, making it more viable for inexpensive and high performance graphics cards. If it were feasible to implement a per-pixel square root, I would rather have that exposed bare through the register combiners. I can tell you from my work, I've never needed a phong lighting unit, but I could have used a square root unit once or twice (and for things that have nothing to do with phong anything). That was my reason for calling for more texture units and more flexible per-pixel calculations. If that hardware was built to do phong lighting, that's all you could do (and would you even do that, or would you rather bump map?), but if the same functionality was provided raw, you could use it in so many ways.
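
For the record, normalization only needs 1/sqrt(x), and a rough seed plus one Newton-Raphson step gets you there with plain multiply-adds, which is why a per-pixel unit isn't a crazy idea. A sketch in C (real hardware would seed from a small lookup table rather than this bit-level trick):

    #include <string.h>

    /* approximate 1/sqrt(x): crude bit-level first guess, then one
     * Newton-Raphson refinement y' = y * (1.5 - 0.5 * x * y * y) */
    float fast_rsqrt(float x)
    {
        unsigned int bits;
        float y;

        memcpy(&bits, &x, sizeof(float));
        bits = 0x5f3759dfU - (bits >> 1);
        memcpy(&y, &bits, sizeof(float));
        return y * (1.5f - 0.5f * x * y * y);
    }

    /* normalizing an interpolated vector then costs a few multiplies */
    void normalize3(float v[3])
    {
        float s = fast_rsqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        v[0] *= s; v[1] *= s; v[2] *= s;
    }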



And since video-memory bandwidth is currently the performance limiting factor, adding more texturing units may not be the answer: eventually you have to stop reading/writing to/from memory and instead do calculations on the GPU to go faster. Phong shading is the perfect place to do this and will lead to dramatic increases in image quality without wasting texturing bandwidth.

But with more texture units, you can collapse things into far fewer passes. Right now, cards are severely bandwidth limited. What we need to do is minimize that bandwidth. There are many ways to do so, and more texture units is one of those ways. (Yes, I know the intuitive thing is to think the opposite, but...)

In my demo, I required 6 passes (5 of which were dual textured) to do my lighting. That means it took 11 texture accesses, 6 depth buffer reads, 1 depth buffer write, 6 color buffer reads and 6 color buffer writes. With 32 bit color and depth buffers, that's 19*4 bytes + 11 texture accesses per pixel. Using the simplest texturing scheme (nearest texel, no mip mapping) that would be 11*4 bytes for the textures. A total of 76 + 44 = 120 bytes of bandwidth per pixel.

Now to contrast, let's assume we have a card with 4 texture units. The same lighting would be able to be done in 1 quad-textured pass + 1 single-textured pass. That's 5 texture accesses, 2 depth reads, 1 depth write, 2 color buffer reads, and 2 color buffer writes. That's 7*4 bytes + 5 texture accesses per pixel. Again, assuming nearest texel, non-mip-mapped texturing, that's 28 + 20 = 48 bytes of bandwidth per pixel. Only 40% as much bandwidth. How can that be?

More texture units require LESS bandwidth? The reason, as described above, is that there is a LOT of overhead in each pass.
Also, comparing fewer textures/more passes to more textures/fewer passes, I can say this. With more texture units, in the BEST case, the texture bandwidth is the same as with fewer units. If it takes 5 textures either way, you have the same 5 texture accesses spread out over a different number of passes. However, that is the BEST case. In my example, I needed 11 texture accesses on dual texture hardware vs. 5 on quad texture hardware. The reason is that, in the process of performing diffuse and specular lighting, I had to calculate attenuation in the first pass, multiply that by the diffuse Blinn lighting calculation in the second pass, and multiply that by the texture color and the spotlight filter in the third pass. Then in the fourth pass, I had to calculate attenuation AGAIN. In the fifth pass, I had to calculate the specular Blinn lighting equation (which required me to access the bump map AGAIN). Then in the sixth pass, I had to multiply by the diffuse texture AGAIN and the point light filter AGAIN. In all, I had to access some of the textures in multiple passes, causing redundant bandwidth usage.

This was a bit long, but hopefully it helps you see that while more texture units look like they require more bandwidth, in a practical implementation they require less. I also hope you can see that dedicated phong hardware would be a waste when that same silicon could go toward general purpose per-pixel calculations.

foobar
09-26-2000, 06:05 PM
Firstly, real bump mapping does not replace the surface normal, it perturbs the surface normal, so it would be even better with phong shading. Especially given the fact that bump mapping breaks down at polygon edge boundaries: with phong shading you will get much better bump mapping with lower resolution meshes.

Secondly, there are methods to reduce the cost of normalising the interpolated normals with square roots. And which is more costly, a square-root unit or a texture unit with cache?

And, finally, I was not comparing the bandwidth of using more/fewer texturing units, I was comparing the bandwidth of using more/no texture units: the whole point, which you gladly missed, is that using all those texturing units for per-pixel lighting wastes bandwidth that could be used for other effects. In any case your argument does not hold up, since the texture units will have to reload for every triangle, and as I proposed in a previous post: what if the triangles are sub-pixel in size? So we need larger caches: a sign of bandwidth problems. Real-time calculation ALWAYS saves bandwidth; that is the reason it exists.

foobar
09-26-2000, 06:24 PM
I can't help but feel depressed about my life, seeing that I have dragged this debate into its second page :(

grady
09-26-2000, 08:03 PM
2 questions-

i don't want to break the technical roll this thread is on, but i'm new to programming graphics and i was wondering what would happen if you HAD to have a high resolution mesh for an object (ex. a huge jello jiggler). Would the high resolution mesh make the phong shading go even slower? of course it will be slower because there's more complexity in general (i guess), but does it get exponentially slower?

It's fine i guess in LordKronos's demo where you have tetrahedrons and a big square, but what if you want to make a big intricate dragon or animate a satellite dish? can you phong shade the flat surfaces made of 2 triangles and Gouraud shade the fine meshes in the same scene (and even after that still have it look consistent)? surely not huh? thanks.


[This message has been edited by grady (edited 09-26-2000).]

dorbie
09-27-2000, 12:41 AM
If you are familiar with the fragment lighting extensions you will see that the ability to apply a texture to the various lighting terms was supported. This includes a surface normal texture defined in tangent space where rgb -> xyz normal components from -1 to 1. This seems about as flexible as you need for the next generation of complex intuitive surface descriptions before you go to all-out shader support, but it's not the way things seem to be going. You can do similar things with more basic commands like DOT3 texture and combine the results in very interesting ways, while evidently being easier to implement. Unfortunately it is much more difficult than the rest of OpenGL to program, which is my biggest concern. I understand the theory that OpenGL needs to become the SIMD instruction pipeline for multipass, but that shouldn't be the sole direction for fragment lighting. People who suggest DOT3 product for fragment lighting seem to be out of touch with most of the developers who write OpenGL applications. DOT3 & register combiner implementations clearly fail the "hello world" test. To develop applications in OpenGL with vertex lighting you don't really need to be a T&L expert; probably the most complex part for newbies is computing a surface normal. Developers want to focus on their application code, not their graphics code, and OpenGL hardware developers need to remember that. Games are a bit of an exception, with higher eye candy standards and more resources invested towards graphics expertise and implementation. This goes all the way back to software rendering and even VGA mode X hacks.

I think there's room to wrap the existing supported extensions in an overall embodiment which allows applications to easily code fragment lighting at the OpenGL level. This would greatly increase the uptake of such things. I'm not talking about shaders & shader compilers here, I'm talking about something like fragment lighting which is almost like a glu library and lets you code at a conventional OpenGL level and abstracts the ugly details. There's a strong need for this kind of functionality at the OpenGL level. Not DOT3 & register combiners, not shaders and not a scene graph, just let me add fragment lighting easily to my OpenGL applications and you worry about the details.

I have my issues with the fragment lighting extension spec too. I think it suffers from the fact that it's completely orthogonal to the existing lighting path. That is highly redundant, especially if you want applications to quickly use the feature. I really believe that a glShadeModel token could have done similar things; heck, with the exception of an additional spotlight cutoff term (via a separate extension) the fragment lighting parameters are 100% identical to the vertex lighting parameters. Under those circumstances it's foolish to replicate the entire vertex API with glFragment*; it's quite unbelievable that anyone thought it was a good idea. Sure, it exposes the hardware in interesting ways when wired into the rest of OpenGL, but there were other options besides glShadeModel. Additional glLight target tokens would have been more flexible while replicating the vertex light model and materials on the fragment side. Sure, it's not as flexible as completely orthogonal API calls, but a darned sight more useful and faster/easier to develop to.

Anyway, I don't expect to see fragment lighting soon, but some work on putting the low level DOT3 texture et al. calls in a more user-friendly package would set a nice precedent. I think that would be a good way to go for the future: adding low level powerful interfaces underneath higher level easy-to-use abstractions. We're already heading that way from the other direction with the programmable vertex extensions. You can see that vertex lighting is implementable on top of the programmable vertex extension (and probably will be in OpenGL implementations); this is the high level abstraction which existed before the low level instruction API. Looking at the fragment lighting side of operations, we effectively have the programmability and it's getting more powerful. What we need is an interface which does the obvious intuitive thing on top of it, which software developers can use to get better quality intuitive fragment shading without setting up textures and register combiners to compute the lighting terms themselves. Otherwise you're simply ignoring the abilities, desires and requirements of the majority of software developers out there.

LordKronos
09-27-2000, 01:57 AM
I think I set a precedent here... long posts get long replies. Let's see if I can make this one short (fat chance):

Actually, it can go either way. In my demo bump mapping replaces the surface normal. The surface normal is just used to orient the light and view vectors into tangent space per vertex. I think this is the most common use we will see of bump mapping in the near future. Also, in case you aren't familiar with the register combiners, they do have other inputs: primary & secondary color, 2 constant colors, and a fog color. These can be used to input additional non-texture parameters. If you need a vector interpolated across a surface, you can range compress it and store it in the primary color. It won't be normalized, of course, but rather than adding phong hardware, if they added square root operations to the register combiners you could normalize it and calculate your phong shading for yourself. This is why I called for more texture units and MORE FLEXIBLE PER-PIXEL CALCULATIONS.

Bump mapping doesn't seem to break down at polygon edge boundaries for me. Not sure what you mean by this. All the bump mapping demos I've seen appear to work fine over polygon boundaries.

If square roots can be made efficient enough to provide them per pixel, put them in the per-pixel calculations like I said. They can be used general purpose there.

If (or rather WHEN) cards have the power to render scenes using all sub-pixel triangles, I have no doubt they will also have enough bandwidth. Also, breaking obvious surfaces like a floor into pixel-level triangles will waste WAY more bandwidth than a single poly lighted and bumped with textures & register combiners. Oh, and also, if polys are pixel size, are you telling me Gouraud shading wouldn't be good enough for you (interpolating 3 vertices over 1 pixel is surprisingly accurate ;) )? Not sure where your argument is going with this.

Please dont base your entire argument against me on the basis of texture units. My argument (and request) was for
(1)more texture units
(2)more flexible per-pixel calculations

Some things can only be done with (1) other things can be done using just (2), other need both.

And don't tell me that my argument doesn't hold up. In fact, I think you will see that my argument is exactly what will happen with next year's cards (including X-Box) and a lot of next year's games.



[This message has been edited by LordKronos (edited 09-27-2000).]

LordKronos
09-27-2000, 02:10 AM
dorbie:
you had a lot to say there. Most of it was based on making the API easier to use. I do agree with you, and I think it will get there eventually. Once everyone can support the current crop of top-of-the-line features, I wouldn't be surprised to see the ARB (eventually...) release new lighting model commands/constants that will actually configure the per-pixel shaders for some general purpose effects in a single command.

foobar
09-27-2000, 09:55 AM
Let's see if I can dribble on as long as the rest of you...

In your demo it replaces the surface normal because the surface normal is the same all over the surface: because you don't have phong interpolation! This is also the reason it breaks down at polygon edge boundaries: there is a discontinuity in the surface normals. You fix this by matching and weighting normals at each vertex to achieve the desired surface, but your method could not take advantage of this.

Yes, interpolating 3 vertices over one pixel is surprisingly accurate, and it is also doing 3 times as much work as you need to! You won't have to break 'the floor' down into sub-pixel triangles to get a normal per pixel: with phong shading that is the whole point! (We are arguing the same side a bit here.)

I think you are suggesting the interpolation of dot products over the surface using register combiners, but you need to interpolate the normals. If you don't, you will miss sharp highlights in the centre of polygons, because none of the values in the centre can be greater than at the three vertex normals: proper phong shading and bump mapping give much better image quality, which is one of the good things about OpenGL.

Yes, I agree that what you say will happen with video cards over the next few years. But be fair: 'because NVIDIA and Microsoft say so' is not a good argument, is it? Adding more texturing units is inevitable and is not even a question for extending OpenGL, because it supports arbitrary multi-texture already. But if you are going to waste multi-texture doing per-pixel lighting, you might as well stick it in the lighting pipeline and have all those texture units for free.

Finally, I realise that you have an unhealthy relationship with your register combiners, and yes, they are useful, but it would be better to separate lighting and texturing operations so that you can configure an arbitrary illumination equation with arbitrary multi-texturing. That is the most efficient way. If I want to use phong-interpolated, physically based lighting equations, anisotropic mip-mapping with a couple of levels of multi-texture and reflection maps, I will need about 16 texturing units, which is ludicrous considering current cards still only have 2!

Plus, your method is fine for games where about half the team is working on optimising the graphics, but what about other software where the developer would just like to be able to turn on phong shading? Basically what you are suggesting is that we should have phong shading capability in the hardware but it should be very complicated to use: where is the sense in that? That is not what OpenGL is for. Maybe you should be using Direct3D instead!

LordKronos
09-27-2000, 05:31 PM
My demo uses flat surfaces, but there are demos on the NVIDIA site that use curved surfaces (a torus) which work quite fine... nothing breaking at poly boundaries. The way to do this is to specify a tangent space light or half-angle vector per vertex, which then gets interpolated per pixel. This can actually be placed in the primary or secondary color to avoid a texture unit. However, it will not get normalized this way. To do so I use a normalization cube map. If the register combiners were more flexible (inverse square roots) you could do the normalization without a texture unit.
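
For anyone who hasn't seen one: a normalization cube map just stores, at every texel, the range-compressed unit vector pointing toward that texel, so looking up any vector returns it normalized. A sketch for one face (the +X face; the other five permute the axes and signs per the cube map spec):

    #include <math.h>

    /* fill the +X face of a normalization cube map: each RGB texel is
     * the normalized direction to that texel, range-compressed from
     * [-1,1] into [0,255] */
    void fill_posx_face(unsigned char *rgb, int size)
    {
        int s, t;
        for (t = 0; t < size; t++) {
            for (s = 0; s < size; s++) {
                /* direction through texel (s,t): major axis +X,
                 * sc -> -z, tc -> -y per the cube map face layout */
                float x = 1.0f;
                float y = -(2.0f * (t + 0.5f) / size - 1.0f);
                float z = -(2.0f * (s + 0.5f) / size - 1.0f);
                float inv = 1.0f / (float) sqrt(x * x + y * y + z * z);
                unsigned char *p = rgb + 3 * (t * size + s);
                p[0] = (unsigned char)(255.0f * (0.5f * x * inv + 0.5f));
                p[1] = (unsigned char)(255.0f * (0.5f * y * inv + 0.5f));
                p[2] = (unsigned char)(255.0f * (0.5f * z * inv + 0.5f));
            }
        }
    }

Each face then gets uploaded with glTexImage2D on the matching GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB (etc.) target.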

I wasn't arguing in favor of sub-pixel triangles, someone else brought the topic up (though I can't seem to find the quote???), I was shooting it down.

I am talking about interpolating vectors across the surface (whether through normalization cube maps, currently done, or through per-pixel normalization calculation on future theoretical hardware).

Unhealthy relationship? I prefer to call it recognizing a superior technology relatively early on and embracing it.

If you want to separate lighting and texturing, tell me how you plan to do bumpmapped lighting without a texture unit? Please don't say highly tessellated geometry or I will have to call you a bandwidth murderer.

No, I did say above that eventually there will probably be a single constant or command that will configure the register combiners for your phong lighting equation. This is how I suspect NVIDIA currently does GL_MODULATE and other texture operations.

In closing:
(1)more texture units
(2)more flexible per pixel calculations

foobar
09-27-2000, 08:13 PM
I understand now, sorry...

...Not only do you not want to add phong shading, you don't even want to use the existing lighting facilities!? All your lights are defined using textures and register combiners as well, right? So do you need multiple passes for multiple lights? And when the day comes when you want to use a more complicated illumination model you will need three or four texture units per light? Plus all the other effects you will want to be using then as well: we will be talking 50+ texture units.

Embracing 'superior' technology is always unhealthy.

Bump mapping breaks down at polygon boundaries unless the normals match - it's a mathematical fact.

Bump mapping without a texture unit? This is not about NVIDIA's h/w, but they could quite easily grab extra input from the registers to perturb the normal just before applying the illumination equation. What's the problem?

Of course NVIDIA does GL_MODULATE with the GPU's registers. They didn't just put in extra registers for the likes of you, mate!

In case you had forgotten, this is about OpenGL - not what is fashionable in NVIDIA's hardware labs. The original question was whether to include phong interpolation in OpenGL - you obviously haven't read it, because I have had to repeat myself! You don't even use the OpenGL lighting pipeline, so you don't really need to care, do you? Try to be objective and stop kowtowing to NVIDIA. If you want a job, just ask them, they can only say no!

LordKronos
09-28-2000, 03:26 AM
Jeeez, talk about being rude. I'm stating my side of the argument and you have to come back with personal insults. Can we please have a discussion without personal attacks? (Just a hint: people who can't debate without resorting to personal attacks often destroy their own credibility.)

In case you aren't aware, there is currently no per-pixel lighting pipeline in the hardware (at least not consumer level; I have no knowledge of $5000 professional hardware). The hardware calculates all lighting per vertex and uses that to modify the per-vertex color, which then gets interpolated in the per-pixel pipeline.

I suspect you haven't done any real work with register combiners, because you don't seem to understand fully what they are capable of. I get the impression you think it's just some texturing unit, but you can actually do some things with it without having any texture units activated at all. As is, you can calculate reasonable approximations of phong lighting on a smooth surface without any texture units enabled. Of course, the normal, light, and half-angle vectors won't be normalized. But even that could be done with MORE FLEXIBLE PER PIXEL CALCULATIONS!!!!!

Oh, and when the day comes that I want to use a more complicated illumination model, I will be able to do it, because I asked for flexible per-pixel calculations. How do you plan to do it with your very specific phong lighting hardware?



Embracing 'superior' technology is always unhealthy.

Yeah, I bet all those people who embraced multitexturing when Voodoo 2 came out are kicking themselves now. What a mistake that turned out to be. Seriously, I can't see what is unhealthy about it. So often inferior products set the standard because of clever marketing gimmicks and such. Is it such a bad thing to embrace superior technology and try to promote it so that maybe it won't be replaced by something inferior but well marketed?


Bump mapping breaks down at polygon boundaries unless the normals match - it's a mathematical fact.

That is not a fact. It is ONE of SEVERAL ways to look at the lighting model equations. It's a FACT that I've seen curved surfaces done where the bump mapping works perfectly over polygon boundaries. You know the saying "there is more than one way to skin a cat"? Well, there is more than one way to light curved surfaces. You think you have to leave the light in the same position and "bend" the normals. But you can also "bend" tangent space by moving the light as you go across the surface while leaving the normals pointing the same way. You may or may not understand the math behind this, but let me guarantee you IT WORKS. And that is a FACT.



Bump mapping without a texture unit? This is not about NVIDIA's h/w, but they could quite easily grab extra input from the registers to perturb the normal just before applying the illumination equation

Again, you seem to be thinking like the word "register" equates with "texture". In any case, how do you propose they get this per-pixel perturbed normal, given that you want the lighting equations and the texturing to be separated? You are now arguing against one of your previous points.

And I do know the whole argument. This does apply to OpenGL and not just NVIDIA, because I'm sure in due time other hardware will adopt similar functionality. Yes, I do use the current lighting pipeline when it is sufficient for my needs. However, the lighting pipeline operates per vertex, and fails to support per-pixel bumpmapping and such. And to be objective, you have to know both sides of the argument inside and out. I may be wrong, but my perception is that you have little to no hands-on experience with register combiners and per-pixel lighting.

foobar
09-28-2000, 01:56 PM
The only rude one here is you: I began with a short comment and you came back with a huge patronising essay; I write a short reply and then I get another huge 'article' that picks and chooses its argument conveniently without really being constructive. Hence I decided to wind you up a treat, which has been very easy.

The word register does equate with texture: when you load a texel into the texturing unit, where do you think it is stored? In a REGISTER. In fact this is the fundamental problem with your argument: given that computers today are severely limited by bandwidth, and that there is no sign of this not being the case in the future, there is no credible argument for replacing calculations with lookup tables, which is essentially what you are doing. One texture unit per light? Please! Unless there IS something I don't know about register combiners, you can approximate phong shading with them as you said, and maybe one day you will be able to do a square root, but you still can't apply arbitrary lights to that equation without using the texture units to specify the lighting equations, which is what I would like.

All I originally said was that bump mapping breaks down at polygon boundaries: which it does - any function that relies on the surface normal breaks down when the surface normal is discontinuous, and I know you know this, so why are you having a problem with it?

If you read the previous posts you would understand how to implement a different illumination model when you have phong shading, without requiring 4 texture passes per light source.

Why don't you take an objective look at the real problems with what you are suggesting and think of a solution. The only thing I am suggesting is adding phong shading to OpenGL, and you are going on an 'NVIDIA is my Saviour' bender.

No, I have never used the register_combiner extensions, but I understand how they work, thank you very much (the graphics pipeline ain't rocket science).



Yes, I do use the current lighting pipeline when it is sufficient for my needs. However, the lighting pipeline operates per vertex, and fails to support per-pixel bumpmapping and such


EXACTLY.

[This message has been edited by foobar (edited 09-28-2000).]

LordKronos
09-28-2000, 04:45 PM
OK, I am posting this reply, then I am done. I have made my point. I don't think I can convince you, foobar (you certainly can't convince me), and I hope at least a few people have seen my point enough to be open-minded and investigate register combiners for their full capability. I do believe that their style of per pixel calculation will become standard on future hardware because of their power.


Originally posted by foobar:
The only rude one here is you: I began with a short comment and you came back with a huge patronising essay; I write a short reply and then I get another huge 'article' that picks and chooses its argument conveniently without really being constructive. Hence I decided to wind you up a treat, which has been very easy.

Sorry if I offended you. I didn't realize long postings were rude. I was just trying to give a detailed account of my position on the topic, and it seems like a few people understood where I was coming from. From now on, I will make it my point to drop in on every thread and chastise anyone who posts more than 2 paragraphs.


The word register does equate with texture: when you load a texel into the texturing unit, where do you think it is stored? In a REGISTER.

No. That's like saying the term "variable" equates with "float", just because a variable can be a floating point variable.


All I originally said was that bump mapping breaks down at polygon boundaries: which it does - any function that relies on the surface normal breaks down when the surface normal is discontinuous, and I know you know this, so why are you having a problem with it?

This is what you believe to be true. I KNOW otherwise. You make your claim based on your knowledge, but there is another technique that achieves the same results by attacking the problem from a different perspective. There are several NVIDIA demos that show bumpmapping of curved surfaces in action, and show that bumpmapping can be done accurately on curved surfaces over polygon boundaries. Go to:
http://www.nvidia.com/marketing/developer/devrel.nsf/TechnicalDemosFrame?OpenPage

and look at "Simple Dotproduct3 Bump Mapping" or "Bump Mapping". Both show this.


No, I have never used the register_combiner extensions, but I understand how they work, thank you very much (the graphics pipeline ain't rocket science).

Then I wouldn't say you have a good enough understanding to judge them. Yes, sure, it looks fairly obvious from the outside, but until you really play with them and see how far you can push them, you won't realize how truly powerful they are.


One last thing that just came to mind: the call here is for hardware to perform phong lighting. I call for a flexible system capable of doing many things, including a phong lighting model. I was wondering why the focus is specifically on phong lighting. It seems to me that "phong" has become a buzzword that people just throw out. But there are other lighting models. Another quite popular one is the Blinn lighting model. Without trying to argue for any model in particular, I would say every lighting model has its high points and its low points. I am asking for a per-pixel shading system capable of many things, including Gouraud shading, phong lighting, Blinn lighting, and much more. In fact, it is feasible that someone could introduce a new lighting model tomorrow that tops everything else, and a flexible system would be able to implement that model too. All I ask for is flexibility. That way, it is in your hands to decide how YOU want to light YOUR polygons in a manner that best suits your application's quality and performance requirements.

There. I'm done. Feel free to respond to this if you like. If anyone wishes to discuss the topic with me further, feel free to contact me offline. My ICQ number is in my profile. I would be happy to discuss further, but I feel the usefulness of posting more on the topic here has reached its limit.

[This message has been edited by LordKronos (edited 09-28-2000).]

foobar
09-28-2000, 06:15 PM
I will post a response simply because I know you don't understand. I can't convince a blind man that the sky is blue, but I will have one last go...

Here is my first post, so you can see how much time you have wasted getting confused:

I am guessing that adding more texturing units is more expensive than adding phong shading. Yes, they are more flexible, but as soon as you decide to do complex per-pixel lighting you lose all your texture units!!

If phong shading were added, and we then had something like register combiners on the lighting pipeline, you could save all your texturing units for other effects. And since video-memory bandwidth is currently the performance-limiting factor, adding more texturing units may not be the answer: eventually you have to stop reading/writing memory and instead do calculations on the GPU to go faster. Phong shading is the perfect place to do this, and it would lead to dramatic increases in image quality without wasting texturing bandwidth.


The little things first:


I do believe that their style of per-pixel calculation will become standard on future hardware because of their power. They are registers in a chip; chips have to have registers. This is not a 'style', it is how texture units work! You can also use NV_vertex_program to do the same thing with the T&L hardware. You could do the same with any graphics chip. I was not arguing against more flexible register combiners.

No! Does variable equate with float? It also equates with other types, but in this particular instance it is a float.

Yes! You can bump-map curved surfaces properly. Of course you can, because the adjacent normals are almost identical; that was not what I was saying. I said "with phong shading you will get much better bump mapping with lower resolution meshes", where the triangle normals would have to be distorted to make it work.

Another quite popular lighting model is the Blinn lighting model? I have already had this argument with John! Must I repeat myself again? Phong interpolation is not a lighting model, it is an interpolation model. The lighting model is applied as a function of the surface normal: Phong interpolation gives you a surface normal per pixel with which to evaluate the lighting model. Once you have accurate per-pixel normals you can't go any further, so Phong interpolation is a step towards the peak of polygon graphics.
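If it helps, the distinction fits in one scanline loop (a plain C sketch of mine; 'shade' stands for whichever lighting model you care to plug in):

#include <math.h>

typedef struct { float x, y, z; } vec3;
typedef unsigned int rgba;

static vec3 lerp3(vec3 a, vec3 b, float t)
{
    vec3 r;
    r.x = a.x + t*(b.x - a.x);
    r.y = a.y + t*(b.y - a.y);
    r.z = a.z + t*(b.z - a.z);
    return r;
}

static vec3 norm3(vec3 v)
{
    float inv = 1.0f / (float)sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    vec3 r;
    r.x = v.x*inv; r.y = v.y*inv; r.z = v.z*inv;
    return r;
}

/* Phong INTERPOLATION: blend the vertex normals across the span and
   evaluate the lighting MODEL (Phong, Blinn, whatever 'shade' is)
   once per pixel.  Gouraud would call 'shade' only at the endpoints
   and blend the resulting colours instead. */
static void phong_span(int x0, int x1, vec3 N0, vec3 N1,
                       rgba (*shade)(vec3 N), rgba *scanline)
{
    int x;
    for (x = x0; x <= x1; x++) {
        float t = (x1 == x0) ? 0.0f : (float)(x - x0) / (float)(x1 - x0);
        vec3 N = norm3(lerp3(N0, N1, t)); /* the per-pixel square root */
        scanline[x] = shade(N);
    }
}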

The main issue:

Throughout this argument you have avoided rebutting the fact that your method requires a texture unit to 'look up' the lighting equation, which wastes bandwidth. This is indisputable. In fact, the only real difference of opinion is that I want the GPU to calculate the lighting model for me, and you want to supply it as a texture map. The best way to make hardware go faster is to reduce texture fetches and memory reads and replace them with calculation: this is what texture compression algorithms do, it is why PowerVR chips have a much lower bandwidth requirement than any other rendering technology, and it is the whole concept behind the PlayStation 2 architecture. Take a look at the GScube; you couldn't daisy-chain 16 NVIDIA processors together like that, it would get you absolutely nowhere. PowerVR chips already do 8-pass multitexture while NVIDIA's method is pushing 2. If bandwidth were not an issue you could store the result of any calculation in a 2D lookup table and fetch it in one clock cycle, but that will not happen any time soon.

Just in case you do read this, I apologize for saying you were sucking up to NVIDIA. But that is what it looked like, and that is why it is 'unhealthy' to embrace new technology: technology is the present, and subjective. We are talking objectively about the future.

PS If anyone has read this far, congratulations!

[This message has been edited by foobar (edited 09-28-2000).]

MrMADdood
09-29-2000, 09:00 PM
PS If anyone has read this far, congratulations!
i have!! http://www.opengl.org/discussion_boards/ubb/smile.gif
ok.. in closing...


"with phong shading you will get much better bump mapping with lower resolution meshes"
absolutely right. BUT it can be sorted with acceptable error using the method LordKronos described, with textures. anyway... the idea is: if all that's missing to implement phong shading with register combiners is an inverse-root normalization... i have a paper about that somewhere (not for OpenGL)... i remember the guy used a 2nd-degree Taylor approximation for the root function, and it worked well enough (because of the nature of the source data).
that should get the best of both worlds: texture flexibility with hardware normalization.
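from memory, the approximation was basically this (my reconstruction, not the paper's code):

/* 2nd-degree Taylor expansion of 1/sqrt(x) around x = 1:
   1/sqrt(x) ~= 1 - (x-1)/2 + 3(x-1)^2/8
   good enough because a blend of two unit normals has squared length
   between about 0.5 (a 90-degree spread) and 1, usually much closer
   to 1; that's the "nature of the source data" part */
static float rsqrt_taylor(float x)
{
    float e = x - 1.0f;
    return 1.0f - 0.5f*e + 0.375f*e*e;
}

static void normalize_approx(float n[3])
{
    float inv = rsqrt_taylor(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    n[0] *= inv; n[1] *= inv; n[2] *= inv;
}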

it is why PowerVR chips have a much lower bandwidth requirement than any other rendering technology
i remember an article about NVIDIA researching PowerVR-like technology...

and foobar... you were kinda rude... ask around http://www.opengl.org/discussion_boards/ubb/smile.gif

foobar
09-30-2000, 09:45 AM
Everyone would be using tech similar to PowerVR's if it wasn't patented! When the patent runs out, I expect we'll see that a lot of people suddenly think it's a good idea http://www.opengl.org/discussion_boards/ubb/smile.gif. I suppose you could get round it by calling it a 'video-memory write cache for display lists' rather than an on-chip z-buffer, or something.

Ask around? Do you people have a secret club or something? I hope you all enjoyed being offended as much as I enjoyed offending you http://www.opengl.org/discussion_boards/ubb/smile.gif People should also note that not reading people's comments thoroughly before replying is rude. And if you don't understand, you shouldn't reply at all, as far as I'm concerned.

And finally: unfortunately you are still missing the point about register combiners, and I am tired of explaining. Maybe I am not good at explaining, but it seems you are not trying very hard to understand. Ignorance is bliss, after all. (Am I being rude again? Oh well.)

[This message has been edited by foobar (edited 09-30-2000).]

grady
09-30-2000, 09:10 PM
http://smilecwm.tripod.com/net5/greenchainsaw.gif


Now that i have your attention



Originally posted by grady:
2 questions-

i don't want to break the technical roll this thread is on, but i'm new to graphics programming and i was wondering what would happen if you HAD to have a high-resolution mesh for an object (e.g. a huge jello jiggler). would the high-resolution mesh make the phong shading go even slower? of course it will be slower because there's more complexity in general (i guess), but does it get exponentially slower?

It's fine, i guess, in LordKronos's demo where you have tetrahedrons and a big square, but what if you want to make a big intricate dragon or animate a satellite dish? can you phong-shade the flat surfaces made of 2 triangles and gouraud-shade the fine meshes in the same scene (and even then, still have it look consistent)? surely not, huh? thanks.

Firestorm
10-02-2000, 07:22 AM
Foobar, you ARE rude, and you are completely missing LordKronos's point.
Your comments are so irrational at times it makes my skin crawl..

You want to be right soooo much, don't you?

I agree with you on one point though: non-linear interpolation of normals is useful.
I don't care much about phong, though..

foobar
10-03-2000, 03:09 PM
Non-linear interpolation of normals? That has nothing to do with what I am saying.

I have replied to LordKronos personally. If anyone else has anything constructive to add, please feel free.

grady
10-03-2000, 03:15 PM
bink http://smilecwm.tripod.com/net4/popworm.gif -wug wug wug

what if you neeeeeeed a fine mesh for a smooth jiggly surface? is phong shading bad for that?

foobar
10-03-2000, 03:19 PM
If the hardware is capable of phong-shading RESOLUTION x (number of frames/sec) pixels, then it should not be a problem (ignoring overdraw, of course). You could turn off phong shading for high-resolution meshes anyway, just as easily as you can turn off lighting in OpenGL now.
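To put rough numbers on it (my arithmetic, nothing more): at 1024x768 and 60 frames/sec that is 1024 * 768 * 60 = roughly 47 million phong-shaded pixels per second, overdraw excluded. Note that the mesh resolution does not appear anywhere in that figure, which is the point.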

Phong shading is not exponentially slower: the cost is a fixed amount of extra work per pixel, dominated by the per-pixel square root, as LordKronos said.

[This message has been edited by foobar (edited 10-03-2000).]

LordKronos
10-04-2000, 04:26 AM
I said I was done debating the topic, and I am. Foobar and I have had some communication offline (I thought the postings were degenerating, so I contacted him personally) and we seem to have resolved our points of view. Now, without debating what we do/don't need...


Originally posted by grady:
what if you neeeeeeed a fine mesh for a smooth jiggly surface? is phong shading bad for that?

Phong shading would never be bad from a visual standpoint. It may have performance implications, but I don't think that's what you are asking about. Phong shading a fine mesh should not look any worse than anything else. However, if the mesh gets fine enough, phong shading and gouraud shading begin to look identical: if a triangle covers only 3 pixels (one at each vertex) or fewer, you have no chance to even see the interpolation error of gouraud shading. Also of note: as a triangle gets smaller and smaller, in almost all cases the lighting at its 3 vertices becomes similar, so there is very little difference left to interpolate. Another way to think of this is that phong shading a coarsely tessellated surface is essentially the same as gouraud shading (or even flat shading) a very finely tessellated one.

Does this help?

Firestorm
10-04-2000, 09:24 AM
Originally posted by foobar:
Non-linear interpolation of normals? That has nothing to do with what I am saying.
must have been sleeping when i wrote that.
i meant non-linear interpolation of vertices, or interpolation of normals.
in retrospect i'm not sure what i meant http://www.opengl.org/discussion_boards/ubb/wink.gif
god, i really need to get more sleep.
haven't slept in more than 24hrs..
that doesn't do wonders for concentration, i can tell you..

[This message has been edited by Firestorm (edited 10-04-2000).]

Humus
10-10-2000, 06:57 AM
Originally posted by foobar:
Phong shading is not exponentially slower: the cost is a fixed amount of extra work per pixel, dominated by the per-pixel square root, as LordKronos said.


A square root per pixel isn't impossible to do in a single cycle. With 3DNow! you can do a square-root approximation (to 13-14 correct mantissa bits; you don't need more) in two cycles using a lookup table, so a graphics card could probably do it in a single cycle.
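Roughly the shape of that trick in plain C (a sketch of the general seed-plus-refine idea, not AMD's actual implementation; the table range is chosen to fit the normal-blending case discussed above):

#include <math.h>

#define TBL_SIZE 64
static float seed_tbl[TBL_SIZE];

/* fill a seed table of 1/sqrt over the range we care about: squared
   lengths of blended unit normals fall in roughly [0.5, 1.0] */
static void rsqrt_init(void)
{
    int i;
    for (i = 0; i < TBL_SIZE; i++) {
        float x = 0.5f + (i + 0.5f) * (0.5f / TBL_SIZE);
        seed_tbl[i] = 1.0f / (float)sqrt(x);
    }
}

/* table seed (about 7 good bits) plus one Newton-Raphson refinement,
   y' = y * (1.5 - 0.5*x*y*y), which roughly doubles the number of
   correct bits: that is how a crude lookup reaches the 13-14 bit
   region in a couple of operations.  Call rsqrt_init() once first. */
static float rsqrt_fast(float x)
{
    int i = (int)((x - 0.5f) * (TBL_SIZE / 0.5f));
    float y;
    if (i < 0) i = 0;
    if (i >= TBL_SIZE) i = TBL_SIZE - 1;
    y = seed_tbl[i];
    return y * (1.5f - 0.5f * x * y * y);
}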