new shading mode

well… Gouraud is great and all… but i think current processors are ready for something new…

Gouraud is a linear interpolation, so it produces artifacts in several zones… but what if we used a non-linear interpolation? we just need a few more normals… and… um… help here

you could, or you could just bump up the resolution of your mesh. Think about it. If you’re not happy with the error from approximating the surface function with vertices spaced 1 unit apart, approximate it with vertices spaced 0.25 units apart. The error is smaller, and all without making a new shader model.

cheers
John

well… that is exactly what we want to avoid
as far as i know, the light attenuation model uses a 1/r^2 curve, which isn’t THAT complex… if we approximate it, we can use just 1 quad for that big floor scene, and use the extra bandwidth somewhere else
plus, Gouraud is a linear interpolation, so it uses 2 intensities in each calculation. if we use 3 intensities we get a quadratic interpolation, etc.
i tried to figure it out myself, but got caught in some redundancy… the fragment’s brightest point may be needed… dunno
also, there might be some precision issues with really huge fragments.
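
here’s a rough toy in plain C (my own sketch, made-up numbers, nothing official) to show the idea: one 10-unit edge, a point light 1 unit above its midpoint, and the true 1/r^2 falloff compared against plain linear interpolation of the two endpoint intensities and a quadratic fit through an extra midpoint sample:

#include <stdio.h>

/* true 1/r^2 falloff for a point light 1 unit above x = 5 on a 10-unit edge */
static double intensity(double x)
{
    double dx = x - 5.0, dy = 1.0;
    return 1.0 / (dx * dx + dy * dy);
}

int main(void)
{
    double i0 = intensity(0.0);   /* the two endpoint samples Gouraud uses  */
    double i1 = intensity(10.0);
    double im = intensity(5.0);   /* the extra midpoint sample              */

    for (int x = 0; x <= 10; x++) {
        double t      = x / 10.0;
        double linear = (1.0 - t) * i0 + t * i1;          /* 2 samples      */
        double quad   = i0 * (1 - t) * (1 - 2 * t)        /* 3 samples,     */
                      + im * 4 * t * (1 - t)              /* quadratic      */
                      + i1 * t * (2 * t - 1);             /* Lagrange fit   */
        printf("x=%2d  true=%.3f  linear=%.3f  quad=%.3f\n",
               x, intensity(x), linear, quad);
    }
    return 0;
}

the linear column stays flat between the endpoints while the quadratic one at least bulges towards the light… which is the whole point.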

Yer… it’s all a trade-off. Do you want to spend money on silicon for the more complex shader model, or money on improving the bandwidth between the graphics card and processor memory? For my money, I’d vote for the bandwidth: everything gets to use it, regardless of whether it uses the funky new shade model or not.

Approximate the floor with one huge quad, huh?

Well, I can’t see how this would work, even with the kind of interpolation you suggested. Say you’ve got a room, and a light in the middle of the floor (well, a bit above the floor, that is). If it’s placed in the middle, then the intensity in each corner will be the same, and the floor will be flat-shaded if you use Gouraud. But isn’t this going to happen even if you use some kind of quadratic interpolation too? Doesn’t it have to know that the intensity in the middle is stronger (which is easily solved by boosting the resolution of the floor to several quads, so you get more points where you can calculate the intensity)?

Quadratic interpolation needs 3 values instead of 2. That means you would prolly have 1 extra intensity in the middle between each pair of vertices. That should pick up the light in the middle and generate a nice light curve with even light at the extremes.
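
rough numbers to make that concrete (my own toy case: a 10x10 floor, light 1 unit above its centre, plain 1/r^2): every corner sits at r^2 = 5^2 + 5^2 + 1 = 51, so Gouraud paints the whole floor a flat ~0.02. a mid-edge sample sits at r^2 = 5^2 + 1 = 26, about 0.04, so the extra samples already bend the curve the right way, and a centre sample (r^2 = 1, intensity 1.0) would be the one that really catches the hot spot.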

[This message has been edited by MrMADdood (edited 08-16-2000).]

Hey guys!
What you all need is per-pixel lighting or lightmapping. However, in vertex shading mode I think we should replace Gouraud shading with something similar that is perspective correct (I think that’s called Phong shading …).
I certainly agree with MrMADdood that we shouldn’t need more geometry than necessary to get it realistic; a cube should be 12 triangles and not tessellated into 1024 triangles to avoid artefacts. I vote in favor of silicon over more bandwidth.

Hey, I vote for more bandwidth, and if I want Phong shading, I use WIN_phong_shading … And if it’s not available, I use the higher bandwidth to pass more faces

Well … higher bandwidth is good too. But Gouraud sux, it’ll always produce artefacts, they just get smaller with higher tessellation. It’s OK to use a lot of triangles to make a sphere, but needing to use a lot of triangles to get a triangle really stings in my programmer heart …

why does it sting your programmer’s heart? it is no different from approximating the integral of a function by Simpson’s rule, for example, and THAT’S a computer/maths science thing.

Dividing something into smaller versions of the same thing to minimise the approximation error across it is not a hack; it is done time and time again. Terrain height fields work on the same principle, for example.
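
A quick toy example of what I mean (my own, in C, nothing to do with shading hardware): composite Simpson’s rule applied to the integral of 1/(1+x^2) over [0,1], whose exact value is pi/4. Every time you double the number of slices the error drops, without inventing a new rule -- same spirit as cutting the mesh finer:

#include <stdio.h>
#include <math.h>

static double f(double x) { return 1.0 / (1.0 + x * x); }

/* composite Simpson's rule on [0,1] with n slices (n must be even) */
static double simpson(int n)
{
    double h = 1.0 / n, sum = f(0.0) + f(1.0);
    for (int i = 1; i < n; i++)
        sum += f(i * h) * (i % 2 ? 4.0 : 2.0);
    return sum * h / 3.0;
}

int main(void)
{
    double exact = atan(1.0);   /* pi/4 */
    for (int n = 2; n <= 64; n *= 2)
        printf("n=%2d  error=%g\n", n, fabs(simpson(n) - exact));
    return 0;
}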

cheers,
John

cool!!! i didn’t know there was Phong shading available!! (but i guess it’s slow as hell)

i gotta try it, anyway. (is it an extension? doesn’t look like one)

about tessellation to improve detail… yes, that is the current method… but c’mon, we’re trying to improve things here, right?

well… i’ll dig in the archives for fast Phong algorithms… if i find anything worthy, i’ll post again
<EOF>

Keep in mind that if you implement a “better” shading model because you want to reduce the resolution (OK OK, maybe not reduce, but prevent higher resolution), another problem might occur. The model itself might look better with this new shading model, but the silhouette might look bad. A rounded object (a sphere, for example) can look great compared to a Gouraud-shaded object, but the silhouette might still look like an icosahedron or some other Platonic solid. If you increase the resolution and use Gouraud, you can achieve the same effect, but without this nasty looks-like-a-Platonic-solid look.

Yes, we are trying to improve things here, but we’d better improve the things we REALLY need

And yeah, it’s an extension…

[This message has been edited by Bob (edited 08-18-2000).]

Well … i know what you all mean and you’re not “wrong” in any way … but you’re missing the point. I’m not saying that we should approximate spheres with fewer triangles; what I’m trying to explain is that we shouldn’t need to use complex geometry for objects that don’t have complex geometry just to get appropriate shading.
An example:
Say I want to draw a simple room. That’s 6 quads or 12 triangles. The geometry isn’t more complex than that, and thus I shouldn’t need to use more geometry than that. But with Gouraud it’ll look bad as I move around in it. Splitting every wall into 32 triangles might solve the problem, BUT it also makes things slower, especially on cards with no T&L. With Phong shading this problem won’t occur, and it’s not more complex than texturing and could be used with no performance penalty on hardware supporting it. It doesn’t need higher memory bandwidth either.
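
To make the point concrete, here’s a little toy comparison (mine, plain C, made-up numbers, no GL involved) of where the two approaches evaluate the lighting on a single 10x10 wall quad with a point light hovering 1 unit in front of it:

#include <stdio.h>

/* simple 1/r^2 point light 1 unit in front of the wall at (3,4) */
static double light(double x, double y)
{
    double dx = x - 3.0, dy = y - 4.0, dz = 1.0;
    return 1.0 / (dx * dx + dy * dy + dz * dz);
}

int main(void)
{
    /* Gouraud-style: evaluate the lighting only at the 4 corners ...       */
    double c00 = light(0, 0),  c10 = light(10, 0);
    double c01 = light(0, 10), c11 = light(10, 10);

    double worst = 0.0;
    for (int j = 0; j <= 10; j++)
        for (int i = 0; i <= 10; i++) {
            double u = i / 10.0, v = j / 10.0;
            /* ... and bilinearly blend them across the quad                */
            double gouraud = (1-u)*(1-v)*c00 + u*(1-v)*c10
                           + (1-u)*v*c01     + u*v*c11;
            /* per-pixel ("Phong"-style): evaluate it at every sample       */
            double perpixel = light(i, j);
            double err = perpixel - gouraud;
            if (err > worst) worst = err;
        }
    printf("hot spot missed by the interpolated version: %.3f\n", worst);
    return 0;
}

No matter how the four corner values are blended, the interpolated version never sees the bright spot right under the light; the per-sample version gets it for free.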

Q: Why do almost all games use lightmapping?
A: Because vertex shading sux.

I tried vertex shading previously, but was never happy with it; now I use lightmapping and it works a lot better. But if the hardware took over the work of doing the lighting correctly, it would be a lot better.

But your argument is almost self-defeating: “Phong shading won’t be much slower on hardware that supports it” isn’t much of an argument against “high-resolution meshes aren’t much slower on the T&L h/w that supports them”. What you’re advocating is a more complicated shader model (more transistors, more silicon, yadda yadda) just so you don’t have to divide your mesh, which would use silicon that already exists.

but i should clarify: new shader models are certainly a proverbial Good Thing. Don’t get me wrong on that. But, I argue, one shouldn’t be adding new features just because people are too slack to cleverly use existing features.

You’re still missing the point!
Ok, I can use existing features to get around the problem, BUT I can implement a new feature which SOLVES the problem, which is better. And this “more transistors, more silicon” etc. stuff … well, today we add one, we add two, we add three texturing units … we add one, we add two, we add four pixel pipelines … we add, add …
A texturing unit is more complex than a Phong shading unit!! Why can’t we add a Phong shading unit??
Phong shading is superior to Gouraud shading in all aspects; it does not need more memory bandwidth and, if implemented smartly, may not even be slower!

>But, I argue, one shouldn’t be adding new features just because people are too slack to cleverly use existing features.

Isn’t this holding back the developer’s productivity? I have to do a lot more work to get the job done, and the GPU has to do more work too!

Gouraud shading was good back when we had software renderers, since it’s faster for software to handle … but today it’s an obsolete technology.

I agree with you that Phong is better than Gouraud, but this is a debate and it doesn’t have a single correct answer.

You have to appreciate that introducing more complicated hardware means trading features for “something else”. What that something else is, is up to the hardware designers.

This is a crude argument, but think about the philosophy behind a lot of the RISC chips: they’re designed to be lean and mean, and thus run at a faster clock rate. True, Intel and AMD have been able to crank up the clock rate of their complicated processors, but at the expense of adding a tonne of remarkably clever hardware tricks. Alpha, on the other hand, has long had (or USED to have) a very lean chip that ran at blazing clock speeds. Which is better? A more complicated chip that runs at a slower clock speed, or a more complicated chip that exhausts much of its transistor budget trying to hide its complexity (and thus becomes more expensive to build), or a simpler chip that runs at blazing clock rates and is much cheaper, thus opening the possibility of having MANY simpler chips? It’s all a trade…

The VAX architecture used to add features at the drop of a hat, including (if I recall correctly) a single instruction to calculate the CRC of a word. That follows the philosophy of adding more complicated hardware to solve problems that could be solved by a collection of simpler hardware. The VAX ended up with a tonne of instructions that were too specialised and were hardly used.

True, Phong shading might not be considered “too specialised”, but its expense MIGHT be better spent elsewhere. Chips typically have transistor budgets, and the engineers have to work out how best to utilise them. Do you spend a considerable part of your transistor count on a new shade model in hardware, or do you use it to add more texturing units? For an application that wants a better shade model but also requires several texture layers, more texture units might be better. Sure, it’ll need a higher-resolution mesh than what you’re suggesting, but on the other hand it might be able to draw the scene in only one pass. Which is the better trade: a higher-resolution mesh, or multiple passes? And the budget isn’t just limited to transistors: the cards also have to be sold. If you have a cheaper chip that doesn’t have Phong shading but has a kick-ass polygon count, then you might be able to have more than one chip on a card. Alternatively, you might be able to sell consumer-level cards with more video memory, thus limiting the amount of texture thrashing in a game.

My argument is: sure, Phong shading is great, and it’s a fine argument to say we should all be striding boldly forward to new technological heights. But, I argue, needlessly adding features is NOT THE BEST APPROACH. The shading CAN be improved by increasing the resolution of the mesh. Indeed, as Bob points out, this is better for the object’s silhouette. If Phong shading isn’t implemented in hardware, then this leaves scope, at a given price point, to add other features like more graphics pipes and video memory. Which is better? Just because Phong shading is a technologically superior shade model to Gouraud doesn’t NECESSARILY mean it should be implemented, and just because it explicitly SOLVES the problem doesn’t necessarily make it a better solution, either.

cheers,
John

[This message has been edited by john (edited 08-20-2000).]

after reading everything in john’s (huge ) post, i noticed you are seriously considering FULL Phong shading. That won’t do, since full Phong is far too complicated.
I tried to figure out a way to approximate Phong, then i thought “what the hell am i doing” and just headed for www.google.com and searched for Fast Phong Shading.
I highly recommend you read it before posting a reply. Phong can be substantially simplified without losing the looks.

check this SIGGRAPH paper: http://www.cs.unc.edu/~gb/siggraph86.html
fast Phong takes only about twice as much CPU as Gouraud…
think about it…
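
the core trick, as far as i understand it (my own stripped-down toy in C, not the paper’s actual code): approximate the intensity across a span with a quadratic and then walk it with forward differences, so after the per-span setup each pixel only costs two adds:

#include <stdio.h>

int main(void)
{
    /* pretend the per-span setup gave us I(x) ~= a*x^2 + b*x + c          */
    double a = -0.004, b = 0.08, c = 0.2;
    int    width = 16;

    double i   = c;            /* I(0)                                     */
    double di  = a + b;        /* first forward difference, I(1) - I(0)    */
    double ddi = 2.0 * a;      /* second difference, constant              */

    for (int x = 0; x < width; x++) {
        double exact = a * x * x + b * x + c;
        printf("x=%2d  incremental=%.4f  exact=%.4f\n", x, i, exact);
        i  += di;              /* two adds per pixel                       */
        di += ddi;
    }
    return 0;
}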

John, you’re right in many ways, and I know that using more hardware means a higher price, lower yields and more heat. But, as said before, a Phong shading unit doesn’t need to be very complex. Going from 20 million transistors to 20.2 million transistors for the Phong shading isn’t a high price for that kind of valuable feature.
And Phong shading has a lot in common with dot3 bump mapping (it’s like bump mapping without bumps …), which is already implemented in GeForce/GTS/Radeon etc.; you could probably use the same hardware for both with only minor changes.
After reading a little of the link provided by MrMADdood, I’m quite confident that Phong shading could easily be implemented in hardware at a very small cost.
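
To make the “lot in common” bit concrete (my own simplified sketch, not how any particular chip actually wires it up), both operations boil down to the same per-pixel clamped dot product; they just get the normal from different places:

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* dot3 bump mapping : n comes from a normal-map texel                    */
/* per-pixel "Phong" : n comes from the interpolated vertex normal        */
static float diffuse(vec3 n, vec3 l)   /* both assumed already normalised */
{
    float d = dot3(n, l);
    return d > 0.0f ? d : 0.0f;        /* clamp light coming from behind  */
}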

Furthermore, using Phong shading might in many cases reduce the need for more texture layers. With Phong shading we can drop lightmapping, which saves fillrate, bus bandwidth and CPU time (in the case of dynamic lightmapping).

And, as said before, I don’t argue for reducing the polygon count on shapes which are rounded, but I just don’t feel good about drawing cubes with a lot of triangles since a cube is only 6 quads in the real world.

I don’t really feel good about having more than two triangles for a simple square either. Wouldn’t having more triangles complicate and slow down collision detection?