What do you experts think of this guy’s statement?

First of all, the high-end video cards on the market right now aren’t having a problem because of the number of triangles that Quake III draws. There are two other bottlenecks: fillrate, and bus bandwidth.

My card does fine with about 60 thousand triangles (with lightmaps) at 125 FPS, as long as it’s drawing a static scene. In this case - and this is the case with Quake III maps - the geometry is sent to the card in a compiled vertex array. When it needs to be rendered for any given frame, the game doesn’t have to send the triangles again.
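
(For anyone following along, here’s a minimal sketch of that static path using the EXT_compiled_vertex_array extension - the array names are just placeholders, and in real code you’d fetch the Lock/Unlock entry points through the usual extension mechanism:)

[code]
/* Static level geometry: point GL at the arrays once, "lock" them so the
   driver can cache/optimize them, and from then on each frame only issues
   draw calls - the vertex data itself is not resent. */
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, levelVerts);        /* placeholder arrays */
glTexCoordPointer(2, GL_FLOAT, 0, levelTexCoords);

glLockArraysEXT(0, numLevelVerts);                  /* EXT_compiled_vertex_array */

/* per frame: indices only, no vertex re-upload */
glDrawElements(GL_TRIANGLES, numLevelIndices, GL_UNSIGNED_INT, levelIndices);

glUnlockArraysEXT();
[/code]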

Dynamic/moving/animated/whatever models need to be updated to the card each frame. That takes bandwidth.

That won’t be the case with vertex shaders, which the Doom III engine will take full advantage of. You send the data once in a compiled vertex array, and for each frame, you send a little bit of data in a constant pool (keyframe stuff, timestamps) and have a programmed vertex shader animate the model for you on the card. You use very little bandwidth to animate a very highly-detailed model.
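
(To make that concrete: the per-vertex work is essentially a blend between two keyframe positions that already live on the card; the CPU only uploads the blend factor and the keyframe selection into the constant registers each frame. A rough C++ sketch of the math a vertex program would evaluate per vertex - the names are made up:)

[code]
struct Vec3 { float x, y, z; };

/* What the vertex program computes for each vertex. frameA/frameB are this
   vertex's positions in two keyframes stored on the card; t comes from the
   tiny per-frame constant upload (timestamp/keyframe selection). */
Vec3 AnimateVertex(const Vec3& frameA, const Vec3& frameB, float t)
{
    Vec3 p;
    p.x = frameA.x + t * (frameB.x - frameA.x);
    p.y = frameA.y + t * (frameB.y - frameA.y);
    p.z = frameA.z + t * (frameB.z - frameA.z);
    return p;   /* then transformed by modelview/projection as usual */
}
[/code]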

Another bandwidth killer is dynamic textures. Per-pixel shaders will move this from the CPU to the card.
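
(The bandwidth-heavy path being described is the one where the CPU recomputes the texels and re-uploads the whole texture every frame, roughly like the sketch below - the update function and names are hypothetical. A per-pixel shader would instead generate the effect on the card from a few parameters, so the upload disappears:)

[code]
/* CPU-side "dynamic texture": recompute the image in system memory, then
   push every texel across the bus, every frame. */
UpdateWaterTexels(texels, width, height, time);      /* hypothetical CPU sim */
glBindTexture(GL_TEXTURE_2D, waterTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, texels);  /* full re-upload */
[/code]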

So throw out your Quake III examples, because the Quake III/TA engine doesn’t do those.

A related issue: the CPU will be doing less graphics-related computation because of vertex and per-pixel shaders, which could translate into another speed increase.

Another limiting factor (something that drops FPS) is overdraw. Having more detail on a model doesn’t significantly increase the amount of overdraw (just a little because the model can be a bit more bumpy). So you can nearly forget that as a possible slowdown.

Something else you need to keep in mind: technology advances in speed - at least in CPUs and GPUs - are measured by how often the speed doubles. For instance, CPUs generally double in speed every 18 months. NVidia is expecting their cards to double in speed every six months (yes, I read that in an article, but I can’t remember where it is) - and it’s doable because of how easy it is to parallelize the work. With NVidia’s purchase of 3dfx, they’ve acquired a lot of that sort of technology.

You guys should know more than me about this. This guy is saying that DOOM 3 will have character models that are 250k each. At 60 fps just one model alone would require 15 million tris a sec!

What do you think? I’m having trouble believing this will be possible by the time DOOM 3 comes out.

[This message has been edited by WhatEver (edited 05-05-2002).]

There is no way the character models will be 250k each.

Carmack has said a Geforce3 will run Doom3 at about 30fps. With everything else going on, I wouldn’t expect the engine to be pushing much more than 6 million tris/sec, which means the poly budget per frame is about 200,000. Maybe 50-100k for the level and the rest for the models.

Their source artwork (made in Maya) has been said to have about 250K polys per character. However, they simplify the model a lot for rendering; the detail that gets thrown out is still preserved in the bumpmaps, which makes the quality hit less perceptible.

– Tom

P.S.: Where’s this quote from?

You can respond to the guy here: http://www.burial-grounds.com/ubb/Forum1/HTML/008413.html

He is making me doubt what I know. Am I wrong?

Let me know so I can reiterate what I know.

Oh yeah, I’m WEAT over there; look for the yellow text. The guy’s handle is [AF]haste.

On this web site http://doomworld.com/files/doom3faq.shtml#What%20sort%20of%20engine%20will%20Doom%203%20use?

you can find some details on how they possibly do it. You need to scroll down to where they talk about the details.

I am in no way saying this is what id really does.

[This message has been edited by Gorg (edited 05-05-2002).]

Good read Gorg! The guy’s explanation is convincing…but how can they execute the calculation for a model with a lot of tris and then apply the result to a low-tri model? I would think just that alone would be processor intensive…whether it’s on the CPU or GPU.

[This message has been edited by WhatEver (edited 05-05-2002).]

Originally posted by WhatEver:
[b]Good read Gorg! The guy’s explanation is convincing…but how can they execute the calculation for a model with a lot of tris and then apply the result to a low-tri model? I would think just that alone would be processor intensive…whether it’s on the CPU or GPU.[/b]

The explanation in Gorg’s link pretty much explains WHY they’re using the high-poly models. The HOW is almost certainly a technique called ‘Appearance Preserving Simplification’ - basically a preprocess which takes a high-poly model as input and outputs a low-poly model plus a normal map (or bump map, or whatever). The SIGGRAPH paper where it was introduced is available at http://www.cs.unc.edu/~geom/APS .
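
To give a feel for what the preprocess does, here’s a very rough sketch - not the paper’s exact algorithm, and all the helper names are invented. For each texel of the low-poly model’s texture parameterization you find the corresponding point on the high-poly surface and store its normal:

[code]
/* Offline bake: store high-poly surface normals into a normal map laid out
   over the low-poly model's texture coordinates. (Illustrative only.) */
for (int y = 0; y < mapHeight; ++y)
{
    for (int x = 0; x < mapWidth; ++x)
    {
        Vec3 pos, dir;
        /* position + interpolated normal on the low-poly surface for this texel */
        if (!lowPoly.SampleTexel(x, y, &pos, &dir))        /* invented helper */
            continue;                                      /* texel not covered */

        /* cast a short ray along the low-poly normal and take the nearest
           intersection with the high-poly mesh */
        Vec3 hitNormal;
        if (highPoly.RaycastNearest(pos, dir, &hitNormal)) /* invented helper */
            normalMap.Store(x, y, hitNormal);              /* packed into RGB */
    }
}
[/code]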

Rich

[This message has been edited by R-C (edited 05-05-2002).]

That’s amazing!

I have a lot to learn still…but I guess it never ends

It’s similar to something I did with a waterscape. I ran a high-res tileable cycling water simulation over 256 frames, and saved the results into 256 normal maps. Then, at runtime, I run a low-res water simulation up to the horizon, using the animated high-res bumpmaps to do environment bumpmapping. The end result is something that looks strikingly high res.
They’re using the same principle here, I suppose.
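
Roughly, the runtime side looks something like this (just a sketch, not my actual code - the names are made up):

[code]
/* Pick which of the 256 precomputed high-res normal maps to use this frame,
   bind it alongside the environment map, and draw the cheap low-res water. */
int frame = (int)(time * animFPS) % 256;
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, waterNormalMaps[frame]);

glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, envCubeMap);   /* environment to reflect */

DrawLowResWaterMesh();                                /* runtime low-res sim */
[/code]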

I’ve always wondered how to do that technique. Thanks for posting the link R-C!

-SirKnight

So are they going to start incorporating some of these cool new methods into today’s modelers? I just bought LW[7] and it sure doesn’t render complex scenes in real time very quickly…or is the implementation not compatible with a system that’s designed to generate models?

[This message has been edited by WhatEver (edited 05-05-2002).]

Originally posted by WhatEver:
[b]So are they going to start incorporating some of these cool new methods into today’s modelers? I just bought LW[7] and it sure doesn’t render complex scenes in real time very quickly…or is the implementation not compatible with a system that’s designed to generate models?

[This message has been edited by WhatEver (edited 05-05-2002).][/b]

By ‘render complex scenes real-time’, do you mean the interactive preview? If so, then this kind of stuff isn’t of much use - the conversion times are likely to be on the order of minutes/hours (not seconds), so they don’t let you alter a model and immediately see the results. There could be some use in applying this kind of technique to objects that you’re not altering (background scenery, for example), but in practice it’s probably easier just to hide, or only show the bounds of, the stuff you’re not interested in.

Basically, it isn’t that it’s not compatible, just too slow - use it as a preprocess, but not interactively.

SirKnight: You’re welcome. If you’re really keen, I’ve got the PhD thesis it came from …

Rich

[This message has been edited by R-C (edited 05-05-2002).]

Originally posted by R-C:
[b]SirKnight: You’re welcome. If you’re really keen, I’ve got the PhD thesis it came from …[/b]

Cool, I’d like to see that too. The more graphics papers I can get, the better.

-SirKnight

I read some of the Appearance Preserving Simplification white papers. It sounds like a method that’s used to preserve the model’s original shape without much loss of quality when reducing the triangle count.

The article pointed out by Gorg mentions a method that uses a high-res model, but only to generate the necessary shadow and color information to apply to a simplified low-tri model. That way, models of only around 1.5k tris will turn out looking better.

That’s how it sounded to me anyway.

All this stuff is very interesting.

So the technique is something you do beforehand…not in real time? So it’s something you would do in your modeler?

SirKnight: PhD here (try any of the links in the top right corner; at least one usually works …) http://citeseer.nj.nec.com/cohen99appearancepreserving.html Chapter 5 has all the bump map stuff; the rest is pretty much a prelude (LOD techniques, error metrics …). Enjoy

Whatever: It’s definitely something you do beforehand -
Modeller -> Generate LowPoly Model/BumpMaps -> Engine

You could conceivably add it as a back end to the modeller (run after completing the model, but before exporting it), which would at least give artists a way of specifying various parameters controlling the bump-map generation etc.
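
Conceptually the back-end step would just be something like this (everything here - types, file names, parameters - is invented, purely to show where it sits in the pipeline):

[code]
/* Run once after the artist finishes the model, before export to the engine. */
HighPolyMesh src = LoadModellerExport("monster_highpoly.lwo");   /* invented */

BakeParams params;
params.targetTriCount = 3000;       /* artist-controlled */
params.normalMapSize  = 512;        /* artist-controlled */

LowPolyMesh lod   = Simplify(src, params);
NormalMap   bumps = BakeNormalMap(src, lod, params);

ExportForEngine("monster_lowpoly.mesh", lod);     /* invented formats */
ExportImage("monster_bump.tga", bumps);
[/code]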

Rich

Carmack posted recently on the poly counts of Doom3 (I forget exactly, but I think it was about 100-200,000 polys a level, i.e. not per frame).
Unreal 2 will have higher poly counts but looks a lot worse (in fact I can hardly see the difference from UT).

It’s not a question of polygon counts, more a question of what you do with them!

I think Doom3 will be using some advanced LOD algorithm, at least for the characters. The bumpmapping technique used in D3 sounds interesting.
Btw, nice thread!

Thx dbugger. I had two things in mind when I created this thread: one was to get some answers on the subject, and the other was to provoke some thinking for me and possibly some others.

R-C, I understand the poly-reduction method now, thx for clearing it up.