
View Full Version : DOOM III screenshots



KRONOS
10-18-2002, 07:42 AM
This is a bit off topic, but, have you guys seen this?! http://mywebpages.comcast.net/chuk/Doom/ Check out the wallpaper. Bump mapping looks great, but that marine looks amazing!!!!
What do you think?!

John Jenkins
10-18-2002, 07:55 AM
Looks good. When does it come out?

ToolChest
10-18-2002, 08:10 AM
Looks good, but I think the stills are deceptive. Bump maps look great in stills, but when you can move around the objects you can very quickly start to see the actual geometry of the object. Just wondering how the bumped characters will look in game. One good thing about Doom3: the requirements will probably be high. I can't force people to upgrade to play my game, so when external forces cause a mass upgrade that's good… :)

Zeno
10-18-2002, 08:20 AM
Actually, in my opinion, the game looks much better in motion. You don't have time to pick out the small flaws, and the animation is superb.

I agree it's about time someone made a game worth upgrading for :)

-- Zeno

zed
10-18-2002, 11:00 AM
it does look much, much better.
IMO the first shots from that Tokyo video were good, then all the shots afterward (e.g. the monster in the bathroom) looked terrible (esp. the shadows).
but now it looks like they're back on track again (though it's probably just better textures :) )

>>I agree it's about time someone made a game worth upgrading for <<

there is :) i'm just sticking in the finishing touches, get ready to KILL EM ALL

V-man
10-18-2002, 11:08 AM
Motion and collision testing are quite important too. I wonder if there will be any of those objects that look like they are floating in mid-air just because 1/10th of the object is on top of another object.

I've seen some clips (on TV) and the FPS seemed surprisingly high. You will definitely need the latest system. I say system and not just graphics card!

V-man

josip
10-18-2002, 11:42 AM
Originally posted by V-man:
Motion and collision testing are quite important too.

I've seen some clips (on TV) and the FPS seemed surprisingly high. You will definitely need the latest system. I say system and not just graphics card!


According to the recent article in PC Gamer, a ragdoll-type physics engine (I'm guessing similar to what Hitman2 and UT2k3 are using) is already in place as well as per-poly hit detection.

Oh, they are currently running w/ an ATI 9700 at 800x600 resolution. I'm sure that'll improve before release, but it gives an indication of what type of horsepower you'll need to really get the most out of this game. Hope my system is up to the challenge...

knackered
10-18-2002, 01:48 PM
Those really are low-poly models. That spindly boss creature looks pretty bad. The geometry reduction using normal maps looks good on big fat characters, but really looks bad with thin spindly ones, in my opinion. I hope these tradeoffs are worth it just to get the shadows....

NitroGL
10-18-2002, 02:10 PM
Originally posted by knackered:
Those really are low-poly models. That spindly boss creature looks pretty bad. The geometry reduction using normal maps looks good on big fat characters, but really looks bad with thin spindly ones, in my opinion. I hope these tradeoffs are worth it just to get the shadows....

Better than it running like crap because the model has too many triangles...

knackered
10-18-2002, 02:50 PM
Mmm, but not better than using perspective shadow maps instead of shadow volumes. He could then jack up his poly count to return us to the number of polys our eyes have become accustomed to in video games.

dorbie
10-18-2002, 03:06 PM
Beauty is in the eye of the beholder. I think it looks good; a screenshot closeup on a double-page spread is not typical of the in-game experience.

Zeno
10-18-2002, 03:15 PM
Aww, c'mon. I think the monsters in Doom III have as many tris as, if not more than, those in any current FPS. It just jumps out at you more because of the realism of the lighting and textures.

The poly counts will increase in future games based on this engine. I think it's probably a trade off they had to make to be able to trace silhouettes, do mesh skinning, physics, and accurate collision detection on mediocre CPU's.

Another thing to consider is that the poly-count to shadow volume overdraw ratio is not linear. Doubling the poly count would probably only increase the depth complexity a little bit.

-- Zeno

jwatte
10-18-2002, 03:24 PM
> the animation is superb.

Are they finally using actual skinned characters? Better late than never, in that case :-)

zed
10-18-2002, 07:52 PM
actually, taking a closer look at the pictures, yes, they are very low polygon. i thought there were meant to be 5000 polys a character in doom3. these look like quake3 characters, i.e. around 1000.
though i suppose 1000 tris with decent lighting + textures looks better than 10000 with crappy textures + lighting. actually i don't suppose, i know; i've tested it. very similar to why sometimes a raytraced demo at 320x240 looks better than a normal 3d accelerated scene at 1024x768

knackered
10-19-2002, 01:40 AM
Judging by their silhouettes, they're using fewer polys than quake3 models.
Obviously this is because of the need for shadow volumes - but I've been playing around with perspective shadow maps, and they seem to be a better solution than volumes. Maybe in doom4.

Pentagram
10-19-2002, 02:36 AM
I don't think they will go over 1500 polys...
From my personal experience with shadow volumes, they are very heavy on the CPU and you can't keep the models on the card like ut2k3 does...
Also, if you do multiple passes, well, you get n*1500 polys then :)
Besides, on the polycount question, I saw some interview where they didn't actually answer it but only said "the bumpmapping will make up for this", so it won't be that high I guess ;)
On the better/worse looking in motion stuff: well, it will look better if they find some way to reduce the "shadow popping" on polygons, otherwise it can lead to annoying artefacts...

Charles

LaBasX2
10-19-2002, 07:04 AM
Using vertex-shader shadow volumes would greatly reduce the CPU burden. The card still has to deal with the silhouette polys, of course, but you also need to render your model for every light when using perspective shadow maps. Of course the silhouette has more polys than the backface-culled model, but I still think vertex-shader shadow volumes aren't that bad.

PH
10-19-2002, 07:17 AM
The models have between 2000 and 6000 polygons. Interview with JC (from post-E3):
http://www.beyond3d.com/interviews/carmackdoom3/index.php

While creating the shadow volume geometry is a lot of work for the CPU, it's quite easy to optimize (especially for a portal-based engine).

Quasar
10-19-2002, 07:35 AM
Hum,

(..)
While creating the shadow volume geometry is a lot of work for the CPU, it's quite easy to optimize (especially for a portal-based engine).
(..)

btw, how do you optimize it? I've been searching for stuff about this but haven't found anything useful.

tks
quasar

zeroprey
10-19-2002, 11:08 AM
What does the CPU intensiveness of shadow volumes have to do with portal-based visibility? Unless you mean that you can easily cull away unseen shadows, which is about the best optimization I think. What can be better than not doing it in the first place? I too am curious about optimizing shadow volume CPU work.

PH
10-19-2002, 11:54 AM
What does the CPU intensiveness of shadow volumes have to do with portal-based visibility? Unless you mean that you can easily cull away unseen shadows, which is about the best optimization I think.


No, not cull away "unseen shadows" but not create shadows for "unseen surfaces" (except if the shadows would cast into the view; this is determined before creating the shadows) in the first place. This can be done with other visibility determination techniques, but with portals you have point-to-area visibility, whereas with something like a PVS you have leaf-to-leaf. I use a BSP tree for preprocessing and a "special" one for the realtime part.

I have two levels of optimization, one is surface local ( SSE ) and one is global ( portals ). The global gives you the benefit of extracting the silhouette from a given *set* of surfaces, while the local gives you the benefit of optimized SSE ( so I get the best of both worlds ). The optimizations also depend on the current viewpoint and the light position relative to the viewer.

In addition, you'll be able to reduce fill consumption far more easily and to a higher degree with portals (this is quite logical but I'll leave the details up to you :D ).
Actually, I plan on tessellating big planar surfaces a bit more, which should reduce the fill consumption even more.

Edit: a few more additions


[This message has been edited by PH (edited 10-19-2002).]
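The point-to-area portal culling PH describes can be sketched as a toy recursive traversal. This is purely an illustrative reconstruction, not PH's code: the room graph, portal extents, and the 1D view interval are invented for the example, and a real engine clips portals against a 2D screen-space frustum rather than an interval.

```python
# Toy point-to-area portal culling. Real engines clip portals against a 2D
# view frustum; this sketch collapses that to a 1D screen-space interval.

def visible_rooms(portals, start, span):
    """portals: {room: [(next_room, (lo, hi)), ...]}, with portal extents on
    a shared screen axis. Returns the set of rooms reachable from `start`
    through a chain of portals whose intersected interval stays non-empty."""
    seen = {start}

    def walk(room, lo, hi):
        for nxt, (plo, phi) in portals.get(room, []):
            # Shrink the current view interval by this portal's extent.
            nlo, nhi = max(lo, plo), min(hi, phi)
            if nlo < nhi and nxt not in seen:
                seen.add(nxt)
                walk(nxt, nlo, nhi)

    walk(start, *span)
    return seen
```

Shadow casters sitting in rooms outside the returned set never need volumes generated at all (modulo shadows that cast *into* the view, which PH notes must be checked separately) - this is the "don't create shadows for unseen surfaces" optimization.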

jwatte
10-19-2002, 01:36 PM
Suppose you insert degenerate sliver quads between all triangles, and move verts based on whether their faces are facing the light or not, then there's no CPU work involved. The fill rate should be the same as for "optimized" meshes. Thus, what you're paying is vertex transform.

Luckily, this vertex format is only for shadow volumes, and thus needs no U/V information, tangent basis, or anything like that. You can go down to 12 bytes per vert. Which is lucky, because you'll see something like a 4x increase in vert count :-) Again, luckily, the vertex shader can be fairly short, as all you need to do is get into perspective view, dot the normal with the light (in perspective view again) and move the vert if it's negative (maybe even proportional to the amount of negativity). Call it twelve instructions (I haven't counted). 400 MHz / 12 means 33 million verts per second per vertex pipe -- you won't be transform bound. 2 GB/s / 12 bytes/vert means, what, 166 million verts per second through AGP 8x? You won't be transfer bound, either.

Honestly, if you're targeting a shader card (which I hear that DOOM is *NOT*) then I think all this beam tree stuff to efficiently generate shadow volumes on the CPU is just not necessary.

Toss in two-sided stencil, scissoring of your volume projected to the screen, and maybe the infinite far plane trick (don't know if it's actually necessary) and you should have a reasonable stencil shadow renderer in two weeks or less :-) (That includes automatically processing the geometry in the exporter) This engine should give you "exact" results just like the CPU based beam trees, as opposed to the "re-use the original mesh and pull verts based on vert normals" trick -- on the other hand, that latter trick will have less visible artifacts where vert normals don't concide with face normals and diffuse lighting gets masked off too sharply by the stencil.
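The per-vertex rule jwatte's shader would apply can be shown on the CPU. A minimal sketch, with the caveats that the function name and the finite extrusion distance are my own inventions; in the real scheme each shadow vertex carries the *face* normal of its triangle (vertices duplicated per face, degenerate quads stitched between faces) and this logic runs in a vertex program:

```python
# CPU-side sketch of vertex-program shadow-volume extrusion: verts on faces
# toward the light stay put, verts on faces away from the light are pushed
# out along the light-to-vertex direction, opening the degenerate quads
# into the volume's sides. Illustrative only.

def extrude_vertex(pos, face_normal, light_pos, extrude_dist=1e4):
    """Leave light-facing verts alone; push back-facing ones away from the light."""
    to_vertex = [p - l for p, l in zip(pos, light_pos)]
    # Front-facing: the face normal has a positive component toward the light.
    if sum(n * -t for n, t in zip(face_normal, to_vertex)) > 0.0:
        return list(pos)
    # Back-facing: extrude along the (normalized) light-to-vertex direction.
    length = sum(t * t for t in to_vertex) ** 0.5
    return [p + extrude_dist * t / length for p, t in zip(pos, to_vertex)]
```

A real shader would extrude to infinity in homogeneous coordinates (w = 0) rather than by a fixed distance; the finite `extrude_dist` here just keeps the sketch in plain 3D vectors.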

PH
10-19-2002, 02:12 PM
The portal method I briefly outlined, is not related to beam trees. The BSP tree I use is only for implementing efficient point location queries, so you know what area the viewer/lights are in ( and thus which portals to consider ). That part could probably be replaced with something else. The point-to-area visibility gives you a *minimal* set of surfaces from a given point. You should still be able to use the GPU approach on those ( I have never implemented GPU-based stencil shadows but I definitely think it's a good approach ).

jwatte
10-19-2002, 04:37 PM
I agree; I was commenting on the original approach taken in Doom III (according to those public interviews).

Anyway, any modern engine should (and probably does) draw depth buffer first (maybe even with color writes disabled). That way, it doesn't matter (as much) whether you get the "optimal" set of surfaces when you go back to re-render for lighting, although if you can get more culling cheap, that's always good.

PH
10-20-2002, 04:11 AM
jwatte,

The infinite far plane trick you mention, do you mean the Pinf matrix? If so, you can use NV_depth_clamp instead (GF3/4 only so far). I still haven't tried it even though it's a small code change/addition.
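For reference, the Pinf matrix being discussed is an ordinary perspective projection with the far plane pushed to infinity, so shadow-volume geometry extruded arbitrarily far is never clipped by zfar. A sketch (function names are mine; the matrix is the standard OpenGL form with the far-plane terms taken to the limit):

```python
import math

def perspective_inf(fovy_deg, aspect, near):
    """Perspective projection with zfar at infinity (the 'Pinf' matrix).
    Identical to gluPerspective's matrix in the limit far -> infinity."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0.0,  0.0,         0.0],
            [0.0,        f,    0.0,         0.0],
            [0.0,        0.0, -1.0, -2.0 * near],
            [0.0,        0.0, -1.0,         0.0]]

def project_ndc_z(m, eye_z):
    """NDC depth of an eye-space point at depth eye_z (camera looks down -z)."""
    z_clip = m[2][2] * eye_z + m[2][3]
    w_clip = m[3][2] * eye_z
    return z_clip / w_clip
```

The key property: the near plane still maps to NDC z = -1, while points at any finite distance map strictly below z = +1, so nothing is ever rejected by the far clip.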

PH
10-20-2002, 04:28 AM
I forgot to mention: I use a *single* connectivity mesh for each entity (the main world being one), which is then pruned (based on which surfaces are visible) before a silhouette is extracted. I do this for efficiency reasons. I tried considering each surface in isolation but the fill rate consumed was a bit too high. I might as well have a look at GPU shadows now and see if I can fit them in :)

tayo
10-20-2002, 07:03 AM
Originally posted by jwatte:

Honestly, if you're targeting a shader card (which I hear that DOOM is *NOT*) then I think all this beam tree stuff to efficiently generate shadow volumes on the CPU is just not necessary.


Just one problem if you do vertex-shader shadow volumes (like the nvidia demo): you always draw front and back caps that are not necessary, so it costs fillrate.

Furthermore, for complex scenes with many shadows, beam trees are very useful to save fillrate.
For example, you can have a shadow volume that is totally enclosed by a bigger one.
With a beam tree, you can detect this and avoid drawing the enclosed volume.

jwatte
10-20-2002, 09:01 AM
tayo,

With the scissor test, you save on fill rate if the caps aren't necessary.

By sorting near-to-far from the light, you can stencil test your stencil shadows, and thus avoid fill write cost for things completely in shadow.

Sure, you CAN spend all that CPU and optimize out the last drops of blood. But it seems to me that you'll get within spitting distance by just trusting the GPU, assuming you encourage it to do the right thing by stroking it just so.

PH
10-20-2002, 09:15 AM
By sorting near-to-far from the light, you can stencil test your stencil shadows, and thus avoid fill write cost for things completely in shadow.


That's a very interesting idea. Have you tried it out? That would really work as far as I can see... brilliant idea, actually.
So you can save the stencil write, but how much does that affect performance? Aren't the reads the most expensive part?

zeroprey
10-20-2002, 10:57 AM
PH,
Yeah, that's what I meant; I guess I just said it funny. I too am using portals to decrease the number of objects and shadows I have to deal with. Instead of having a specific world mesh, we just treat everything as regular meshes and build the world out of many vertex array meshes. These can have their own texture etc. This way we save on redundant meshes. For our determination of which sector the lights/camera are in, I just save where it was last and see if it passed through a portal, so I don't have a BSP of any kind. I'm currently implementing the visibility determination of objects, lights and shadows, so I can't really comment on how well it works. Because everything is a mesh, there's no need to have different ways to generate shadow volumes. Your global shadows: you're saying to find which surfaces of the whole world you need to deal with and just generate the shadows from those? Isn't that kind of excessive? If your world is highly detailed or highly tessellated, you'd basically be determining the visibility of every triangle, right? Or maybe you group them together to help with this?

jwatte,
I read somewhere, maybe his .plan or an interview, I don't remember where, that Carmack had tried vertex-shader shadows and found they gave him almost identical performance. He said that he's staying with CPU-made volumes because of an optimization they can do in certain cases. I could only guess this might be skipping caps for the shadows the camera is definitely not in. After I read that I decided not to bother testing them if they did the same as what I already coded, but now you've changed my mind. I'll give them a try.

The talk about beam trees in doom doesn't really have to do with silhouette extraction. My understanding is that he's using that only for static shadows + static lights. This really means he does the extraction preprocessed and optimizes the volume with beam trees. Of course, you still want to know if you have to draw it or not, regardless of whether it's static or dynamic.

I don't quite understand what you mean by stencil testing the stencil volumes. How do you do this? Could you possibly give a basic rundown of the algorithm?

PH
10-20-2002, 11:34 AM
zeroprey,



Instead of having a specific world mesh we just treat everything as regular meshes


That's not such a good idea. Who's "we", by the way (just curious :) )? Here's why it's not a good idea:

I treat all surfaces identically (they're really just meshes, each capable of having its own shader), but if I extract the silhouette on a per-surface basis, more fill is used. Imagine a long corridor with several wall sections. These are logically connected, and if you extract the silhouette from them as a group, the generated volumes will be more efficient. If you extract the silhouette for each wall section by only looking at the surface in isolation, you get extruded polygons *between* the walls (very inefficient). I also noticed a few artifacts in certain cases when doing that: pixels that were "double tested".
I don't have an explicit world mesh, but an explicit per-entity connectivity mesh just for shadows. Rendering the surfaces is done with entirely different data (with matching vertex positions, of course).

Regarding beam trees: I still think it's a very good optimization for static surface/light pairs. It's a lot of work to create beam trees robustly (mine isn't 100% robust at the moment, but it generates 100% efficient volumes).

The trick that jwatte mentions would require treating surfaces in isolation ( as I see it ), so I'm not certain it will be a win. It might not even work ( you can't efficiently sort a deformable mesh, so I can't see how the sorting would work ).

Anyway, that is my current approach, which has been the most effective one I have tried so far. I have some other things to implement, as my approach is not optimal (just an additional level of culling).
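The silhouette extraction being argued over here, whether done per surface or over a connected group as PH prefers, boils down to finding the edges shared by one light-facing and one back-facing triangle. A minimal sketch (data layout invented for the example: triangles as index triples over a shared vertex list, counter-clockwise winding):

```python
# Minimal silhouette-edge extraction for stencil shadow volumes: classify
# each triangle as facing the light or not, then keep edges whose two
# adjacent triangles disagree. Grouping connected surfaces into one mesh,
# as discussed above, is what removes the redundant extruded quads that
# per-surface extraction produces between adjoining walls.

def face_normal(verts, tri):
    a, b, c = (verts[i] for i in tri)
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def silhouette_edges(verts, tris, light_pos):
    """Edges shared by exactly one light-facing and one back-facing triangle."""
    edge_faces = {}
    facing = []
    for t, tri in enumerate(tris):
        n = face_normal(verts, tri)
        p = verts[tri[0]]
        to_light = [light_pos[i] - p[i] for i in range(3)]
        facing.append(sum(n[i] * to_light[i] for i in range(3)) > 0.0)
        for i in range(3):
            e = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_faces.setdefault(e, []).append(t)
    return {e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]}
```

Only the edges this returns need extruding; on a closed mesh they form the closed loops that the volume's side quads hang off.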

zeroprey
10-20-2002, 01:31 PM
PH,

hehe, yeah, I kind of interchange I and we. It's me and, I think his handle here is, Mazer. I am working on the shadows and visibility system, so sometimes I use I.

Anyways, you have a very good point. I thought before to disconnect the rendering data from the connectivity data, but I couldn't figure out what I'd do for the visibility. How do you treat the visibility of a continuous surface? I currently treat each mesh as one object with a bounding volume and test if it's in whatever frustum I'm testing. What exactly do you use if everything is connected for world data? You also mention a per-entity shadow connectivity. Since the world geometry would all be connected and closed, what constitutes an entity? I suppose you could use the same per-object visibility and just test those tris for facing, but use the big connected structure to make the volume. I'd really have to think about it more. BTW, I have seen some of those artifacts, but I thought they were more because our data was bad. I don't have the vis system completely coded yet; a big part's still in my head :) It's hard to judge right now what's an algorithm problem and what's just a bug.

And just out of curiosity do you use the vertex shader shadows or cpu?


[This message has been edited by zeroprey (edited 10-20-2002).]

jwatte
10-20-2002, 09:09 PM
Actually, I think that depth-from-the-light thing won't work right, or at least not any more efficiently than just rendering the volume for each mesh. The idea was to work with different bits of the stencil for different depths, but you run out too quickly. Or you could "bake" the volume into the highest bit once you're done with it for one mesh, but that requires another full-screen fill. So forget I said anything about that :-)

Anyway, vertex shader volume extrusion (without color or depth writes) after first painting depth (without color writes) with appropriate scissoring really ought to get you far enough, if you're into the stencil shadow thing. I'm still partial to shadow maps, though.

gaby
10-20-2002, 11:53 PM
My question is: what would this engine be like if it used perspective shadow mapping, which is, in most cases, a good method in indoor scenes where most of the lights are spots?
The advantage of perspective shadow mapping is to avoid the shadow volume computation and rendering, to adapt the depth map resolution to the power of the GPU while preserving very good shadow edge quality, and to use high-definition models while the technique of bump-mapped details is available too...
What do you think of that ?

Regards,

Gaby

MichaelK
10-21-2002, 01:36 AM
http://mywebpages.comcast.net/chuk/Doom/

This link doesn't work!

MichaelK
10-21-2002, 02:41 AM
Arrghh! I want to see these screenshots too!

dorbie
10-21-2002, 03:59 AM
Maybe he got whacked on copyright. They were scanned from a magazine.

Some particularly good high res versions of the available shots can be found here:
http://doomworld.com/doom3/screenshots/

BTW, I'd guess the soldier and scientist are high-resolution meshes, because this is the intro sequence with a controlled load and they're showing off with slow, controlled closeups. There appear to be fewer polys in the in-game monsters, but it's difficult to judge accurately.

[This message has been edited by dorbie (edited 10-21-2002).]

Relic
10-21-2002, 04:39 AM
Let's see how long it takes until these are whacked then: http://www.thedigitalresource.com/images/doom3/

Nakoruru
10-21-2002, 04:41 AM
gaby,

Perspective shadow maps do not work with a 2D shadow map if the light is inside the view frustum. Unless you can use a cube map, they are best seen as a solution for shadows cast by directional lights.

For this reason I want floating point cube maps!