View Full Version : UnrealEngine 3



Jan
04-15-2004, 07:13 AM
Take a look at this video:

http://files.worthplaying.com/files/modules.php?name=Downloads&d_op=viewdownload&cid=793

When he talks about the "displacement mapping" he actually says that it is a pixel-shader effect of "parallax mapping". Isn't displacement mapping purely done by a vertex shader?
So might it be that the UnrealEngine 3 actually uses the fake displacement-mapping effect which was discussed around a month ago on this board? And if so, why would he need Shader Model 3.0 for it?

I think the video is very impressive, although the high-dynamic-range stuff wasn't presented in all its glory.

Jan.

Jens Scheddin
04-15-2004, 11:52 AM
LOL :)
so they are using the parallax mapping method described by mogumbo? If this is really the truth, then they should at least mention where the idea for this tech came from. Shame on Epic!

dorbie
04-15-2004, 01:53 PM
Displacement mapping as a concept is not exclusive to vertex shaders. I think in this case it's a grey area; it's semantics. It is what it is; it probably qualifies as some sort of displacement mapping, although it has problems, especially at silhouettes. It wouldn't have been too difficult to put in.
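For reference, mogumbo's parallax-mapping trick the thread keeps coming back to amounts to little more than a per-pixel UV shift. A minimal sketch in plain Python standing in for a fragment shader (the scale/bias constants are illustrative assumptions, not values from the video):

```python
# Hedged sketch of per-pixel parallax mapping with offset limiting.
# A real implementation lives in a fragment shader; this just shows the math.

def parallax_offset_uv(u, v, height, eye_tangent, scale=0.04, bias=-0.02):
    """Shift texture coordinates toward the eye by an amount proportional
    to the sampled height, faking surface relief.

    eye_tangent: normalized view vector in tangent space (ex, ey, ez).
    """
    ex, ey, ez = eye_tangent
    h = height * scale + bias  # remap [0,1] height into a small +/- range
    # "Offset limiting": deliberately skip the divide by ez so steep viewing
    # angles don't produce huge, swimming offsets at silhouettes.
    return u + h * ex, v + h * ey

# The shader would then sample the color/normal maps at the shifted
# coordinates instead of the originals.
```

The silhouette problem dorbie mentions falls out of this directly: the offset only warps texture coordinates, so the actual mesh edge never moves.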

FYI many people are throwing this in their games because support is relatively easy and example code is out in the wild. I'd be surprised if Doom 3 didn't add it.

I've even seen popular middleware that now ships a demo implementing parallax mapping, thrown in after the authors read the OpenGL.org thread.

P.S., there's absolutely no shame on Epic. They use many techniques published by others and the post on opengl.org was not the first publication of the technique, as was discovered in the thread.

Their demo was glorious, the detail in their characters and gothic level design was inspirational.

Jens Scheddin
04-16-2004, 07:02 AM
Originally posted by dorbie:
I'd be surprised if Doom 3 didn't add it.
Ok, but I don't think we will see this in Doom 3, because it might look a bit strange with stencil shadows.

Jan
04-16-2004, 09:27 AM
Actually, what kind of technique do they use for shadows here? It looks great.
It could be depth shadow maps, but those have a lot of quality problems. So either they use insanely high resolutions, or they use something else.
But plain stencil shadows seem to be out of the question, because the shadows in the video are soft. Anything else they could be using?

BTW: Does anyone know if we will get depth cubemaps sometime? I really don't think depth maps are that useful as long as we have to use 2D textures.

Jan.

davepermen
04-16-2004, 09:47 AM
Originally posted by Jan:
BTW: Does anyone know if we will get depth cubemaps sometime? I really don't think depth maps are that useful as long as we have to use 2D textures.
With full floating-point texture support it isn't that needed a feature. The quick depth comparison that depth textures support can't be used directly anyway, since you can't feed an interpolated texcoord as the depth value. Instead, calculate radial depth in the fragment shader, store it in an fp32 cubemap, and reuse that.
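davepermen's suggestion can be sketched like this; `sample_radial_depth` stands in for the fp32 cubemap lookup, and the bias constant is an illustrative assumption:

```python
import math

# Hedged sketch of a radial-depth shadow test for point lights: instead of
# hardware depth-compare textures, store distance-from-light in a float
# cubemap (here faked as a callable) and compare manually per fragment.

def shadow_test(frag_pos, light_pos, sample_radial_depth, bias=0.05):
    """Return 1.0 if the fragment is lit, 0.0 if it is shadowed."""
    dx = frag_pos[0] - light_pos[0]
    dy = frag_pos[1] - light_pos[1]
    dz = frag_pos[2] - light_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)  # fragment's radial depth
    stored = sample_radial_depth((dx, dy, dz))     # cubemap lookup by direction
    return 1.0 if dist <= stored + bias else 0.0
```

The bias plays the same role as polygon offset in the 2D depth-map case: it keeps surfaces from shadowing themselves.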

dorbie
04-16-2004, 11:46 AM
It looks just fine with stencil shadows, even without the zbuffer modification in the fragment shader that you might be able to combine with it.

As for the Unreal 3 engine, it uses both stencil shadows and image-based shadows. The video had audio, and the guy mentioned this. They seem to be able to mix & match them easily.

SirKnight
04-16-2004, 04:31 PM
All I've got to say is freakin wow! That has to be one of the most awesome real-time 3D gfx renderings I have seen in a long time. Just when I thought Doom 3 and Half-Life 2 looked awesome, UnrealEngine 3 just owns. :D You know, when the original Unreal was released my first reaction to the graphics was "omg these graphics are UNREAL!" Kind of ironic that the game happened to be the word I used to describe how good it looked. ;) Since then I really couldn't say that again in any Unreal-based game. Sure, each one looked better than the last, but only by a small bit. UnrealEngine 3 gave me the exact same impression as when I saw Unreal 1 for the first time: "omg these graphics are UNREAL!" :eek:

I was kind of surprised to hear the guy speaking (was that Sweeney?) mention they are using shadow volumes there. I figured since they were all nice and soft they were using some kind of hardware shadow mapping or something. Cool! Also seeing they are using the parallax mapping is really cool. It's an easy effect to do, yet so darn cool looking. Good stuff.

-SirKnight

eeeko
04-17-2004, 01:46 AM
There was something wrong with the download there, so the file wasn't available. I found another file, filmed with a handheld video camera by "DemoCoder", is it the same? I could hardly see anything due to the bad colors and compression, but the things I did see were really stunning! The hobby coders are getting more and more separated from the big teams, because all these effects must take months to create, but it looks bloody brilliant!
Would really want to see all this in real time, that would be almost UNREAL! :-)

dorbie
04-18-2004, 11:47 AM
I think the biggest gap is the content and tools to make it. The coding is attainable in a lot of ways but you need a real mix of talent, great tools and a lot of time to create anything approaching this quality.

plasmonster
04-18-2004, 12:09 PM
Agreed. I remember when the resource package for a game would fit into a thimble (Quake 1). Now, a typical artwork package is truly astronomical in size, 6 gigs for UT2004, or so. Oh well... I guess my barn yard art will have to do for now.

edit:
I've always had a profound respect for the artists. I don't care how much magic you weave into your engine, it's all going to look like monkey spunk without them.

eeeko
04-19-2004, 11:51 AM
When I said that the hobby coders are getting more and more separated from the big teams due to development time, I also meant developing the tools. I presume that most high-end engines come with a complete set of new tools to make use of all their new features, and tool development is therefore part of the engine development.
Portal: I also share your respect for the artists. Just imagine if all games had "coder art" in them, that's a scary thought! :-D

dorbie
04-19-2004, 12:45 PM
Well, I don't like the term coder art, it's a dismissive conceit. I'm a coder, but I've generated some pretty pictures in my day with my own art that rivaled anything artists were doing in the same time frame. The best teams and individuals have a mix of talent that works together, IMHO. This cuts both ways: the best artists also have a technical grasp of their subject. I've seen both programmers and artists struggle because of deficiencies in other disciplines that they really need to get to grips with to reach their potential.

Some graphics coding requires an artistic eye, for example integrating environmental effects with other effects, and 3D modelling for real-time systems is a highly technical discipline.

Tom Nuydens
04-19-2004, 02:22 PM
Originally posted by Jan:
But stencil shadows seem to be out of questions, because those are soft. Any other thing they could be using?
I was thinking penumbra wedge shadows, but I must admit I haven't studied that technique closely enough to know whether or not it's feasible for scenes of this complexity.

-- Tom

knackered
04-20-2004, 02:27 AM
How in God's name can you tell whether the shadows are soft or not through those compression artifacts?

Splinter Cell: Pandora Tomorrow, Far Cry, Prince of Persia: The Sands of Time.
These games appear to do much more than this Unreal 3 demo, apart from the parallax mapping maybe.

Ysaneya
04-20-2004, 02:51 AM
I agree, with the incredibly bad compression artifacts it's hard to say anything about the shadows. If they used wedge shadows, the sharpness of the shadow would be dependent on the occluder-to-light distance, and although it's hard to say from the videos, I'm under the impression that's not it. It looks more like antialiased cube shadow maps, but I might be wrong.

Y.

eeeko
04-20-2004, 02:53 AM
dorbie: I'm sorry if I offended you in any way, I was trying to use humour and irony. Perhaps I failed. :-D

Tom Nuydens
04-20-2004, 05:17 AM
My theory isn't based on the video at all, it's based on the audio :)

Sweeney says the shadows are soft, and while I agree you can't really tell from the video, I'll take his word for it. He also says the shadowing is stencil-based. The combination of these two made me think of penumbra wedge shadows. I don't have any better arguments to support this theory, it was just a thought.

-- Tom

Jan
04-20-2004, 05:33 AM
Originally posted by Tom Nuydens:
My theory isn't based on the video at all, it's based on the audio :)

Sweeney says the shadows are soft, and while I agree you can't really tell from the video, I'll take his word for it. He also says the shadowing is stencil-based. The combination of these two made me think of penumbra wedge shadows. I don't have any better arguments to support this theory, it was just a thought.

-- Tom
Yeah, me too. Well, I don't think the compression is THAT bad (you can see it when the monster walks in front of the projected light), but he also says they are soft :-)
And, yes, he actually says that he uses stencil shadows, but it took me three or four listens to understand the "shtengzill" or however he pronounces that!

Jan.

dorbie
04-20-2004, 07:59 AM
eeeko, I'm not offended, when I am you'll know :)

Nuydens, I thought Sweeney said they had both soft image-based shadows and sharp stencil-based shadows. He did say the character shadows were soft, but I don't think he claimed that specific shadow was stencil based. I think he was very clear that they have both soft and sharp shadows in the scene, image and stencil based, and of course there's no reason why these can't mix & match. The stencil test can play with texture modulation of the results. It's a pragmatic approach that makes total sense. You just get squeezed on texture units, but you have plenty of those now, and on newer hardware you can replace texture-unit-based operations with shader instructions, with mixed results.

Korval
04-20-2004, 09:17 AM
These games appear to do much more than this unreal3 demo, appart from the parallax mapping maybe.
Parallax bumpmapping, though, makes a big difference.

knackered
04-20-2004, 12:08 PM
Not as much as projective textures (even through smoke), image-space glows, depth of field, or the incredibly efficient terrain/vegetation LOD used to achieve incredible panoramic views in realtime, etc. Only a programmer will really notice the parallax bump mapping, I suspect - it's minor detail compared to the other dramatic techniques I've mentioned.
I get the impression that most people on these types of forums don't actually see much contemporary graphics technology unless someone posts a grainy mpg.
Buy or download some modern games. Ask your mum to increase your pocket money, SirKnight.

dorbie
04-20-2004, 01:51 PM
No other game published yet even comes close to the quality and visual complexity of that Unreal 3 demo. I agree parallax mapping usually has minor visual impact, but it is easy to implement with dependent reads for all the initial coordinate generation; it's no biggie, but it's nice to have. Unreal 3 and Doom 3 seem to be the only engines that truly do accumulated lighting with shaders and shadows correctly. You've got to take into account the database as well as the rendering, but even looking at the rendering alone, the quality of rendering & lighting in Unreal 3 wins IMHO (not comparing with Doom 3 here). I just played Far Cry and it was awesome, great foliage & LOD management over long distances and a few nice point features like the water fresnel reflection/refraction transition, but nothing as profound as unified lighting.

No other implementors really seem to have bought so completely into the unified lighting theme, they just don't get it, yet. It's not enough to say "we have bump mapping" or "we have parallax mapping" or "we have soft shadows", it's not about point features, it's about rendering them seamlessly and IMHO this has huge implications for the ease with which you can generate the content. Unreal 3 and Doom 3 clearly do this.

One interesting difference I noticed was the shaders in Unreal 3 vs Doom 3. The iridescence shaders etc. in Unreal 3 indicated a departure from the monolithic lighting equation approach of Doom 3. That's an interesting difference, and it seemed to work well in the context of the lit, shadowed scene.

Jan
04-20-2004, 03:40 PM
Well, Unreal 3 uses deferred shading. That makes it a lot easier to use a real shader system, meaning that each object can have its own shader (for example for procedural textures), because rendering the objects and lighting the world are completely decoupled. This can add a lot of detail and atmosphere to a game, because everything can actually look exactly how the designer wants it to look.

Doom 3, on the other hand, certainly has a lot of problems really supporting such a system, simply because the complete lighting equation always has to be in one shader. The restriction on how much input data you can use (and of course output again) limits it quite heavily.
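The decoupling Jan describes can be sketched as two passes over a toy software G-buffer. All names here are hypothetical, and a real renderer would do this on the GPU with multiple render targets:

```python
# Hedged sketch of deferred shading: a geometry pass writes per-pixel
# attributes into a G-buffer, then a lighting pass shades each visible
# pixel exactly once, regardless of which object produced it.

def geometry_pass(fragments, width, height):
    """fragments: (x, y, depth, albedo, normal) tuples from all objects."""
    gbuffer = [[None] * width for _ in range(height)]
    for (x, y, depth, albedo, normal) in fragments:
        cur = gbuffer[y][x]
        if cur is None or depth < cur["depth"]:  # depth test keeps nearest
            gbuffer[y][x] = {"depth": depth, "albedo": albedo, "normal": normal}
    return gbuffer

def lighting_pass(gbuffer, lights):
    """lights: callables mapping a G-buffer pixel to a scalar intensity.
    Lighting never touches object data, only the stored attributes."""
    image = []
    for row in gbuffer:
        out = []
        for px in row:
            if px is None:
                out.append((0.0, 0.0, 0.0))  # background, nothing to shade
            else:
                intensity = sum(light(px) for light in lights)
                out.append(tuple(c * intensity for c in px["albedo"]))
        image.append(out)
    return image
```

Note how per-object shaders fit naturally in the geometry pass (each object writes its own albedo/normal), while the lighting pass runs one fixed computation over the screen, which is also the limitation zeroprey raises later in the thread.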

However, comparing Doom 3 and Unreal 3 would not be fair. Doom 3 is state of the art (at least in games) and the best there is right now. Unreal 3 will be state of the art in a few years, but up to now it's more like a future tech demo.

I really do wonder what John Carmack will have on screen three months after Doom 3 has shipped...

Jan.

Korval
04-20-2004, 04:39 PM
Only a programmer will really notice the parallax bump mapping, I suspect - it's minor detail compared to the other dramatic techniques I've mentioned.
You're kidding, right? For any significant bumps, the difference between parallax and no-parallax is like the difference between no bump mapping and bump mapping. The detail actually starts to come off the surface with parallax.

And you definitely don't have to be a programmer to see this.


Buy or download some modern games. Ask your mum to increase your pocket money, SirKnight.
Such as? Most modern games don't look too great.

Also, most modern PC games tend to be pretty crappy to me. I refuse to buy crappy games, so I'm certainly not going to reward some dev studio just because they made interesting-looking crap.


One interesting difference I thought was the shaders in Unreal 3 vs Doom 3, it seems that the iridescence shaders etc Unreal 3 indicated a departure from the monolithic lighting equation approach of Doom 3, that's an interesting difference and it seemed to work well in the context of the lit shadowed scene.
The idea that a single, monolithic shader should be used for all materials is definitely a poor choice for id. While certainly the real world works that way, shader tech and performance aren't ready to build the single "world-shader". Indeed, even non-interactive rendering doesn't use a monolithic "world-shader". Such systems typically build shaders as needed for the particular circumstance.


Doom 3 is state of the art (at least in games)
State of the art? Maybe for last year, but HL2 takes it to a new level.

And, while I recognize that this is a graphics forum, it is important to point out that HL2's engine does more than make pretty pictures. The physics/breaking system alone is capable of creating many gameplay opportunities that other games just won't have.

dorbie
04-20-2004, 05:17 PM
No accounting for taste :-). I don't think Half-Life 2's rendering is up to the standard of Doom 3 or Unreal 3, but I'll need to wait for the game to come out to be sure. In some respects it's a matter of taste; it looks like a different approach from the others. As for gameplay, yes, it has some interesting features, but that doesn't mean the other games won't; time will tell.

As for deferred shading, it's not clear that Unreal 3 uses deferred shading, and in itself it doesn't imbue the scene with any additional quality; it's just a rendering technique. Moreover, I can see how deferred shading could cause problems with multiple light sources with independent shadowing for each light. Deferred shading would cause way more problems and not necessarily solve any. It's one thing to claim that deferred shading helps quality, but quite another to say how or why. I'm just not buying that. I can see how it is a requirement for something like refraction and other related effects, but that's a given and commonly used today, and not generally what people mean by classic deferred shading. It's really only an 'optimization' that stores lighting and shading parameters and then does screen-space lighting once per pixel, once visibility has been determined.

knackered
04-20-2004, 10:30 PM
Don't believe the hype, Korval. If you know anything about programming you'll realise that most things said about HL2 are hype, in other words exaggerated.
"physics/breaking system"!
I've played with the beta - it's a bollocks marketing blag. Pre-defined breaking points, that's basically a script for how an object breaks apart, if the designer could be bothered writing one...
Every single game I have played in the last 6 months has a realistic rigid & soft body physics 'engine'. It's no big deal, it adds something to the gameplay (I refer you to Far Cry, in which it is used to the best effect).
If you can't justify buying the games, then just download them and call it research. You're missing out on some very cool stuff being done using some really professional assets - far more impressive than these chipset vendor demos.
Oh, and get an xbox.

Jan
04-21-2004, 01:38 AM
Sorry, I thought he was saying that he uses deferred shading, but he doesn't. However, everything he says makes it quite certain that they do. For example, he says that EVERY pixel is lit per pixel. If he wasn't using deferred shading, he would certainly use a light LOD, meaning simpler lighting for far-away objects.

Anyway, HL2 looks nice, but in terms of RENDERING it is very hard to compare it to Doom 3. Doom 3 definitely uses the more advanced technology. However, the Doom engine and the Half-Life engine target completely different types of games. Half-Life is supposed to show much bigger levels, which a Doom-like engine would not be suitable for.
Therefore HL2 still uses "old-fashioned" methods, such as lightmapping, but I don't think this is bad, it's just completely different.

And, yes, Valve makes a LOT of hype. Just yesterday I read their marketing paper about their engine (for people interested in licensing it). I was surprised: why do they talk for 2 lines about some interesting technology, but use 6 lines to describe their simple model viewer? For a mod maker this might be interesting, but for someone who wants to license it? Actually the information given was very imprecise, and there was not very much of it at all.

And additionally they claimed to have "realtime dynamic radiosity lighting"!!! WTF?
Certainly they meant "realtime framerates, static lights through radiosity, and dynamic lighting like everybody else".
Because in the next sentence they described how they can split level calculations over a network onto several computers to "dramatically" speed up level precalculation.

I really look forward to HL2, but I would like Valve to simply stop all that blabbering and actually get the thing done by 2005.

Jan.

Licu
04-21-2004, 02:24 AM
Regarding Unreal 3 shadows, they use stencil shadows combined with a screen-space filter to make them soft. The depth buffer or vertex programs can be used to control the blurriness over distance. The technique is pretty well known now (many of us came up with the same idea independently) and it is described even through these threads. Regarding image-based shadows, probably you are referring to cubemap masks that can be applied around pixel light sources. Nothing new again. Same with parallax bump mapping, a known technique.
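A minimal sketch of the kind of screen-space softening Licu describes, assuming a hard shadow mask and a per-pixel depth buffer are already available (the 1D row and the scaling constants are simplifications for illustration):

```python
# Hedged sketch: render hard stencil shadows into a mask (0 = shadow,
# 1 = lit), then blur the mask with a kernel whose width grows with the
# per-pixel depth, so distant shadows look softer.

def soften_shadow_row(mask, depth, max_radius=3, scale=2.0):
    """Blur one row of a hard shadow mask with a depth-dependent box filter."""
    out = []
    n = len(mask)
    for i in range(n):
        radius = min(max_radius, int(depth[i] * scale))  # farther => softer
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = mask[lo:hi]
        out.append(sum(window) / len(window))  # box average over the window
    return out
```

The softened mask would then modulate the light's contribution in a final pass, which is also why this reads as a post-process rather than a physically derived penumbra.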

dorbie
04-21-2004, 09:30 AM
Yep, it's pretty obvious they're using cubemap projections for sources, and there's nothing stopping you combining that with stencil shadows or additional textures. As for depth-map-convolved stencil results, that's not a given. From the eye it's more like a depth-of-field effect. The depth convolution required to do soft stencil shadows is depth *from the light*, not from the eye, and would add a lot of complexity. The stencil test is a final pixel operation and therefore not accessible in the fragment shader, so multiple taps, for example, are out of the question. You therefore cannot convolve stencil results until you've tested them to a buffer, at least until we get programmable final pixel operations (i.e. programmable blendfunc, stencil, and zbuffer hardware), and it has to be convolved before the modulation of the shader. This would require several separate output buffers for intermediate results (eye depth and stencil results, and possibly even convolved stencil results and/or shader results). Some related ideas have been posted, mainly by tooltech, but they're not exactly the same as the ones you infer from your description; it's mainly a convolution of multiple stencil tests, although the stencil penumbra volumes are cleverly generated.

OTOH with NVIDIA's flow control instruction support you can do a lot that you couldn't before and could even perform a *stencil like* shadow test in a fragment shader.

If they are doing light depth convolved shadow tests that would be an impressive effect.

P.S. it's not actually depth from the light (which is what projective image based approaches get you), it also needs to account for the delta depth between the occluder and the shaded surface.

jwatte
04-21-2004, 08:03 PM
If he wasn't using deferred shading, he would certainly use a light-lod, meaning simpler lighting for far-away objects.
Why? With early Z tests, you shade each pixel on the screen exactly once (times the number of passes needed to fit all your lights into the shaders). Thus, you don't necessarily need any kind of LOD. If you're a little careful about your lights (i.e., use a non-physical, limited range, for example) then you shouldn't have any problems on modern hardware.



account for the delta depth between the occluder and the shaded surface
I believe this is much more important than depth from the light. If you're using shadow maps, it's very easy to get the depth-from-occluder, and you can use that as an offset into a screen-space texture to determine a filtering LOD, to figure out how bright to make the light contribution of the pixel, for example. Or just use a kernel on the depth map, making the kernel wider (or texture blurrier) the further away you are.

You can conceivably get depth-from-occluder using stencil as well, by passing in coordinates as an interpolant when rendering stencil volumes, and writing those coordinates to the frame buffer, although overlapping volumes would need special care and attention (such as a "min" or "conditional" blend mode, perhaps?).
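jwatte's idea of widening the filter with depth-from-occluder can be sketched as a variable-radius percentage-closer filter (PCF); the penumbra scale and bias constants here are illustrative assumptions:

```python
# Hedged sketch: with a shadow map, the gap between the receiver's
# light-space depth and the stored occluder depth is cheap to read and
# can drive the filter radius -- the farther a surface is behind its
# occluder, the wider (softer) its penumbra.

def pcf_soft_shadow(shadow_map, x, y, receiver_depth,
                    penumbra_scale=4.0, max_radius=3, bias=0.01):
    """Return the lit fraction in [0, 1] for one shadow-map texel."""
    occluder = shadow_map[y][x]
    delta = max(0.0, receiver_depth - occluder)      # depth behind occluder
    radius = min(max_radius, int(delta * penumbra_scale))
    h, w = len(shadow_map), len(shadow_map[0])
    lit, taps = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx = min(w - 1, max(0, x + dx))          # clamp to map edges
            sy = min(h - 1, max(0, y + dy))
            lit += 1 if receiver_depth <= shadow_map[sy][sx] + bias else 0
            taps += 1
    return lit / taps
```

A surface right at its occluder's depth gets a single sharp tap; a surface well behind it averages a wide neighborhood, approximating the growing penumbra.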

pkaler
04-22-2004, 08:43 AM
Originally posted by licu:
Regarding Unreal 3 shadows, they use stencil shadows combined with a screen space filter to make them soft. Depth buffer or vertex programs can be used to control the blurriness over the distance. The technique is pretty well known now (many of us come with the same idea independently) and it is described even thru these threads.
Do you have any references? Preferably a paper.

Arath
04-22-2004, 10:16 PM
I think licu is talking about the "smoothies" technique.

You can find a paper here :

http://graphics.csail.mit.edu/~ericchan/papers/smoothie/

AFAIK, on a 9800 this technique is "eating" too much power, but surely it will be interesting for the next generation of hardware.

Cheers
Arath

Eric Lengyel
04-23-2004, 12:10 PM
Hi dorbie --


No other game published yet even comes close to the quality and visual complexity of that Unreal 3 demo...

Unreal 3 and Doom 3 seem to be the only engines that truly do accumulated lighting with shaders and shadows correctly. You've got to take into account the database as well as the rendering but even looking at the rendering alone the quality of rendering & lighting in Unreal 3 wins IMHO (not comparing with Doom 3 here)...

No other implementors really seem to have bought so completely into the unified lighting theme, they just don't get it, yet. It's not enough to say "we have bump mapping" or "we have parallax mapping" or "we have soft shadows", it's not about point features, it's about rendering them seamlessly and IMHO this has huge implications for the ease with which you can generate the content. Unreal 3 and Doom 3 clearly do this.
The C4 Engine has had a unified lighting model that does everything "the right way" for about two years now. Check out the demo (lacking decent art) at:

http://www.terathon.com/c4engine/downloads/

-- Eric Lengyel

dorbie
04-23-2004, 01:05 PM
Great stuff. OTOH, no offence Eric, but that ain't a title. It's barely a passable demo. I'm sure the tech is great, but it has to have content, and even the content it has doesn't show the tech features I'd expect to convince me visually of the claims: multiple moving light sources with overlapping shadows, emissive material properties, specular with gloss maps, and of course the obligatory bumpmapped reconstruction of geometry on simplified meshes, with plenty of detail in the scene and on skinned characters, and maybe with & without ambient in some shots. Not all essential, but mostly what you'd need at the very least to show it off.

There have been plenty of demos that do the right thing in a simplified setting, for example Humus' stuff. Most of us have written them. A shipping title is a different thing, it takes a while to build the content and game.

Carmack gave the first Doom3 demo years ago, they've taken this long to optimize and produce a game.

I'm not saying this is going to be surprising or novel when it arrives, but I'm just not buying the line that it's yesterday's technology when nobody has delivered it effectively in a title yet and I haven't seen anything better. I'll admit though I may be missing one, and of course Futuremark did implement something pretty, albeit with limited scope & few optimizations.

Part of this is having the courage of your convictions too. Often the tech doesn't drive the design at companies, and the art team tries to solve yesterday's problems, or has control to the point where they can veto a rendering technology. Some companies are incapable of driving technology for this and related reasons.

jwatte
04-23-2004, 01:27 PM
it takes a while to build the content and game
I think I nominate this as Understatement Of The Year.

You can *BUY* an engine...

dorbie
04-23-2004, 01:40 PM
Some more pearls of wisdom:

Some engines are better than others.

Good engines have great content paths.

Some companies would rather fail than buy an engine :-).

Some engines aren't.

knackered
04-23-2004, 03:15 PM
Eric, that demo is appalling. Your application architecture is fantastic, but your demo really lets it down. All that programming, all those features aimed at giving realism to your graphics - all completely overlooked when playing that demo. YOU NEED TO CREATE A BETTER DEMO. It's necessary, in these shallow times.

zeroprey
04-23-2004, 04:55 PM
I'd say the very fact that they showed multiple shaders and said they had a shader editor and everything shows they are not using deferred lighting. With deferred lighting you need to apply the same shader to the entire screen.

As for the shadows, it actually surprised me when he said it was stencil-based shadows. Links here for two such approaches:
Ysaneya's shadows (http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=008816)
Penumbra Wedges (http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=011267)

As for the smoothie shadows, this is what I thought they were doing after I read an interview saying they were doing soft shadows. But that can't be the case now, since he says it's stencil shadows. What do you guys think of smoothies?

smoothie paper (http://graphics.csail.mit.edu/~ericchan/papers/smoothie/)

I was thinking of implementing this myself. I would think it would be faster than the two other methods. The only disadvantage is the umbra doesn't shrink. You'd have to use a cube of them to do omni lights, and it doesn't do alpha textures like normal shadow buffers, but it should do all of what the others do. I've been trying to think if there were some way to fix the umbra-shrinking problem. Haven't come up with anything.

Eric Lengyel
04-23-2004, 05:52 PM
Originally posted by knackered:
Eric, that demo is appauling. Your application architecture is fantastic, but your demo really lets it down. All that programming, all those features aimed at giving realism to your graphics - all completely overlooked when playing that demo. YOU NEED TO CREATE A BETTER DEMO. It's neccessary, in these shallow times.
Best I could do with ABSOLUTELY NO ARTISTS. It's meant to demonstrate technology to those who are capable of understanding it. By no means do I claim it to be a polished title. I'm working on showing off the tech better...

Eric Lengyel
04-23-2004, 05:58 PM
dorbie --


Great stuff, OTOH, no offence Eric but that ain't a title. It's barely a passable demo. I'm sure the tech is great but it has to have content, even the content it has doesn't show the tech features I'd expect to convince me visually of the claims. Multiple moving light sources with overlapping shadows, emissive material properties and specular with gloss maps and of course the obligatory bumpmapped reconstruction of geometry on simplified meshes with plenty of detail in the scene and on skinned characters and maybe with & without ambient in some shots. Not all essential but mostly what you'd need at the very least to show it off.
I never claimed that it was a title of any kind, just an art-less tech demo.


Multiple moving light sources with overlapping shadows
Did you shoot the lights?

knackered
04-23-2004, 06:06 PM
Cool it brother.
For a start, it's not so much to do with the art assets as it is to do with the way your demo is arranged. Took me a while to find that room with the swinging lamp, and that led me to ask the question "why is this lamp in this tight space?": my only answer was that you were hiding a limitation of your renderer, i.e. its speed and efficiency. You not only have to demonstrate what your renderer can do, but how fast it can do it and what CPU load it implies. Your demo should have more effects in the same place, to show that they can be used together with no significant cost.
You gotta sell your ass, boy.
By the way, your design chart is pretty vague. I assume that each module you detail is dependent on modules further up the hierarchy? In which case, a developer would have to resign himself to a 100% C4 engine path, which is too much of a risk... he may have his own libraries that he would like to use - you may do some things right, but you can't be relied upon to have done everything right... you're not perfect. A developer would want to have the choice of just using your renderer, for instance.
Just some thoughts, trying to help really. If the C4 engine is as good as you say, and has been around for over 2 years, I should have heard of it by now in the commercial sector, but I haven't. You're doing something wrong.

dorbie
04-23-2004, 06:57 PM
I just ran it on better hardware with better results. You need to have overlapping lights/shadows, though. The projectiles don't cast shadows, and there's really nothing visual in the demo that suggests it's doing the 'right thing' TM, and of course there's the art, like you said. Just bump mapping doesn't cut it; it's gotta be bump-mapped reconstruction of geometric detail. But I don't mean to bash it. Technically there are enough differences in the art path to make that a concern.

Nothing in the scene moves through the lighting (a monster, for example), and none of the 'interesting' lights move.

Those early Doom 3 demos showed a monster wandering through complex lighting for a reason. Just take it as food for thought; I'm not trying to bash it, just be constructive.

Eric Lengyel
04-23-2004, 07:39 PM
Hi guys --

There's a much bigger room in the A-shaped building where there are two swinging lights. If you stand in one of the doorways to this room, you're actually getting hit by four light sources simultaneously. The light in the domed building being in a tight place has nothing to do with engine limitations.

Bump-mapped specularity is in there and should not have been too hard to find. Gloss mapping on top of that is also in there, but only on the main character, which you can't see except in the reflective floor at the back of the A-shaped building.

BTW, in the room with the reflective floor (which, incidentally, is another place where you can see two of your own shadows), there is a secret passageway -- shoot the walls.

The grenades actually cast shadows. The rockets don't because they are usually obscured by smoke, but it would be trivial to turn shadows on for them.

The C4 Engine hasn't been available for 2 years, but the features that I've mentioned have been functional for that long. I'm not really super-interested in licensing the technology -- I'm primarily interested in making a game. Need to find some good artists who are willing to jump into a startup, though.

I would love to be able to have a monster running around with the bump-mapped detail, gloss mapping, etc, applied to it. The tech is all there. The problem is that I have no budget to get good art done. I know artists who would be willing to help, but they have full-time jobs and no extra time. If you have any solutions for this problem, please feel free to educate me.

Also, avoid running this thing on ATI hardware for the moment -- their drivers suck.

-- Eric

John Pollard
04-23-2004, 11:40 PM
Moreover I can see how deferred shading could cause problems with multiple light sources with independent shadowing for each light.

Deferred shading is no different from the traditional per-pixel pipe in this regard. You simply interleave your shadow volume code with your lighting code, or read from your shadow maps in the lighting shaders. Using a deferred system doesn't change this.


Why? With early Z tests, you shade each pixel on the screen exactly once (times the number of passes needed to fit all your lights into the shaders).

Deferred shading definitely wins at some point. When you've got 10 lights that each touch 300k worth of geometry, you don't want to be drawing those batches of 300k triangles 10 times. That's insane. Too many vertex buffer, material, etc. changes as well. The average shader length for a deferred light is about 25-40 instructions, and you can draw a 200-triangle sphere to represent it. That's going to be cheaper than drawing a batch of 300k triangles where the shader is about 10-20 instructions. With the 6800's new stencil early-out tech, combined with traditional z-buffer early-out tests, you can use the stencil buffer to mask the part of the sphere that actually touches the world, and get serious performance.
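The triangle-count argument above can be put in back-of-the-envelope form. The sketch below is just illustrative arithmetic using the numbers from the post (10 lights, a 300k-triangle scene, a ~200-triangle light sphere); the function names and constants are assumptions for illustration, not measurements from any real engine:

```python
# Back-of-the-envelope version of the argument above, using the numbers from
# the post (10 lights, a 300k-triangle scene, ~200-triangle light spheres).
# All names and constants here are illustrative assumptions.

def forward_triangles(lights, scene_tris):
    # conventional multipass: the scene geometry is resubmitted per light
    return lights * scene_tris

def deferred_triangles(lights, scene_tris, light_proxy_tris=200):
    # deferred: one G-buffer fill pass, then a cheap proxy mesh per light
    return scene_tris + lights * light_proxy_tris

print(forward_triangles(10, 300_000))   # scene submitted 10 times
print(deferred_triangles(10, 300_000))  # scene submitted once, plus 10 spheres
```

Under these assumptions the forward path submits 3,000,000 triangles per frame versus roughly 302,000 for the deferred path; whether that dominates depends on where the real bottleneck sits, as the rest of the thread debates.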


I'd say the very fact that they showed multiple shaders and said they had a shader editor and everything shows they are not using deferred lighting. With deferred lighting you need to apply the same shader to the entire screen.

The lighting equation must be the same per light (to keep things simple), but you can use whatever shader you want when writing colors out in the first pass. This is where the majority of the modifications happen anyhow, right?

If I had a level that consisted of over a million triangles, I'd definitely place my bets on the deferred pipeline.

Which shadow algorithm to use is still up in the air, though. Shadow volumes are actually faster in some cases (especially if you use a different model, with less geometry, for your shadow volumes). Shadow volumes also benefit from receding into the distance, as they occupy less screen space when that happens. Shadow maps are faster in other cases (static lights, for example, beg for shadow maps).

mikeman
04-24-2004, 02:20 AM
Originally posted by John Pollard:
The lighting equation must be the same per light (to keep things simple), but you can use whatever shader you want when writing colors out in the first pass. This is where a majority of the modifications happen anyhow, right?
In the first pass, you write out vertex position, normals, and texel colors. These are standard info used by all the following passes; you can't do anything different with them.

Anyway, just some thoughts about using multiple shaders with deferred shading, using as a base the tutorial at www.delphi3d.net:
Let's say we have 10 different shaders, for 10 different kinds of material. We can map each shader to a float value, say shader1 = 0, shader2 = 0.1, shader3 = 0.2, etc. We can pass that as a parameter in the first pass and store the value in the 128-bit fat buffer. Then we render a second pass with a depth-replacing program and write that value to the depth buffer. Hence, fragments that should be processed by the first shader have z = 0, for the second z = 0.1, and so on. We can then do ten passes, one per shader, and use the early z-test, or possibly some extension like depth_bounds, to exclude all the fragments that are not to be processed by the current shader. That sounds like a lot of passes, but really there is not much additional work in the vertex pipeline, and the total number of fragments processed by the shaders doesn't increase.
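The scheme above can be sketched on the CPU to show the key property: every fragment is still shaded exactly once across all the material passes. This is a toy Python simulation, not shader code; the 0.1 depth spacing comes from the post, while the function names and the six-pixel "framebuffer" are made up for illustration:

```python
# Toy CPU simulation: tag each pixel with a shader ID encoded as a depth value
# (shader k -> depth k * 0.1), then run one full-screen pass per shader and let
# a depth-equality test reject pixels that belong to other materials.
# The 0.1 spacing is from the post; everything else is illustrative.

def encode_ids_as_depth(shader_ids):
    # the depth-replacing second pass: shader ID -> depth value
    return [sid * 0.1 for sid in shader_ids]

def run_material_passes(shader_ids, shaders):
    depth = encode_ids_as_depth(shader_ids)
    out = [None] * len(shader_ids)
    shaded = 0
    for sid, shade in enumerate(shaders):      # one full-screen pass per material
        pass_depth = sid * 0.1
        for px, d in enumerate(depth):
            if abs(d - pass_depth) < 1e-6:     # stands in for the early z-test
                out[px] = shade(px)
                shaded += 1
    assert shaded == len(shader_ids)           # each pixel shaded exactly once
    return out

ids = [0, 1, 2, 1, 0, 2]                       # three materials, six pixels
shaders = [lambda p: ('stone', p), lambda p: ('metal', p), lambda p: ('glass', p)]
print(run_material_passes(ids, shaders))
```

The `shaded` counter makes the post's point explicit: the pass count goes up with the number of materials, but the total fragment work does not.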

John Pollard
04-24-2004, 11:21 AM
In the first pass, you write out vertex position, normals, and texel colors. These are standard info used by all the next passes; you can't do anything different with them.

In the color channel, you can apply EMBM, add two textures together, add an EMBM'd effect with a panning wall texture, modulate a detail texture with the wall texture, etc. The possibilities are endless. This effect (created by the artist) will then get combined with the lighting.
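The separation being described can be shown in a minimal sketch: any artist-authored color math runs once in the G-buffer fill, and the lighting passes only ever see the combined result. Single floats stand in for RGB texels here; all names and values are illustrative assumptions:

```python
# Minimal sketch: material-specific color math happens in the G-buffer fill
# pass; lighting is applied later and never needs to know how the color was
# authored. Single floats stand in for texels; all values are illustrative.

def fill_gbuffer_color(wall_texel, detail_texel):
    # any artist-driven combine could go here (EMBM, panning layers, ...)
    return wall_texel * detail_texel

def light_pass(gbuffer_color, light_intensity):
    # the lighting pass only sees the already-combined color
    return gbuffer_color * light_intensity

albedo = fill_gbuffer_color(0.8, 0.5)
print(light_pass(albedo, 2.0))
```

The design point is that swapping the combine in `fill_gbuffer_color` changes nothing downstream, which is why material variety costs so little in a deferred pipeline.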

pkaler
04-24-2004, 01:05 PM
Originally posted by John Pollard:
Deferred shading definitely wins at some point. When you've got 10 lights that touch 300k worth of geometry each, you don't want to be drawing those batches of 300k triangles 10 times. That's insane. Too many vertex buffer, material, etc. changes as well.

I'm guessing it's state changes and fragment program complexity that are gonna kill you, rather than the number of vertices.

Compiled vertex arrays were the super-sexy, en vogue thing a couple of years ago, before hardware transform. I don't think transform is ever the bottleneck anymore.

Anyone try CVAs on modern hardware with a very large number of passes? With early z-reject and each pixel only being hit once, maybe there are cases where transform is the bottleneck.

Any insights anyone, driver writers?

Jan
04-24-2004, 02:02 PM
I wouldn't be so sure about transform not being the bottleneck anymore.

It depends on what you are doing. If you need to transform your polys for each and every light in the scene, this can easily multiply the number of triangles processed compared to transforming everything only once.
So even if that is still not your main bottleneck, you will, on the other hand, get those triangles for free if you don't need to transform them multiple times, meaning that you can have VERY detailed levels, which Unreal 3 actually does have!

They said they have 1,000,000 triangles/polys in a scene! I call that f****** DETAILED!

All in all, it's not about reducing the workload of the GPU, it's about reducing redundancy and instead doing something which has visible impact.

Jan.

dorbie
04-24-2004, 02:10 PM
John Pollard, I don't agree with a lot of what you've written, but the most exaggerated claim is w.r.t. standard rendering & lighting. A more conventional rendering path can use fragment lighting with the positions of multiple lights sent once at the outset, especially with the flow control in NVIDIA's hardware. Heck, you could do the transform to tangent space in the fragment shader, or even do the lighting in object, world or eye space, take your pick. It doesn't have to touch geometry for each light. In addition, for many lights you either need a lot of framebuffer storage for local light positions and/or many screen-space passes, and that's without considering issues like shadow casting. I mean, you have to do exactly the same fragment light transformation and iteration, or render light positions to the framebuffer, so the playing field is level at best.
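The single-pass-looping idea can be illustrated with diffuse-only point lights: iterating over all lights inside one fragment invocation computes exactly the same sum as one additive pass per light. This is a toy Python model of the math, not shader code; all vectors, light positions, and intensities are made-up illustrative values:

```python
# Toy model of the point above: looping over all lights in a single fragment
# invocation (single pass) gives the same result as accumulating one additive
# pass per light. Diffuse-only point lights; all values are illustrative.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def diffuse(normal, frag_pos, light_pos, intensity):
    l = normalize(tuple(lp - fp for lp, fp in zip(light_pos, frag_pos)))
    return max(dot(normal, l), 0.0) * intensity

def shade_single_pass(normal, frag_pos, lights):
    # one fragment invocation looping over every light (uniform array + loop)
    return sum(diffuse(normal, frag_pos, pos, i) for pos, i in lights)

def shade_multipass(normal, frag_pos, lights):
    # one additive blend pass per light: same math, more geometry submissions
    total = 0.0
    for pos, intensity in lights:
        total += diffuse(normal, frag_pos, pos, intensity)
    return total

lights = [((0.0, 5.0, 0.0), 1.0), ((3.0, 1.0, 2.0), 0.5)]
n, p = (0.0, 1.0, 0.0), (0.0, 0.0, 0.0)
print(shade_single_pass(n, p, lights) == shade_multipass(n, p, lights))  # True
```

Both variants add the same terms in the same order, so they agree exactly; the difference between the two approaches is purely in how often the geometry has to be resubmitted.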

More important than all of this, though, is the fact that it's merely an optimization. If you can make it go faster, good luck, but that's all it is: it defers shading until after visibility determination. That's where it's supposed to be a win. I'm very skeptical it can ever be faster than a more conventional approach and the kinds of fragment shaders that can be implemented now, but whether that's true or false, it doesn't herald the next jump in visual quality, which was the original issue of contention.

If it's faster, prove it with code, but to me it seems absolutely obvious that with improving instruction sets and z-buffer optimizations, deferred shading is less of a win with every generation of new hardware, even as the hardware enables it as a possibility.

dorbie
04-24-2004, 02:49 PM
Eric, I ran it on a 9800 Pro and it seemed to be functional. I saw the specular, but I didn't see any moving lights. I saw the reflective floor but never noticed myself reflected, although I did see my own shadow in places. I never fired a rocket (didn't know it was possible), just the primary weapon, with no shadows.

davepermen
04-24-2004, 03:09 PM
demo works great on 9700pro, latest drivers. thanks for posting what you can do with it, wouldn't have found all those nice features at all..

the lights are great, but most fun is standing in front of the portal, and shooting through, seeing the lights then affected:D

John Pollard
04-24-2004, 03:57 PM
I'm very skeptical it can ever be faster than a more conventional approach

Sorry, I'm not trying to ruffle any feathers or anything. I'm actually drawing on my personal experience, and thought I'd chime in for anyone who was interested. I've just been really interested in the subject of deferred vs. conventional lighting myself, so this thread caught my attention.

I've implemented both pipes. The conventional pass-per-light wins where there are not many material/state changes, and not much overdraw. Deferred shading seems to win when the scene starts to approach large numbers of triangles with large amounts of overdraw, and lots of state changes. The more complex your scene is, the more types of materials the artists use, etc., the nicer deferred shading looks, and it starts to win (in my experience).

Even in situations where it's a draw, deferred lighting is really nice imho, and will be my personal choice going into the future. You don't have to have such a light-centric pipeline anymore. Your primitive batching system gets really complex when you try to get maximum batching the conventional way; it's all just so simple with deferred lighting.

Right now the conventional method seems to be the best with the current generation of cards, but deferred is looking really good for the future.

dorbie
04-24-2004, 10:27 PM
No offence taken. We all have experience to draw on.

You say that a conventional approach looks good for now but deferred looks good for the future. So all we disagree on is what's in the crystal ball. Time will tell.