View Full Version : Multipass lighting and multipass texturing
07-01-2002, 11:40 AM
I've decided to let the Doom3 thread rest in peace, although this may have fit in very well there.
The general consensus seems to be that the 'right' way to do this kind of lighting (multiple light sources with a finite number of texture and combiner units) is to accumulate each light source's contribution in the frame buffer by rendering them additively after clearing the frame buffer to black, and then modulating the textured objects on top of that.
Now here's a good one: what do you do if you have multipassed texturing? I can't tell from the few Doom3 screenshots whether there is anything of that kind, but I can imagine many cases where multipass texturing is required on top of the light. The problem here is that the first modulative texturing pass into the frame buffer destroys the lighting values in the frame buffer, so they're no longer available for subsequent texturing passes. Accumulating the light sources into destination alpha is one solution, but what about colored lights?
A solution for that would be either accumulating R, G and B separately in the frame buffer and modulating channel by channel, or rendering the lighting to a texture and then rendering a screen-sized quad with that texture, modulating the multipassed objects previously rendered at full brightness.
GL 2.0 could provide an easy solution with its auxiliary buffers - any other ideas out there?
07-01-2002, 01:14 PM
The object texture is not modulated on the accumulated lighting result; it is part of each lighting pass. This is part of the 'unified' approach. Each light contribution is correctly bump mapped for the diffuse and specular terms of its light source. The FINAL total contribution of each light source is accumulated in the framebuffer as it is calculated, including all texture terms. There is no final modulation after this. Where lighting takes more than one pass on a graphics card due to a lack of texture units, the intermediate attenuation result is stored in destination alpha, which then modulates the source fragment on the second pass before it is added to destination color as the final contribution for that light.
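In blend terms, the one-pass and two-pass variants described above come down to something like the following toy sketch. This is plain scalar arithmetic standing in for the framebuffer; the function names are mine, and the `glBlendFunc(GL_DST_ALPHA, GL_ONE)` mentioned in the comment is just one plausible way to realize the second pass, not a confirmed detail of the engine:

```python
def clamp01(x):
    """Clamp to [0, 1], like a fixed-range framebuffer."""
    return max(0.0, min(1.0, x))

def add_light_one_pass(dst, tex, bump, atten):
    # Enough texture units: accumulate the light's FINAL contribution
    # tex * bump * atten additively in a single pass.
    return clamp01(dst + tex * bump * atten)

def add_light_two_pass(dst, tex, bump, atten):
    # Not enough units: pass 1 writes the attenuation result into
    # destination alpha; pass 2 modulates the lit fragment by that
    # alpha and adds it to destination color,
    # e.g. glBlendFunc(GL_DST_ALPHA, GL_ONE).
    dst_alpha = clamp01(atten)                       # pass 1
    return clamp01(dst + (tex * bump) * dst_alpha)   # pass 2
```

Both paths produce the same per-light contribution; the second merely splits it across two passes when the hardware can't evaluate all the terms at once.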
07-01-2002, 02:11 PM
Originally posted by dorbie:
The FINAL total contribution for each light source is accumulated in the framebuffer as they are calculated, including all texture terms. There is no final modulation after this.
I imagine doing that would have some strange results. Imagine two 100% white light sources and an object with a uniform 50% grey texture: if the two light sources overlap in nearly the same spot, you would actually end up with
r = t * l + t * l
r = 0.5*1.0 + 0.5*1.0 = 1
So, white, although the texture is only 50% grey. In other words, no light source could approach full brightness while overlapping other light sources without the resulting fragment in that spot going toward white pretty quickly?
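A quick sketch of that arithmetic (toy scalar values, clamped the way a fixed-range framebuffer would clamp them; the function name is just illustrative):

```python
def accumulate_lit_passes(texture, lights):
    """Sum each light's final textured contribution (texture * light),
    then clamp to the framebuffer's [0, 1] range."""
    total = sum(texture * light for light in lights)
    return min(1.0, total)
```

With a 50% grey texture, one white light gives 0.5, but two overlapping white lights already saturate to 1.0, which is the effect being questioned here.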
07-01-2002, 04:30 PM
No, you're wrong. This is not really some unknown area; it's pretty certain final results are accumulated. You cannot do complex lighting that handles what you're talking about with a final modulation. Maybe if you JUST had a diffuse bump map you could accumulate illumination, but not for diffuse + specular. Besides, it's when accumulating illumination terms that you run into clamping problems, not the other way around. I can add I*bump*tex + I*bump*tex just fine; if I change that to (I*bump + I*bump)*tex, the clamping issues are actually more serious.
See this thread for more discussion:
You're right that the clamping problem would be worse if you summed first, then modulated.
However, that might be what you normally want to do anyway. For instance, suppose I had a texture that was a single color, like (0.8, 0.5, 0.1). Normally, this would look like a reddish orange.
If you do (I*bump + I*bump + ...)*tex, at maximum intensity you will just see that orange texture. However, if you did (I*bump*tex + I*bump*tex + ...) the color would vary from orange to yellow to white, depending on how many lights were shining on it, like an additive blend. Usually you only want that sort of "additive blend" effect when drawing glowing particles, not solid surfaces.
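That chroma shift is easy to see in a toy per-channel sketch (hypothetical names; two full-intensity white lights shining on the orange texture from the example above):

```python
def modulate_then_sum(tex, intensities):
    # (I*tex + I*tex + ...), clamped per channel: the channels
    # saturate at different rates, shifting the hue toward white.
    return tuple(min(1.0, sum(i * c for i in intensities)) for c in tex)

def sum_then_modulate(tex, intensities):
    # (I + I + ...) * tex, clamped: the result can never exceed
    # the texture colour itself.
    total = min(1.0, sum(intensities))
    return tuple(min(1.0, total * c) for c in tex)
```

For tex = (0.8, 0.5, 0.1) and two white lights, modulate-then-sum yields (1.0, 1.0, 0.2) (the orange has shifted to near-yellow/white), while sum-then-modulate stays at (0.8, 0.5, 0.1).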
07-01-2002, 08:52 PM
But then you get a problem of very artificial-looking lights in regions where the total illumination should be >1. Either that or your scene generally looks very dingy. You have all these lights adding up, but despite the totals the brightest you get is what's in the diffuse texture. In any case the point is moot: I think it would be undesirable, but we know it's not what the engine does. In fact I think the engine might try to simulate attenuation >1 even for a SINGLE light source with a post-modulation bias (see the other thread). It at least does the light accumulation; that's the central tenet of the unified lighting model. Yes, you get a chroma shift, but generally I think that is the lesser of two evils. Without an extended-precision framebuffer with a human perception model (of sorts) on the back end, you'd be hard pressed to do better, I think. Maybe you could fudge something with dynamic control of overall levels. I actually think the clamped accumulation of light would look quite good provided it wasn't overdone.
07-01-2002, 09:28 PM
If it's not clear why you need attenuation > 1 for a single light, just imagine a really dark surface illuminated by a really bright light, or better yet, a surface with really dark patches illuminated by a really bright light. Without the unclamped attenuation boosted post-modulation, it doesn't matter how bright your light is: the texture will always make the surface appear dark. You can handle this with relative ease by spending a bit or two of attenuation precision and boosting the final blended result. If the attenuation map ever clamps it might look OK, but it'll probably look strange, especially for a mottled surface, IMHO.
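The bit-spending trick can be sketched like this (toy scalar arithmetic; the boost factor of 4, which corresponds to the "bit or two" of attenuation precision, and the function names are my own illustration):

```python
def lit_fragment_clamped(tex, atten):
    # Attenuation clamped to [0, 1]: however bright the light,
    # the result can never exceed the texel's own albedo.
    return min(1.0, tex * min(1.0, atten))

def lit_fragment_boosted(tex, atten, boost=4.0):
    # Spend two bits of attenuation-map precision on a post-modulation
    # boost: store atten/boost (clamped), then scale the modulated
    # result back up in the final blend, so a bright light can push a
    # dark texel well past its albedo before the framebuffer clamps.
    stored = min(1.0, atten / boost)
    return min(1.0, tex * stored * boost)
```

A dark texel (0.2) under a light with attenuation 3.0 stays stuck at 0.2 in the clamped version but reaches 0.6 with the boost, which is the point being made above.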
07-01-2002, 09:55 PM
Oops... I didn't see the other thread. My bad :)
Dorbie: I can see what you're saying.
From the things I've tried in the past (including the render-to-texture approach, which seems to hurt performance pretty badly), accumulating lights with the texturing included in each pass seems to be the least complicated and most robust solution as far as multiple light sources and different materials go; everything else involves special-casing and working around limitations by increasing the number of passes per light source to a ridiculously high number.
The only problem I have with it is the quick overbrightening of parts of the scene, but I assume that as long as the light sources are kept at a relatively low intensity level it may not be as big of a problem (although it easily makes the scene appear very dark).
Some random thoughts:
One way to generally brighten up the scene could be to increase the attenuation radius of the low-intensity light sources so their contribution spreads out more; another may be to simply render an ambient term first, then accumulate all light sources on top.
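The ambient-first idea can be sketched as a pass ordering over a toy scalar framebuffer (hypothetical names; each light term is the bump * attenuation product from the earlier posts):

```python
def shade(tex, ambient, light_terms):
    # Pass 0: lay down an ambient term so unlit areas aren't black.
    fb = min(1.0, ambient * tex)
    # Then accumulate each light's final textured contribution on top.
    for bump, atten in light_terms:
        fb = min(1.0, fb + tex * bump * atten)
    return fb
```

With ambient 0.2, a 50% grey texel lit by one half-attenuated light ends up at 0.35 instead of 0.25, so shadowed areas keep some detail.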
An example that comes immediately to my mind that would probably cause the biggest difficulties with this solution is an outdoor scene at daylight, where you want the lighting from the sun to reach almost full intensity (imagine a directional light source simulating the sun).
Without dimming additional point light sources during the day, you'd get a white spot for every point light that sits in an area already lit by the sun. If the point light source is on the unlit side of a building, though, you'd want it to light that side anyway, so dimming the light isn't really the way to go.
I guess there is no real solution for that problem, unless the framebuffer had a higher dynamic range.
07-01-2002, 10:43 PM
If the other lights have power comparable to the sunlight, it gets white, yes. But normally they don't even come near the sun's strength => your lights sit around at 10 to 50 perhaps, and the sun is at 255.
07-01-2002, 11:00 PM
Even with a higher dynamic range you still have the dilemma of what you display, or how you 'expose' the result in the framebuffer to the monitor.
Fundamentally, if you have many bright lights you have more light than you can show. I can see merit in a DAC table which does a YUV (YCbCr) conversion and back (perhaps) for a high dynamic range image, where something that's 3,1,0 in the framebuffer doesn't come out as 1,1,0 on the wires (ignoring gamma for now), but goes through some perceptually derived table that produces something still significantly red looking. In other words, just by way of a nasty example, you might perform the clamp on luma in YCbCr color space. For now those things aren't practical in a renderer, and again I think the clamping of the FINAL result as it's accumulated probably looks good where it happens. Again, as an example, in a game (rather than what all hardware would want to do) you'd need some perceptual-model loopback over time, but heck, people don't even get gamma correction right now and screw up what could be passable antialiasing, and here we are discussing the subtleties of perceptually based color clamping in high dynamic range images. I'm sure that if you did the "right thing", dynamic adaptation with real lighting levels, most people would hate it.
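As a toy sketch of that "clamp on luma" idea (using the BT.601 luma weights and unscaled colour differences; purely illustrative, not what any hardware DAC does):

```python
def clamp_luma(r, g, b):
    # Compute BT.601 luma and simple colour differences.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb, cr = b - y, r - y
    # Clamp only the luma, keeping the chroma differences, so an
    # out-of-range (3, 1, 0) stays reddish instead of clamping
    # per-channel to the (1, 1, 0) yellow-white on the wires.
    y = min(1.0, y)
    r2, b2 = y + cr, y + cb
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return tuple(max(0.0, min(1.0, c)) for c in (r2, g2, b2))
```

For the (3, 1, 0) example this produces roughly (1.0, 0.52, 0.0), a saturated red-orange, rather than the hue-destroying per-channel clamp to (1, 1, 0).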
Beyond this detail, some global adaptation should be possible in a scene now, but all the arithmetic being discussed is still linear, so making it work would not be easy... scratch that, it should be approximately OK with linear arithmetic thanks to adaptation. All these bugbears will arise when we get high dynamic range framebuffers and realistic lighting. In a few years most graphics developers (and users) will be horrified at what has gone on up until now w.r.t. lighting levels.
And try making that work with AA too; unless you have post-LUT fragment filtering it might be tricky. I'll be impressed when we get AA working without gamma correction (i.e. with hardware gamma == 1, or anything less than 2.5 for that matter). You realize that AA is totally broken unless you set the right gamma correction for your monitor in hardware, and few applications let you do that; if they do, they screw up contrast sensitivity. But any linear arithmetic, including all the lighting discussed so far, demands that it get screwed up and gamma be set to correct for your monitor. It's an ugly business; just thinking about it makes you feel it's all a damned mess, because there are fundamentally conflicting requirements. Maybe better hardware will solve them all in one stroke.
07-01-2002, 11:28 PM
P.S. How about gamma-compensated antialiasing? I mean, the vendor who actually implements that would be seen as having the best AA, bar none. The reviewers are clueless when evaluating this stuff; you'd just look BETTER than everyone else. The difference would be night and day. Nobody evaluates antialiasing with hardware gamma correction cranked up to 2.5, so nobody appreciates the value of it, regardless of the quality of the underlying algorithm.
07-01-2002, 11:53 PM
Oh dear, I just took this to its logical conclusion. I think you need two gamma values. Since nobody actually uses the CORRECT hardware gamma (and in many cases you don't want to), you need a color-correction gamma and a TRUE gamma for the antialiasing's linear weighting space. Otherwise, if you have gamma-compensated antialiasing, you need to make an assumption about display gamma. So if the hardware gamma is set to 1, as it is on most PCs, your antialiasing really blows, but many games look good because they're designed to look good at that display gamma, and stuff like video and photographs (and many textures) already contains gamma correction and therefore makes good use of its bits for contrast sensitivity.
Now, you come along and want to make your antialiasing look good, but you need to do the arithmetic in linear display space. If hardware gamma doesn't describe that space (and we know it doesn't; that's why we're fixing it), we can make a good guess (~2.5, and if the hardware has correction we apply the net to get to 2.5), but we probably also want to allow that guess to be tweaked so antialiasing can be perfected on all displays.
OK, shoot me now, but I've worked on getting this right in software antialiasing and the results are stunning. You do need to make an assumption about the TRUE display gamma, though, and apply the net correction during filtering.
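A minimal sketch of the idea, assuming a display gamma of ~2.5 as suggested above (function names and the exact figure are just illustrative):

```python
def filter_naive(a, b, w):
    # Blend two gamma-encoded pixel values directly, the way most
    # hardware AA filtering worked: the weighting happens in a
    # non-linear space, so edges come out too dark.
    return w * a + (1.0 - w) * b

def filter_gamma_compensated(a, b, w, display_gamma=2.5):
    # Decode to linear light, filter there, then re-encode with the
    # net correction for the assumed display gamma.
    la, lb = a ** display_gamma, b ** display_gamma
    return (w * la + (1.0 - w) * lb) ** (1.0 / display_gamma)
```

A 50/50 edge between white and black comes out at 0.5 naively, but at about 0.76 when filtered in linear space, which is why AA looks so different once the gamma assumption is handled.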
07-02-2002, 07:34 AM
Hmm... well, I guess it would be possible to make a close enough assumption about the monitor gamma on high-quality displays, but a run-of-the-mill monitor's gamma can pretty much change slightly every time you turn it on and off (so can the colors... when I was employed in prepress, there was nothing funnier than watching some designer trying to color-calibrate their monitor every other day because the company had tried to save money on the displays).
I like dorbie's idea of the conversion from a 'perceptual' color space like YCbCr/CIE/YUV/YIQ, but I guess until video cards can natively work in a color space like that, it'll be a while ;)