Shadow Maps, multiple lights and textured entities



Devdept2
02-07-2012, 02:42 AM
Hi,

I have problems combining shadow maps and textured entities.

When doing multiple passes with additive blending to apply the shadow maps, the textured entities (with GL_MODULATE, on Texture Unit 0) become lighter than they should be.

Example:

Scene with 2 lights: Light1 projects shadows, Light2 doesn't.

Pass 1:
Build the shadow map of Light1.

Pass 2:
Draw the scene with only ambient color for Light1 and ambient/diffuse/specular for Light2.

Pass 3:
Enable additive blending (GL_ONE, GL_ONE) to add contributions of Light1.
Enable ShadowMap with depth Comparison (on Texture Unit 1) and draw the scene (with its textures on Texture Unit 0).
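
In rough, incomplete code (drawScene() and shadowTex are placeholder names, and the texgen/texture-matrix setup for the shadow projection is omitted), passes 2 and 3 would look something like:

/* Pass 2: base pass (ambient-only Light1, full Light2) */
glDisable(GL_BLEND);
glDepthFunc(GL_LESS);
drawScene();                                      /* textures on unit 0, GL_MODULATE */

/* Pass 3: additive pass for Light1, masked by the shadow map */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                      /* additive blending */
glDepthFunc(GL_LEQUAL);                           /* re-draw the same geometry */
glDepthMask(GL_FALSE);

glActiveTexture(GL_TEXTURE1);                     /* shadow map on unit 1 */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glActiveTexture(GL_TEXTURE0);

drawScene();                                      /* Light1 diffuse/specular only */

glDepthMask(GL_TRUE);
glDisable(GL_BLEND);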

The textured entities become lighter than they should, while non-textured entities are fine (the lit parts have the same illumination as when I disable shadow maps).

Am I doing something wrong?
What is the correct way to get the correct illumination for textured entities when using additive blending with multiple passes?

BionicBytes
02-07-2012, 04:55 AM
Enable ShadowMap with depth Comparison (on Texture Unit 1) and draw the scene (with its textures on Texture Unit 0).

Do you still have blending on at this point?



Enable additive blending (GL_ONE, GL_ONE) to add contributions of Light1.
..and at this point it's all looking good? (without shadows of course)?

Devdept2
02-07-2012, 07:49 AM
Enable ShadowMap with depth Comparison (on Texture Unit 1) and draw the scene (with its textures on Texture Unit 0).

Do you still have blending on at this point?



Yes, it's Pass 3: I enable additive blending AND the ShadowMap texture compare and then draw the scene.




Enable additive blending (GL_ONE, GL_ONE) to add contributions of Light1.
..and at this point it's all looking good? (without shadows of course)?

No it's not. If I don't enable the ShadowMap texture compare and I do the additive blending pass, adding the Light1 contribution to the whole scene, the entities with textures are lighter than they should be.

Maybe drawing modulated textures in successive additive passes is not the right thing to do... Probably I'm adding the texture intensity more times than I should.
For example, if a pixel is in full light for Light1 and Light2, drawing it in one pass will result in the texture pixel with full intensity, but if I do it in 2 passes I will add the intensity of the texture 2 times (once in the first pass due to the full Light2 illumination, and once in the second, additive blending pass due to the full Light1 illumination).
Am I missing something?

BionicBytes
02-07-2012, 07:54 AM
Originally posted by BionicBytes:
Enable additive blending (GL_ONE, GL_ONE) to add contributions of Light1.
..and at this point it's all looking good? (without shadows of course)?

No it's not. If I don't enable the ShadowMap texture compare and I do the additive blending pass, adding the Light1 contribution to the whole scene, the entities with textures are lighter than they should be.

Right. So you need to fix this part first before even considering the shadow mapping contribution.



Maybe drawing modulated textures in successive additive passes is not the right thing to do
I think it is. That's how my deferred engine is ultimately doing it.


For example, if a pixel is in full light for Light1 and Light2, drawing it in one pass will result in the texture pixel with full intensity, but if I do it in 2 passes I would add the intensity of the texture 2 times
Right. Think about it. If Light1 adds a bright area to the scene, then a second (equally) bright light on the same spot should double the intensity at that point.

So, with this in mind...do you still have a problem?

Kopelrativ
02-07-2012, 08:51 AM
If the problem is about saturation, then there are ways to handle it. When adding light a couple of times, all RGB channels saturate and you only see white.

If so, consider using High Dynamic Range (http://en.wikipedia.org/wiki/High_dynamic_range_rendering) (HDR) instead. In the end you transform back into the range 0-1, using Tone mapping (http://en.wikipedia.org/wiki/Tone_mapping).
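
As a rough sketch of the idea (purely illustrative, not anyone's actual code), the simplest Reinhard-style operator maps any accumulated value back into the 0-1 range:

/* Sketch: Reinhard-style tone mapping of an accumulated HDR channel value. */
float tonemap(float c)
{
    return c / (1.0f + c);        /* maps [0, inf) into [0, 1) */
}
/* e.g. two lights each adding 0.8 accumulate to 1.6; tonemap(1.6) is about 0.62 */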

Devdept2
02-07-2012, 08:53 AM
For example, if a pixel is in full light for Light1 and Light2, drawing it in one pass will result in the texture pixel with full intensity, but if I do it in 2 passes I would add the intensity of the texture 2 times
Right. Think about it. If Light1 adds a bright area to the scene, then a second (equally) bright light on the same spot should double the intensity at that point.

So, with this in mind...do you still have a problem?

Yes I do.
If with one pass I get the full texture intensity (which is the result I consider correct) and with 2 passes I get double the intensity, how can I get the full texture intensity with 2 passes?

Devdept2
02-07-2012, 09:39 AM
These are the pictures of:

Lighting without textures
Lighting with textures in 1 pass (Light1 and Light2)
Lighting with textures in 2 passes (Light1 + Light2)

http://static.devdept.com/1.jpg

BionicBytes
02-07-2012, 09:44 AM
If with one pass I get the full texture intensity (which is the result I consider correct)
This is the part I don't agree with. What makes you so sure your single pass is actually correct?
The multi-pass approach is generally considered correct with additive blending. If light 1 adds a contribution of 200 (out of a max of 255) to the scene, a second light may also add 200/255. The total brightness at these points would be 400. There is nothing wrong with this result even if it's past the max RGBA value of 255 (that's why you need to be using RGBA16F light accumulation buffers and tone mapping).
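
A rough sketch of such an accumulation target (texAccum, fboAccum, width and height are placeholder names; assumes GL 3.0-class support for floating-point color attachments):

/* Sketch: RGBA16F light accumulation buffer (names are placeholders). */
GLuint texAccum, fboAccum;
glGenTextures(1, &texAccum);
glBindTexture(GL_TEXTURE_2D, texAccum);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);

glGenFramebuffers(1, &fboAccum);
glBindFramebuffer(GL_FRAMEBUFFER, fboAccum);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texAccum, 0);
/* render the additive light passes into this FBO, then tone map it to the back buffer */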

BionicBytes
02-07-2012, 09:49 AM
Ok, those pictures help put things into perspective.
However, it would be more useful to just see the effects of diffuse lighting (no ambient). Also, reduce the intensity of both lights so that when combined, they don't saturate to pure white. If you can focus the second light so it does not affect the whole cube (spot light?) then that would also help us visualise the combined effect of the lights.
From what I've seen, the two pass approach looks better because the more lights you add to the scene - the more washed out everything is going to become.

devdept
02-07-2012, 02:24 PM
Considering that shadow mapping is a mature approach today, I guess that many developers have faced this issue. Where can we learn/read more on how to combine all these factors and get a perfect-looking rendering?

Is there any white paper / article on this subject on the net?

Thanks,

Alberto

Alfonse Reinheart
02-07-2012, 04:29 PM
Where can we learn/read more on how to combine all these factors and get a perfect-looking rendering?

Read any material that teaches you to not use the fixed-function pipeline. It's hard to say what exactly your problem is without seeing all of your lighting parameters and such, but odds are good it has something to do with some lighting state.

Shaders are much, much easier to control.

Also, HDR is kind of important. But since you seemed to ignore that, I'm not sure how much we can help. We can only tell you what you ought to be doing and how everyone else solved it.

devdept
02-08-2012, 01:17 AM
Alfonse,

Do you mean we cannot achieve perfect shadow maps without using shaders or HDR?

Is it possible that there is no way to combine these multiple passes without going outside the 0-1 RGB color range using the fixed-function pipeline?

Thanks again,

Alberto

ZbuffeR
02-08-2012, 05:46 AM
Do you mean we cannot achieve perfect shadow maps without using shaders or HDR?
Of course. Shaders were invented expecting they would not be ignored ...
Even using shaders, or shaders+HDR, you will only be closer to perfection and still not touching it.

Using a linear workflow and proper gamma correction in and out is mandatory for any "close to perfection" renderer.
http://www.geeks3d.com/20101001/tutorial-gamma-correction-a-story-of-linearity/
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html
http://renderwonk.com/blog/index.php/archive/adventures-with-gamma-correct-rendering/
http://beautifulpixels.blogspot.com/2009/10/gamma-correct-lighting-on-moon.html

and lots of others.
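
A very rough sketch of the idea behind those links (using the common 2.2 power approximation of sRGB; purely illustrative):

#include <math.h>

/* Sketch: approximate sRGB <-> linear conversions for a linear lighting workflow. */
float srgb_to_linear(float c) { return powf(c, 2.2f); }
float linear_to_srgb(float c) { return powf(c, 1.0f / 2.2f); }
/* do lighting and additive blending on linear values; convert back only for display */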

devdept
02-08-2012, 06:33 AM
Thanks ZbuffeR,

I believed that basic shadow maps could be implemented without needing shaders or HDR to avoid going beyond the 0-1 RGB color range...

Now at least we know we can give up trying... :(

Alberto

Alfonse Reinheart
02-08-2012, 08:58 AM
Your problem isn't shadow maps. It's multiple lights. As I said, you're probably doing something wrong in your fixed-function lighting setup code. Which would be much easier to see and much more obvious if you were using shaders instead of arcane fixed-function commands.

Devdept2
02-08-2012, 09:39 AM
I solved the problem in the following way:

During the shadow map passes (with additive blending to add the lights' contributions), the textured entities are drawn without their textures and with a white material.

Then I do a final pass with the textures enabled and multiplicative blending (GL_DST_COLOR, GL_ZERO), to multiply the texture color by the lighting intensity resulting from the previous passes.
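
In rough code, that final pass would look something like this (drawSceneTextured() is a placeholder for drawing the scene with only its textures bound):

/* Sketch of the final multiplicative texture pass (drawSceneTextured() is a placeholder). */
glEnable(GL_BLEND);
glBlendFunc(GL_DST_COLOR, GL_ZERO);   /* result = accumulated lighting * texture color */
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_FALSE);
glDisable(GL_LIGHTING);               /* output the plain texture color */
drawSceneTextured();                  /* textures on unit 0 */
glEnable(GL_LIGHTING);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);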

Regarding shaders, we spent a lot of time making them work, but due to difficulties in handling general cases (parallel-split shadow maps, multiple lights with shadows) we decided to stick with the fixed functionality for now.
Furthermore, with shaders we would still have to do multiple passes, so the problem of adding the texture contributions would still be there.

devdept
02-08-2012, 10:01 AM
Alfonse,

Do you think we can do perfect parallel-split shadow maps with multiple lights casting shadows on both non-textured and textured objects using the arcane fixed-function commands?

At this stage we are not trying to go fast but to get the correct result without involving shaders and HDR, even if it requires a number of passes.

Thanks,

Alberto

Alfonse Reinheart
02-08-2012, 11:08 AM
Furthermore, with shaders we would still have to do multiple passes, so the problem of adding the texture contributions would still be there.

No, it wouldn't. As I said, your problem likely comes from some lighting thing that you've set up but don't quite understand. What you are doing, rendering multiple passes and adding the contributions together, is called "forward rendering". It is a common technique, and it has been done for ages. It is a solved problem.

If you're getting incorrect results, it's clearly because of something you're doing.

The point I was making with shaders is that you would be able to see everything. It wouldn't be hidden in arcane OpenGL fixed function garbage. If there's a bug in your shader code, then just debug it using common shader debugging techniques. This too is a solved problem.

Shaders are easier to use because you can see everything. Every lighting function is executed exactly as you wrote it. Not hidden behind state and variables and junk; it's there, plain as day.


Do you think we can do perfect parallel-split shadow maps with multiple lights casting shadows on both non-textured and textured objects using the arcane fixed-function commands?

A better question would be... why would you want to?

Why go through all the trouble of implementing a complicated shadow mapping technique... when you're using crappy fixed-function lighting and no HDR or gamma-correction? It's like buying a sports car and using it only for 5 minute drives to the store. Or using hundred-dollar bills as toilet paper.

There's no point in using a high-fidelity graphics technique like shadow mapping if you aren't going to combine it with other high-fidelity graphics techniques.

devdept
02-09-2012, 12:54 AM
Hi Alfonse,


If you're getting incorrect results, it's clearly because of something you're doing.
Yes probably, we need to understand where...


A better question would be... why would you want to?

Why go through all the trouble of implementing a complicated shadow mapping technique... when you're using crappy fixed-function lighting and no HDR or gamma-correction? It's like buying a sports car and using it only for 5 minute drives to the store. Or using hundred-dollar bills as toilet paper.
The reason is simple: we want to run on all GPUs legacy and modern. Even with graphics drivers not updated.

Thanks again,

Alberto

Dark Photon
02-09-2012, 05:50 AM
...with shaders we would still have to do multiple passes, so the problem of adding the texture contributions would still be there.
Not necessarily. From context you're talking about the proposed multiple passes to add the contribution for each light to the scene (accounting for its shadowing). If the number of lights isn't huge, you can do all this in a single pass by doing the lighting/shadowing calcs for all lights in the shader (see next post).

Though at some point you bump into limits on the number of uniforms you can have (too many lights/shadow maps), or end up using too much memory for shadow maps, and have to break it into multiple passes.

However, in the many lights case, both of these tend to be very wasteful of GPU resources, so you end up thinking about Deferred Rendering techniques.

Dark Photon
02-09-2012, 05:51 AM
...with shaders we would still have to do multiple passes, so the problem of adding the texture contributions would still be there.

No, it wouldn't. As I said, your problem likely comes from some lighting thing that you've set up but don't quite understand. What you are doing, rendering multiple passes and adding the contributions together, is called "forward rendering". It is a common technique, and it has been done for ages. It is a solved problem.
This is very misleading. The multiple passes thing doesn't make it Forward Rendering (Deferred uses multiple lighting passes too, and Forward Rendering with multiple lights can be done in a single pass).

Both of these are Forward Shading:

1.

for each object: // CPU loop
for each light: // GPU loop
framebuffer += light_model( object, light )


2.

for each light: // CPU loop
for each object: // CPU loop
framebuffer += light_model( object, light )


The first is 1 pass for all lights. The second is 1 pass per light. Of course, you can do a hybrid of this too. All of these are Forward Rendering.

What makes it Forward Rendering is that determination of shape, visibility, and material properties as well as lighting/shading computations for a specific light source are all tightly coupled and computed together.

When you split "these" up into multiple passes (e.g. split off lighting/shading), that's when it's not Forward Rendering anymore.

devdept
02-10-2012, 01:33 AM
Thanks for clearing this up, Dark Photon. By the way, I still can't understand whether, with the fixed-function commands (no shaders and no HDR), we can achieve correct multiple-light shadow maps on textured objects without incurring RGB values beyond the 0-1 range.

Thanks again.

Alberto

Alfonse Reinheart
02-10-2012, 01:57 AM
The reason is simple: we want to run on all GPUs legacy and modern. Even with graphics drivers not updated.

Except you don't. "All GPUs legacy and modern" don't have shadow mapping of any kind. "All GPUs legacy and modern" don't have the fixed-function combiner logic you'll need to make this work. And then there's the performance from such legacy GPUs.

You can't run what you're talking about on all GPUs. So you have to pick some baseline. Are you trying to catch pre-shader Intel hardware?


I still can't understand whether, with the fixed-function commands (no shaders and no HDR), we can achieve correct multiple-light shadow maps on textured objects without incurring RGB values beyond the 0-1 range.

Can you? Yes. Will you? Only if you're:

1: Careful about how you set up your lighting. Blowing past 1.0 is very easy to do in anything approaching a realistically lit scene. People abandoned SDR because HDR made it easier to set up proper lighting in a scene (and because it was more physically realistic, and for many other reasons).

2: Careful about how you set up your texture environment stuff. Fixed function lighting and combiners are very easy to get wrong. They're difficult to debug and figure out what went wrong. It's a very fragile system, so you're going to have to spend a lot of time with the OpenGL specification, walking all of the equations it uses to build lighting and combining.

3: Willing to go it more or less alone. Fixed function is dead. Fewer people each day know how it works; many haven't used it for half a decade. You can't even use advanced mobile graphics chips with it anymore; GL ES 2.0 requires shaders. You can't use any form of JavaScript OpenGL; WebGL mandates shaders.

People may remember the simple stuff, but you are trying to perform a (relatively) advanced technique with fixed function. What you want is certainly possible, but you're a lot less likely to get useful help with it than if you did it with shaders.

devdept
02-10-2012, 02:09 AM
I love your answers Alfonse. Thanks so much.

No, we are not considering Intel at all. By the way, there are many issues out there with shaders and outdated video drivers. This is the reason we are trying to avoid them.

Do you know where (internet/book) we can learn how to correctly implement 'multiple-light shadow maps on textured objects' using shaders and HDR instead?

Thanks again,

Alberto

Devdept2
02-10-2012, 02:46 AM
...with shaders we would still have to do multiple passes, so the problem of adding the texture contributions would still be there.
Not necessarily. From context you're talking about the proposed multiple passes to add the contribution for each light to the scene (accounting for its shadowing). If the number of lights isn't huge, you can do all this in a single pass by doing the lighting/shadowing calcs for all lights in the shader (see next post).

Though at some point you bump into limits on the number of uniforms you can have (too many lights/shadow maps), or end up using too much memory for shadow maps, and have to break it into multiple passes.

However, in the many lights case, both of these tend to be very wasteful of GPU resources, so you end up thinking about Deferred Rendering techniques.


That's exactly our case.
We want to manage up to 4 directional lights with shadow mapping.
We managed to make it work with shaders in one pass, but when we added Parallel Split Shadow Map support we exceeded the varying variable limits, so we had to split it into 4 passes, one light per pass. Hence the problem of correctly adding the texture contributions without saturating them would happen even in this case.
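(For reference, the limit in question can be queried at runtime, e.g. with the GL 2.x name:)

/* Sketch: querying the varying limit that forces the split into per-light passes. */
GLint maxVaryings = 0;
glGetIntegerv(GL_MAX_VARYING_FLOATS, &maxVaryings);   /* available since GL 2.0 */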

Do you think we should look into Deferred Rendering?

Dark Photon
02-12-2012, 11:50 AM
That's exactly our case.
We want to manage up to 4 directional lights with shadow mapping.
We managed to make it work with shaders in one pass, but when we added Parallel Split Shadow Map support we exceeded the varying variable limits, so we had to split it into 4 passes, one light per pass. Hence the problem of correctly adding the texture contributions without saturating them would happen even in this case.
Ah, I see. Well, only you are going to be able to get to the bottom of what the difference is between your 4-lights-in-1-pass and 4-lights-in-4-passes implementations. If the math is the same, and sufficient precision is maintained throughout, the results should be the same.

One thought on that:


These are the pictures of:

Lighting without textures
Lighting with textures in 1 pass (Light1 and Light2)
Lighting with textures in 2 passes (Light1 + Light2)

http://static.devdept.com/1.jpg

The one thing that ends up "saving your bacon" with the old, fixed-function pipeline (low dynamic range; no tone mapping) when you have bright lights (one or multiple) is that the lighting calcs occur in the auto-constructed "vertex shader", the texturing occurs in the auto-constructed fragment shader, and (this is the key:) there is an automatic "color clamp" that occurs between them (i.e. clamp(color,0,1)). This ends up killing off any overexposed lighting before you apply your MODULATE texturing, which gives you nice, pretty, not-overexposed colors after texturing, because you're starting with a 0..1 lit color before texturing. This is totally not real, and a major rendering hack, but it works and is "good enough" if you don't need high-dynamic-range lighting fidelity.
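
A toy numeric example of that clamp difference (texel value 0.5, two lights each contributing 0.8; purely illustrative):

/* Toy example (inside any function; needs <math.h> for fminf).                  */
float tex = 0.5f;
float onePass = fminf(0.8f + 0.8f, 1.0f) * tex;                     /* = 0.5     */
float twoPass = fminf(0.8f, 1.0f) * tex + fminf(0.8f, 1.0f) * tex;  /* = 0.8     */
/* the two-pass result is brighter -- exactly the "lighter textures" symptom      */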

Is it possible that in your "Lighting with textures in 1 pass (Light1 and Light2)" you have this color clamp between lighting and texturing enabled, whereas with your "Lighting with textures in 2 passes (Light1 + Light2)" you don't?


Do you think we should look into Deferred Rendering?
Given your goal to "run on all GPUs legacy and modern", I'm inclined to think not, but this depends on more than we've discussed here.

Old GPUs don't have as much fill or memory as more recent cards, and you probably don't want to find yourself trying to do Deferred "with good performance" on an ancient SM2 card which wasn't even top-of-its-line at the time it was released anyway (with the accompanying reduction in fill and memory).

...then again, I wouldn't want to be doing 4 light sources with dynamic PSSM shadows on them either. Depending on your bottlenecks and requirements, it may or may not turn out to be worth considering. After you get the renderings displaying the same, see if it's fast enough, and if not, take a close look at your bottlenecks. Determine what you can do about those bottlenecks. This is when you might consider Deferred, depending on what they are.

devdept
02-13-2012, 03:15 AM
Dark Photon,

Thanks so much for sharing your opinion. Regarding this:


Given your goal to "run on all GPUs legacy and modern", I'm inclined to think not, but this depends on more than we've discussed here.

Shader-based PSSM was my first thought, but across our customer base we discovered that the product often crashed because of outdated drivers, lack of the necessary uniform variables, etc.

We quickly switched to the safer - and more controllable - fixed-function approach.

Do you think this can be avoided? Is there a way to step into shaders without being afraid of unexpected crashes on the customer side, even without having a defined GPU baseline?

Thanks,

Alberto

Alfonse Reinheart
02-13-2012, 09:06 AM
The point we're trying to make is that you already have a "GPU baseline": the hardware that can do PSSM. Hardware that can't do that cannot use your program.

So you already have a baseline, whether you want one or not. I would suggest recognizing that and deciding where you really want to draw that baseline. Do you like that baseline, or are you willing to move it forward?

In general, there are two main groups of hardware that you will run afoul of when working with shaders: Intel and unsupported hardware.

Intel hardware barely works, even if you just focus on fixed-function. They spend almost no effort on OpenGL drivers. If you're trying to make something work in OpenGL on their hardware, you need to test on it constantly. And since there are many types of Intel hardware out there, you need to test on it in several different cases.

Unsupported hardware is a related problem. The hardware currently supported by vendors is all GeForce 6xxx-series and above (including the 3-digit GeForces) and all ATI/AMD HD-series cards (HD-2xxx and above). The biggest gap this leaves is the pre-HD series: the 9xxx, Xxxx, and 1Xxxx lines. These are all DX9 cards, and there are still a fair number of them out there. The last drivers for these are somewhere between OK and terrible. So again, lots of testing.

In general, if your application is a game, you can just tell users "update your drivers," and you can expect them to not be using Intel for rendering. If it's more of an application, then you're more likely to have to deal with what they give you, though you can still ask them to update. Either way, you need to test your application on a wide variety of hardware.

Personally, I would say that you need to serve the needs of two groups, and trying to do both at the same time will only make one of them highly annoyed with you. You need a renderer that works on modern hardware (ie: stuff that's actually supported). All the shadow mapping in the world isn't going to make up for the lack of HDR, gamma correction, and other things that users expect from a program meant for modern GPUs.

You need a "baseline" renderer that works on your minimum-spec systems. Your baseline renderer can use fixed function, but it shouldn't be trying to do shadow mapping and the like. The idea here is to be minimally functional and look just decent enough to be OK. Basically, Quake-style rendering should be all you aim for at that level. OpenGL 1.1, maybe up to 1.4 and combiners. But that's it. No shadow mapping, plain vertex lighting, that's all.

devdept
02-15-2012, 03:13 PM
Thanks a lot Alfonse, I understand your point. Maybe we can live with cosmetic shadow maps on legacy GPUs and do an advanced, shader-based implementation on modern GPUs.

Do you know if there is any version number we can rely on to split the GPU world, or - as I am already convinced - can we not tell until we test it?

Thanks again,

Alberto

Gustavo R. A.
02-15-2012, 06:11 PM
I use gl3wIsSupported(3,3) for a similar problem. I am sure other extension loading libraries have similar functions.
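
For example, something along these lines (a sketch; assumes gl3w and an already-created context):

#include <GL/gl3w.h>

/* Sketch: choose the render path at startup (context must already exist). */
int useShaderPath = (gl3wInit() == 0) && gl3wIsSupported(3, 3);
/* if useShaderPath is 0, fall back to the fixed-function baseline renderer */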

Alfonse Reinheart
02-15-2012, 06:21 PM
Do you know if there is any version number we can rely on to split the GPU world, or - as I am already convinced - can we not tell until we test it?

What do you mean by that?

I would split it along the lines of what is supported. Right now, AMD supports all of their HD-class graphics cards. NVIDIA supports GeForce 6xxx and above. Using 2.1 would mean opening yourself up to AMD's pre-3.x hardware, which is no longer being supported (and hasn't been for a year or two).

So your legacy should be whatever minimally capable thing you want to support (say, 1.4), and your modern would be 3.3 or better. That leaves GeForce 6xxx+ users in an unfortunate situation, but at the same time, you'll be able to be a lot more confident that your shaders will actually work.

devdept
02-16-2012, 12:17 PM
Thanks Alfonse,

So you're saying that checking for an OpenGL version of 3.3 or higher gives you the confidence that shaders will work for sure? It would be a good starting point...

Devdept2
02-29-2012, 05:06 AM
Hi,

we found out that on a customer machine with an ATI HD 5800 and OpenGL 4.2 the shaders don't work as expected.

I think it could depend on some deprecated / removed functions that we use in our shaders.

So we thought of creating a context targeting OpenGL 3.3, but we discovered that a lot of the fixed-functionality stuff is no longer there.

Should we use the CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB with 3.3 to make it work until we fully upgrade our code to 3.3?
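
i.e. something along these lines on Windows (a sketch; wglCreateContextAttribsARB must already be loaded via wglGetProcAddress, and error handling is omitted):

/* Sketch: requesting a 3.3 compatibility-profile context (WGL). */
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hDC, NULL, attribs);   /* hDC: existing device context */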

mbentrup
02-29-2012, 06:51 AM
OpenGL 2.0 or higher guarantees that vertex and fragment shaders are available, OpenGL >= 3.2 guarantees that geometry shaders exist, and OpenGL >= 4.0 guarantees that tessellation shaders exist.

The core profile has many legacy APIs and some older GLSL versions removed, but the compatibility profile contains all functions back to OpenGL 1.1 (though an implementation is not required to support the compatibility profile; e.g. Apple supports only the core profile).

Devdept2
02-29-2012, 07:06 AM
Since we're not using geometry shaders, I wonder if restricting to OpenGL 3.3 as suggested by Alfonse isn't a bit too much.

Maybe we could stick with a 3.0 profile, especially if the compatibility profile is not guaranteed to exist.

Alfonse Reinheart
02-29-2012, 10:57 AM
Since we're not using geometry shaders, I wonder if restricting to OpenGL 3.3 as suggested by Alfonse isn't a bit too much.

You say that as though the only feature that 3.x offers is geometry shaders. There are also:

1: Integer textures and integer vertex attributes.
2: Transform feedback.
3: Uniform Buffer Objects.
4: Red/Green textures and RGTC.
5: Instanced rendering
6: Seamless cubemap filtering.
7: Multiple color output for blending (http://www.opengl.org/registry/specs/ARB/blend_func_extended.txt).


Maybe we could stick with a 3.0 profile, especially if the compatibility profile is not guaranteed to exist.

There's little real point to that. Any hardware that can run 3.0 can run 3.3. The only recent hardware you'll run into that's locked to lower than 3.3 is Intel stuff, but reliance on them is dubious at best anyway. Apple stopped at 3.2 core, but that's about it.

Also, the only platform lacking the compatibility profile is MacOSX.

Devdept2
03-01-2012, 01:05 AM
Hi,

we're not using any of those new features for now.

Also, a customer with an ATI Radeon HD 5800 is still having problems with shaders after we provided a DLL which used an OpenGL 3.0 context (wrong illumination and strange shadow effects).

So I guess we'll have to disable shaders again by default, until we get one of those "problematic" cards and analyze what's happening with them. :(