View Full Version : Multiple lights framework best practices advice
05-27-2012, 03:03 AM
Hi all. I am a pretty new to OpenGL development. I am working on a mini engine where I need to implement multiple light sources on demand, so that any object in the scene can receive one or more lights (or none) based on user setup. I am thinking about the best way to approach this. Doing it in a single pass inside the fragment shader does not look like the best way to go, since it seems I would run out of instructions for a large number of lights. I would also have to generate such a shader on the fly, with inputs varying depending on the number of lights currently active for the object. What is the industry-standard approach to designing such a system? Maybe multi-pass rendering, with each additional light written into some buffer and then blended? Can anybody point me to references or an explanation of how this is usually done?
I am using the programmable pipeline with OpenGL 3.3.
Thanks in advance.
05-27-2012, 05:34 AM
From what I have read, large numbers of lights are best handled with deferred rendering. You may want to look at this method.
05-28-2012, 04:51 AM
Thanks, I will definitely look into deferred rendering. Anybody else?
05-28-2012, 05:12 AM
tonyo_au is right: deferred rendering is most likely the way to go. There are a lot of variants you might want to look up, from a trivial Z-pre-pass, through deferred lighting and inferred lighting, to full deferred shading. Which variant fits depends on your exact needs.
05-28-2012, 06:38 PM
Thanks, I will definitely look into deferred rendering. Anybody else?
You didn't actually say how many light sources (max) you might need to support, what types of light sources they can be (directional, point omnidirectional, or point directional [with cone]), what the min/max screen coverage of those individual light sources might be, and what range of GPUs you're targeting. That's a critical piece of input.
Depending on your application's requirements, you probably don't want to go down the road of having to dynamically compile shaders (unless you use the somewhat standard game cheat of a "loading" screen that comes up every once in a while, locking the player out of gameplay while you, the developer, reorganize GPU memory, rebuild shaders, load assets from disk, etc.). However, even that doesn't decide between forward shading and deferred rendering techniques (and which).
Depending on your light source counts, types, and min/max coverage (and GPU performance/memory), it may be that all you need to do is use standard forward shading and have a shader precompiled to support, say, 10 light sources max. In cases where you have < 10 lights, you early-out. For more than 10 lights, you just run multiple passes with this same shader and blend the "lit" output results together in a lighting buffer.
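The CPU-side split described above can be sketched roughly like this. The `Light` struct and `MAX_LIGHTS_PER_PASS` constant are illustrative names, not anything from a real engine; each group would be uploaded as uniforms and rendered in one pass:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical light record; only the count matters for batching.
struct Light { float pos[3]; float color[3]; };

constexpr std::size_t MAX_LIGHTS_PER_PASS = 10;

// Split the active lights into groups of at most MAX_LIGHTS_PER_PASS.
// Each group is rendered in one pass with the same precompiled shader,
// and the passes are blended additively into the lighting buffer.
std::vector<std::vector<Light>> splitIntoPasses(const std::vector<Light>& lights)
{
    std::vector<std::vector<Light>> passes;
    for (std::size_t i = 0; i < lights.size(); i += MAX_LIGHTS_PER_PASS) {
        const std::size_t end = std::min(i + MAX_LIGHTS_PER_PASS, lights.size());
        passes.emplace_back(lights.begin() + i, lights.begin() + end);
    }
    return passes;
}
```

For example, 23 lights would produce three passes of 10, 10, and 3 lights; fewer than 10 lights produce a single pass.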
Note that while deferred rendering techniques have some great advantages (e.g. minimizing lighting fill, better efficiency for lighting lots of geometric features [triangle edge lighting/shading is inefficient with forward shading]), deferred requires special handling in areas that "just work" with forward shading (antialiasing, translucency rendering, etc.). So research well and choose wisely.
The typical reason why you might consider deferred rendering techniques over forward is that with forward and lots of light sources, you end up spending a ridiculous amount of GPU time and/or bandwidth applying lights to fragments that can't possibly be lit by those light sources (because it's otherwise just too expensive to avoid doing so with forward shading -- where you apply lighting to objects as you rasterize them). Deferred sort of flips the whole problem around and says: lighting "objects" causes lots of inefficiency? So stop it! Rasterize the [opaque] scene into fragments first, and then light the "fragments". This lets you tightly target the lights to only the areas of the screen where they are needed, and also to rebatch based on light source coverage to further minimize bandwidth.
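The bandwidth argument above can be illustrated with a toy count of per-fragment light evaluations on a 1-D "screen" (this is not rendering code; the names and numbers are made up for the sketch). Forward shading evaluates every light for every fragment, while a deferred pass can restrict each light to the fragments inside its radius of influence:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical point light with a finite radius of influence.
struct PointLight { float x; float radius; };

// Naive forward shading: every light is evaluated for every fragment.
std::size_t forwardCost(std::size_t numFragments, std::size_t numLights)
{
    return numFragments * numLights;
}

// Deferred-style shading: each light only touches fragments it can reach.
std::size_t deferredCost(const std::vector<float>& fragmentX,
                         const std::vector<PointLight>& lights)
{
    std::size_t cost = 0;
    for (const PointLight& l : lights)
        for (float x : fragmentX)
            if (std::fabs(x - l.x) <= l.radius)
                ++cost;
    return cost;
}
```

With 100 fragments and one small light of radius 5 centered in the middle, the forward count is 100 evaluations while the culled count is only 11 -- the gap widens as light count grows.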
So bottom line, to offer more specific advice tailored for your use cases, we need more info.
Hope this helps.
05-29-2012, 05:01 AM
Dark Photon, thanks for the extensive answer. Well, the idea is to do offscreen rendering with the final output of each frame going to a bitmap. I am not sure I will need hundreds of light sources. I read some articles about deferred rendering and it seems to be overkill for what I need. The multipass method you mentioned here looks more appealing to me. So do you mean I have to render each light pass to a lightmap texture, at the end blend those into a final texture, and then render it to a full-screen quad?
05-29-2012, 05:17 AM
Well, the idea is to do offscreen rendering with the final output of each frame going to a bitmap.
In that case, is performance a real concern at all? What is the use case here?
05-29-2012, 05:33 AM
The render speed is the concern. I mean, the rendering will be done on commercially hosted GPU farms, and the less time a render job takes, the less will be paid for farm usage. So it is important to render the final output fast. But what is also important to me here is the degree of implementation complexity. I don't want to write a complex solution that would be overkill for the task. All I need here is the ability to set some arbitrary number of point lights in the scene, plus an option to exclude any geometry from light influence. That is it.
05-29-2012, 06:26 AM
The multipass method you mentioned here looks more appealing to me. So do you mean I have to render each light pass to a lightmap texture, at the end blend those into a final texture, and then render it to a full-screen quad?
You could do that, and it is algorithmically roughly equivalent to what I'm suggesting, but it takes more memory (one texture per pass).
What I'm suggesting is the standard trick of using the GPU's built-in blending hardware to composite them on-the-fly (GL_BLEND/glBlendFunc/etc.), so you only need one render target and don't need a "composite the images" step at the end. That is: allocate a single "lighting buffer" render target (texture or renderbuffer) and clear it. Then each pass blends (ADDs) the lighting from that set of <= 10 lights to the lighting buffer.
You'll probably need a fast depth pre-pass first so that you can avoid adding color/radiance for any occluded fragments when you're rendering your lighting passes.
And of course allocate an HDR framebuffer if you need greater precision or dynamic range than is offered by standard RGB8. For instance, RGB16F or RGB32F.
An intermediate algorithm between what you and I were suggesting is: after you render each of your lighting images with a group of <= 10 lights, you use blending to ADD it to the master "lighting buffer". Then you don't need a depth pre-pass, and you don't need N images (where N is the number of lighting passes, i.e. (NUM_LIGHTS+9)/10), but you do need a separate "composite" step for each rendered 1..10-lights image.
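The pass-count expression above is just integer ceiling division; as a one-line sketch (the function name is illustrative):

```cpp
// Number of lighting passes when each pass handles at most 10 lights.
// (numLights + 9) / 10 rounds up in integer arithmetic, matching
// the (NUM_LIGHTS+9)/10 expression used in the text.
constexpr int numPasses(int numLights) { return (numLights + 9) / 10; }
```

So 1..10 lights take one pass, 11..20 take two, and so on.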
05-29-2012, 07:26 AM
ability to set some arbitrary number of point lights in the scene
If you want the renderer to be really scalable I'd definitely suggest a deferred approach.
option to exclude any geometry from light influence
No problem either way. Just batch the geometry that isn't supposed to be lit and render it after the lighting pass, or adapt your lighting shader(s) to output some base color and then early-out.
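The batching half of that suggestion amounts to partitioning the scene by a per-object flag before issuing draw calls. A minimal sketch, with `SceneObject` and `receivesLighting` as hypothetical names:

```cpp
#include <vector>

// Hypothetical per-object record; 'receivesLighting' is the user's
// "exclude from light influence" flag, inverted.
struct SceneObject { int id; bool receivesLighting; };

// Split the scene so unlit geometry can be drawn after the lighting
// passes (or with a trivial shader that skips the light loop entirely).
void partitionByLighting(const std::vector<SceneObject>& scene,
                         std::vector<SceneObject>& lit,
                         std::vector<SceneObject>& unlit)
{
    for (const SceneObject& obj : scene)
        (obj.receivesLighting ? lit : unlit).push_back(obj);
}
```

The lit batch goes through the multipass lighting path; the unlit batch is rendered once with a flat shader, which also avoids paying any lighting cost for excluded geometry.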
Powered by vBulletin® Version 4.2.3 Copyright © 2017 vBulletin Solutions, Inc. All rights reserved.