3rd Year CS Student Project Problem

Hi All,

I have been assigned a challenging project for my 3rd year individual project and I have some questions about it.

My challenge is to render a scene that contains a very large number of lampposts (hundreds).

Now I have a question regarding this. According to what I’ve read, OpenGL’s fixed-function pipeline only supports 8 lights.

Can someone point me to the right kind of topic I need to look at so I can realise this project?

if the lampposts are all static then you can bake all the lights into one texture.

or

do multipass rendering, where each pass renders a few lights at a time.

or

use some kind of monster shader and simulate all the lights at once; it is possible if you have the right hardware.

But for more than 100 lights the common approach would be using a lightmap texture.
Besides, for 100+ lights you shouldn’t use OpenGL’s built-in lights; you need some kind of shader.

I had read something about subdivision rendering. Is this also a possible way? Or is this the multipass rendering you mentioned?

Assuming the lights are distance-attenuated, it is unlikely that more than 8 lights are needed for each object. What you should do is determine the 8 most significant (closest or brightest) light sources for each object and reuse the 8 (or fewer) lights.

This is probably the easiest approach.
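To make the “pick the N most significant lights per object” idea concrete, here’s a minimal sketch in plain Python. The function name and the inverse-distance significance measure are my own invention; a real version might also weight by intensity.

```python
import math

def pick_lights(object_pos, lights, max_lights=8):
    """Return up to max_lights light positions, nearest first.

    A toy stand-in for choosing which GL_LIGHTs to enable per object;
    'significance' here is just distance (closest = strongest under
    distance attenuation).
    """
    def dist(light_pos):
        return math.dist(object_pos, light_pos)
    return sorted(lights, key=dist)[:max_lights]

# Example: a bench at the origin surrounded by a row of lampposts
lamps = [(float(x), 5.0, 0.0) for x in range(20)]
nearest = pick_lights((0.0, 0.0, 0.0), lamps)
```

You would run this per object (or per group of objects) each frame, then bind only the selected lights before drawing that object.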

Generally, I would support zeoverlord’s suggestions #1 and #2.

What kind of approach you use strongly depends on features you’re trying to implement. If you just want lots of static lights then lightmaps are perfect.
If you want some dynamic lights, then you use lightmaps for static lights and some other technique for dynamic lights. Most games today combine lightmaps with dynamic lights based on shaders.

Tell us more about your project - what kind of light sources will you have, what kind of scene will it be and so on.
We can then suggest the single best solution.

It’s supposed to be an urban scene - just a bunch of buildings.

The actual project brief is real time illumination of an urban scene featuring 500 lampposts. These lampposts will be like spot lights - they aren’t “open” and will only shine light downwards.
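Since the lampposts only shine downwards, each one is effectively a spotlight with a vertical axis, and the useful test is whether a point falls inside that cone. A sketch of the containment test in plain Python (the cutoff half-angle is a made-up parameter, analogous to GL_SPOT_CUTOFF):

```python
import math

def lit_by_lamppost(point, lamp_pos, cutoff_deg=30.0):
    """True if 'point' falls inside the lamp's downward light cone.

    The lamp shines straight down (0, -1, 0); cutoff_deg is the
    half-angle of the cone.
    """
    dx = point[0] - lamp_pos[0]
    dy = point[1] - lamp_pos[1]
    dz = point[2] - lamp_pos[2]
    length = math.sqrt(dx*dx + dy*dy + dz*dz)
    if length == 0.0 or dy >= 0.0:   # at the lamp, or above it
        return False
    # cosine of the angle between (dx,dy,dz) and straight down (0,-1,0)
    cos_angle = -dy / length
    return cos_angle >= math.cos(math.radians(cutoff_deg))
```

The same test is what lets you cull lights per object or per scene region: anything outside every cone receives no lamppost light at all.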

There will also be certain objects in the scene such as benches which need to be near lights and therefore cast shadows. I’ve been looking into both shadow mapping and shadow volumes.

My tutor had mentioned something about subdividing and rendering a set of lights and its lighting for each subdivision but I’m having a hard time finding papers to read up about that.

She also wants me, if I complete it in time, to add a couple of moving cars to the scene, so the lighting/shadowing will change when a car passes under a lamppost. Thanks for your help so far. Any more you can give would be great.

Wow, just wow, what a task.
It’s definitely doable, the first part can be done with method #1.

Though if you want to add the second part then you have to do method #2, and that takes a lot more processing power, and I don’t know if it’s even possible to do 500 shadow-mapped lights on today’s hardware (without having an FPS of 0.001).
Unless you cheat that is.

step 1: shadowmap everything static with a bunch of shadow maps

step 2: render textures to the framebuffer and lightmaps into a separate FBO.

step 3: all dynamic content(cars) should cast a simple drop shadow on the lightmap FBO to give the illusion of shadows.

step 4: all dynamic content should be lit separately (i.e. not with lightmaps, but with shaders or built-in lighting) by the 3-5 closest lights.

step 5: finally merge the lightmap FBO with the framebuffer using multiplicative blending.
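For step 5, multiplicative blending just means each framebuffer pixel gets scaled by the corresponding lightmap pixel. A one-pixel sketch in plain Python (in GL the same effect comes from blending the lightmap over the framebuffer with glBlendFunc(GL_DST_COLOR, GL_ZERO)):

```python
def modulate(base_rgb, light_rgb):
    """Multiplicative blending for one pixel.

    Both inputs are (r, g, b) tuples in [0, 1]; the result is the
    base colour scaled by the accumulated lighting, which is what
    step 5 leaves in the framebuffer.
    """
    return tuple(b * l for b, l in zip(base_rgb, light_rgb))
```

So a fully lit pixel (lightmap = 1,1,1) keeps its texture colour, and an unlit pixel (lightmap = 0,0,0) goes black.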

And BTW, the subdivision rendering your teacher was talking about is probably just the method of subdividing geometry so that OpenGL’s built-in lights will render more smoothly; with lightmaps or per-pixel lighting shaders this won’t be needed.

For 500 lights, I think something like deferred shading would start to make sense. Although I’m not sure how this method interacts with dynamic shadow algorithms.

the subdivision rendering your teacher was talking about is probably just the method of subdividing geometry
I don’t agree - I think what the tutor means is to subdivide the scenery into small areas - each one receiving only a few lights.
This will be the most important for performance.

I think something like deferred shading would start to make sense
Again, I don’t agree. Deferred shading would help if multiple lights applied to the same polygons, but in this case we would have an average of 2-3 lights or fewer. Besides, in deferred shading the shader in the lighting pass would have to be aware of all 500 lights, or you would be forced to render a fullscreen quad multiple times.

Basically zeoverlord’s suggestion is good (and fast), but I would suggest not using any FBO or render-to-texture stuff. This should be enough:

  1. render entire scene using base texture in unit #0 and lightmap in unit #1
  2. render simple shadows under cars - use some polygon offset to avoid z-fighting but also set depth mask to GL_FALSE

This way you do not need to subdivide the scene, but you also don’t get realistic shadows from dynamic objects.
If you want to use true dynamic shadows, then I see no other way than subdividing the scene.

How to subdivide depends on a few things:
-does scene have moving lights?
-does scene have moving objects?
-is lighting model diffuse only or diffuse + specular?
-is specular lighting attenuated the same way as diffuse, differently, or not at all? (in the real world specular lighting is nearly unattenuated, but not attenuating specular lighting would be a nightmare to render)
-does intensity and color of lights change?
-are there any special light sources (ambient lighting, sunlight, moonlight)?

The problem is that you must make your decision now, since it will be very difficult to add features later - most lighting features require a certain design of the rendering engine.
If you use lightmaps then you will have to spend some time on an algorithm to lay out the lightmaps on the scene and to calculate their contents, and you will be limited to static lights and objects.
If you want dynamic lights, moving objects and proper shadows, then lightmaps are of almost no use to you (however, you could still use them in areas that are static at the moment).

Review the questions above and think of features you want. It all can be done, but some of them will require completely different approaches.

Again, I don’t agree. Deferred shading would help if multiple lights applied to the same polygons, but in this case we would have an average of 2-3 lights or fewer. Besides, in deferred shading the shader in the lighting pass would have to be aware of all 500 lights, or you would be forced to render a fullscreen quad multiple times.
I think you misunderstood the idea of deferred shading. The deferred shading shader only needs to be aware of one light, like in a standard multipass algorithm.

But you don’t need to render all geometry 500 times. You just render a cone for each light that covers exactly the screen-space area that can be affected by the light.

So deferred shading is optimal for a huge number of lights each only covering a small portion of the screen. The render time of this algorithm depends only on the number of lit fragments. It doesn’t matter how many lights you have, as long as they don’t overlap too much.
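The reason each light’s cone can stay small is that distance attenuation gives every light a finite effective radius. A sketch of computing that radius from the standard fixed-function attenuation model, intensity / (c + l·d + q·d²) - the threshold eps and the coefficients here are arbitrary example values:

```python
import math

def light_radius(intensity, eps=0.01, c=1.0, l=0.0, q=0.05):
    """Distance at which a light's contribution drops below 'eps'.

    Solves c + l*d + q*d^2 = intensity / eps for d. The bounding
    cone/sphere drawn in the deferred lighting pass only needs to
    reach this far.
    """
    target = intensity / eps - c
    if q == 0.0:
        return target / l if l > 0.0 else float('inf')
    disc = l * l + 4.0 * q * target
    return (-l + math.sqrt(disc)) / (2.0 * q)
```

Anything beyond that distance receives less than eps of the light’s intensity, so it can be clipped out of the light’s volume entirely.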

But you don’t need to render all geometry 500 times
I know. I mentioned “fullscreen quads” which was my mistake - you actually need to render a set of 2D cones that cover the area affected by light sources they represent.

I think you misunderstood the idea of deferred shading. The deferred shading shader only needs to be aware of one light
Well, if someone wants to use displacement mapping with multiple lights then deferred shading is a great speed-up, since you only need to do the expensive displacement once.
For simpler scenes with no normalmaps and complex shaders it does not add much speed. It will save geometry processing, but will (in this case) increase memory bandwidth consumption. Here is why:
-when you look at the ground from a 1st-person view, the area covered by polygons that receive light is much smaller than the light cone projected to screen for use with deferred shading
-in deferred shading, for each pack of light sources processed in a single pass you have to access at least two floating-point textures with stored values to get position, normal and color
-part of the pixels rendered directly in 3D can be zkipped by early z test (hehe, “zkipped”?) (Edit: OK, that’s not true - you can use the early z test in deferred shading, but you can’t use culling - imagine a wall between you and the light source: you don’t render either wall, because one is facing away from the observer and the other is facing away from the light source)
One more fact that speaks against using deferred shading is that this is a beginners’ forum :)

Well, I just read one of previous posts again:

The actual project brief is real time illumination of an urban scene

which need to be near lights and therefore cast shadows
Yeah, “real time”… I’m beginning to feel an urge to create such an application myself. Guess I’m an addict :)
I’ll just assume that:
-lights do not move, but can change color (flickering lights)
-there are some moving objects but not too many
-there is no specular lighting
-there is ambient lighting

This is how I would do it:

  1. each light has a bounding cone
  2. each light has its own shadow map (256x256x16 gives 128KB per light) - since there is little distance between the light and the ground, you could also use 8-bit depth precision. To accomplish this you would have to use alpha textures instead of depth textures and use the alpha test instead of the depth test.
  3. each light has a list of polygons that are in its range

Now, when rendering, you first need to generate the shadowmaps. For each light you should store a boolean value that says whether its shadowmap is valid. Initially, all are invalid.

  1. If there is a moving object inside or partially inside a light’s bounding cone, then the light’s shadowmap becomes invalid
  2. If a shadowmap is invalid and the light’s bounding cone is in the view frustum, then render all polygons in the light’s list and all moving objects inside the light’s cone to the shadowmap. Validate the shadowmap.
  3. If the number of shadowmaps you have updated in the current frame is very small, then update some of the invalid shadowmaps that are currently out of view - this is not a must, but will help to prevent mass shadowmap updates.
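The bookkeeping in those three steps can be sketched as pure logic, independent of any GL code. The data layout below is invented for illustration, and “rendering” a map is reduced to flipping its valid flag:

```python
def update_shadowmaps(lights, moving_objects, visible, budget=4):
    """One frame of shadow-map bookkeeping.

    lights: dict name -> {'valid': bool}
    moving_objects: set of light names whose cone contains a mover
    visible: set of light names whose cone is in the view frustum
    budget: cap on how many maps we refresh per frame
    Returns the list of lights whose maps were re-rendered.
    """
    updated = []
    # step 1: movers invalidate the maps of the cones they touch
    for name in moving_objects:
        lights[name]['valid'] = False
    # step 2: refresh every invalid map that is actually on screen
    for name, light in lights.items():
        if not light['valid'] and name in visible:
            light['valid'] = True          # stands in for re-rendering
            updated.append(name)
    # step 3: with spare budget, refresh some off-screen maps too,
    # to spread the cost and avoid mass updates later
    for name, light in lights.items():
        if len(updated) >= budget:
            break
        if not light['valid']:
            light['valid'] = True
            updated.append(name)
    return updated
```

The point of the scheme is that static lights with no movers nearby keep their maps from frame to frame, so the per-frame cost is proportional to the number of disturbed lights, not to all 500.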

With a set of valid shadowmaps you can do the rendering:

  1. Render everything that is inside view frustum using ambient lighting, but no textures yet.
  2. For each light source that has its bounding cone at least partially in the view frustum, render all polygons from its list and all moving objects in its range using this light’s shadowmap - add new lighting to the scene
  3. Render entire scene again - use textures now and multiply them with the framebuffer

As for the #2 - you could use standard OpenGL diffuse lighting if your geometry doesn’t have large polygons (polygons should be smaller than their distance to the light for at least average results). Otherwise you would have to do per-pixel lighting. You could use a texture for this, but I’ll leave that for later.
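Put together, those three passes leave the framebuffer holding texture × (ambient + sum of lights), clamped. A single-channel sketch in plain Python:

```python
def shade(texture, ambient, light_contribs):
    """Per-channel result of the three passes:

    pass 1 writes the ambient term, pass 2 adds each light
    (additive blending), pass 3 multiplies by the base texture -
    so the framebuffer ends up as texture * (ambient + sum of
    lights), clamped to 1.
    """
    lit = min(1.0, ambient + sum(light_contribs))
    return texture * lit
```

Doing the texture multiply last is what lets the lighting passes accumulate in the framebuffer without the texture colour being counted more than once.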

k_szczech

The lights will be the same intensity all the time. No flickering here :) The lights will not move in any way, shape or form.

How does the difference between diffuse or diffuse+specular alter the way of subdivision?

Any pointers to good material on subdividing?

The lights will be the same intensity all the time. No flickering here. The lights will not move in any way, shape or form.
So much for “realtime”… :)
If your lighting is 100% static, then you can use lightmaps, as mentioned before. There is no better solution for static lighting, I suppose. The lightmaps can also include shadows. You also do not need any subdivision for lightmaps. However, this will not be “realtime” :)

How does the difference between diffuse or diffuse+specular alter the way of subdivision?
Well - specular lighting appears on shiny surfaces. Imagine that you place a shiny object somewhere far away from a light source - it will receive so little diffuse lighting that it will be black, but you can still see light reflected in it. That means that with specular lighting a larger number of polygons will be affected by a light source than with diffuse lighting alone.
I’ve found these two screenshots:
(Edit:
http://www.ixbt.com/video2/gffx-ref-p2.shtml
you must open this webpage to be able to view these screenshots since the server forbids direct access to the images. After the page is opened images will be accessible using direct links)

  1. Diffuse lighting
    http://www.ixbt.com/video2/images/dx9-syn/ps2-1-b.jpg
    You can see that at bottom of image lighting is pretty weak. A bit further away and it would be gone completely.
  2. Diffuse + specular
    http://www.ixbt.com/video2/images/dx9-syn/ps2-2-b.jpg
    Reflected light is strong even at the bottom of screen. Even if you move the camera away the specular lighting at pixels near camera will remain strong.
    Realistic specular lighting, especially with shadows can cause a lot of performance problems. This is why I assumed you won’t be using it.
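A small numeric sketch of why specular reaches further (plain Python, Blinn-Phong-style terms with arbitrary example coefficients): at distance 20 the attenuated diffuse term is down to about 1/21 ≈ 0.05, but a well-aligned highlight (N·H = 0.99, shininess 64) still contributes about 0.53, because it is not attenuated at all.

```python
def diffuse_intensity(d, q=0.05):
    """Distance-attenuated diffuse term (N.L assumed to be 1)."""
    return 1.0 / (1.0 + q * d * d)

def specular_intensity(n_dot_h, shininess=64):
    """Unattenuated Blinn-Phong-style highlight: depends only on
    the view/light geometry, not on distance."""
    return n_dot_h ** shininess
```

So the set of polygons where the diffuse term matters shrinks quickly with distance, while the set where a highlight can appear does not - which is exactly why specular lighting breaks distance-based subdivision.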

Any pointers to good material on subdividing?
You only need that for realtime lighting. There are some general algorithms, but you should choose the approach that fits your scene best. From my previous post:

  1. each light has a list of polygons that are in its range
    This is actually the way to subdivide the scene that I would suggest.
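Building those per-light lists is straightforward if each polygon carries a bounding sphere - a sketch in plain Python (the single light range used here is a made-up constant; it could instead come from each light’s attenuation radius):

```python
import math

def build_light_lists(lights, polygons, light_range):
    """Assign each polygon to every light whose range it falls in.

    lights: list of (x, y, z) positions; polygons: list of
    (centre, radius) bounding spheres. Returns one list of polygon
    indices per light - the per-light lists described above.
    """
    lists = [[] for _ in lights]
    for li, lpos in enumerate(lights):
        for pi, (centre, radius) in enumerate(polygons):
            if math.dist(lpos, centre) <= light_range + radius:
                lists[li].append(pi)
    return lists
```

This is a one-time precomputation for static geometry; at render time each light pass simply iterates over its own (short) list instead of the whole scene.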