Approximation to Global Illumination

Dear All,

I’m intending to implement an approximation to global illumination in a project I’m working on. To do this I intend to take “snapshots” of my scene from the top and from all sides, and to take the average of intensities in the scene to use as an approximation to the solution.

I’m confident with the mathematical side of it; however, there are some aspects on the OpenGL side I’m not so confident with, and it’s those I’m hoping you guys can help me with.

Can someone give me an overview of the steps needed to set up and render to five off-screen buffers so that I can pass them to my shaders for lookup purposes? I basically want to average the intensity over these framebuffers and pass the result as a global ambient value to my already implemented vertex shader.

Thanks guys

there are two methods

A/ render the scene into the main window from each side + copy that result into an already existing texture with glCopyTexSubImage2D( … )
B/ set up some FBOs (which act like rendering straight into a texture) + bind one + then render the scene + then bind another + render from a different direction. (check the FBO spec for an example of how to set them up)

(both methods are gonna perform about the same; #A is maybe quicker, but #B doesn’t have the restrictions of the screen size etc)

either way you ultimately have 5 textures, which you can use in your global illumination shader
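to make #B concrete, here’s a minimal sketch of one FBO with a colour texture + depth renderbuffer. this assumes the EXT_framebuffer_object extension and a loader like GLEW; the 256 size and RGBA8 format are just placeholders, so check the spec for the details:

```cpp
#include <GL/glew.h> // assumes GLEW (or any loader exposing EXT_framebuffer_object)

// Minimal sketch: one FBO rendering into a texture. Repeat for each of
// the five views (or reuse one FBO and swap the colour attachment).
GLuint createViewFBO(int size, GLuint* texOut)
{
    GLuint tex, depthRb, fbo;

    // colour texture the scene will be rendered into
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // depth renderbuffer so the scene depth-tests correctly
    glGenRenderbuffersEXT(1, &depthRb);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                             size, size);

    // attach both to the FBO
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depthRb);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
            != GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0; // incomplete: check formats/sizes

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window
    *texOut = tex;
    return fbo;
}
```

per view: bind the FBO, set the viewport, point the camera at one side, draw the scene, then unbind. bind the five resulting textures to five texture units and average them in your shader.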

did you consider sending a cube map to the shader?

This sounds interesting, can you point me to some resources (links) I can chew on?

I did this years ago (1996?).

It’s quite simple. You render a hemicube from each point on the object. You position the eye on the surface and orient it to the appropriate face of the hemicube using rotates on the modelview matrix. One face is square, the other 4 are rectangular. You can use viewport and an asymmetric frustum for those (0 - 45 degrees in y).

I recommend you create a single surface to draw to and render all 5 views (at the time I just used the backbuffer). I modulated these with a blended image draw for the tan theta pixel solid angle and the cos theta incidence modulation in hardware (you’d do this with a texture blit over all the images). You can then simply read everything back and sum the pixels for the total incidence tally; it’s a no-brainer.

So: single drawable surface, 5 viewports (1 square, 4 rectangles). Draw the hemicube with modelview orientation. Modulate for pixel solid angle, modulate for incidence angle at the pixel. Read back and tally totals.

Repeat for each point of calculation.
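To illustrate the viewport/frustum setup, a sketch from memory (this is not the original code; the near/far distances and the 256-pixel face size are placeholder values):

```cpp
// Hemicube projection sketch (fixed-function GL).
const double n = 0.1, f = 100.0; // placeholder near/far
const int s = 256;               // placeholder face resolution

glMatrixMode(GL_PROJECTION);

// Front (square) face: symmetric 90 degree frustum.
glViewport(0, 0, s, s);
glLoadIdentity();
glFrustum(-n, n, -n, n, n, f);
// ... orient the modelview along the surface normal, draw scene ...

// One of the four side (half) faces: frustum asymmetric in y,
// covering 0 to 45 degrees, giving an image s wide by s/2 high.
glViewport(s, 0, s, s / 2); // placement on the drawable is up to you
glLoadIdentity();
glFrustum(-n, n, 0.0, n, n, f);
// ... rotate the modelview 90 degrees toward this side, draw scene ...
```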

You can do the same thing for ambient occlusion calculations (drawing the self-occluders in black against a white screen clear, or better), but you can get away with a lot more.

Thx Dorbie, this sounds fairly simple in theory. But in practice, I’m not sure I can manage to come up with a workable solution from scratch. That said, I’d like to see a working sample and do some inverse engineering to implement it in my engine. If it’s not too much trouble, any chance you’ve got something to share with us?

tut tut, lazy.

For some reason, I saw that one coming… Sorry to disappoint you; I do have many weaknesses, but laziness is not one of them. This is a really interesting topic. I’m curious, but I’m no coder Jedi.

bollocks, you read dorbie’s reply and your first response was not to discuss it further, it was to ask for source. That’s not curiosity, that’s laziness.
At least have the balls to admit it.
Evidence:-
In response to zed’s post:-

did you consider sending a cube map to the shader? This sounds interesting, can you point me to some resources (links) I can chew on?
I think we all know what you mean by ‘resources’.

In response to dorbie’s clear explanation of his algorithm:-

in practice, I’m not sure I can manage to come up with a workable solution from scratch. That said, I’d like to see a working sample and do some inverse engineering to implement it in my engine.

If you want to use the laziness definition, be my guest. Quote: “Laziness is the foundation of effectiveness.” Plus, this is not a high priority subject for me right now, and it is clearly beyond my expertise. If there is a possibility to implement this in real time, I’m interested to take a look at it. I can’t afford to spend precious development time on this at the moment. Few people can; I can’t. I’m not sure what is not clear to you here and why you insist on this… One thing is for sure: whether I’m lazy or not, your comment is absolutely irrelevant to this thread.

You don’t get something for nothing. If you’re not prepared to meet someone halfway by at least showing you’ve made some effort towards understanding a subject somebody else has put actual work into, then you don’t deserve any help, let alone drop-in source code.
This is very relevant to this thread - there were a couple of descriptions of different approaches which you have failed to discuss further, even though doing so would have contributed to the value of the forum’s knowledge base - you just asked for source code. Calling you lazy is being kind - I can think of many other words that would be nearer the mark.

There is a demo called Dynamic Ambient Occlusion:

http://http.download.nvidia.com/developer/SDK/Individual_Samples/3dgraphics_samples.html

Thx V-vam, that’s all I needed. I’ll take a look at it and then I might have something to talk about!
Regarding knackered, it seems like a national security code. I do understand his point and where he’s coming from, but seriously, can someone bring him a drink? I think the man is dry!

Yes that’s right golgoth, it’s my problem not yours.
Good luck claiming credit for other people’s work. Personally I haven’t the stomach for it.

Very nice read here:
http://graphics.cs.ucf.edu/GPUassistedGI/GPUGISubmission.pdf

Cheers

Well, at this point I do agree with you! Taking credit for other people’s work stinks. The scenario you have in mind is to take a product, scratch off the company name, put yours on it and say, look what I’ve done. Some people are smarter; car manufacturers, for instance, take other companies’ cars, strip them into 1000s of pieces and learn from the process. That’s inverse engineering. Welcome to the human race!

I’m no mathematician crackhead, but I’ll take a look at it, Flavious! Thx very much.

Originally posted by Golgoth:
I’m no mathematician crackhead, but I’ll take a look at it, Flavious! Thx very much.
In other words: “yeah yeah flavious, whatever, thanks but I’ve got my link to source code, so I don’t need to actually understand it.”

If you’re asking people on this forum for source, you’re either asking them to give you source code technically owned by their employers, or asking them to hand you on a plate stuff they’ve worked hard on through their own research/spare time.
Either way, the least you owe is to engage them in discussion about the ideas and methods behind the proposed source code.

By the way, I don’t think you understand what reverse engineering is (or ‘inverse engineering’ as you put it). Are you confusing it with the action of hitting CTRL+C and CTRL+V?
True reverse engineering of software is precisely not having the source code. If you’ve got the source code you don’t have to reverse engineer it, brainiac!
To use your car example, it’s like Nissan giving away their CAD designs, material specifications, manufacturer’s address and VAT number.
Welcome to the market economy, golgoth.

Alright, alright, I got your point knackered, what do you want me to say… It’s just a question of what comes first… I’ll do my homework and ask questions when I know what I’m talking about. Gees, no need to burn me for that. If you see evil everywhere, that’s your problem. My apologies to those who have read this. I’m out.

That code was written for an SGI workstation and used IRIS GL callbacks and a scene graph API called Performer, AND I don’t have the code.

Any cubemap rendering demo should be hackable for you.

Here is something else I wrote that has the object space transformations for cubemap rendering; they are shown here as premultiplications on a modelview matrix class. You would transform the viewpoint to the surface sample point and orient along the normal vector:

http://www.sgi.com/products/software/performer/brew/envmap.html

In this case it’s OpenGL and Performer code, so it still won’t work for you as-is, but you should get the general idea for your own OpenGL code.
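The gist, sketched from memory in plain OpenGL (again, not the actual code; the up vector choice is an assumption):

```cpp
#include <GL/glu.h>

// Orient the modelview for one cube/hemicube face, then move the eye
// to the surface sample point, looking along the normal.
// Face 0 looks straight out; faces 1-4 are 90 degree rotations.
void orientFace(int face,
                double px, double py, double pz,  // sample point
                double nx, double ny, double nz)  // surface normal
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // premultiplied per-face rotation
    switch (face) {
    case 1: glRotated( 90.0, 0.0, 1.0, 0.0); break; // right
    case 2: glRotated(-90.0, 0.0, 1.0, 0.0); break; // left
    case 3: glRotated( 90.0, 1.0, 0.0, 0.0); break; // up
    case 4: glRotated(-90.0, 1.0, 0.0, 0.0); break; // down
    default: break;                                 // 0: straight out
    }

    // eye at the sample point, looking along the normal; the up vector
    // is arbitrary but must not be parallel to the normal
    gluLookAt(px, py, pz,
              px + nx, py + ny, pz + nz,
              0.0, 0.0, 1.0);
}
```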

FYI, you should learn when not to rise to knackered’s bait. He’s a smart guy, but he prefers to be critical of people asking for too much help (as he sees it) rather than offer that help. I remember what it was like starting out in graphics, up to my neck in new concepts and strange code, so I have no problem with you asking for sample code; I just don’t have any to offer :)

Build your code in small steps: render the main cube face of the hemicube first, oriented with the local surface normal; next add the other hemicube viewports & faces with their orientations; then do the hemispherical modulation for cos theta (you can draw a dome in eye space with vertex color or alpha); then apply the pixel solid angle modulation with a simple 2D textured quad; next read back and total; finally march the whole shebang over your surfaces to sample incident light across them.

Take small steps and verify each is functionally correct before proceeding. If you write a whole bunch of code to do this all at once you’ll end up with a debugging nightmare, especially with the transformation stack required for the hemicube faces.
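For the final readback-and-total step, something along these lines (buffer dimensions are placeholders; fold your solid angle and incidence weights in before or during the sum):

```cpp
#include <vector>
#include <GL/gl.h>

// Read back the rendered hemicube faces and tally the pixels.
// w and h span the region holding all five faces.
double tallyIncidence(int w, int h)
{
    std::vector<GLubyte> pixels(size_t(w) * h * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly packed RGB rows
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

    double total = 0.0;
    for (size_t i = 0; i < pixels.size(); i += 3) {
        // plain sum of channels; weight per channel as you see fit
        total += pixels[i] + pixels[i + 1] + pixels[i + 2];
    }
    return total; // normalise by pixel count / solid angle as needed
}
```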

Good luck.

Thx Dorbie, this is refreshing after all the heat. However, I’m still grinding through lighting rendering logic (does it ever stop? ;) ) and through the multiple off-screen rendering pipeline with FBOs… I’ve been experimenting a lot but I’m not ready yet… I still have so many question marks that I wouldn’t know where to start at this point. So, I still have heavy work to do, and then I’m going to get my hands dirty with this. Hopefully I’ll be able to bring more meat to the table next time. Thx again!

Regards,

You only need one offscreen buffer; use multiple viewports and scissor for the hemicube faces.
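Per face, something like this (the region coordinates are placeholders), so each clear stays inside its own face:

```cpp
// confine both drawing and clearing to this face's region
int x = 0, y = 0, w = 256, h = 128; // placeholder face region
glEnable(GL_SCISSOR_TEST);
glViewport(x, y, w, h);
glScissor(x, y, w, h);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the scene for this face ...
glDisable(GL_SCISSOR_TEST);
```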

Just to get the ball rolling, start by drawing to the backbuffer. When I was coding this I put a swapbuffers call in there after each hemicube so I could see the scene being drawn; pretty cool looking stuff. If you abut the four half faces correctly you can also use the same solid angle pixel modulation (just FYI).

P.S. when I first did this it was actually ambient occlusion I calculated, because my first pass had a huge area illuminator (the sky); of course back then nobody (AFAIK) called it ambient occlusion. That can get annoying when someone asks you years later if you know what “ambient occlusion” is, as if buzzword bingo counts for ****.