HDR, reproducing this in GL



V-man
01-16-2007, 11:52 AM
Hi, I would like to reproduce this in GL
http://www.daionet.gr.jp/~masa/rthdribl/index.html

Masa has made a GDC presentation about it, but it's too thin for me. The glare pattern he has generated is excellent. I want to reproduce the entire thing exactly as is, but in GL. Has anyone already tried it?

RigidBody
01-16-2007, 10:20 PM
Sorry for not answering your question... but I cannot resist posting some links, since I read about HDR photography in the German Spiegel ("mirror") a week ago. Basically you take three photographs of the same scene, one with a short, one with a medium and one with a long exposure time, and somehow put them together; the results are really amazing. The article is in German, but you can just look at the images (navigate by clicking on "zurück" (back) and "weiter" (forward)).

hdr images (http://www.spiegel.de/netzwelt/tech/0,1518,458050,00.html)

My favourite is this one: http://www.spiegel.de/img/0,1020,771075,00.jpg

Tzupy
01-17-2007, 09:57 AM
This looks MUCH better than Oblivion! I'd like to know how the three exposures were combined.

Zengar
01-17-2007, 10:06 AM
I like this effect a lot; it gives the images a nice "magic" touch, making them somehow optimistic and friendly :-) Like this:
http://www.spiegel.de/img/0,1020,771170,00.jpg

I would also like to know how exactly such an effect is achieved...

knackered
01-17-2007, 12:44 PM
It makes them look unrealistic.
The iris of your eye would never allow the full range to be seen at once, so to the brain these images just don't look right.

Zengar
01-17-2007, 01:02 PM
You are right :-) But they are still really beautiful

V-man
01-17-2007, 01:07 PM
It's a nice technique, probably used by people who shoot photos for magazines.
I guess the math would look something like this if you have a single image:

original + (some nonlinear scale * original) + ... = result

I'm still interested in doing a nice glow + glare pattern
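
A minimal sketch of that single-image idea, with made-up weights and exponents (this is not Masa's method, just an illustration of a nonlinear tone curve):

#include <algorithm>
#include <cmath>

// Add nonlinearly scaled copies of the original channel value back onto
// itself, then clamp to the displayable range.
float toneBoost(float original) // channel value in [0,1]
{
    float result = original
                 + 0.3f * std::pow(original, 2.0f) // boosts highlights
                 + 0.2f * std::sqrt(original);     // lifts shadows
    return std::min(result, 1.0f);
}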

flamz
01-17-2007, 05:59 PM
C'mon, these pictures look terrible. Straight out of a bad book of CG images attempting to look ultra-realistic.

Zulfiqar Malik
01-18-2007, 01:33 AM
ze artise in zou iz dead flamz :p . LOL ...
Anyway, they do look unrealistic, but nonetheless fantastic :) .

Overmind
01-18-2007, 03:00 AM
They look unrealistic because of bad tone mapping. The original photo is probably either float or RGBE, which of course cannot be displayed as-is on a normal computer screen.

Included as a texture in an HDR scene with proper tone mapping, it would look better, but you wouldn't see a difference in a static screenshot.

That's a general problem with HDR screenshots: you can't demonstrate changing exposure in a screenshot, so the only thing you can show is a few fancy effects that generally don't look very realistic :p

But producing realistic output is not really what this technique is trying to do. The point is producing realistic input ;)

k_szczech
01-18-2007, 03:32 AM
Zengar: I have no idea what kind of postprocessing effect is used on that picture, but I would start from a simple saturation shader. The point is more or less to increase the differences between red, green and blue without affecting luminance.
Your best bet would be precomputing a color map in a 3D texture (32^3, 64^3, or maybe even 128^3).
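
A minimal sketch of such a saturation shader, on the CPU for clarity; the Rec. 601 luma weights and the 1.5 boost factor are just example values:

struct RGB { float r, g, b; };

// Push each channel away from the pixel's luminance without changing
// the luminance itself.
RGB saturate(RGB c, float boost = 1.5f)
{
    float luma = 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
    return { luma + boost * (c.r - luma),
             luma + boost * (c.g - luma),
             luma + boost * (c.b - luma) };
}

Evaluating this once per cell of a 32^3 grid gives the 3D lookup texture mentioned above; the shader then does a single texture fetch per pixel.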

I'm rather a fan of realistic rendering, but I did enjoy those colorfully lit scenes in the original Unreal game. It all made that world look mysterious and alien, but also beautiful and magical.

Tzupy
01-18-2007, 04:04 AM
OK, I believe this is what they do (my German is kind of rusty):
1) Digital cameras deliver only about 1000:1 contrast in one picture, so they take 3 or more pictures with short, medium and long exposure times.
2) By increasing the contrast (sharpening?) in those 3+ pictures, more details are revealed, but I suppose this also adds visible noise. The HDR results are tonemapped back to standard range.
3) Blending the 3+ outputs of step 2 gives a final image that looks medium-exposed but retains details from all 3+ exposures, and maybe the noise levels don't add up visibly. It's unclear to me whether there's any weighting for the short, medium and long exposure times.

Korval
01-18-2007, 08:25 AM
Originally posted by Overmind:
They look unrealistic because of bad tone mapping. The original photo is probably either float or RGBE, which of course cannot be displayed as-is on a normal computer screen.

The image is a photograph, not a rendering.

M/\dm/\n
01-18-2007, 10:53 AM
I have used this sort of approach on pictures I take with my EOS 350D (Rebel XT), but it's not really an automatic process, especially if you want the pictures to look good.

The problem is that a camera does not act like the eye. With a camera you take the picture as a whole, and you save it as a whole. The eye, on the other hand, adjusts as you move your sight from one object to another (dark <-> light transitions).

To fix this you have to replace over/underexposed parts of the picture with the corresponding parts from another image where the exposure is OK. But I doubt it can be done automatically, as it's definitely not a linear transform, and most of the time it's you, a 50%-blend eraser and 2 layers of pictures in Photoshop.

You could easily write a program (exp- or log-function based, AFAIK) that fixes exposures if all areas were gradient-like, but as soon as there are bright objects in shadows, or the other way around, the method breaks down.

By the way, you only need 2 or 3 pictures if they are regular 8-bit ones, as banding becomes an issue (especially with JPEG compression). If you use RAW file formats with 12-16 bits per channel you can get by with one picture as well; I never shoot in non-RAW formats anymore.

V-man
01-18-2007, 01:07 PM
The human eye is not too different from a camera, but the brain does processing on the image. If there is too much glare in one eye and the second eye's view looks better, it uses that to reduce the glare. It does color adjustments, so you don't necessarily see the true color of something.
It obviously does regional brightness adjustment.
I'm sure all this can be programmed in software.
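
A minimal sketch of such a regional adjustment, in the spirit of local tone-mapping operators (the blurred-luminance input and the 0.05 bias are assumptions for illustration, not any particular published operator):

#include <cstddef>
#include <vector>

// Scale each pixel by the inverse of the average luminance in its
// neighbourhood. 'blurredLum' is the same image passed through any wide
// low-pass filter, e.g. a separable Gaussian.
std::vector<float> adaptLocally(const std::vector<float>& lum,
                                const std::vector<float>& blurredLum)
{
    std::vector<float> out(lum.size());
    for (std::size_t i = 0; i < lum.size(); ++i)
        out[i] = lum[i] / (blurredLum[i] + 0.05f); // bias avoids div-by-zero
    return out;
}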

I think the first pic RigidBody posted looks perfect. The building, though, looks saturated and terrible. If the technique is used well, it will yield good results.

Brolingstanz
01-18-2007, 01:55 PM
What's all the "unrealistic" business? I thought that nothing unreal exists :-)

I rather like the building, too. Has sort of a "toony" quality about it. (I half expect to see the likes of jtipton in one of the windows, sporting bright green tights with a big question mark on the chest.)

I think with photography it's all in the lens, exposure time and film-plane angles, unless you do retouching in a paint program. But it's amazing the images you can get from simple overexposure (galaxies, misty waterfalls, all sorts of dreamy-looking stuff). My brother built his own camera, one of those big old accordion-like jobs (he's from the old school), and it's amazing what he can do with it without any digital post-processing.

CJ Clark
01-18-2007, 07:31 PM
Hey guys, V-man's looking for info on the original topic, these comments should be a different thread...

Zulfiqar Malik
01-18-2007, 08:00 PM
For everybody's reference:
http://www.hdrsoft.com/

@V-man: I think you meant that the "camera" is not too different from the human eye :p .

Overmind
01-19-2007, 02:26 AM
Originally posted by Korval:
The image is a photograph, not a rendering.

You still need to use tonemapping to display the HDR data on the screen. It doesn't matter whether the data is rendered or produced by combining photos, you still have to fit the HDR data into 8 bits somehow ;)
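
A minimal sketch of that last step, using the simple Reinhard curve L/(1+L) as an example; any curve that compresses an unbounded range into [0,1] would do:

#include <cstdint>

// Map an unbounded HDR luminance into a displayable 8-bit value.
std::uint8_t tonemap(float hdrLum) // hdrLum >= 0
{
    float ldr = hdrLum / (1.0f + hdrLum); // [0,inf) -> [0,1)
    return static_cast<std::uint8_t>(ldr * 255.0f + 0.5f);
}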

knackered
01-19-2007, 05:41 AM
http://en.wikipedia.org/wiki/Tone_mapping

Humus
01-19-2007, 09:05 AM
Originally posted by Tzupy:
This looks MUCH better than Oblivion! I'd like to know how the three exposures were combined.

Photoshop CS2 has a "merge to HDR" option. There's also a free app called PhotoMatix Basic that does the same (and better, IMHO). What they do is simply reverse the exposure equation to compute the luminance in all pixels and then do a weighted average between the images. In the overexposed picture the darker regions are given higher weighting, and in the underexposed picture the brighter regions are given higher weighting. Summed together from a bunch of images (3 for normal scenes, maybe 5-7 for huge ranges), you get a good quality HDR image. Then the hard part is doing a reasonable tonemapping to get something that looks somewhat similar to how a human viewer would have seen the scene in real life.
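
A minimal sketch of the merge described above; the hat-shaped weight function and the data layout are illustrative guesses, not PhotoMatix's actual code:

#include <cmath>
#include <cstddef>
#include <vector>

struct Exposure {
    std::vector<float> pixels; // one channel, values in [0,1]
    float exposureTime;        // relative shutter time
};

// Weight peaks at mid-gray (0.5) and falls to zero at the clipped ends,
// so well-exposed pixels dominate the average.
float weight(float v) { return 1.0f - std::abs(2.0f * v - 1.0f); }

std::vector<float> mergeToHDR(const std::vector<Exposure>& shots)
{
    std::vector<float> hdr(shots[0].pixels.size(), 0.0f);
    for (std::size_t i = 0; i < hdr.size(); ++i) {
        float sum = 0.0f, wsum = 0.0f;
        for (const Exposure& e : shots) {
            float w = weight(e.pixels[i]);
            sum  += w * e.pixels[i] / e.exposureTime; // reverse the exposure
            wsum += w;
        }
        hdr[i] = wsum > 0.0f ? sum / wsum : 0.0f; // guard fully clipped pixels
    }
    return hdr;
}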

k_szczech
01-19-2007, 10:12 AM
We're way off the original topic here.

OK, if I were to implement an HDR effect like the one in the first link, I would do it like this:
1. basic bloom / tone mapping - the standard way

2. glare - apply additional filters to the blurred image. These filters blur the image in one direction only, to create different glare patterns. Note that each color component can use different filter values to produce a spectral/scattering effect. You can also use circles instead of lines to create some extra glow.

3. lens flare - apply an additional filter to the blurred image. Each point samples at multiple positions along the line that crosses that point and the center of the screen; sample points placed further from the center have smaller weights, so the flares fade out when you're not looking towards the light source (sketch below).
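
A minimal sketch of step 3, written on the CPU for clarity (in practice it would be a fragment shader); 'sample' stands in for a bilinear texture fetch, and the tap count and falloff are just example values:

struct Vec2 { float x, y; };

// March along the line through the pixel and the screen centre, sampling
// the blurred bright-pass image with weights that fall off along the line.
float lensFlare(Vec2 uv, float (*sample)(Vec2))
{
    const int  taps   = 8;
    const Vec2 centre = { 0.5f, 0.5f };
    Vec2 dir = { centre.x - uv.x, centre.y - uv.y }; // towards the centre

    float result = 0.0f;
    for (int i = 1; i <= taps; ++i) {
        float t = float(i) / taps;              // 0..1 along the march
        Vec2 p = { uv.x + dir.x * 2.0f * t,     // continues past the centre
                   uv.y + dir.y * 2.0f * t };
        result += sample(p) * (1.0f - t);       // fade with distance
    }
    return result / taps;
}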

Tzupy
01-19-2007, 12:14 PM
@Humus: I appreciate your answer. So the intermediate results are tonemapped only for on-screen display; the final blended image is still HDR prior to tonemapping to standard range.
About a year ago I wrote 256-shade to 6-shade conversion software for industrial ink-jet printers, using error diffusion; it works well but loses some contrast. I wondered if this HDR technique could be used to compensate.
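
For reference, a minimal sketch of such a conversion using Floyd-Steinberg error diffusion, assuming 6 evenly spaced output levels:

#include <cmath>
#include <vector>

// Quantize each pixel to the nearest of 6 shades and push the rounding
// error onto the unprocessed right and lower neighbours.
void errorDiffuse(std::vector<float>& img, int w, int h) // values in [0,1]
{
    const float levels = 5.0f; // 6 shades -> 5 intervals
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float old = img[y * w + x];
            float q = std::round(old * levels) / levels;
            img[y * w + x] = q;
            float err = old - q;
            if (x + 1 < w) img[y * w + x + 1] += err * 7.0f / 16.0f;
            if (y + 1 < h) {
                if (x > 0)     img[(y + 1) * w + x - 1] += err * 3.0f / 16.0f;
                               img[(y + 1) * w + x]     += err * 5.0f / 16.0f;
                if (x + 1 < w) img[(y + 1) * w + x + 1] += err * 1.0f / 16.0f;
            }
        }
}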

k_szczech
01-19-2007, 05:33 PM
Yeah, I'm an addict... Couldn't resist. ;)
5 hours of work, and here it is:
http://ks-media.ehost.pl/opengl_org/postprocess01s.jpg (http://ks-media.ehost.pl/opengl_org/postprocess01.jpg)
It looks ugly compared to that DirectX demo, but it runs 3 times faster. And it's poorly written - it could run faster or look better. Or both.
Cheers.

Edit: I just had to try it out with my game. Not very realistic but still interesting and a little magical:
http://ks-media.ehost.pl/opengl_org/postprocess02s.jpg (http://ks-media.ehost.pl/opengl_org/postprocess02.jpg) http://ks-media.ehost.pl/opengl_org/postprocess03s.jpg (http://ks-media.ehost.pl/opengl_org/postprocess03.jpg) http://ks-media.ehost.pl/opengl_org/postprocess04s.jpg (http://ks-media.ehost.pl/opengl_org/postprocess04.jpg)

I should probably thank V-man for inspiration ;)

V-man
01-20-2007, 03:32 PM
It looks ugly because you have things like trees and water. The link I gave just has spheres, and no one can argue those look ugly. Schmoove!
Well, the author has not responded yet :(

k_szczech
01-20-2007, 10:25 PM
Yes, I did mention it looks ugly :) What else can you expect from such experiments? The key was to see if I could apply such large patterns to my filters. I'll probably improve it in the future, and maybe then I'll use it somewhere.
For now, I just wanted to see if I could achieve something similar with a similar framerate.

I'm applying these patterns to a 4x downsampled and further blurred image. This allows the pattern's samples to be placed quite far apart from each other, creating large blurs with a good framerate.

I came up with an idea for a 3-4 pass jitter+blur. It would work something like this:
1st pass: sample 16 pixels apart +/-8 pixels jitter
2nd pass: sample 4 pixels apart +/-2 pixels jitter
3rd pass: sample 2 pixels apart - no jitter

It should give a fast, strong blur. I'll have to try it one day.
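
A minimal sketch of one such pass, on the CPU for clarity; 'sample' stands in for a texture fetch, and spacing/jitter correspond to the numbers above (e.g. 16 and 8 for the first pass):

#include <cstdlib>

// Average a 3x3 grid of widely spaced taps, offsetting each tap by a
// random jitter so the gaps between taps are hidden.
float jitterBlurPass(float x, float y, float spacing, float jitter,
                     float (*sample)(float, float))
{
    float sum = 0.0f;
    int taps = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            float jx = jitter * (2.0f * std::rand() / RAND_MAX - 1.0f);
            float jy = jitter * (2.0f * std::rand() / RAND_MAX - 1.0f);
            sum += sample(x + dx * spacing + jx, y + dy * spacing + jy);
            ++taps;
        }
    return sum / taps;
}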

kon
01-21-2007, 03:34 AM
V-man, I emailed the author two years ago for additional information and didn't get an answer...
Well, you can find a similar demo in the NVIDIA SDK
http://download.nvidia.com/developer/SDK/Individual_Samples/samples.html
named 'HDR with 2x FP16 MRTs'. It uses DirectX, but it shouldn't be very difficult to convert it to OpenGL! The User Guide has some information about how it is done.

HellKnight
01-26-2007, 12:56 AM
Here you go:

Recovering High Dynamic Range Radiance Maps from Photographs (http://www.debevec.org/Research/HDR/debevec-siggraph97.pdf)

It's not that simple, I warn you :)

Mikkel Gjoel
01-28-2007, 10:26 AM
V-man - check out these ATI presentations - probably quite similar, but I don't recall which one I saw :D

http://www2.ati.com/developer/ScenePostProcessing.pps
http://ati.amd.com/developer/gdce/Oat-ScenePostprocessing.pdf
http://ati.amd.com/developer/gdc/GDC2003_ScenePostprocessing.pdf