HDR, reproducing this in GL

Hi, I would like to reproduce this in GL
http://www.daionet.gr.jp/~masa/rthdribl/index.html

Masa has made a GDC presentation, but it’s too thin for me. The glare pattern he has generated is excellent. I want to reproduce the entire thing exactly as is, but in GL. Has anyone already tried it?

sorry for not answering your question… but i can’t resist posting some links, since i read about HDR photography in the German magazine Spiegel (“mirror”) a week ago. basically you take 3 photographs of the same scene, one with a short, one with a medium and one with a long exposure time, and somehow put them together; the results are really amazing. the article is in German, but you can just look at the images (navigate by clicking on “zurück” (back) and “weiter” (next)).

hdr images

my favourite is this one

This looks MUCH better than Oblivion! I’d like to know how the three exposures were combined.

I like this effect a lot, it gives the images a nice “magic” touch, making them somehow optimistic and friendly :slight_smile: Like this:

I would also like to know how exactly such an effect is achieved…

It makes them look unrealistic.
The iris of your eye would never allow the full range to be seen at once, so to the brain these images just don’t look right.

You are right :slight_smile: But they are still really beautiful

It’s a nice technique, probably used by people who make photos for magazines.
I guess the math would look something like this if you have a single image:

original + (some non-linear scale * original) + … = result
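
As a rough illustration of that formula, here is a minimal GLSL fragment-shader sketch; the texture name, the pow() curve and the 0.5 weight are made-up illustration values, not anything taken from those photos:

```glsl
// Sketch of "original + (non-linear scale * original)".
// uOriginal is the source image; squaring boosts already-bright
// pixels much more than dark ones.
uniform sampler2D uOriginal;

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    vec4 c  = texture2D(uOriginal, uv);

    vec4 boosted = pow(c, vec4(2.0));   // arbitrary non-linear scale
    gl_FragColor = c + 0.5 * boosted;   // arbitrary weight
}
```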

I’m still interested in doing a nice glow + glare pattern
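
For what it’s worth, glow pipelines of this kind usually start with a bright-pass, then downsample, blur and add the result back onto the scene. Here is a bright-pass sketch; uScene, uThreshold and the luminance weights are my assumptions, not Masa’s actual code:

```glsl
// Bright-pass sketch: keep only the energy above a threshold, as the
// first step of a typical glow pipeline (bright-pass -> downsample ->
// blur -> add back onto the scene).
uniform sampler2D uScene;   // HDR scene, e.g. a float/RGBA16F texture
uniform float uThreshold;   // tuning parameter, e.g. 1.0

void main()
{
    vec4 c = texture2D(uScene, gl_TexCoord[0].xy);
    float lum = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722));
    float scale = max(lum - uThreshold, 0.0) / max(lum, 1e-4);
    gl_FragColor = vec4(c.rgb * scale, 1.0);
}
```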

C’mon, these pictures look terrible. Straight out of a bad book of CG images attempting to look ultra-realistic.

ze artise in zou iz dead flamz :stuck_out_tongue: . LOL …
Anyway, they do look unrealistic, but nonetheless fantastic :slight_smile: .

They look unrealistic because of bad tone mapping. The original photo is probably either float or RGBE, which of course cannot be displayed as-is on a normal computer screen.

Included as a texture in an HDR scene with proper tone mapping it would look better, but you wouldn’t see a difference in a static screenshot.

That’s a general problem with HDR screenshots: you can’t demonstrate changing exposure in a screenshot, so all you can show is a few fancy effects that generally don’t look very realistic :stuck_out_tongue:

But producing realistic output is not really what this technique is trying to do. The point is producing realistic input :wink:
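
For reference, the “fit it onto the screen” step can be as simple as a global Reinhard-style curve. A minimal sketch, assuming the HDR scene is already in a float texture (the uniform names and exposure range are my choices):

```glsl
// Global Reinhard-style tone mapping: compresses [0, inf) HDR values
// into [0, 1) for an 8-bit display, then applies display gamma.
uniform sampler2D uHdrScene;   // float (or decoded RGBE) scene texture
uniform float uExposure;       // user parameter, e.g. 0.5 .. 4.0

void main()
{
    vec3 hdr = texture2D(uHdrScene, gl_TexCoord[0].xy).rgb * uExposure;
    vec3 ldr = hdr / (1.0 + hdr);   // simple Reinhard curve
    gl_FragColor = vec4(pow(ldr, vec3(1.0 / 2.2)), 1.0);
}
```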

Zengar: I have no idea what kind of postprocessing effect is used on that picture, but I would start with a simple saturation shader. The point is more or less to increase the differences between red, green and blue without affecting luminance.
Your best bet would be precomputing a color map in a 3D texture (32^3, 64^3, or maybe even 128^3).
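
Something like this, maybe; a minimal saturation sketch where the luminance weights and the trick of extrapolating past 1.0 are my own guesses at the effect. The same mapping could be baked into the 3D texture and applied with a single texture3D lookup:

```glsl
// Saturation sketch: push each channel away from the pixel's luminance
// without changing the luminance itself. uSaturation = 1.0 leaves the
// image unchanged; values > 1.0 exaggerate the R/G/B differences.
uniform sampler2D uImage;
uniform float uSaturation;

void main()
{
    vec3 c = texture2D(uImage, gl_TexCoord[0].xy).rgb;
    float lum = dot(c, vec3(0.2126, 0.7152, 0.0722));
    gl_FragColor = vec4(mix(vec3(lum), c, uSaturation), 1.0);
}
```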

I’m rather a fan of realistic rendering, but I did enjoy those colorfully lit scenes in the original Unreal game. It all made that world look mysterious and alien, but also beautiful and magical.

OK, I believe this is what they do (my German is kind of rusty):

  1. Digital cameras deliver only about 1000:1 contrast in one picture, so they take 3 or more pictures with short, medium and long exposure times.
  2. By increasing the contrast (sharpening?) in those 3+ pictures, more details are revealed, but I suppose this also adds visible noise. The HDR results are tone-mapped back to standard range.
  3. Blending the 3+ outputs of step 2 gives a final image that looks medium-exposed but retains details from all 3+ exposures, and maybe the noise levels don’t add up visibly. It’s unclear to me if there’s any weighting for the short, medium and long exposure times (see the sketch after this list).
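
Here is a sketch of one plausible weighting for step 3: each shot contributes according to how close its pixel is to mid-grey, so blown-out and crushed pixels are mostly ignored. The hat function and texture names are my assumptions, and real tools likely blend across scales to avoid seams:

```glsl
// Blend three differently exposed shots per pixel, weighting each
// sample by how well-exposed it is (a Gaussian hat around mid-grey).
uniform sampler2D uShort, uMedium, uLong;

float wellExposed(vec3 c)
{
    float lum = dot(c, vec3(0.2126, 0.7152, 0.0722));
    float d = lum - 0.5;
    return exp(-12.5 * d * d);   // ~0 near black/white, 1 at mid-grey
}

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    vec3 s = texture2D(uShort,  uv).rgb;
    vec3 m = texture2D(uMedium, uv).rgb;
    vec3 l = texture2D(uLong,   uv).rgb;

    float ws = wellExposed(s), wm = wellExposed(m), wl = wellExposed(l);
    gl_FragColor = vec4((ws * s + wm * m + wl * l)
                        / max(ws + wm + wl, 1e-4), 1.0);
}
```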

They look unrealistic because of bad tone mapping. The original photo is probably either float or RGBE, which of course cannot be displayed as-is on a normal computer screen.
The image is a photograph, not a rendering.

I have used this sort of approach on pictures I take with my EOS 350D (Rebel XT), but it’s not really an automatic process, especially if you want the pictures to look good.

The problem is that the camera does not act like the eye. With a camera you take the picture as a whole, and you save it as a whole. The eye, on the other hand, adjusts as you move your gaze from one object to another (dark <-> light transitions).

To fix this you have to replace over/underexposed parts of the picture with the corresponding parts from another image where the exposure is OK. But I doubt it can be done automatically, as it’s definitely not a linear transform; most of the time it’s you, a 50% blend eraser and 2 layers of pictures in Photoshop.

You could easily write a program (exp- or log-function based, AFAIK) that fixes exposures if all areas were gradient-like, but as soon as there are bright objects in shadows, or the other way around, the method breaks down.
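
That said, the manual two-layer eraser trick can at least be approximated per pixel. A sketch, with the smoothstep thresholds being pure guesses:

```glsl
// Where the base exposure is blown out, fade in a darker "rescue"
// shot; mimics erasing through to a second layer in Photoshop.
uniform sampler2D uBase;     // the exposure that is OK on average
uniform sampler2D uRescue;   // darker shot with intact highlights

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    vec3 base   = texture2D(uBase,   uv).rgb;
    vec3 rescue = texture2D(uRescue, uv).rgb;

    float lum = dot(base, vec3(0.2126, 0.7152, 0.0722));
    float blown = smoothstep(0.8, 1.0, lum);   // 0 = fine, 1 = blown out
    gl_FragColor = vec4(mix(base, rescue, blown), 1.0);
}
```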

By the way, you only need the 2 or 3 pictures because they are regular 8-bit ones, where banding becomes an issue (especially with JPEG compression). If you use RAW file formats with 12-16 bits per channel, you can get away with one picture as well; I never shoot in anything but RAW anymore.

The human eye is not too different from a camera, but the brain does processing on the image. If there is too much glare in one eye and the second eye sees better, the brain uses that to reduce the glare. It does color adjustments, so you don’t necessarily see the true color of something.
It obviously does regional brightness adjustment.
I’m sure all of this can be programmed in software.
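
A crude way to program the “regional brightness adjustment” part: normalise each pixel against a heavily blurred copy of the scene, so every region is exposed relative to its local average. This is only a sketch of the idea; the blurred input and the 0.5 target are assumptions:

```glsl
// Crude local adaptation: divide by the local average luminance taken
// from a strongly low-passed copy of the same scene (uBlurred is
// assumed to be produced by earlier blur passes).
uniform sampler2D uScene;
uniform sampler2D uBlurred;

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    vec3 c = texture2D(uScene, uv).rgb;
    float localAvg = dot(texture2D(uBlurred, uv).rgb,
                         vec3(0.2126, 0.7152, 0.0722));
    // Brighten dark regions and darken bright ones toward mid-grey.
    gl_FragColor = vec4(c * (0.5 / max(localAvg, 0.05)), 1.0);
}
```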

I think the first pic RigidBody posted looks perfect. The building looks oversaturated and terrible. If the technique is used well, it will yield good results.

What’s all the “unrealistic” business? I thought that nothing unreal exists :slight_smile:

I rather like the building, too. Has sort of a “toony” quality about it. (I half expect to see the likes of jtipton in one of the windows, sporting bright green tights with a big question mark on the chest.)

I think with photography it’s all in the lens, the exposure time and the film plane angles, unless you do retouching in a paint program. But it’s amazing the images you can get from simple overexposure (galaxies, misty waterfalls, all sorts of dreamy-looking stuff). My brother built his own camera, one of those big old accordion-like jobs (he’s from the old school), and it’s amazing what he can do with it, without any digital post-processing.

Hey guys, V-man’s looking for info on the original topic; these comments should be in a different thread…

For everybody’s reference:
http://www.hdrsoft.com/

@V-man: I think you meant that the “camera” is not too different from the human eye :stuck_out_tongue: .

The image is a photograph, not a rendering.
You still need to use tone mapping to display the HDR data on the screen. It doesn’t matter if the data was rendered or produced by combining photos; you still have to fit the HDR data into 8 bits somehow :wink:

http://en.wikipedia.org/wiki/Tone_mapping