HDR Rendering

Hi

I understand the basics of HDR rendering, and it seems to be a really nice technique.

But there is one thing which I don't know how to do:

After rendering you certainly want to scale the image into a specific range. However, to do that you need to know the average brightness of the frame. How would one get that information?

A glReadPixels call, followed by calculating it on the CPU, would certainly be possible, but not efficient. So how would one (i.e. Valve) do that?

Thanks,
Jan.

AFAIK there's no standard way. Readback would be one option, but you may also have knowledge of the scene that helps. In theory you could build a whole adaptation model in there to slowly adjust the exposure as you move from light to shadow. How that's driven could be readback, database entries ("this BSP leaf is indoors", "this BSP leaf is outdoors", "it is night time", etc.), or just some approximation based on what you know of the scene at the application level.
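For example, a minimal sketch of such an adaptation model (my own guess at a plausible shape, not how any particular engine does it):

```
#include <math.h>

// Move the current exposure a fraction of the way toward a target
// each frame. The target could come from readback, from database
// entries like "this BSP leaf is indoors", or from time of day.
float adaptExposure(float exposure, float target, float dt)
{
    const float adaptRate = 1.5f;  // 1/seconds; tune for an eye-like response
    // exponential approach: fast when far from the target, slow when close
    return exposure + (target - exposure) * (1.0f - expf(-adaptRate * dt));
}
```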

Check out the HDRLighting demo by Microsoft in the DX 9 summer update SDK. They also have some docs talking about this kind of thing. It’s a cool demo but for some weird reason it only wants to run in REF mode on my GeForce FX. shrug

-SirKnight

What about this?
(I assume your data is drawn into a floating-point pbuffer and is already bound to a texture.)

Create a second pbuffer.

  1. Draw a textured quad at half the size of the original buffer (i.e. a 256x256 quad for a 512x512 source).
  2. In a fragment program, for every drawn fragment, sum up the four texels from the source texture. (It's up to you whether you average here or simply add everything together; I would suggest the latter because it's more physically correct, but the former can also give good results.)
  3. Make this texture the new source texture and repeat from step 1 until only one pixel is left.
  4. Finally, read this one pixel using glReadPixels.

This way you can get the overall brightness of the screen without reading all the data back; only what you need.
With hardware capable of 16 texture accesses per fragment, you can even increase your "shrink rate" and reduce the number of passes.
This method should be at least as fast as automatic mipmap generation (actually, this is how mipmaps are created, so if it's slower, try to get rid of the pbuffer switches and use copy-to-texture; see the sketch below).
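A rough C++ sketch of that loop (untested; drawQuad() is a hypothetical helper, and the fragment program doing the 2x2 sum is assumed to be bound already):

```
#include <GL/gl.h>

void drawQuad(int size);  // hypothetical: draws a screen-aligned textured quad
                          // of size x size pixels, with texture coordinates
                          // spanning the previous level's region

// Reduce a size x size float texture down to 1x1, ping-ponging two
// textures via copy-to-texture instead of pbuffer switches.
float reduceToAverage(GLuint tex[2], int size)
{
    int src = 0;  // index of the texture holding the current level
    while (size > 1) {
        size /= 2;
        glViewport(0, 0, size, size);
        glBindTexture(GL_TEXTURE_2D, tex[src]);      // read the previous level
        drawQuad(size);                              // one fragment per 2x2 input texels
        glBindTexture(GL_TEXTURE_2D, tex[1 - src]);  // grab the result
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, size, size);
        src = 1 - src;                               // the copy is the next pass's source
    }
    // the framebuffer pixel at (0,0) now holds the result; read it back
    // (or skip this and bind tex[src] in the final scaling pass instead)
    float result[4];
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, result);
    return result[0];
}
```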

[EDIT]
Actually, step 4 isn't really needed.
You can simply use the final one-pixel texture as an input to the fragment program that does the final scaling, so you can keep everything on the GPU without any readback.
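For that final scaling pass, the per-fragment math could be as simple as this (plain C++ standing in for the fragment program; the 0.18 "key" value is my assumption, the usual mid-grey target, not something from the posts above):

```
// Scale a pixel so the scene's average luminance maps to a mid-grey
// "key"; avgLum is the value from the 1x1 texture computed above.
float scalePixel(float c, float avgLum)
{
    const float key = 0.18f;    // assumed mid-grey target
    return c * (key / avgLum);  // clamp or tone-map afterwards as needed
}
```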

Sounds like you could do it with automatic mipmap generation and a single texture iteration.

Does automatic mipmap generation work with FP textures?

No current FP buffer/texture implementation supports mipmapping, so the auto generation doesn't work either.

I don't use the technique yet, so I don't know the "details" of implementing it.
I would rather not use a pbuffer (I simply don't like them, that's all :wink: ). I would go for a render-to-texture approach (I hope that new extension gets implemented soon!!! :smiley: )

The mipmapping idea sounds very promising; that's quite a smart solution.
But if I wanted to use automatic mipmap generation, wouldn't I also need the NPOT extension? Otherwise I could only render to power-of-two textures, which wastes a lot of memory.

But if I handle the mipmapping myself, I can work around such issues, I think.

Another thing: ATI has a very nice HDR demo ("DebevecRNL" or so), but I couldn't find any source or tutorial. It doesn't seem to be in the SDK. Does anyone know more about it?

Thanks for all.
Jan.

You must convert the HDR buffer to RGB8 for scanout anyway. You can use the auto-mipmap of that conversion to control the exposure for the next frame (or, if it's too far off, perhaps re-run the conversion for the current frame).

You also might want to use, say, the 4x4 or 8x8 surface and take the median sample, or some heuristics/database entries for your target, rather than a full average; this is a bit more like how an SLR camera does exposure control.
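As a sketch (assuming a 512x512 mip chain, so level 7 is the 4x4 surface; the function name is made up):

```
#include <GL/gl.h>
#include <algorithm>

// Read back the 4x4 mip level of the converted image and use the
// median luminance as the exposure target, instead of the 1x1
// full average.
float medianLuminance()
{
    float lum[16];
    glGetTexImage(GL_TEXTURE_2D, 7, GL_LUMINANCE, GL_FLOAT, lum);  // level 7 = 4x4 for 512x512
    std::sort(lum, lum + 16);
    return 0.5f * (lum[7] + lum[8]);  // median of the 16 samples
}
```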

Another approach is to use a mapping that asymptotically approaches 1. I think 2*atan(x)/pi is one such function.

That way, you don't have to worry about exposure control at all, and you still get a nice, wide dynamic range. Dark places are darker than bright ones, etc.
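Written out (per fragment, shown here as a plain C++ function):

```
#include <math.h>

// Asymptotic tone mapping: maps [0, infinity) smoothly onto [0, 1),
// so no explicit exposure control is needed.
float toneMap(float x)
{
    return (2.0f / 3.14159265f) * atanf(x);
}
```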

You might want to do this on luminance and handle the chrominance information separately, though, or colors will wash out when the light gets bright. That MIGHT be what you want, but it also might not. In deferred shading, you can do everything in a two-component luminance "diffuse / specular" space and only modulate by the diffuse color when converting back. This preserves chrominance and gives a different look.
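A sketch of the luminance-only variant (my interpretation; the Rec. 601 luma weights are just one common choice):

```
#include <math.h>

// Tone-map luminance only and rescale RGB by the ratio, so
// chrominance is preserved instead of washing out toward white.
void toneMapPreserveChroma(float rgb[3])
{
    float lum = 0.299f * rgb[0] + 0.587f * rgb[1] + 0.114f * rgb[2];
    if (lum <= 0.0f) return;
    float mapped = (2.0f / 3.14159265f) * atanf(lum);  // same curve as above
    float scale  = mapped / lum;
    rgb[0] *= scale;  rgb[1] *= scale;  rgb[2] *= scale;
}
```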

A third way of doing it is to define preset exposure controls for different environments and lerp between them as you move around.

Because real-time HDR is so new, I don’t think there’s any “canonical” way of doing it yet. I envy you if you’re getting to work on this subject “for real” right now :wink:

Originally posted by Jan2000:
[…]
Another thing: ATI has a very nice HDR demo ("DebevecRNL" or so), but I couldn't find any source or tutorial. It doesn't seem to be in the SDK. Does anyone know more about it?

This one?
http://mirror.ati.com/developer/SIGGRAPH02/ATIHardwareShading_2002_Chapter3-1.pdf

Ah, great! I didn't expect it to be hidden so well :slight_smile:

Jan.

Nice, brief overview of many techniques:

http://graphics.cs.uni-sb.de/Courses/ss02/CG2/folien/12ToneMapping.pdf