Dynamic Range

I was thinking about what would be the fastest way to get the average color of the screen. I would use it to normalize light intensity to prevent clamping at 1. This would simulate how the iris deals with the vastly different intensities of light between, say, outside at noon and a room lit by candles.

I would give my lights a pure color value (100 percent saturation), and then a separate intensity value. The sun would probably be 1000 while a candle would be around 10.

An example of how this would look: if I had a candle inside a building, the engine would scale its intensity so that it would affect the walls. But if I went outside, its intensity would scale to almost 0, resulting in the expected behavior of a flashlight not having any visible effect during the day.

Let's say I had 3 lights in a scene. One light has an intensity of 100, while the other two have an intensity of 50. So, if they all contributed 100 percent, a surface would reflect an intensity of 200. If I wanted to cover this entire dynamic range, I would map 200 to RGB = 1,1,1.

However, it is not very likely that all of these lights will shine on one surface at full intensity, so mapping the brightest possibility to RGB = 1,1,1 would result in a very dim scene. So I am guessing that I probably want to find the average intensity on the screen and map that to RGB = .5,.5,.5.

I think that the first frame would need to be rendered mapping the brightest possible color to 1,1,1; then, by mapping the average to .5,.5,.5 for subsequent frames, a feedback loop results that keeps the brightness adjusted properly.

By doing it every second or half second, you save processing time, but you also simulate the slowness of the human iris response when changing from bright to dark areas. For example, if you stared at the sun, then back at the ground, things would momentarily be too dark to see. You could make this shift gradual by interpolating between samples.

So, any ideas on how to quickly average the screen?

My first idea was to render a small version of the screen (say 160x120, roughly 75 KB of RGBA data) and then download that to the CPU to be averaged. I would average R, G, and B separately, then get an intensity value by averaging the average R, G, and B (weighted properly, of course).
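A rough, untested sketch of what I mean, assuming the small version of the scene has already been rendered into the lower-left 160x120 pixels of the back buffer:

    #include <GL/gl.h>

    // Read back a 160x120 block and return a weighted average intensity.
    // Rec. 601 luma weights used here; any sensible weighting would do.
    float AverageIntensity160x120()
    {
        static unsigned char pixels[160 * 120 * 3];
        glReadBuffer(GL_BACK);
        glReadPixels(0, 0, 160, 120, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        double sum = 0.0;
        for (int i = 0; i < 160 * 120; ++i)
        {
            const double r = pixels[i * 3 + 0] / 255.0;
            const double g = pixels[i * 3 + 1] / 255.0;
            const double b = pixels[i * 3 + 2] / 255.0;
            sum += 0.299 * r + 0.587 * g + 0.114 * b;
        }
        return (float)(sum / (160 * 120));
    }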

Then I thought that maybe the automatic mipmapping extension could be used. Would it be possible to render the screen and make it into a texture with automatic mipmapping without having to download the screen to the CPU? It seems like it would be faster, and I could retrieve the 1x1 mip level as my average RGB.
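If the SGIS_generate_mipmap extension is available, I imagine it would look something like this (untested sketch; assumes a 256x256 region of the back buffer and an already-created texture object):

    #include <GL/gl.h>
    #include <GL/glext.h>   // for GL_GENERATE_MIPMAP_SGIS, if gl.h lacks it

    // Copy a 256x256 block of the framebuffer into a texture that generates
    // its own mipmaps, then read back the 1x1 level (level 8 for 256x256).
    void AverageViaMipmaps(GLuint tex, unsigned char avgRGBA[4])
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 0, 0, 256, 256, 0);

        glGetTexImage(GL_TEXTURE_2D, 8, GL_RGBA, GL_UNSIGNED_BYTE, avgRGBA);
    }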

Something tells me that downloading a 4-byte 1x1 texture will take about as long as an 80k texture due to overhead.

Edit: I was imagining a bright white room, like, say, the staging room in The Matrix. In my system this room could never be completely white, only 50% grey, because that would be the average. The problem is that, as stated above, the system assumes the light intensities are just relative and arbitrary (it doesn't matter if the sun is 1000 or 10000, as long as something 100 times dimmer is 10 or 100).

So, to solve that, there needs to be a threshold where the eye can no longer compensate and things start to overbrighten. There needs to be a low threshold as well, so that scenes can actually become black.
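In the sketch above that would just mean clamping the exposure scale to a fixed range, something like this (the limits are arbitrary guesses):

    #include <algorithm>

    // Hard limits on how far the virtual iris can compensate, so a candle-lit
    // room still reads dark and a sun-lit one can still blow out.
    const float kMinExposure = 0.01f;
    const float kMaxExposure = 100.0f;

    float ClampExposure(float exposure)
    {
        return std::min(kMaxExposure, std::max(kMinExposure, exposure));
    }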

[This message has been edited by Nakoruru (edited 07-23-2002).]

just get a new radeon9700, use the floating point range, and do an exposure pass at the end where you manually set the shutter time of the cam…

Just get a card that does not exist on the shelf yet? I guess I should switch on over to OpenGL 2.0 as well?

How useless an answer is this? I’ll just wait until 2010 when the graphics card will read my mind and show me exactly what I want to see.

In case you did not notice, I am trying to figure out how to make this work on DirectX8-level hardware. When my poor newlywed friend can pick up a GeForce 3 for less than 100 dollars at Wal-Mart to put in his parents' computer, I am pretty certain that everyone on the planet will be able to run my app. But floating point frame buffers will not be mainstream for at least a year.

I understand that what I want can probably be done super easily on DX9-level hardware. However, even then I will have to be able to compute the average value in the framebuffer so that I can set the camera exposure.

So then, your answer is worthless, because I am still left with the same problem even if I use a Radeon 9700. How do I find out how bright the frame is so I can set the camera exposure to an aesthetic value? I figure you can take the average of the framebuffer and use that to get good results. How do I do that quickly?

(Sorry for the flames, I am just tired of hearing people suggest using things that are not available or cost hundreds or thousands of dollars)

don't bitch at me, i'm NOT in a good mood.

a) the hardware is available in about 4 weeks. b) i'm just waiting for this hw because for working with pixel shaders on anything more than env bump, dx8 class hw is pretty useless, and as i've played around with their math for years, i know what i'm talking about (more or less)

if you want, suggest to your friend to buy a radeon8500 or a 9000, which is cheaper. why? because there you have a range up to [-8,+8] and you can have a dependent texture access before the second pass to exponentiate if you want.

with your 8 bits per component you really don't get far with the shading. all you can do is manually adjust your lighting colors, more or less. i suggest the www.nutty.org glow demo, where you have values greater than one glowing (and yes, i helped him with the math). but that stuff is pretty useless in your sense. in one year, yeah, a lot of people will have dx8+ hw, but not all of it exposes the proprietary nvidia exts that are used in there, and those exts will get lost over time. so as long as there is no arb_fragment_shader i would not try to get this stuff working, as you never know how many years you'll have support for it. if you code for next year ONLY, then use the features you'll have next year.

i'm not really willing to go through the math again to help you out right now. i'm really in a bad mood, sorry. life ain't easy at the moment…

if you really wish to get help with this stuff, keep asking. if i'm in a good mood, i can code you per-pixel lighting with per-pixel normalized vectors and exp32 specular in one pass on a geforce2. if not, i can't even stop typing… i'll stop now

Sorry, I am in a bad mood too. No hard feelings, I just do not want to hear ‘go out and spend 400 bucks 4 weeks from now’ :)

My idea is to normalize on the front end instead of the back end, just as you said.
BUT, no matter which end I normalize on, my question is still valid.

I really just want to find the average of the values in the frame buffer, and to know whether there are any better ideas than the ones I had.

It does not really matter whether my framebuffer has 8 or 128 bits of precision. In order to normalize down to a 32-bit framebuffer and have it look good, I need to know how bright the screen is overall.

Otherwise I go from an outside scene to an inside one and everything gets too dark, because the iris of my virtual eye did not grow to take in more light (camera exposure).

This will happen no matter how much precision I have. I need to be able to tell that it is too dark and change the exposure.

See, I should not have gone off on you about the Radeon, just about not realizing that my question is not answered merely by upgrading. In fact, it becomes more of a problem after upgrading.

it is no more a problem after upgrading than before. but a) color range and b) color precision will affect it very much.

to sum up and average your pixels, simply FILTER THEM.
how?
render the scene to a texture, bind the texture, render it at quarter size with linear filtering, grab that into a texture, render it at quarter size again, to tex, etc., till you only have one pixel left. that pixel is then the average of all your pixels. read back that pixel and you know the value.

with floating point you have the power to simply store the full range, calc the average, and scale your frame afterwards. dunno how you want to do it, but that's your problem

Would rendering a 1/4-sized texture with linear filtering really implement a box filter? Somehow, I do not think so, unless you already have mipmaps and use mipmap-linear filtering. And if you already have mipmaps, then just grab the 1x1 level and forget all those steps in between.

I think what you suggest would just give me the pixel in the upper left… Hmm, I think it would work if you offset the texture coordinates by half a texel. But you didn't say that.
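Something like this is what I picture for the downsample chain, as a rough and untested sketch. It assumes the scene has already been copied into a square power-of-two texture, that depth test and blending are off, and that an RGBA8 copy is acceptable:

    #include <GL/gl.h>

    // Repeatedly render the texture at half size per axis with bilinear
    // filtering, copying the result back, until a single pixel remains.
    // Mapping the full [0,1] texture range onto a dst x dst viewport puts
    // each destination sample exactly between four source texels (the
    // half-texel alignment I mean), so every output pixel is the average
    // of a 2x2 block of inputs.
    void DownsampleToOnePixel(GLuint tex, int size, unsigned char avg[4])
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, 1, 0, 1, -1, 1);
        glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

        while (size > 1)
        {
            const int dst = size / 2;
            glViewport(0, 0, dst, dst);

            glBegin(GL_QUADS);
                glTexCoord2f(0, 0); glVertex2f(0, 0);
                glTexCoord2f(1, 0); glVertex2f(1, 0);
                glTexCoord2f(1, 1); glVertex2f(1, 1);
                glTexCoord2f(0, 1); glVertex2f(0, 1);
            glEnd();

            // Grab the shrunken image back into the texture for the next pass.
            glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 0, 0, dst, dst, 0);
            size = dst;
        }

        // The last pass left a single pixel at (0,0): the average of the scene.
        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, avg);
    }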

I just now realized that automipmapping would be a problem, because the screen rarely has power-of-2 dimensions. If I fit it to power-of-two dimensions, then I have black bars on the top and bottom which would become part of my average. I guess that could be cancelled out somehow, but I would lose some precision.

I said that it would be more of a problem after upgrading because, if I have a higher dynamic range in my shaders than in my framebuffer, I now have to worry about how to scale that dynamic range. This is a new 'problem' that did not exist before, one that everyone will have to solve, not just the people trying to do it on DX8 like me.

you can store the full dynamic range in the framebuffer… and reuse it. the framebuffer can be 128-bit floating point if you want, and your textures as well…

sure you have to shift by half a texel. i just showed you the door, but you have to open it yourself. i'm not your door opener…

I just wonder:
You want to get the average light from the framebuffer to set the overall scene lighting? Wouldn't you have to do an entire second render of the scene to correctly scale all the colors down? Sure, eyes don't adjust to lighting very quickly, but it still is a good idea to know the total light.
Also, what about “over-bright” lights? If you have a super-effective lamp that goes far beyond the limits of the framebuffer, you won't capture all the light information. I think you had better find a different way to calculate the average value.
Just a few questions and thoughts I had in mind…

The backbuffer is 128 bits, and I was assuming that all along, but I doubt that Windows will suddenly have a 128-bit mode. That means that when you do a SwapBuffers, it will have to be scaled down to 32-bit integers. If you want it to be scaled correctly, then you will need to normalize your framebuffer somehow before the flip.

Maybe there is an imager shader or a camera exposure in DX9 just like in RenderMan. I dunno.

If you know more, can you explain how it works?

Also, I am concerned about 128-bit floating point. Do any compilers support such a beast, even in an emulated fashion? I thought that doubles on x86 were just 64 bits.

I know I could just load a 64-bit texture and let the driver translate it, but how do I provide a 128-bit one directly, or work with one I download?

I was kinda surprised to hear that we will have 128 bits instead of just 64. I thought 64 bits was enough to do almost anything you wanted.

Will I need to write a float128 class in C++?

As for opening doors for me, I was just telling you that you left out an important detail, not asking you to bottle feed me (and why are we still fighting?)

hm… you got me wrong. a 128-bit floating point COLOR BUFFER means [rgba] together has a size of 128 bits… grmbl

and as that's all just part of the gpu, the driver and the gpu handle how to work with it. windows doesn't care (and no, windows will not support that color resolution in opengl, but it's still at gl1.1 anyway…)

currently we have 32-bit color with 8 bits per component, fixed point [0,1], in the framebuffer. soon we can have 128-bit color with 32 bits per component, IEEE standard floating point. and you can use it multipass, and you can swap it in the buffers. no problem… (the gpu has to send the data to the screen, so it has to take care of how to send the data from the framebuffer…)

Nakoruru, your “virtual iris” is interesting. I hope you will be able to implement it on current hardware.
But your method seems too costly to me. It might be better if you determine the approximate average screen color from the objects that are visible. Since your engine has to do visibility determination anyway, you can average the colors based on the light intensity, each object's reflectivity, and its size on screen. This will not be at all accurate, but it could be a good approximation.
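Roughly what I mean, with hypothetical engine data and no claim of accuracy:

    #include <vector>

    // Crude scene-brightness estimate from the visible objects: weight each
    // object's lit reflectivity by how much of the screen it covers.
    struct VisibleObject
    {
        float reflectivity;     // 0..1, how much light the surface reflects
        float lightIntensity;   // summed intensity of the lights reaching it
        float screenCoverage;   // estimated fraction of the screen it covers
    };

    float EstimateAverageBrightness(const std::vector<VisibleObject>& visible)
    {
        float sum = 0.0f, coverage = 0.0f;
        for (size_t i = 0; i < visible.size(); ++i)
        {
            sum      += visible[i].reflectivity * visible[i].lightIntensity
                      * visible[i].screenCoverage;
            coverage += visible[i].screenCoverage;
        }
        return coverage > 0.0f ? sum / coverage : 0.0f;
    }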

Originally posted by Nakoruru:
[quoted in full above]

this is just an idea that occurred to me a moment ago. if you had a frame buffer that stored 32-bit floats in the range [0, 2], that would provide the high range you're looking for. by clamping the final light intensity to [0, 1] you would have your high threshold point for over-bright. the trick is to represent this 32-bit float in an RGBA format. you could do it something like this:

  • calculate the light intensity in a pixel shader. the intensity will be in the range [0, 2]; you then scale this intensity by 1/2 to get it into the range [0, 1].

  • use your scaled intensity as a texture coordinate into a 1d texture that returns an RGBA texel holding your scaled intensity (first 8 bits in R, second 8 bits in G, and so on).

  • this RGBA color is written to the frame buffer.

  • the contents of the frame buffer can then be bound as a texture. to get your intensity value back, read in the RGBA value and perform a dp4 of that RGBA with this vector: <1, 1/(2^8), 1/(2^16), 1/(2^24)>. the result of this dp4 is your original scaled intensity value. scale it by 2 and you are back in the [0, 2] range.

  • clamp the intensity value to [0, 1] and multiply by the color.

just a thought… i have no idea if it will actually work
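on the cpu side the packing math would look roughly like this (untested, channels kept as plain floats; the real thing would bake the packing into the 1d lookup texture and live with the 8-bit quantization):

    #include <cmath>

    // Split a scaled intensity v in [0, 1) into four channels so that
    //   v ~= R + G/2^8 + B/2^16 + A/2^24   (the dp4 in the last step).
    void PackIntensity(float v, float rgba[4])
    {
        float rest = v;
        for (int i = 0; i < 4; ++i)
        {
            const float digit = std::floor(rest * 256.0f);  // next base-256 digit
            rgba[i] = digit / 256.0f;
            rest = rest * 256.0f - digit;
        }
    }

    // The dp4: <1, 1/(2^8), 1/(2^16), 1/(2^24)> . rgba
    float UnpackIntensity(const float rgba[4])
    {
        return rgba[0]
             + rgba[1] / 256.0f
             + rgba[2] / 65536.0f
             + rgba[3] / 16777216.0f;
    }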

–chris

[This message has been edited by chrisATI (edited 07-24-2002).]

Okay, I completely misunderstood when they said 128-bit floating point. I should have realized it was 4 32-bit floating point values. As you can see, sometimes I am susceptible to not properly correcting for incomplete explanations (although most of the time I am pretty good at recognizing slight errors).

I am unsure whether the front buffer will be 128-bit. Are you absolutely sure? It seems like a waste of memory. OpenGL has never guaranteed that you can read the contents of the front buffer, so there is no reason why it has to match the backbuffer. I am pretty sure that the RAMDAC and frontbuffer will remain 32-bit; there is no real gain, and you would eat up 4 times more memory bandwidth just refreshing the screen.

chrisATI,

I think that Quake 3 does something like what you describe. All of its light maps are calculated in the range 0-2. They store this as 0-1 in the lightmaps, then recover that range by modifying the gamma so that 0-0.5 in the lightmap maps to 0-1 on the screen and everything brighter causes an overbrightening effect.

What I am looking for is to figure out the overall intensity of a frame and then scale the lights so that they are likely to fall between 0 and 1 when added together.

There is a danger of both totally black and totally white scenes becoming the same medium grey if this system is allowed to compensate infinitely, so like the human eye it should have limits at the top and bottom.

My rough calculations of the size change of the human iris say that it can vary by a factor of 100. I guess that is why both the sun and a nightlight can cause spots in one's eyes, and why one can hardly tell whether a street lamp is on during the day. Even just opening the curtains is enough to make the contribution of a lightbulb tiny.

no one is talking about the frontbuffer. you can render in full floating point, that's all… oh, and you can read from the frontbuffer, it's defined that you can, so it should be there as well. how about referring to the existing documents instead of bitching that it sounds impossible, or that you understood things wrong? www.ati.com already has documentation out that explains it. even looking into dx9 is very helpful, as there you see the minimum requirements for all future hardware. and dx9 demands that you can render into 4 buffers simultaneously, and that they can be 32-bit floating point.

i think the approach i gave you is the best you can do (the most hw-accelerated one). you have to render the scene twice (or use the previous frame)…

approach with previous frame:

render scene with some predicted base light (our eye needs time to adjust anyway)
copy to tex
show frame
scale down tex recursively till it's one pixel
read that color out
calculate which lights are then important for the current scene, enable those

render scene with lights
copy to tex
show frame
scaledown tex
read color
calc lights

etc…

if you had enough precision to store values up to 10000, you could just as well use all the lights and simply use the 1x1-texel texture in the texture shader to scale them all… but that's for floating point hw

for your hw, with really changing lights, that's about the way…
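in very rough pseudo-c the loop looks about like this; the helper functions are just made-up names for whatever the engine really does:

    // placeholders for engine functions (names invented for the sketch):
    void RenderScene(float lightScale);         // all light intensities * lightScale
    void CopyFrameToTexture();                  // glCopyTexImage2D or similar
    void PresentFrame();                        // swap the buffers
    void DownsampleToOnePixel(unsigned char avg[4]); // the recursive filtering

    float g_lightScale = 1.0f;  // predicted base exposure for the first frame

    void Frame()
    {
        RenderScene(g_lightScale);
        CopyFrameToTexture();
        PresentFrame();                         // show the frame as-is

        unsigned char avg[4];
        DownsampleToOnePixel(avg);
        const float brightness = (avg[0] + avg[1] + avg[2]) / (3.0f * 255.0f);

        // steer the average toward mid grey for the next frame
        if (brightness > 0.0f)
            g_lightScale *= 0.5f / brightness;
    }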

Your solution sounds in line with what I was thinking, so it is the sanity check I was looking for. Thanks.

As for no one talking about the frontbuffer, -I- was talking about the frontbuffer.

I understand about the backbuffers being 128-bit. But that can be pretty independent of the frontbuffer.

OpenGL says that you can read from the frontbuffer, but that what you get back is undefined.

Anyway, whether or not I misunderstood, and whether the frontbuffer is 32 or 128 bits, the problem with such a high dynamic range is still that we will need a way of adjusting the exposure (as RenderMan puts it, or the 'virtual iris' as I put it) in order to truly take advantage of the floating point precision. Otherwise we are just getting rid of banding, and not truly simulating the fact that there is a huge difference between the darkest and brightest things we can see, even though our perception does not really let us tell the difference between the white of a piece of paper inside an office and the white of a piece of paper outside at noon (which is probably 10 times brighter, but perceived almost the same way).

In most graphics apps, you would just give the sun-lit paper and the bulb-lit paper about the same brightness value. When you do this, it does not matter whether you have 32 or 8 bits of color precision; the only thing the extra precision gives you is less banding.

However, if you actually simulate that the bulb-lit paper is 1/10th as bright, then without some way of scaling this up the inside is going to be much too dim.

All the arguing over the exact functionality of DX9 framebuffers has little to do with any of this, and I regret that it even came up. I’ll just RTFM before sharing my intuitions next time…

EDIT: After RTFM it appears that DX9 does not require anything beyond a 32-bit frontbuffer, and that the new 128-bit formats are for internal use (like backbuffers and textures). This is exactly what I was explaining. This leaves me wondering what I got called a bitch for. Pot, meet Kettle. Oh well, I don't care. If we had talked about this in person I'm sure there would have been no misunderstanding.

[This message has been edited by Nakoruru (edited 07-24-2002).]

dunno what you want… all my statements are true, and my suggestion not to try too much on the current hw, as your solution will not work for the next gen and will be proprietary, has to do with it. i mean, don't put too much effort into this, as your solution is simply bad for all but about 3 cards.

anyway, i think i already gave you the answer for how to do it in quite some detail, and i've explained how the 128-bit floating point buffers will help with this. the whole thing is called image post-processing, and applying the exposure function 1 - exp(-k*color) is a simple post-process on dx9-compliant hw.
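as a post process that's just this per channel and per pixel (k takes the role of the shutter time / iris, picked however you like):

    #include <cmath>

    // the exposure mapping mentioned above: maps any intensity in [0, inf)
    // into [0, 1).
    float Expose(float intensity, float k)
    {
        return 1.0f - std::exp(-k * intensity);
    }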

i also explained how you get your value back, so… get your value and adjust your lights…

oh, and btw, i prefer to use the high color range to simply show the whole range so that everything is clearly visible, instead of swinging too dark and too bright… more like our eyes than a cam (i know our eyes do adapt, and this little change from a cave to the outside would be cool… but for that you can simply fade out a white quad when you go out, and a black one when you go in… seen that several times already)

any more questions? i'm willing to help, even if it doesn't sound like it sometimes, and i'm interested in the topic as well…

I’m actually going to work on some code to try out some of this. If it works out well I’ll probably post it here.

I think most of what I said was true too; it just sounds like we are contradicting each other when we are not. It did seem like you said Windows would have a 128-bit color mode, but I'm not even going to bother reading back just to be right/wrong/whatever.

The cave is a good example of dynamic range, but it's a special situation that works for hacks like simply adjusting the brightness. It does not capture a harder situation, like, say, looking into a forest from a clearing.

In the clearing you would want to lower your exposure so that you can make out the details, but when you look into the forest, anything in it will be heavily shadowed and its details hard to make out. Go into the forest and you can make out everything, but when you look back at the clearing it will be overly bright and washed out.

If you use the full dynamic range, then you will not have this effect, but this is how photography and our eyes work, so it would be realistic to adjust the exposure dynamically.

You keep suggesting that exposure is easy on DX9 hardware, and I do not dispute that, but you do not explain how you would determine what to set your exposure to, which is what I am exploring.

I understand that adjusting the exposure is easy on DX9. What I want are good ways to guess what the exposure value should be.

My thought was to use the average brightness of the screen. I think you are saying that's a bad solution. Do you say this because it would be too slow, or because you think I should not dynamically adjust the exposure at all?

Someone else suggested that I assign some sort of reflectance value to each object in my scene and estimate how bright the scene will be based on which objects are visible and how big they appear.

Another idea would be to put bounding boxes around different areas and then have artists adjust the exposure in each box so that it looks right. Make it like those games which use bounding boxes to control the camera; just add exposure to the list of parameters.
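Something like this is the kind of thing I imagine for the bounding box idea (hypothetical structures, untested):

    #include <vector>

    // Hand-placed exposure zones. The camera uses the exposure of whichever
    // box it is inside, falling back to a default elsewhere.
    struct ExposureZone
    {
        float min[3], max[3];   // axis-aligned bounding box
        float exposure;         // artist-tuned exposure for this area
    };

    float ExposureForCamera(const std::vector<ExposureZone>& zones,
                            const float camPos[3], float defaultExposure)
    {
        for (size_t i = 0; i < zones.size(); ++i)
        {
            const ExposureZone& z = zones[i];
            if (camPos[0] >= z.min[0] && camPos[0] <= z.max[0] &&
                camPos[1] >= z.min[1] && camPos[1] <= z.max[1] &&
                camPos[2] >= z.min[2] && camPos[2] <= z.max[2])
                return z.exposure;
        }
        return defaultExposure;
    }

The returned value could then be blended over time, like the iris interpolation earlier, so transitions between boxes are not abrupt.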

This dynamic exposure problem is not a problem for RenderMan, because you can just script everything and get it perfect. But since we are doing dynamic, interactive graphics with OpenGL 2/DX9, we either have to make all of our scene lighting the same intensity over the entire scene graph, which means the forest looks just as dark whether you are in it or looking at it from the meadow (which is not the way cameras or people see things), or we have to figure out a way to adjust the exposure on the fly.

I guess I've explained this about 4 times already :)

If you do not see the need, then I guess I cannot convince you.

Anyone know how the auto-exposure on a camcorder works?