
View Full Version : Dynamic Range

Nakoruru
07-23-2002, 07:27 AM
I was thinking about what would be the fastest way to get the average color of the screen. I would use it to normalize light intensity to prevent clamping at 1. This would simulate how the iris deals with the vastly different intensities of light between, say, outside at noon and a room lit by candles.

I would give my lights a pure color value (100 percent saturation), and then have a separate intensity value. The sun would probably be 1000 while a candle would be around 10.

An example of how this would look: if I had a candle inside a building, the engine would scale its intensity so that it would affect the walls. But if I went outside, its intensity would scale to almost 0, resulting in the expected behavior of a flashlight not having any visible effect during the day.

Let's say I had 3 lights in a scene. One light has an intensity of 100, while the other two have an intensity of 50. So, if they all contributed 100 percent, a surface would reflect an intensity of 200. If I wanted to have this entire dynamic range, I would map 200 to RGB = 1,1,1.

However, it is not very likely that all these lights will shine on one surface at full intensity, resulting in a very dim scene if I mapped the brightest possibility to RGB = 1,1,1. So I am guessing that I probably want to find the average intensity on the screen and then map that to RGB = .5,.5,.5.

I think that the first frame would need to be rendered mapping the brightest possible color to 1,1,1; then, by mapping the average to .5,.5,.5 for subsequent frames, a feedback loop results that keeps the brightness adjusted properly.

By doing it every second or half second, you save processing time, but you also simulate the slowness of the human iris response when changing from bright to dark areas. For example, if you stared at the sun, then back at the ground, things would momentarily be too dark to see. You could make this shift gradual by interpolating between samples.
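A minimal sketch of that feedback loop, in Python. The function name, the mid-grey target, and the blend factor are illustrative assumptions, not something from the original post:

```python
def update_exposure(current_exposure, frame_average, target=0.5, blend=0.25):
    """One step of the iris feedback loop.

    frame_average: average intensity of the last rendered frame (0..1).
    target: where we want the average to sit (mid-grey).
    blend: how far we move per sample; small values simulate the
           slow response of the human iris.
    """
    if frame_average <= 0.0:
        return current_exposure  # nothing visible; keep the old exposure
    # Exposure that would map the measured average exactly onto the target.
    ideal = current_exposure * (target / frame_average)
    # Interpolate toward it instead of jumping, as the post suggests.
    return current_exposure + (ideal - current_exposure) * blend
```

Sampling every half second and interpolating between samples gives the delayed "stare at the sun, then look at the ground" effect for free.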

So, any ideas on how to quickly average the screen?

My first idea was to render a small version of the screen (say 160x120, about 75 KB of RGBA data) and then download that to the CPU to be averaged. I would average R, G, B separately, then get an intensity value by averaging the average R, G, and B (R, G, and B would be weighted properly, of course).
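The CPU-side averaging could look like the sketch below. The post only says "weighted properly", so the Rec. 601 luma weights used here are an assumption:

```python
# Average a small downsampled frame on the CPU, then reduce the three
# channel averages to one intensity with perceptual weights
# (Rec. 601 luma weights; assumed, not specified in the post).
def average_intensity(pixels):
    """pixels: list of (r, g, b) tuples, each channel in 0..1."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    return 0.299 * avg_r + 0.587 * avg_g + 0.114 * avg_b
```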

Then I thought that maybe the automatic mipmapping extension could be used. Would it be possible to render a screen and make it into a texture with automatic mipmapping without having to download the screen to the CPU? It seems like it would be faster, and I could retrieve the 1x1 mipmap as my average RGB.

Edit: I was imagining a bright white room, like, say, the staging room in The Matrix. In my system this room could never be completely white, but always 50%, because that would be the average. The problem is that, as stated above, the system assumes that the intensities of the lights are just relative and arbitrary (it doesn't matter if the sun is 1000 or 10000, just as long as something 100 times dimmer is 10 or 100).

So, to solve that there needs to be a threshold where the eye can no longer compensate and things start to overbright. There needs to be a low threshold as well, so that scenes can actually become black.

[This message has been edited by Nakoruru (edited 07-23-2002).]

davepermen
07-23-2002, 10:09 AM
just get a new Radeon 9700, use the floating point range, and do an exposure pass at the end, where you manually set the shutter time of the cam..

Nakoruru
07-23-2002, 10:57 AM
Just get a card that does not exist on the shelf yet? I guess I should switch on over to OpenGL 2.0 as well?

How useless an answer is this? I'll just wait until 2010 when the graphics card will read my mind and show me exactly what I want to see.

In case you did not notice, I am trying to figure out how to make this work on DirectX 8 level hardware. When my poor newlywed friend can pick up a GeForce 3 for less than 100 dollars at Wal-Mart to put in his parents' computer, I am pretty certain that everyone on the planet will be able to run my app. But floating point framebuffers will not be mainstream for at least a year.

I understand what I want can probably be done super easy by DX9 level hardware. However, even then I will have to be able to compute the average value in the framebuffer so that I can set the camera exposure.

So then, your answer is worthless, because I am still left with the same problem even if I use Radeon 9700. How do I find out how bright the frame is so I can set the camera exposure to an aesthetic value? I figure you can take the average of the framebuffer and use that to get good results. How do I do that quickly?

(Sorry for the flames, I am just tired of hearing people suggest using things that are not available or cost hundreds or thousands of dollars)

davepermen
07-23-2002, 11:25 AM
don't bitch at me, i'm _NOT_ in a good mood.

a) the hardware is available in about 4 weeks; b) i'm just waiting for this hw because working with pixel shaders for more than just env-bump on dx8 class hw is pretty useless, and as i've played around with their math for years, i know what i'm talking about (more or less)

if you want, suggest your friend buy a Radeon 8500 or a 9000, which is cheaper. why? because there you have range up to [-8, +8] and you can have a dependent texture access before the second pass to exponentiate if you want.

with your 8 bits per component you really don't get far with the shading. all you can do is manually adjust your lighting colors, more or less. i'll suggest the www.nutty.org glow demo, where you have greater-than-one values glowing (and yes, i helped him with the math). but that stuff is pretty useless for your purposes. in a year, yeah, a lot of people will have dx8+ hw, but not all of it exposes the proprietary nvidia extensions used there, and those extensions will get lost over time. so as long as there are no arb_fragment_shaders i would not try to get this stuff working, as you never know how many years your stuff will stay supported. if you code for next year ONLY, then use the features you'll have next year.

i'm not really willing to go through the math again to help you out now. i'm really in a bad mood, sorry. life ain't easy currently..

if you really wish to get help with this stuff, continue asking. if i'm in a good mood, i can code you per-pixel lighting with per-pixel normalized vectors and exp32 specular in one pass on a GeForce 2. if not, i can't even stop typing.. i'll stop now ;)

Nakoruru
07-23-2002, 11:39 AM
Sorry, I am in a bad mood too. No hard feelings, I just do not want to hear 'go out and spend 400 bucks 4 weeks from now' ^_^

My idea is to normalize on the front end instead of the back end, just as you said.
BUT, no matter which end I normalize on, my question is still valid.

I really just want to find the average value in the framebuffer. I want to know if there are any better ideas than the ones I had.

It does not really matter if my framebuffer has 8 or 128 bits of precision. In order to normalize down to a 32-bit framebuffer and have it look good, I need to know how bright the screen is overall.

Otherwise I go from an outside scene to the inside and everything gets too dark because the iris of my virtual eye did not grow to take in more light (camera exposure).

This will happen no matter how much precision I have. I need to be able to tell that it is too dark and change the exposure.

See, I should not have gone off on you about the Radeon, just about not realizing my question is one that is not answered merely by upgrading. In fact, it becomes more of a problem after upgrading.

davepermen
07-23-2002, 11:47 AM
it's no more of a problem after upgrading than before. but a) color range and b) color precision will affect it very much.

to sum up and average your pixels, simply FILTER THEM.
how?
render the scene to a texture, bind the texture, render it at quarter size with linear filtering, grab that into a texture, render it at quarter size again, to texture, etc., till you only have one pixel left. that pixel is then the average of all your pixels. read back this pixel and you know the value.
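A CPU-side sketch of this recursive filtering, assuming a power-of-two frame and an exact 2x2 box filter per step (which is what bilinear sampling with the half-texel offset discussed later in the thread gives you):

```python
def box_downsample(img):
    """Average each 2x2 block; img is a square 2^n x 2^n grid of floats."""
    half = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(half)] for y in range(half)]

def screen_average(img):
    """Repeat the quarter-size pass until one pixel remains; that pixel
    is exactly the mean of the whole frame."""
    while len(img) > 1:
        img = box_downsample(img)
    return img[0][0]
```

On the GPU each `box_downsample` is one quarter-size render of the previous texture; only the final 1x1 result is read back.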

with floating point you have the power to simply store the full range, calc the average, then scale your frame afterwards. dunno how you want to do it, but that's your problem ;)

Nakoruru
07-23-2002, 12:17 PM
Would rendering a 1/4-sized texture with linear filtering really implement a box filter? Somehow, I do not think so, unless you already have mipmaps and use mipmap-linear. And if you already have mipmaps, then just grab the 1x1 texture and forget all those steps in between.

I think what you suggest would just give me the pixel in the upper left... Hmm, I think it would work if you offset the texture coordinates by 1/2 texel. But you didn't say that.

I just now realized that automipmapping would be a problem because the screen rarely has power-of-2 dimensions. If I fit it to power-of-2 dimensions, then I have black on the top and bottom, which will be part of my average. I guess that can be canceled out somehow, but I would lose some precision.

I said that it would be more of a problem after upgrading because if I have a higher dynamic range in my shaders than I have in my framebuffer, I now have to worry about how to scale that dynamic range. This is a new 'problem' that did not exist before, a problem that everyone will have to solve, not just the people trying to do it in DX8 like me.

davepermen
07-23-2002, 12:20 PM
you can store the full dynamic range in the framebuffer.. and reuse it. the framebuffer is 128-bit floating point if you want, and your textures as well..

sure you have to shift by half a texel. i just showed you the door, but you have to open it yourself. i'm not your door opener..

Nakoruru
07-23-2002, 12:48 PM
The backbuffer is 128 bits, and I was assuming that all along, but I doubt that Windows will suddenly have a 128-bit mode. That means that when you do a SwapBuffers, it will have to be scaled to 32-bit integers. If you want it to be scaled correctly, then you will need to normalize your framebuffer somehow before the flip.

Maybe there is an imager shader or a camera exposure in DX9 just like in RenderMan. I dunno.

If you know more, do you know how it works?

Also, I am concerned about 128-bit floating point. Do any compilers support such a beast, even in an emulated fashion? I thought that doubles on x86 were just 64 bits.

I know I could just load a 64-bit texture and the driver will translate it, but how do I provide a 128-bit one directly? Or work with one I download?

I was kinda surprised to hear that we will have 128-bits instead of just 64. I thought 64 was enough to do almost anything you wanted.

Will I need to write a float128 class in C++?

As for opening doors for me, I was just telling you that you left out an important detail, not asking you to bottle feed me (and why are we still fighting?)

coelurus
07-23-2002, 12:48 PM
I just wonder:
You want to get the average light from the framebuffer to set the overall scene lighting? Wouldn't you have to do an entire second render of the scene to correctly scale all the colors down? Sure, eyes don't get used to lighting very quickly, but it still is a good idea to know the total light.
Also, what about "over-bright" lights? If you've got a super-effective lamp that goes far beyond the limits of the framebuffer, you won't capture all the light info. I think you'd better find a way to calculate the average value differently.
Just a few questions and thoughts I had in mind...

davepermen
07-23-2002, 09:08 PM
hm.. you got me wrong. a 128-bit floating point COLOR BUFFER means [rgba] has size 128 bits.. grmbl ;)

and as that's all just part of the gpu, the driver and the gpu handle how to work with it. windows doesn't care (and no, windows will not support that color resolution in opengl, but it's still at gl1.1 anyway..)

currently we have 32-bit color with 8 bits per component, fixed point [0,1], in the framebuffer. soon we can have 128-bit color with 32 bits per component, IEEE standard floating point. and you can use it multipass, and you can swap it in the buffers. no problems.. :) (the gpu has to send the data to screen, so it has to care how to send the framebuffer data..)

chrisATI
07-23-2002, 11:42 PM
Originally posted by Nakoruru:
I was thinking about what would be the fastest way to get the average color of the screen. I would use it to normalize light intensity to prevent clamping at 1. [...]

this is just an idea that occurred to me a moment ago. if you had a framebuffer that stored 32-bit floats in the range [0, 2], this would provide the high range you're looking for. by clamping the final light intensity to [0, 1] you would have your high threshold point for over-bright. the trick is to represent this 32-bit float in an RGBA format. you could do it something like this:

- calculate light intensity in a pixel shader. the intensity will be in the range [0, 2]; you then scale this intensity by 1/2 to get it into the range [0, 1].

- use your scaled intensity as a texture coordinate into a 1d texture that returns an RGBA texel holding your scaled intensity (first 8 bits in R, second 8 bits in G, and so on).

- this RGBA color is written to the framebuffer.

- the contents of the framebuffer can then be bound as a texture; to get your intensity value back, you read the RGBA value and perform a dp4 with the RGBA and this vector: <1, 1/(2^8), 1/(2^16), 1/(2^24)>. the result of this dp4 is your original scaled intensity value. scale by 2 and you are back in the [0, 2] range.

- clamp the intensity value to [0, 1] and multiply by color.

just a thought... i have no idea if it will actually work :)

--chris

[This message has been edited by chrisATI (edited 07-24-2002).]
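A quick CPU check of the packing arithmetic chrisATI describes. The helper names are illustrative; on the hardware the pack step would come from the 1D lookup texture and the unpack would be the dp4 in the pixel shader:

```python
# Split a scaled intensity s in [0, 1) into four base-256 "digits",
# each stored as a normalized channel value (what the 1D lookup
# texture would hold), then recover s with a dot product against
# <1, 1/2^8, 1/2^16, 1/2^24>.
def pack_intensity(s):
    channels = []
    for _ in range(4):
        s *= 256.0
        digit = int(s)       # next 8 bits
        s -= digit
        channels.append(digit / 256.0)
    return channels

def unpack_intensity(rgba):
    """The dp4 decode from the post."""
    weights = [1.0, 1.0 / 2**8, 1.0 / 2**16, 1.0 / 2**24]
    return sum(c * w for c, w in zip(rgba, weights))
```

The round trip is exact to within 256^-4, which suggests the arithmetic of the scheme holds up; whether the dependent-texture pass behaves on real dx8 hardware is a separate question.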

tarantula
07-23-2002, 11:42 PM
Nakoruru, your "virtual iris" is interesting. I hope you will be able to implement it on current hardware.
But your method seems too costly to me. It might be better if you can determine the approximate average screen color by determining the objects visible. Since your engine will have to do visibility anyway, you can average the colors based on the light intensity, the object's reflectivity, and its size. This will not be at all accurate, but it could be a good approximation.

Nakoruru
07-24-2002, 04:38 AM
Okay, I completely misunderstood when they said 128-bit floating point. I should have realized it was 4 32-bit floating point values. As you can see, sometimes I am susceptible to not properly correcting for incomplete explanations (although most of the time I am pretty good at recognizing slight errors).

I am unsure if the front buffer will be 128-bit. Are you absolutely sure? It seems like a waste of memory. OpenGL has never guaranteed that you can read the contents of the front buffer, so there is no reason why it has to match the backbuffer. I am pretty sure that the RAMDAC and frontbuffer will remain 32-bit, there is no real gain, and you would eat up 4 times more memory bandwidth just refreshing the screen.

Nakoruru
07-24-2002, 04:57 AM
chrisATI,

I think that Quake 3 does something like what you describe. All of its light maps are calculated in the range 0-2. They store this as 0-1 in the lightmaps, then get that range back by modifying the gamma so that 0-.5 in the lightmap is 0-1 on the screen and everything brighter causes an overbrightening effect.

What I am looking for is to figure out the overall intensity of a frame and then scale the lights so that they are likely to fall between 0 and 1 when added together.

There is a danger of both totally black and totally white scenes becoming the same medium grey if this system is allowed to compensate infinitely, so like the human eye it should have limits on the top and low end.

My rough calculations of the size change of the human iris says that it can vary in size by a factor of 100. I guess that is why both the sun and a nightlight can cause spots on one's eyes. Why one can hardly tell if a street lamp is on in the day. Even just opening the curtains is enough to make the contribution of a lightbulb tiny.

davepermen
07-24-2002, 05:13 AM
no one is talking about the frontbuffer. you can render full floating point, that's all.. oh, and you can read from the frontbuffer; it's defined that you can, so it should be there as well. how about referring to the existing documents instead of bitching that it sounds impossible, or misunderstanding things. www.ati.com has documentation out already that explains it. even going into dx9 is very helpful, as there you see the minimum requirements for all future hardware. and dx9 demands that you can render into 4 buffers simultaneously, and that they can be 32-bit floating point.

davepermen
07-24-2002, 05:25 AM
i think the approach i gave you is the best you can do (the most hw-accelerated one). you have to render the scene twice (or use the previous frame)..

approach with previous frame:

render scene with some predicted base light (our eye needs time to adjust anyways).
copy to tex
show frame
scaledown tex till its one pixel recursively
calculate which lights are then important for the current scene, enable those

render scene with lights
copy to tex
show frame
scaledown tex
calc lights

etc..

if you had enough precision to store values up to 10000, you could just as well use all the lights, and simply use the 1x1-texel texture in the texture shader to scale them all.. but that's for floating point hw ;)
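The previous-frame pipeline listed above can be simulated in a few lines. Each frame is lit with an exposure derived from the previous frame's 1x1 average, so the adjustment lags one frame behind, much like the eye. The scene values, the clamp, and the exposure rule below are illustrative assumptions:

```python
# Self-contained simulation of the previous-frame feedback pipeline.
def simulate(raw_scene_frames, target=0.5):
    exposure = 1.0
    averages = []
    for raw in raw_scene_frames:          # raw = unscaled scene brightness
        shown = min(1.0, raw * exposure)  # render with current exposure (clamped)
        averages.append(shown)            # stands in for the 1x1 downsample
        if shown > 0:
            exposure *= target / shown    # exposure used for the NEXT frame
    return averages
```

Note that the first frame is rendered with a predicted base exposure, exactly as the post says; the correction only kicks in from the second frame on.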

Nakoruru
07-24-2002, 06:25 AM
Your solution sounds in line with what I was thinking. So it is the sanity check I was looking for, so thanks.

As for no one talking about the frontbuffer, -I- was talking about the frontbuffer.

I understand about the backbuffers being 128-bit. But that can be pretty independent of the frontbuffer.

OpenGL says that you can read from the frontbuffer, but that it's undefined what you will get.

Anyway, whether or not I misunderstood, and whether the frontbuffer is 32 or 128 bit, the problem with such a high dynamic range is still that we will have to have a way of adjusting the exposure (as RenderMan puts it, or the 'virtual iris' as I put it)
in order to truly take advantage of the floating point precision. Otherwise we are just getting rid of banding, and not truly simulating that there is a huge difference between the darkest and brightest things we can see while our perception does not really let us tell the difference between white paper inside an office and white paper outside at noon (which is probably 10 times brighter, but perceived almost the same way).

In most graphics apps, you would just make the sun-lit paper and the bulb-lit paper about the same brightness value. When you do this it does not matter if you have 32 or 8 bits of color precision. In this situation, the only thing that the extra precision gives you is less banding.

However, if you actually simulate that the bulb-lit paper is 1/10 as bright, then if you do not have some way of scaling this up then the inside is going to be much too dim.

All the arguing over the exact functionality of DX9 framebuffers has little to do with any of this, and I regret that it even came up. I'll just RTFM before sharing my intuitions next time...

EDIT: After RTFM it appears that DX9 does not require a 32-bit frontbuffer and that the new 128-bit formats are for internal use (like backbuffers and textures). This is exactly what I was explaining. This leaves me wondering what I got called a bitch for. Pot, meet Kettle. Oh well, I don't care. If we had talked about this in person I'm sure there would have been no misunderstanding.

[This message has been edited by Nakoruru (edited 07-24-2002).]

davepermen
07-24-2002, 08:11 AM
dunno what you want.. all my statements are true, and my suggestion not to try too much on the current hw (since your solution will not work for next gen and will be proprietary) has to do with it. i mean, don't put too much effort into this, as your solution is simply bad for all except about 3 cards.

anyways, i think i already gave you a quite detailed answer on how you can do it, and i've explained how the 128-bit floating point buffers will help with this. the whole thing is called image post-processing, and applying the exposure function 1-exp(-k*color) is a simple post-process on dx9-compliant hw.

oh, and btw, i prefer to use the high color range to simply show the whole range so that everything is well visible, instead of changing to too dark and too bright.. more like our eyes than our cam (i know our eyes do change, and this little change from a cave to the outside will be cool.. but for that you can simply fade out a white quad when you go out, and a black one when you go in.. seen that several times already ;))

any more questions? i'm willing to help, even if it doesn't sound like that sometimes ;) and i'm interested in the topic as well..

Nakoruru
07-24-2002, 10:05 AM
I'm actually going to work on some code to try out some of this. If it works out well I'll probably post it here.

I think most of what I said was true too, it just sounds like we are contradicting each other when we are not. It did seem like you said windows will have a 128-bit color mode, but I'm not even going to bother reading back just to be right/wrong/whatever.

The cave is a good example of dynamic range, but it's sort of a special situation that is good for hacks like just adjusting the brightness; it does not capture a harder situation, like, say, looking into a forest from a clearing.

In the clearing you would want to lower your exposure so that you can make out the details, but when you look into the forest anything in it will be heavily shadowed and details would be hard to make out. Go into the forest and you can make out everything, but when you look back to the clearing it will be overly bright and washed out.

If you use the full dynamic range, then you will not have this effect, but this is how photography and our eyes work, so it would be realistic to adjust the exposure dynamically.

You keep suggesting that exposure is easy on DX9 hardware, and I do not dispute that, but you do not explain how you would determine what to set your exposure to, which is what I am exploring.

I understand that adjusting the exposure is easy on DX9. If you want to know what I want, it would be good ways to guess what a good exposure value should be.

My thought was getting the average brightness of the screen. I think you are saying that's a bad solution. Do you say this because it would be too slow, or because you think I should not dynamically adjust the exposure?

Someone else suggested that I assign some sort of reflectance value to each object in my scene, and somehow guess at how bright the scene will be based on whether the object is visible and how big it will be.

Another idea would be to put bounding boxes around different areas and then have artists adjust the exposure in each box so that it looks right. Just make it like some games which use bounding boxes to control the camera; just add exposure to the list of parameters.

This dynamic exposure problem is not a problem for RenderMan because you can just script everything and get it perfect. But since we are doing dynamic interactive graphics with OpenGL 2/DX9, we either have to make all our scene lighting the same intensity over the entire scene graph, which means the forest looks just as dark whether you are in it or looking at it from the meadow (which is not the way cameras or people see things), or we have to figure out a way to adjust the exposure on the fly.

If you do not see the need, then I guess I cannot convince you.

Anyone know how the auto-exposure on a camcorder works?

davepermen
07-24-2002, 10:28 AM
so. now we have the point where you are wrong. and it's in fact the topic :)
you are RIGHT in your idea of how to get a value that helps to adjust the exposure factor. i never stated that's a bad idea. i never stated it's a bad idea to sum up what you see, because that's how the eye does it.
you're also not right that i did not provide you a solution to the problem. i gave you a solution, you just don't believe it's one. but it works; i've tested this, nutty tested this. we used it for glowing / blurring of the screen. all you need to know is: by recursively scaling down the texture you can get the average color, and this average color is a very good starting point for your exposure factor. for the rest, i suggest you take a piece of paper and a pen, draw some curves of exposure functions, and try to find a way to adjust their constant factor. how to do that i don't know; we can discuss it if you want (i bet you want to ;)).
the suggestion of the other, summing up the visible objects, is good as well. in fact, what he suggests is the same as what i suggested, but done offline, in your scene, instead of from your screen. why? because a) you have floating point there and b) well, that's about it.. :)

his suggestion has the problem that it is not a perfect solution but more of a stochastic approach. a) he suggests only using the bright objects, the specular highlights in fact; b) you normally don't know exactly what is on your screen, and how big the speculars are, so how bright they end up..

the problem with my approach is, once again, old hardware. because the color data you render gets clamped, you'll **** your result up if you have too many values clamped down to 1. but you could a) use this as a feature, not as a bug, or b) search for another way..

how to use it as a feature:
say your sum-up is more than .5; that means about half of your pixels could be 1 (possibly!), i.e. about half of your pixels could be clamped. so darken the image a bit. next time you do the same, etc.. that way your eye needs time to adjust to big changes.. meaning: if the sum is, say, bigger than .5, make it darker; if it's lower than .1, make it brighter. that is a statistical approach which, i bet, fits the eye very well, because the eye doesn't know the values either, just: oh ****, there are a lot of cells at max brightness => close the lens. oh ****, most of the cells are not used at all => open the lens. without knowing by how much..
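That statistical rule can be sketched as a simple threshold nudge. The thresholds and step size below are illustrative assumptions; the point is that no exact correction is computed, only a direction:

```python
# Statistical iris: nudge the exposure by a fixed factor when the frame
# average crosses a threshold, rather than computing an exact correction.
def adjust_exposure(exposure, frame_average, high=0.5, low=0.1, step=1.1):
    if frame_average > high:   # many pixels near clamping: close the lens
        return exposure / step
    if frame_average < low:    # scene mostly dark: open the lens
        return exposure * step
    return exposure            # comfortable range: leave it alone
```

Because each frame only moves the exposure one small step, the adaptation is gradual by construction, which is the delayed-iris behavior the thread is after.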

that's more text than i wanted to write :)

Nakoruru
07-24-2002, 11:26 AM
I must not have made myself clear (how does that not surprise me?). I totally believe your solution about scaling down the texture. That is why I kept suggesting automipmapping as a solution.
Automipmapping would create that 1x1 texture just by loading the framebuffer as a texture (without the recursion). The only thing I pointed out was that you did not mention that you need a half-texel offset for the box filter to work.

So, what the hell were we ever arguing for? ^_^

Now, about clamping. One goal I have is to prevent clamping on limited range hardware. On hardware with limited precision and range, I was thinking of using the exposure function as a prescale on the lights. Dim all the lights so that they do not clamp too much. On high precision / high range hardware I would just use the light intensity values raw and scale down/up at the end.

Basic example: if I have two lights with intensities 10000 and 5000, I could scale the 10000 so that it's .667 and the 5000 so that it's .333, so where they both add up on a white surface it ends up being 1.

This naive approach will make the scene too dark, so I'll use the average brightness of a previously rendered frame to find out about how bright the scene actually is (any one of the methods mentioned should work to some degree to provide this number), and use that to scale it. Because I am using the average and not the max, they can still add up past 1. But I see this as a feature (like you said).
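The two steps described above (normalize the lights so the worst case sums to 1, then boost based on a previous frame's average) might be sketched like this. The function name and the boost rule are illustrative assumptions:

```python
# Pre-scale light intensities for limited-range hardware: divide by the
# total so a fully lit white surface sums to exactly 1, then boost by
# target/frame_average (measured from a previous frame) so a typical
# frame is not too dark. Overshoot past 1 is accepted as a feature.
def prescale_lights(intensities, frame_average, target=0.5):
    total = sum(intensities)
    base = [i / total for i in intensities]   # 10000, 5000 -> .667, .333
    if frame_average <= 0:
        return base
    boost = target / frame_average
    return [b * boost for b in base]
```

With intensities [10000, 5000] and a measured average already at the target, this reproduces the .667/.333 split from the example; a darker measured frame scales both lights up proportionally.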

So, because of this, range is no longer a problem, just precision.

I saw Nutty's glare demo, its really cool, and one of the reasons I am interested in this (besides the double-bright gamma trick in Quake 3, and a hint that Doom 3 handles this problem in some fashion).

I am going to test rendering the scene to a 128x128 texture, then downloading the 1x1 auto-generated mipmap. I have never used auto-mipmapping before, so there is probably some big gotcha, like not being able to access the generated mipmaps. If not, I would need a fall-back anyway, so I'll use your solution. Hopefully it will be fast enough not to cause a hitch every half second. 64x64 may even be large enough to give a good idea of how bright the scene is.

I should be able to bang something out this weekend.

davepermen
07-24-2002, 11:49 AM
automipmapping always does point sampling on my gpu with the newest drivers, even though nvidia states that's not true..

and the .5 pixel shift is a) logical and b) an exercise for you, i said that before http://www.opengl.org/discussion_boards/ubb/wink.gif

>>My thought was getting the average brightness of the screen. I think you are saying thats a bad solution.<<
hm.. that statement told me you did not understand that i supported your way http://www.opengl.org/discussion_boards/ubb/smile.gif

>>Anyone know how the auto-exposure on a camcorder works?<<
i guess it works the way i said: if there are too many full-white pixels it closes, if there are too many full-black pixels it opens up. that's quite easy, just do some statistics on this..

i'm not sure how your way works exactly (the measuring, and the reacting to the measurement), but it's best if you try it yourself.

hope to see something fancy, but somehow i doubt it (colors on 32bit rgba systems are simply not cool http://www.opengl.org/discussion_boards/ubb/smile.gif q3 is an exception there, yes..)

Nakoruru
07-24-2002, 01:54 PM
Okay, I completely agreed with your method of finding an average, but just because you provide a solution to someone's problem doesn't mean you agree that the problem they are solving needs to be solved ^_^ Just a misunderstanding.

I will code my test program in such a way that I can easily try many different ways to calculate the exposure.

Counting the number of 'pegged' and 'floored' CCD elements sounds intuitively like the way a piece of hardware would work. I'll try that, along with averaging and finding the max and min. I'll throw in normalizing all the light values, and a manual exposure control to round things out.

Can't wait to port it all to DX9 class hardware, where it should look a bit better.

Nakoruru
07-24-2002, 05:09 PM
I have been using the terms 'virtual iris' and 'exposure' interchangeably, but I now think that they may be different things.

Exposure represents the logarithmic response of film, CCDs, and the cells in the eye to light. That means that even if the sun is 1000 times brighter than a candle, we may only perceive it as being 3 or 5 times brighter. It's exactly like how a sound has to be about 10 times more intense before we will say that it is twice as loud.

The virtual iris is different, it represents how much light is let into the camera/eye and it scales brightness linearly.

So, you scale the intensity of the light linearly to represent the iris, then logarithmically to represent the response to light, and then you clamp it to represent the limits of whatever system you are simulating.

To put it in camera terms, the virtual iris is the camera's aperture (f-stop), and the exposure is the film speed (the faster the film, the more dynamic range it has).

To keep things simple, I guess the exposure should be constant like it is in real life (although I bet your eye's exposure changes as you get older), and the only thing that should be variable is the virtual iris.

davepermen
07-24-2002, 09:10 PM
well, i always had the problem that i thought about the exposure adjustment, and for calculating the real exposure you need high-precision, big-range colors; there's no real way around it..

Nakoruru
07-25-2002, 04:39 AM
There is an nVidia demo showing high dynamic range and exposure, so it's not strictly required. But it's probably not something you can do in anything other than a demo ^_^

I think the best I will be able to do without a floating point frame buffer is normalize the lights so that they are unlikely to clamp. My light 'intensity' will not actually be intensity, but perceived brightness, which means I would end up scaling the light intensity by the exposure per light instead of per pixel. The linear interpolation in the framebuffer between 'bright' and 'twice as bright' will be wrong because it should be logarithmic. But I think I will just have to implement it to see if it looks bad.

davepermen
07-25-2002, 04:43 AM
you can't do it without high-range colors. either you have them, or you hack them. the next hw does have them. the nvidia demo does hack them. but hacking something on hw that doesn't normally support it is only useful for demos (though they're pretty cool)