HL2's High Dynamic Range Rendering



Jan
09-13-2003, 03:14 AM
Hi

Did you see Half-Life 2's video about their HDR rendering?

Here it is: http://www.fileshack.com/file.x?fid=3619

I wonder: what is so special about it? I can do that effect pretty well on a GeForce 4, and I am sure it could even be achieved on a GeForce 1 or 2. However, I need an extra rendering pass for it.

I read in some article that HL2 uses a really cool, absolutely new technique (HDR rendering) and that it is only possible on the latest hardware (DX9).
So I wonder if it might be possible - on the latest hardware - to do this effect without an extra pass? What is so special about what HL2 does? Somewhere I also read that float textures/buffers would be necessary for this. Is that true?

Note: The video shows the "glow effect" only at the beginning. The rest of the video shows merely bumpmapped specular lighting (and/or bumpmapped environment mapping).

Jan.

Nutty
09-13-2003, 04:32 AM
The nice bit is that when you're inside, the outside glows, but when you go outside, the overall light intensity is a lot higher and the sky doesn't glow.

I'm not sure how this could be accomplished, other than via interaction from the CPU, basically telling the blur control to activate because we're in a dark environment.

To answer your question, I saw nothing that could not be done on a GF4. DX8 hardware supports 16-bit HDR textures. Not float textures by any means, but significantly increased resolution over standard 8-bit textures.

Given Halo did the blur thing over trees years ago on DX8 tech, I really don't know what HL2 requires so much PS2.0 for.

Water reflections/refractions seem to increase in quality from DX8.0, to DX8.1, to DX9.0, but there's no hugely significant leap in difference.

The shadows look particularly poor. The wooden boards on the roof look like they're floating, and the character ones looked quite jaggy in areas.

jwatte
09-13-2003, 06:51 AM
Nutty, you can easily accomplish that difference between indoor and outdoor by switching tone maps (pre-blur), and only blurring parts that are > 1.0 in intensity (post-tone-map).

What would be REALLY cool would be getting pixel value statistics out of the renderer and feeding them back into the tone map selection process (a.k.a. "auto exposure"), but because that can't be done right now, you can have artists tag areas with preferred exposure settings and interpolate between them, or something.
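
Roughly, as a sketch, the bright-pass step could look like this on a CPU-side float buffer (purely illustrative; the function name is made up, and a real version would run on the GPU and blur the result before adding it back over the tone-mapped image):

    #include <stddef.h>

    /* Keep only the energy above 1.0; this is the part that gets blurred
       and composited back over the tone-mapped image as the "glow". */
    void bright_pass(const float *hdr, float *bright, size_t count)
    {
        for (size_t i = 0; i < count; ++i)
            bright[i] = (hdr[i] > 1.0f) ? (hdr[i] - 1.0f) : 0.0f;
    }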

davepermen
09-13-2003, 07:50 AM
the really cool thing is how easily and generically you can solve it, without having to mess with anything complicated, which you would have to do to get it working on a gf4. on an r300 you actually have this by default; only the exposure and glow you have to do yourself. the hdr is implicit.

GPSnoopy
09-13-2003, 07:54 AM
I agree with Nutty.
I think they went all Pixel Shader 2.0 because it's "cool" to use these, but most of the time it's completely uncalled for.

From the screenshots I've seen I can't tell the difference either. It just seems that the DX9 screenshot is a bit brighter than the others.
We'll have to see the final game to judge whether or not using every bit of DX9 was really required.

Nutty
09-13-2003, 08:06 AM
It seems from Derek Perez's comments that this is what nvidia feels also. He basically says that parts that don't need to be DX9 will be reduced to improve performance on the GF-FX range.

Yeah the pixel value statistics would be nice.. I suppose you could render the scene to a small pbuffer, do a read pixels (Mmmmn.. async read pixels), and actually accumulate the average intensity of the pixel data. On, say, a 32x32 window this wouldn't be too slow.

That's actually tempting to just try myself! :)
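
Something like this, as a rough sketch (it assumes the scene was already rendered at 32x32 into the current read buffer; the function name and the luminance weights are just illustrative):

    #include <GL/gl.h>

    /* Read back a 32x32 rendering and return its average intensity in 0..1.
       Sketch only -- a real version would want an async readback. */
    float average_intensity_32x32(void)
    {
        unsigned char px[32 * 32 * 3];
        float sum = 0.0f;
        int i;

        glReadPixels(0, 0, 32, 32, GL_RGB, GL_UNSIGNED_BYTE, px);
        for (i = 0; i < 32 * 32; ++i)
            sum += 0.299f * px[i*3+0] + 0.587f * px[i*3+1] + 0.114f * px[i*3+2];

        return sum / (32.0f * 32.0f * 255.0f);
    }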

knackered
09-13-2003, 08:49 AM
Why the hell aren't fileshack using bittorrent? Then a big queue would just mean a faster download.
Oh, because they make money out of selling the 'premium' download service, of course.

roffe
09-13-2003, 09:02 AM
Originally posted by Nutty:
I suppose you could render the scene to a small pbuffer, and do a read pixels (Mmmmn.. async read pixels), and actually accumulate the average intensity of the pixel data.

You can accumulate and average without doing a readpixels call by using the Summed Area Table technique presented at GDC03 by NVIDIA.
http://www.opengl.org/developers/code/gdc2003/GDC03_SummedAreaTables.ppt

The trick is to render to an active texture, by slicing your texture into rows/columns and rendering line primitives. However, since each primitive call depends on the previous one, you basically have to do a finish after each. Whether this turns out to be faster than the entire readpixels call + CPU accumulation is difficult to say. It would be interesting to see a chart or something showing where the cutoff might be, taking GPU/CPU/texture size/etc into account.


SirKnight
09-13-2003, 09:19 AM
To me it looked like everything was TOO shiny, like it was wet or something. Maybe that is the effect they wanted, the everything-is-wet-because-it-just-rained look on that part of the game, I dunno. It did look cool but it also looked a little weird to me.

-SirKnight

SirKnight
09-13-2003, 09:22 AM
Nutty, you can easily accomplish that difference between indoor and outdoor by switching tone maps (pre-blur), and only blurring parts that are > 1.0 in intensity (post-tone-map).


Know of any places off hand that talk about this in more detail? I've been planning to do this kind of effect for a while, just never gotten around to it. Thanks.

-SirKnight

Nutty
09-13-2003, 09:34 AM
What's wrong with talking about it here? :)

Pop N Fresh
09-13-2003, 10:11 AM
High dynamic range rendering is the first obvious use for floating-point fragment shading. You can now do all your lighting in a proper linear space instead of the implicit gamma space currently used. Since your basic operators -- add, mult, etc.. -- don't give proper results when used in gamma space, this gives you an overall improvement in visual quality.

It just requires an extra post-process to map your floating-point results to your monitor.
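
For example, here is a tiny illustration (plain C, assuming a display gamma of 2.2; the numbers are only for illustration) of why adding lights in gamma space goes wrong:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* two equal lights, each at half intensity, added in linear space */
        float linear_sum = 0.5f + 0.5f;            /* = 1.0, the right answer */

        /* the same two lights stored as gamma-space values and added naively */
        float g = powf(0.5f, 1.0f / 2.2f);         /* ~0.73 in gamma space    */
        float naive = powf(g + g, 2.2f);           /* ~2.3 in linear terms    */

        printf("linear add: %.2f, naive gamma-space add: %.2f\n",
               linear_sum, naive);
        return 0;
    }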

SirKnight
09-13-2003, 11:26 AM
Originally posted by Nutty:
What's wrong with talking about it here? :)

Well, nothing, but if there were somewhere that already talked a lot about it, I'd like to know and read that. :)

-SirKnight

Jan
09-13-2003, 11:27 AM
Hm, when I did some stuff with specular lighting, etc., I would have liked to have more precise textures. But then they were to be used more in a mathematical way.

Do float textures and float buffers (for color values, not something else) really have so much impact? I really don't see why 8 bits per channel shouldn't be enough.
Or are there some other tricks which can then be applied?
I would be grateful if someone could give me a bit more background information about this.

About the wet look: I think that everything looks really cool and realistic, EXCEPT for the shingles (? or tiles?). The stuff the monster is standing on.
Those look really unrealistic. Usually that stuff is not very shiny. And even if it was wet, it should look different (although I don't know how, I have to wait for the next rain ;) )

Bye,
Jan.

zeckensack
09-13-2003, 11:48 AM
Originally posted by Pop N Fresh:
High-dynamic range rendering is first obvious use for floating point fragment shading. You can now do all your lighting in a proper linear space instead of the implicit gamma space currently used. Since your basic operators -- add, mult, etc.. -- don't give proper results when used in gamma space this gives you an overall improvement in visual quality.

It just requires an extra post-process to map your floating-point results to your monitor.
Now what are you talking about?
The post-processing you refer to is already commonplace, it's the gamma LUT in the RAMDAC and has been programmable for ages. All you need is an appropriate gamma ramp to make linear operations in pre-gamma (ie shading) produce linear luminance changes.
Proper monitor calibration is the keyword here, it has nothing to do with your shaders, nor even your rendering system.

SirKnight
09-13-2003, 12:06 PM
About the wet look: I think that everything looks really cool and realistic, EXCEPT for the shingles (? or tiles?). The stuff the monster is standing on.
Those look really unrealistic. Usually that stuff is not very shiny. And even if it was wet, it should look different (although I don't know how, I have to wait for the next rain )


Yeah the roof that the monster was standing on is the part I thought just didn't look right. It was WAY too shiny, that's why to me it sort of looked wet, not exactly but kind of. :) The other shiny things looked ok though. The part where the player was inside that wooden building thing was, to me, the best part of the whole demo.

-SirKnight

SirKnight
09-13-2003, 12:10 PM
I just watched the demo again and the monster looks too shiny to me too. :) It has that kind of wet look also. City 17 must have just gotten some rain or something. :D

-SirKnight

Pop N Fresh
09-13-2003, 03:00 PM
Originally posted by zeckensack:
Now what are you talking about?
The post-processing you refer to is already commonplace, it's the gamma LUT in the RAMDAC and has been programmable for ages. All you need is an appropriate gamma ramp to make linear operations in pre-gamma (ie shading) produce linear luminance changes.
Proper monitor calibration is the keyword here, it has nothing to do with your shaders, nor even your rendering system.

Ideally you want a linear relation in your lighting system. A light with a value of 2 should give off twice the energy of a light with a value of 1. When everything must be squeezed and sampled into an integer range of [0..255] this isn't possible.

Let's say you made the strength of a nightlight 1 and the noontime sun 255 to cover your entire range. In this system 255 nightlights are going to be as bright as the noontime sun, which is obviously incorrect. Once you move to floating point you can correctly model the relative brightness of something like a nightlight and the sun.

This isn't to say that there aren't various techniques to get around this problem. But using a full High-Dynamic Range rendering system simply makes the problem go away.

The post-process I was talking about was assuming you would want to render to a floating point target so you could do exposure control, blends, bloom effects, etc. without a loss of precision. Since current hardware cannot display a floating point buffer directly this would need to be mapped to something it can handle for display.


roffe
09-13-2003, 05:52 PM
Originally posted by Pop N Fresh:
But using a full High-Dynamic Range rendering system simply makes the problem go away.

Almost; now everyone just needs to go buy one of the HDR displays seen at SIGGRAPH for a complete system! :) http://www.sunnybrooktech.com/tech/index.html

vember
09-14-2003, 12:07 AM
Originally posted by Pop N Fresh:
Ideally you want a linear relation in your lighting system. A light with a value of 2 should give off twice the energy as a light with a value of 1. When everything must be squeezed and sampled into an integer range of [0..255] this isn't possible.


exactly. without backbuffers supporting more than 8 bits of precision (or 10, as in 10-10-10-2), the gamma LUT is not really suited for anything other than converting from regular Windows gamma (2.2) to whatever your monitor requires.

The average Joe probably thinks that gamma correction is about brightness/contrast and not about linearity, as this is often the way it is exposed in games.

zeckensack
09-14-2003, 03:23 AM
Pop N Fresh,
okay, that was a misunderstanding then. I thought your point was correcting some non-linearity of lighting.
HDRL should have an adaptive range but is still linear. That was my point.

soconne
09-14-2003, 11:23 AM
MAN!!, I wish I knew as much as you guys :-)

knackered
09-14-2003, 12:54 PM
What? You've never heard of bittorrent?

AdrianD
09-14-2003, 02:56 PM
if you don't understand why HDR rendering is needed to achieve good lighting results, read this first:
http://freespace.virgin.net/hugo.elias/graphics/x_posure.htm

and if you want more theoretical input about light:
http://freespace.virgin.net/hugo.elias/graphics/x_physic.htm

OldMan
09-16-2003, 03:15 AM
Really nice article. It got the idea across in a simple way. But I still don't understand how it can be implemented with all this floating-point stuff. Is it just a matter of more precision between the lowest and highest brightness values? Or would the idea of an exponential function be used too?

If not.. can anyone give a link to a paper or an article that explains how it could be implemented in real-time graphics (not raytracing stuff; I can already figure out how that would be done)?

davepermen
09-16-2003, 03:43 AM
with fixedpoint you normally have a range from 0..1 or -1..1, or on some hardware up to -8..8.. still, it's a very small range.

floats can store ranges of several billionzillions, and still with very high detail. we all know that this is why we use floats in general.

so with floats you can store any value, and this is exactly what we need for hdr lighting.. you can store sunshine at 1000000.f and the small candlelight at 0.01f, do the same math in both cases, and get the correct result in the end.

that's why floats are so useful. sure, 32bit fixedpoints would rock, too, and if i'd be able to use 128bit fixedpoints i would not use floats anymore :D but they are a handy middle-tool. they can express a huge range (a "high dynamic range" :D), and have quite a lot of detail in the small numbers..


how could it be done? with ARB_fp.. by just doing it. store the light values in some constants there, and do your per-pixel lighting. HDR is actually rather AUTOMATIC if you use ARB_fp. all you have to do is store light values in a high dynamic range instead of the normal colours from 0..1, and you're done.

what you need to do in the end, to "compress" it onto the screen, is to use some sort of exposure function.
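
as a sketch, the exposure function from the x_posure article linked above is just this (plain C; the function name is made up and the exposure constant is whatever you tune it to):

    #include <math.h>

    /* map a light value in 0..infinity down into 0..1 */
    float expose(float light, float exposure)
    {
        return 1.0f - expf(-light * exposure);
    }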

OldMan
09-16-2003, 04:07 AM
Ok, thanks. So I was correct in my idea that it is not really a float-specific thing. Just a matter of "my word is bigger than yours"...

But that would only give good results if you use the shaders for ALL the stuff in the rendering. I don't think hardware (except for really high-end stuff) can handle this (I may be wrong).

Anyway it could be useful to have a feature that enables this kind of stuff automatically.

davepermen
09-16-2003, 05:37 AM
Originally posted by OldMan:
Ok, thanks So i was correct in my idea that is not a float aspect. Just a matter of my word is bigger than yours ....

But that would have good results only if using the shaders for ALL stuff in the rendering. I don't think hardware (except for really high end stuff) can handle this (I may be wrong ).

Anyway it could be useful to have a feature that enables this kind of stuff automatically.

if you have one of today's high end cards, like any radeon9500 or better, you can do this, and it is supported rather automatically. as i said, if you use ARB_fp, you get floating point in the pixel shaders, and once you have that, you can directly use lighting values in a high dynamic range instead of the 0..1 range which is used by default. entirely doable, and entirely FAST doable, on today's hw.

all gfFX support that, too. they just have some problems with floating point texture support, and are quite slow at doing floating point ARB_fragment_programs compared to the radeons.

DrTypo
09-16-2003, 06:46 AM
Nice discussion.
But in the end, isn't all this HDR stuff just about using a logarithm instead of using the brightness value directly?

--
DrTypo

Nutty
09-16-2003, 07:27 AM
so with floats you can store any value, and this is exactly what we need for hdr lighting.. you can store sunshine at 1000000.f and the small candlelight at 0.01f, and do the same math in both cases and you get the correct result in the end.

I still don't get it; there must be more to it. If you basically do diffuse on a surface, you end up just having huge color values, which then just equate to 1,1,1 when they're clamped to the framebuffer. Is that right?

If it's linear, I don't understand why 8 bits can't represent it. Just a normalization of the HDR values. Sure, it's a lot less resolution of lighting values, but if you only have 1 light in the scene, what difference does it make if it's 10000,10000,10000 or 255,255,255? The end result is still clamped to 8 bits per channel anyway.

harsman
09-16-2003, 07:40 AM
Clamping is a very bad way to approximate how the eye and/or a camera works. It looks crap to be honest. In the real world the sun can easily be 1 million times brighter than the light in a small room on a cloudy day. To represent that kind of dynamic range you really need floating point.

Nutty
09-16-2003, 08:14 AM
I don't think you read what I wrote at all.

I know floating point offers greater dynamic range. But regardless of what format you use, it will still get clamped to 8 bits per channel in the framebuffer..

Or does it just make internal precision a lot better when accumulating light intensities?

zeckensack
09-16-2003, 08:31 AM
Nutty,
the idea is to scale the accumulated lights back to some range that fits, aka exposure. This scaling should be dynamic, based on the average intensity of a scene, and is akin to the way real cameras work, and also how your eyes adjust to environments.

The portions of a scene that still exceed 1 after downscaling are then essentially 'blindingly bright', like looking directly at the sun in an otherwise pretty dark ambience (overcast weather or something like that), or highlights on shiny surfaces where the actual light source is not visible.

This stuff can then be decorated with glow effects to simulate the loss of vision around blinding light sources.
You can also make the top of the range non-linear to emulate saturation and make blind spots stay 'on the eyes' for a little while.

vember
09-16-2003, 08:31 AM
yes, it will still be clamped in the framebuffer if you just write a value to it.

The real use for HDR is while doing the calculation. Let's consider motion blur, as it is easy to explain.

Example:
You have a pixel with the value 8 that is motion-blurred along the x-axis over 4 pixels.

the correct output of that pixel would be [2 2 2 2] since it still has the same energy. But if this pixel has been clamped (to 0..1) by the buffer before the blurring, the result would be [0.25 0.25 0.25 0.25], just leaving a greyish mess.

Now, this is not an especially good explanation, but it might give you an idea of what the problem is.
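
Spelling out the arithmetic of that example (nothing hardware-specific, just the numbers):

    #include <stdio.h>

    int main(void)
    {
        float hdr = 8.0f;                           /* pixel value before the blur      */
        float clamped = hdr > 1.0f ? 1.0f : hdr;    /* what an 8-bit buffer keeps of it */

        /* blur over 4 pixels: the energy is spread out, not thrown away */
        printf("HDR blur:     4 x %.2f\n", hdr / 4.0f);     /* 4 x 2.00 */
        printf("clamped blur: 4 x %.2f\n", clamped / 4.0f); /* 4 x 0.25 */
        return 0;
    }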

davepermen
09-16-2003, 09:58 AM
nutty, read about exposure. the link is given above. you don't need to CLAMP to 0..1 in the end. you just need to range-compress in some way. exposure is much better suited. and you can add glow to the overbright parts. and if you do motionblur, it looks much nicer if you have brighter-than-white values (the same goes for depth of field and all other blurs), because of the accumulation then.

you still don't get it? get a radeon and watch the demos :D

Nutty
09-16-2003, 10:53 AM
Nutty,
the idea is to scale the accumulated lights back to some range that fits, aka exposure. This scaling should be dynamic, based on the average intensity of a scene, and is akin to the way real cameras work, and also how your eyes adjust to environments.

Right, that's what I wanted to know. Thanks.


you don't need to CLAMP to 0..1 in the end.

Dave, you can't get out of the clamping, that's what I'm saying. Until we have floating point framebuffers and floating point RAMDACs, it will _ALWAYS_ get clamped. And that ain't gonna change for a good few years.

So basically when you said,


how it could be done? with ARB_fp.. by just doing it. store in some constants there the light values, and do your perpixellighting. HDR is actually rather AUTOMATIC if you use ARB_fp. all you have to do is store lightvalues in a high dynamic range instead of normally colours from 0..1 and done.

It isn't "just done"; there is more to do, such as scaling back to a given exposure, which is not automatic.

hmmm.. need to try it now.. but I have no dx9 card.. bah.. :)

davepermen
09-16-2003, 07:08 PM
nutty, there are more ways than clamping to compress -inf .. +inf to 0..1. one would be repeat :D would be rather stupid, but at least a fun effect :D after exposure, you don't need any clamping. after exposure, your values are in the range 0..1 if they were between 0..+inf before (which they normally are in real lighting environments :D)

exposure IS a way without clamping.

and there is NO need for an HDR screen. we can watch movies on tv and they look real to us. and THIS is what our current target in gamedev is: "cinematic gaming, cine-fx :D". now to do this, we have to simulate first a correct lighting environment (impossible, but hdr is a BIG step in the right direction), and second a correct recording camera. again impossible, but with exposure, glow, depth of field, and some other effects (dirty lens, water on the lens, etc), we get rather near there, too.

this camera IS an important part, and a camera is by default NOT hdr-capable. so that is no problem.

OldMan
09-17-2003, 02:00 AM
But remember that we do NOT see the world through a camera. We see it with our own eyes (at least I do). Lens flares etc. are NOT realistic effects in MOST situations, unless you are developing a game where everyone uses glasses :) We need to aim for eyes.. not cameras.

With the current speed of development we will need floating point framebuffer and even more advanced display devices in just a very few years.

harsman
09-17-2003, 02:11 AM
So I was finally able to download the HL2 HDR video from the straining servers. Is it just me, or does it look kind of crap? They have this really low-contrast, low-resolution, non-HDR static lighting, no self-shadowing on the antlion, and then extremely aliased, huge-ass specular highlights from a clouded sky? Is it supposed to look like stuff is wet or what? That would need reflections and much darker materials. Honestly, that looked really bad.

Mazy
09-17-2003, 02:25 AM
Now we're getting to the question "why so much specular", and the fact that most people think that everything looks like plastic..

I do agree that much of it looks like plastic, or wet, but that's not because of the specular, but because of the lack of other global lighting.. in fact, on a sunny day most stuff has much more specular on it than we would guess :) of course you need the right angle between you, the sun and the object, but not many of my friends even believed that nearly everything from tree leaves to asphalt to different mortar actually has quite a powerful specular on it, not to mention steel and everything that's painted. The lack of shadow I can agree on, but I don't think the specular was that overused.

davepermen
09-17-2003, 02:57 AM
Originally posted by OldMan:
But remeber that we do NOT see the world in a camera. We see it with our own eyes (ate least I do like that). Lens flares and etc.. are NOT realistic effects in MOST situations, unless you are developing a game where everyone uses glasses http://www.opengl.org/discussion_boards/ubb/smile.gif We need to aim for eyes.. not cameras.

With the current speed of development we will need floating point framebuffer and even more advanced display devices in just a very few years.

depends on the game. for hl2, yes, but for, for example, zelda, or other 3rd-person games, no. there, modelling a camera is adequate, and imho better. so it really looks like it's a movie, as those get captured by cameras, too..

best is metroid prime imho. it is the view through the eye, but she has glass in front of it. funny raindrops with refraction, dirt, cold gas, etc. all affecting the glass for cool effects. beautifully made.

well.. as i said above, it depends on the game. i like camera-recorded games. my eye will add the "i see it through the eye" effects anyways.

harsman
09-17-2003, 03:36 AM
Mazy, I actually agree with you. It wasn't so much the specularity but the lack of reflections (they occur on all highly specular surfaces; specular is just a hack to replace HDR reflections anyway) and the inconsistency of the lighting. It looked like HDR specular tacked on, not like high quality realistic lighting.

Mazy
09-17-2003, 03:50 AM
Could be that the light (and then not coherent with the look of the sky) was much brighter in the HDR than the rest.. therefore only the light has the intensity to bounce off again, and we only see the light at the reflection angle.. but you're right, with that diffuse a sky there should be less intensity difference between the sun and the rest of the sky, and thus if you have that much specular you should be able to see more of the reflected sky.

But I still think it's pretty ok.. And the faceted sphere looked like it suffered from the same problem that I have, that floating-point textures don't have bilinear filtering, but on bumpmapped surfaces it was less noticeable, so I think I can continue without that for now :)

OldMan
09-17-2003, 04:30 AM
It's just a matter of adjustments. But anyway it needs a few corrections; the specular is making everything look as if there were a layer of glass over all the materials.


In fact I have always thought that specular is overapplied in modern CG examples and products. I would rather have less specular than reality than more specular than reality.

Nutty
09-17-2003, 10:14 AM
no self shadowing on the ant lion,

Hmm, didn't notice that. That's quite bad then. Are the shadows just character projections onto the world? I thought they were full scene depth map shadows.

zeckensack
09-17-2003, 10:42 AM
Originally posted by OldMan:
But remeber that we do NOT see the world in a camera. We see it with our own eyes (ate least I do like that). Lens flares and etc.. are NOT realistic effects in MOST situations, unless you are developing a game where everyone uses glasses http://www.opengl.org/discussion_boards/ubb/smile.gif We need to aim for eyes.. not cameras.

With the current speed of development we will need floating point framebuffer and even more advanced display devices in just a very few years.

I second that :)
I always thought lens flares were a complete waste of time.

Jan
09-17-2003, 11:05 AM
Well, the _real_ lens-flare effect is usually quite annoying, I think.
However, if you change the code only a bit, you can change it into a bright blinding glow. This usually looks good, it is natural (because looking into the sun is usually unpleasant), and in some games like CS it would add a strategic component, because having the sun at your back can be a disadvantage for your opponent.

Jan.

Nutty
09-17-2003, 11:37 AM
There was a point in time when publishers were lens-flare obsessed. When they reviewed games, the number 1 priority was "What's the lens flare like?"

OldMan
09-17-2003, 12:45 PM
Maybe they all use glasses.. big, thick, ancient glasses. And maybe some of them use fisheye glasses...

davepermen
09-18-2003, 05:57 AM
you're all playing too many stupid braindead first-person shooters. i actually dislike first person quite a bit. i prefer cinematic cameras and all. zelda64 was one of the best games in that design; haven't played the new one yet (cartoony, yeah :D).

for such games, camera effects add something; they make it more cinematic, and definitely look good.

yeah, there was once an f1 game with a lensflare. you could laugh or cry about it. perfectly circular colourful rings around a sun. and that was it. same blue sky, no brightening, or anything. f1 world championship on n64 showed it better years ago.. driving in the direction of the sun, the freshly built street reflected the whole sun, the sky went white with brightness, and a huge lensflare (not a perfect-circle one btw :D) filled the screen. the result was that you were nearly blinded for a moment. it looked VERY natural and good.
and it gave a strategic point too, yes.. it isn't that easy to drive blind :D

tfpsly
09-18-2003, 07:14 AM
Originally posted by Nutty:
Hmm didn't notice that. Thats quite bad then. Are the shadows just character projections onto the world? I thought they were full scene depth map shadows.

Well in the previous videos it actually looked much worse - haven't checked if that was improved:

in the first fight through the city, where you are moving in a team with some bots, you can see that the lighting of the characters and of the buildings is not done in the same way. When going through a heavily damaged house with holes in the wall, the characters are either fully lit or fully shadowed; the buildings do not project shadows onto the characters.

Nutty
09-18-2003, 09:39 AM
Yeah I pretty much figured the world was just using static light maps, or shadow maps as the case may be.

I remember one of the videos showing the bump-mapping feature in HL2, but I saw a website showing the poor quality of dynamic lights in HL2, and they looked like per-vertex dynamic lights, so I haven't seen any use of the bump-mapping in the videos. Unless it's just used for ambient bump-mapping.

Korval
09-19-2003, 11:07 AM
Since current hardware cannot display a floating point buffer directly this would need to be mapped to something it can handle for display.

This is the question that I haven't seen the answer to yet. I understand how to make computations using HDR. But, how do you effectively turn an HDR buffer into one that can be displayed?

Do you just find the minimum and maximum intensities and use them to convert it to the [0, 1] range in a linear fashion (i.e., subtract the lowest from all colors and divide by the difference between the lowest and highest)? Or do you convert it to [0, 1] via some other means?

roffe
09-19-2003, 11:39 AM
Originally posted by Korval:
This is the question that I haven't seen the answer to yet. I understand how to make computations using HDR. But, how do you effectively turn an HDR buffer into one that can be displayed?


The answer is called tone mapping, which should be familiar to anyone who has developed their own film.



Do you just find the minimum and maximum intensities and use them to convert it to the [0, 1] range in a linear fashion (ie, subtract from all colors the lowest and divide by the difference between the lowest and highest)? Or do you convert it to [0, 1] via some other means?

You have just explained one way of doing it. There are many others. The obvious problem with a linear scale is that a light source visible in the scene would be a thousand (or whatever scale you use) times brighter than any surface. Since most people still use monitors/displays with limited 8-bit-per-channel output, the light source would be the only thing visible. I recommend this dissertation (http://www.cg.tuwien.ac.at/research/theses/matkovic/diss.html), which has a good summary of previous work.
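
For reference, the linear mapping you described is roughly this (a sketch in C; the name is made up, how you find lo/hi is up to you, and it has exactly the failure mode described above):

    #include <stddef.h>

    /* Simplest global tone map: remap [lo, hi] linearly onto [0, 1]. */
    void tonemap_linear(const float *hdr, float *ldr, size_t count,
                        float lo, float hi)
    {
        float scale = (hi > lo) ? 1.0f / (hi - lo) : 1.0f;
        for (size_t i = 0; i < count; ++i)
            ldr[i] = (hdr[i] - lo) * scale;
    }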

Korval
09-19-2003, 12:26 PM
Excellent paper (haven't finished. Will finish later). I think the "manual" setting of the aperture is the right way to go, so that if the sun is sitting on the horizon, you can darken the scene while still allowing the sun to over-brighten significantly.

Granted, rather than using one of his computations, I'll likely set the value myself given what I can determine about the scene from the CPU (is the sun visible, different sectors may have different aperture values, etc). After all, I may want to simulate the effects of going into a sunny day when one was just inside a dark cave (massive overbrightening that shrinks down over time), or the reverse (lots of darkness that slowly resolves itself).

BTW. Given that float buffers do not (yet) offer blending operations, how do you deal with multi-pass operations and HDR? Is it economical to swap float buffers and pull the old value from the buffer you just rendered to (especially considering that binding a new buffer as a render target likely induces a significant stall in the pipeline)?

BBTW. Before I start playing around with this, is there a program or a web site or something that I can use to test/set my monitor's gamma? I want to make sure everything's linear before going into HDR.


Nakoruru
09-19-2003, 12:48 PM
Anyone care to explain exactly why we need a floating point frame buffer?

I see no need for the frame buffer that is scanned out to the monitor by the RAMDAC to be anything but fixed point.

Even if we had front and back buffers that were floating point, you would still need an 'exposure shader' to convert the floating point values to something between 0 and 1 for the RAMDAC. There is no need for such a specialized programmable stage. Just use a fragment shader written as an 'exposure shader' to copy a floating point pbuffer to a display buffer.

All that is really missing is blending on float buffers.

roffe
09-19-2003, 12:56 PM
Originally posted by Korval:
BTW. Given that float buffers do not (yet) offer blending operations, how do you deal with multi-pass operations and HDR?

I use the "front/back pbuffer ping-pong" technique if you want to call it that. It works, but I can't say more than that. Might be too slow for interactive rates.

pkaler
09-19-2003, 01:38 PM
Originally posted by Korval:

BBTW. Before I start playing around with this, is there a program or a web site or something that I can use to test/set my monitor's gamma? I want to make sure everything's linear before going into HDR.

http://www.cbloom.com/3d/gamma.html

Korval
09-19-2003, 08:15 PM
I don't think the instructions at that link work correctly. When I set my desktop to the proper gamma value (for me, around 2.1), everything gets way too bright. Is it something that should only be set for full-screen rendering modes, or should it be set for the desktop too?

chrispy
09-20-2003, 12:30 AM
It seems that blending is only supported for those formats that a framebuffer can have?

Why is this?
Will next-gen hardware have floating point blending?

vember
09-20-2003, 02:02 AM
You should only use the desktop gamma thingy to correct it to the standard Windows gamma (2.2). A monitor is usually in the 2.0-2.5 range, so it should only be a small adjustment. My monitor has a gamma of 2.5, so I use a gamma correction of 2.5/2.2 = 1.14.

If you want to display a linear light space, convert it to gamma space with "out = pow(in, 1/2.2)".

Korval
09-20-2003, 08:40 AM
Why is this?

Because floating point operations cost transistors.


Will next-gen hardware have floating point blending?

Probably. But I wouldn't count on it being fast (not that rendering to a float buffer is particularly fast anyway).


My monitor has a gamma of 2.5 so use a gamma correction of 2.5/2.2=1.14 .

My gamma seems to be 2.1, which means I set my desktop gamma to 0.95 (2.1/2.2). However, for full-screen rendering, do I set the gamma to 2.1 or 0.95?

krychek
09-20-2003, 09:39 AM
Is it really necessary to find out the exact maximum and minimum color intensities of the scene? Wouldn't it be enough to calculate an estimate of the max/min using the material properties of visible objects and the lights which affect them?

vember
09-20-2003, 09:53 AM
Set it to 0.95, or let the user do it him/herself with the gfxcard drivers. Without floating point frame-buffers the precision in linear space (gamma 1.0) would be really bad so it's better to let the framebuffer remain in gamma-space and do the conversion from linear-space to gamma-space yourself (either in software or using shaders on the gpu).

I'm probably stating the obvious, but textures and colours are almost always in gamma space already, so you have to convert them to linear space if you want to do stuff with them in that space. And it's not efficient to store textures in linear space either, as 8 bits per channel in linear space doesn't cut it.

Doing calculations in linear space is far from free, because of all the conversions. But in some cases it really pays off. Antialiasing is one thing I wouldn't want to do without taking gamma into account (in software of course, hardware has to handle this by itself). The result is way smoother.
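
i.e. the round trip is roughly this (a sketch, assuming a plain 2.2 gamma rather than the exact sRGB curve; the function names are made up):

    #include <math.h>

    /* decode an authored (gamma-space) value, do your math in linear space,
       then re-encode for the gamma-space framebuffer */
    float gamma_to_linear(float c) { return powf(c, 2.2f); }
    float linear_to_gamma(float c) { return powf(c, 1.0f / 2.2f); }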

Korval
09-20-2003, 04:06 PM
Is it really necessary to find out the exact maximum, minimum color intensity of the scene? Wouldn't it be enough to calculate an estimate of max/min using the material properties of visble objects and the lights which affect them?

It doesn't need to be precise. Indeed, it's probably a good idea to allow for a little over-brightening (when appropriate, like looking into the sun).


so it's better to let the framebuffer remain in gamma-space and do the conversion from linear-space to gamma-space yourself (either in software or using shaders on the gpu).

It's bad enough to waste 2 cycles per fragment manually converting a floating-point value to a [0-1] range. Now, you're suggesting that I waste more cycles doing the gamma correction, when the hardware itself already has gamma correction (and, in the case of ATi's multisampling, it expects that you're working in linear space)? At some point, you have to cut your losses on an effect. Also, blending is not really possible in gamma-space; at least, not without a gamma-space blending function. So what's the point of having the framebuffer in gamma-space?


I'm probably stating the obvious but textures and colours are almost always in gamma space already so you have to convert them to linear space if you want to do stuff with them in that space.

How did the image texture or colors get into gamma space? If I pick a color out of a color selector (with the correct gamma value set for the monitor), the RGB values for that color should be linear. Isn't that the whole point of the gamma-correction?


vember
09-21-2003, 01:51 AM
Originally posted by Korval:
How did the image texture or colors get into gamma space? If I pick a color out of a color selector (with the correct gamma value set for the monitor), the RGB values for that color should be linear. Isn't that the whole point of the gamma-correction?


The whole point is that calculations are supposed to be done in a linear space, otherwise every step of the calculation accumulates errors. You still want colours and textures to look the same.

In 90% of all software, when you create an image to use as a texture you do it in gamma space and thus must convert it to linear space, do your calculations, and then convert the final image back. If you don't do the gamma->linear conversion you will get the same bland, greyed-out look as when you gamma-corrected your desktop with the 2.1 setting.

And considering the performance issues, yes, it's very expensive. The current generation of hardware can do it, but it cannot do it without serious penalty.

Ideally we would like a conversion to linear space at the texture samplers (before filtering) and higher precision framebuffers, so the built-in gamma conversion could be used (as the conversion 1.0->CRT and not 2.2->CRT as today) without running into stepping issues. Because of the non-linearity of gamma space, which is very close to our eyes' sensitivity, it becomes a very efficient method of compression. But without it, 8 bits per channel is going to suffer from stepping in the darker areas of the image.

Thinking of it, it could probably work fine with a 10-bit framebuffer. I haven't tried it yet though. 10-10-10-2 would probably be good enough for textures that don't use the alpha channel.



How did the image texture or colors get into gamma space? If I pick a color out of a color selector (with the correct gamma value set for the monitor), the RGB values for that color should be linear. Isn't that the whole point of the gamma-correction?


Because you authored them in that space. If you created them in linear space, you shouldn't do the conversion.
