Halo from environment map

I’ve just seen an Xbox driving game; I don’t know the title, I just saw it running in a shop.
Anyway, it did something that turned my head.
The wheel trims (alloys) of the car seemed to be using a simple environment map (cube map, I think). Nothing clever about that. But as the wheel rotated, the environment map reflection changed as normal, and when the part of the environment map representing the sun was rendered on the trim, it gave off a halo and a flare! I mean, the reflection produced a flare… it looked absolutely brilliant.
My question: what would be the most efficient way of producing this effect?
I can think of several ways, but they all seem like they’d be slow. Is there something in the Xbox hardware to make this efficient?
Another thing, it seemed to be using vertex shaders to produce a really nice heat-haze over the whole image.

The game was Wreckless. Try gamespot.com for some more screenies.

AFAIK, all the halos and glows are done by rendering the scene to a texture (or grabbing the framebuffer as a texture), subtracting a largish value to remove all but the bright stuff, then doing a few convolutions to blur everything into that star shape (not a sprite; it’s the result of non-uniform (term?) blurs), then adding it back to the framebuffer.

The great thing about this method is that it’s very general (e.g. environment maps work perfectly) and reasonably constant time. The problem is that it eats major fillrate, and requires either a second scene render to texture or a stall while you copy and convolve the framebuffer.
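For what it’s worth, here’s a minimal sketch of that pipeline in plain OpenGL 1.x. It’s only a guess at the structure, not anyone’s actual implementation: it assumes EXT_blend_subtract/EXT_blend_minmax for glBlendEquationEXT, and sceneTex, glowTex, draw_fullscreen_quad() and blur_texture() are made-up names (the quad is assumed to fill the viewport with identity matrices, depth test off).

/* Call right after rendering the scene into a size x size viewport. */
void glare_postprocess(GLuint sceneTex, GLuint glowTex, int size,
                       float threshold, float gain)
{
    /* 1. Keep a copy of the rendered frame so it can be restored later. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, size, size);

    /* 2. Bright pass: subtract a constant from the framebuffer so anything
          below the threshold clamps to black (dest = scene - threshold). */
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendEquationEXT(GL_FUNC_REVERSE_SUBTRACT_EXT);
    glBlendFunc(GL_ONE, GL_ONE);
    glColor3f(threshold, threshold, threshold);
    draw_fullscreen_quad();                    /* hypothetical helper */
    glBlendEquationEXT(GL_FUNC_ADD_EXT);
    glDisable(GL_BLEND);

    /* 3. Grab the bright image and blur it (offset taps, mip bias, ...). */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, glowTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, size, size);
    blur_texture(glowTex, size);               /* hypothetical helper */

    /* 4. Restore the original scene, then add the blurred glow on top. */
    glColor3f(1.0f, 1.0f, 1.0f);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    draw_fullscreen_quad();
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glColor3f(gain, gain, gain);               /* GL_MODULATE scales the glow */
    glBindTexture(GL_TEXTURE_2D, glowTex);
    draw_fullscreen_quad();
    glDisable(GL_BLEND);
}

The two framebuffer copies and the blur passes are exactly where the fillrate goes, which is also why the cost stays roughly constant regardless of scene complexity.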

The DOF/heat haze is covered on the NVidia site somewhere, although I don’t know if they use the exact same method. I think they put a mip level in each texture unit and alpha blend between them, so any arbitrary blurring effects (haze) can be added on top as well as simple DOF.
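I’m only guessing at the details, but with ARB_multitexture, EXT_texture_lod_bias and EXT_texture_env_combine the “blend between mip levels” idea might look roughly like this. sceneTex is assumed to be a mipmapped copy of the frame, both units get the same texture coordinates, and the per-vertex alpha carries the blurriness:

/* Unit 0: the sharp image (no LOD bias). */
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 0.0f);

/* Unit 1: the same texture, biased a few mip levels down (blurrier). */
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 3.0f);

/* Combine: result = blurry * alpha + sharp * (1 - alpha), where alpha is
   the per-vertex "blurriness" carried in the primary colour. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT,  GL_INTERPOLATE_EXT);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT,  GL_TEXTURE);          /* blurry */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT,  GL_PREVIOUS_EXT);     /* sharp  */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_EXT,  GL_PRIMARY_COLOR_EXT);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_EXT, GL_SRC_ALPHA);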

Screenies: 1, 2, 3.

Good god, those screenshots look fantastic!
It certainly didn’t appear to have any frame rate problems in the shop where I saw it.
Wow, I’m blown away!
Cheers for the info.

Xbox only needs to render 640x480 29.97 times per second, and has DDR memory as well as dual vertex processing pipes. You can get away with a lot under those circumstances :)

50fps or 60fps are the frequencies it has to aim for.

Right, thanks to this forum’s search, I’ve found a link that explains how this flare effect is done (well, it explains how to do fast depth of field, but a neat halo effect is a side effect).
http://www.gamasutra.com/features/20010209/evans_01.htm

Funny, I can’t remember depth of field in Black & White.

I started writing one from scratch last night; it was almost working when the small hours arrived. Needs some tweaking.

The only annoying part is getting a good amount of blur occurring at high speed.

Anyone got any cool tips n tricks for fast blurring on gfx hardware?

Nutty

Well, I’ve been thinking about something like enabling automatic mipmap generation (SGIS_generate_mipmap) and dynamically updating the texture, then using EXT_texture_lod_bias to choose a mipmap level. The higher the level, the more blur the image will have. It has to be tested whether that works and looks ‘blurry’!
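Something like this, perhaps (assuming both extensions are present; sceneTex and draw_fullscreen_quad() are just placeholder names):

/* Once, at setup: ask the driver to rebuild the mip chain whenever
   level 0 changes, and filter between the levels. */
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

/* Every frame: refresh the base level straight from the framebuffer... */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

/* ...then bias sampling towards the smaller mip levels when drawing the
   full-screen quad.  The bigger the bias, the blurrier the result. */
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 2.0f);
draw_fullscreen_quad();    /* hypothetical helper */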

kon

Originally posted by knackered:
50fps or 60fps are the frequencies it has to aim for.

Not true, e.g. Halo runs at 30fps. I’m pretty sure Wreckless runs at 30 (and occasionally lower) although I haven’t got a copy around to confirm.

Most console games run at about 30fps (the exact rate depends on where you live: PAL or NTSC).

And why does this work?

Because:
a TV screen is designed to show motion;
a PC screen is designed to show static images sharply.

I have to say 30Hz on a console doesn’t look anywhere near as nice as 60. You can tell the difference a mile off, especially when you’re used to it running at 60.

It’s not the difference between the screens; it’s the fact that TV images capture motion blur when filmed, and CG images on PCs and consoles don’t, so when they’re not updating fast enough they look jerky. And sharp, non-blurred 30Hz is jerky.

Trust me, I’m working on a console game in NTSC, and seeing it go from 60Hz to 30 is a big difference in visual quality; it’s noticeably jarring on the eyes.

Nutty

knackered,

The Xbox doesn’t even expose a 59.94 Hz, 240-line-per-field mode for NTSC, nor a 50 Hz, 288-line-per-field mode for PAL. Thus, all the games run at 29.97/25 Hz and fill an entire progressive frame at 480 lines.
The fill rate is actually more or less the same for the two cases, but the transform rate is doubled in the interlaced case.

30fps looks bad in my apps - maybe not as bad on a TV (probably because of the phosphor latency mentioned by Nutty).
I don’t know much about console/TV things, but why do Dreamcast games give you the option of selecting between 50Hz and 60Hz when they boot up? They run super smooth in general.
I love my dreamcast…

BTW, do any of you have an opinion on those Wreckless screenshots?
Having seen it in motion, I really think it’s worth the fill-rate hit.

The reason you get offered 50 or 60 is that most games are written for NTSC first, so all their physics are hard-coded to run at 60Hz. A lot of console games are designed to run at a fixed speed, so the code isn’t always time adjusted. Plus, not time-adjusting all moving things means less floating-point maths involved.

Anyway, the game is written to be played at 60Hz; then they do a PAL version and slow it down to 50. But they don’t rewrite all the movement values, so the whole game effectively runs about 16% slower. As a lot of PAL TVs now support 60Hz, they offer it as an option so you can play the game at the intended speed.

This causes quite a bit of anger with a lot of people. In future, all movement code will probably be time adjusted, as more and more console games are moving away from a fixed update speed towards PC-style variable frame rates.
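Just to spell out what “time adjusted” means, a tiny made-up example (the names are purely illustrative): instead of moving things by a fixed amount every frame, you scale each frame’s movement by the measured frame time, so the game plays at the same speed at 50Hz, 60Hz or anything in between.

typedef struct { float x; float speed; } Car;    /* speed in units per second */

/* Old fixed-step style, tuned for 60 updates a second: runs ~16% slow
   if the game is simply re-timed to a 50Hz PAL tick. */
void update_fixed(Car *car)
{
    car->x += car->speed / 60.0f;
}

/* Time-adjusted style: dt is the measured time since the last frame in
   seconds, so the speed is identical at any frame rate. */
void update_time_adjusted(Car *car, float dt)
{
    car->x += car->speed * dt;
}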

P.S. About this post-process scene glare: I’ve knocked up a quick demo. It’s not fancy, and it’s not particularly efficient yet, but it kinda works.

There are two controls: O and P increase and decrease the intensity at which the upper-intensity-range image (the scene image after a value has been subtracted from it) is rendered during the blur phase, and N and M increase or decrease the number of passes done in the blur loop.

It uses cube mapping, glCopyTexSubImage2D, and only one texture unit, so it should run on a fair variety of things. To make the texture updating even simpler, I’m using a 512x512 window too.
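Presumably the texture handling boils down to something like this (names made up): allocate a 512x512 texture once, then refresh it straight from the framebuffer every frame. With a 512x512 window the copy region matches the texture exactly, so there’s no scaling or sub-rectangle fiddling.

void create_glow_texture(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 512, 512, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);    /* allocate, no data yet */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

void grab_frame(GLuint tex)    /* call after rendering, before post-processing */
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);
}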

Download it here; http://www.nutty.org/OpenGL/Test/glare.zip

I’ll add some onscreen indicators of the twiddle values soon.

Nutty

P.S. When I get it nice, I’ll upload the source code too.


That’s a pretty nice little demo, Nutty.
One thing, how are you subtracting? Are you doing it in software? I say this because there seems to be a lot of stippling in there.
You could use the blend equation extension to subtract (blend-subtract a screen-sized quad), but this is only supported on GeForce 1 upwards… and probably a few other high-end cards.

I am using the blend equation to subtract.

The stippling effect is a side effect of the blur.

Each blur pass renders it additively in 4 positions:

X ± offset, Y
and
X, Y ± offset.

The offset was 2 pixels, so it creates a kind of star shape around each texel. Reducing the offset to 1 removes this, but then requires twice as many blur passes.
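For reference, here’s a guess at what one of those passes looks like in code; this isn’t Nutty’s actual source, just a sketch. It assumes the image lives in tex with clamped wrapping, identity matrices so a -1..1 quad fills the viewport, and depth test off. Each tap is drawn at quarter brightness and the four taps are summed, then the result is copied back so the next pass blurs the already-blurred image.

void blur_pass(GLuint tex, int size, float offsetPixels)
{
    const float o = offsetPixels / (float)size;   /* offset in texture coords */
    const float taps[4][2] = { { o, 0 }, { -o, 0 }, { 0, o }, { 0, -o } };
    int i;

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glColor3f(0.25f, 0.25f, 0.25f);               /* quarter weight per tap */

    glDisable(GL_BLEND);                          /* first tap overwrites...  */
    for (i = 0; i < 4; i++) {
        if (i == 1) {                             /* ...the rest accumulate   */
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE);
        }
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f + taps[i][0], 0.0f + taps[i][1]); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f + taps[i][0], 0.0f + taps[i][1]); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f + taps[i][0], 1.0f + taps[i][1]); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f + taps[i][0], 1.0f + taps[i][1]); glVertex2f(-1.0f,  1.0f);
        glEnd();
    }
    glDisable(GL_BLEND);

    /* Feed the result back into the texture for the next pass. */
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, size, size);
}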

There is an NVIDIA doc which shows how to sample all 8 texel neighbours; I’ll try and stick that in to see if it makes it better.

Nutty

Hello… congratulations to all!!!

Hey Nutty, I got an error on your demo:
“Error getting blend eq func”

Radeon 8500.
I think I’m using the win2k 6043 drivers.