Depth buffer writes?

I have an idea for a soft corona effect. From reading an article somewhere, I figured out that you can scale the effect up or down based on how much of the light source is visible. So what I wanted to do was this: render the scene, then render my light meshes (lamps, whatever) and test how much of each one passes the depth buffer test. That tells me how much of the corona to show. Make sense?

So my question is: is there any way to do this short of rendering my scene, saving the depth buffer to an array, rendering my lights, saving the depth buffer to another array, and then comparing EVERY depth value against the other? There has to be a faster way, especially since I would have to do this for EVERY light, and that would get very slow very fast.

Any suggestions?

NV_OCCLUSION_QUERY does exactly what you want. But obviously this will only work on NVIDIA cards. I really hope ATI comes up with a similar extension.
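
Roughly, the usage looks like this (a sketch, not a complete program; I'm assuming the entry points have already been fetched via wglGetProcAddress or your platform's equivalent, and the spec linked below has the full details):

GLuint query, pixelCount;

glGenOcclusionQueriesNV(1, &query);

/* ...render the scene normally first... */

glDepthMask(GL_FALSE);                               /* test depth, but don't write it */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* don't touch colour either */
glBeginOcclusionQueryNV(query);
/* ...render the light mesh... */
glEndOcclusionQueryNV();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

/* This blocks until the result is ready; poll GL_PIXEL_COUNT_AVAILABLE_NV
   instead if you don't want to stall. */
glGetOcclusionQueryuivNV(query, GL_PIXEL_COUNT_NV, &pixelCount);

/* pixelCount = fragments that passed the depth test; scale the corona
   by pixelCount relative to the light's total on-screen pixel area. */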

Here’s one: Read http://oss.sgi.com/projects/ogl-sample/registry/NV/occlusion_query.txt
(edit: Ha, perfect double post.)


Yeah, I was hoping to do it without extensions. I am attempting to make my engine as OPEN as possible (but still have some nice stuff). I am currently developing this for platforms such as laptops: they seem to have a lot of CPU power and a lot of RAM, but weak video cards. So I am trying to throw as much as possible at the video card, but only what it can handle.

I am aiming for the multitexture and compiled vertex array extensions, etc., but any of the GeForce-specific extensions won't work. Any ideas beyond the extension route?

You could use glReadPixels. Maybe test five pixels of the light (one in each corner, one in the centre) and see what colour they are. You could test more pixels for more accuracy.
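
A variation that fits the depth-buffer angle of the original question is to read back depth instead of colour and compare it against the light's own window-space depth. A minimal sketch, where screenX, screenY and lightWinZ are placeholders for the light's projected screen position and depth (a way to compute those is shown further down the thread):

GLfloat sceneDepth;
glReadPixels(screenX, screenY, 1, 1,
             GL_DEPTH_COMPONENT, GL_FLOAT, &sceneDepth);
if (sceneDepth < lightWinZ) {
    /* something nearer than the light was drawn here,
       so this sample point is occluded */
}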

Originally posted by Adrian:
NV_OCCLUSION_QUERY does exactly what you want. But obviously this will only work on NVIDIA cards. I really hope ATI comes up with a similar extension.

The Radeon 9500/9700 supports that extension...

Ah yes so they do, excellent.

The 8500 supports it too.

OK, the read pixels idea is a good one, BUT I don't know where the light will be on the screen at any given time, so how would I know which pixels to read?

You could calculate the light's screen coordinates with something like this:

GLdouble LightM[16], LightP[16], winx, winy, winz;
GLint viewport[4];

glGetDoublev(GL_PROJECTION_MATRIX, LightP);
glGetDoublev(GL_MODELVIEW_MATRIX, LightM);
glGetIntegerv(GL_VIEWPORT, viewport);
/* (winx, winy) is the light's window position, winz its window-space depth */
gluProject(LightX, LightY, LightZ, LightM, LightP, viewport, &winx, &winy, &winz);
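
Tying that to the read-back suggestion above, a sketch of the full test; radius is an assumed guess at the corona's on-screen size in pixels, coronaAlpha is an illustrative name, and the five offsets match the corner/centre sampling pattern:

int i, visible = 0;
GLdouble offs[5][2] = { {0, 0}, {-radius, -radius}, {radius, -radius},
                        {-radius, radius}, {radius, radius} };

for (i = 0; i < 5; ++i) {
    GLfloat d;
    glReadPixels((GLint)(winx + offs[i][0]), (GLint)(winy + offs[i][1]),
                 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &d);
    if (d >= winz)          /* nothing nearer was drawn at this sample;
                               a small bias may be needed in practice */
        ++visible;
}
/* fade the corona by the fraction of samples that passed */
coronaAlpha = visible / 5.0f;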

The occlusion query works, or you could use ray-casting intersections. I implemented this in a demo a while back. Remember, you don't need the full complexity of the geometry if you're ray casting, and if you're rendering the light and reading back, you only need to read back a tiny portion of the framebuffer.
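
For the ray-casting route, a minimal sketch under the simplification mentioned above: occluders are approximated by bounding spheres, and all names here are illustrative, not from any particular library.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Returns 1 if the segment from the eye to the light hits the sphere. */
int segmentHitsSphere(Vec3 eye, Vec3 light, Vec3 center, float radius)
{
    float dx = light.x - eye.x, dy = light.y - eye.y, dz = light.z - eye.z;
    float mx = eye.x - center.x, my = eye.y - center.y, mz = eye.z - center.z;
    float a = dx*dx + dy*dy + dz*dz;
    float b = mx*dx + my*dy + mz*dz;
    float c = mx*mx + my*my + mz*mz - radius*radius;
    float disc = b*b - a*c;
    float t;
    if (disc < 0.0f)
        return 0;                     /* the ray misses the sphere entirely */
    t = (-b - sqrtf(disc)) / a;       /* nearest intersection parameter */
    return t >= 0.0f && t <= 1.0f;    /* occluder sits between eye and light */
}

Run that against each occluder's bounding sphere; if nothing hits, the light is fully visible. You lose the partial-visibility fraction this way, unless you cast several rays to different points on the light and count the misses, much like the five-pixel sampling above.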