Effect from Game Developer magazine...

I have a question about a small picture in the March 2003 Game Developer magazine.

Between pages 16 and 17 there is a Microsoft ad for DirectX and GDC 2003. Anyway, on this ad is a picture of 5 metaballs that they say are rendered with only 10 triangles.

I’m assuming that they are using cube-mapped depth sprites with a per-pixel normal map in that picture, but does anyone have any thoughts on how they got the transition from one sphere to another to be smooth like that? Normally, depth sprites would have a sharp intersection like regular geometric spheres, no?

– Zeno

Is it possible to see at least one online picture of that??
Just to figure out how it looks…
I can’t get Game Developer magazine here.

I’d guess that they do it all in a fragment program, then you can renormalize per pixel and get correct depth. Why they’d need ten triangles in that case, I don’t know.

Rio - I’ll scan a picture of it tonight. I thought that there might be several people here who had the magazine.

Harsman - ten triangles to render the five sprites for the blobs. What does renormalization have to do with depth?

I’m not really sure how depth sprites work, but it occurred to me last night that if the sprites were all in the same plane and you averaged their depth values, it might produce that smoothing effect.

– Zeno


I’d guess that each sprite contributes some kind of gradient/field, the results are accumulated, and then the normals are generated etc. entirely inside the fragment program. So the thresholding on the accumulated field is done in a fragment shader after rasterizing the field information with the sprites.
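Just as a toy illustration of why summing fields and thresholding blends smoothly (this says nothing about what their shader actually does; the falloff function, radii and threshold below are made up), a CPU version would look like this:

    // CPU reference for the "accumulate a field per sprite, threshold afterwards" idea.
    // Each ball contributes a radially decaying field; the blob is the set of pixels
    // where the summed field crosses a threshold. Falloff and constants are guesses.
    #include <cstdio>
    #include <vector>

    struct Ball { float x, y, radius; };

    // One ball's contribution at a pixel: a smooth falloff that reaches zero at the radius.
    float fieldContribution(const Ball& b, float px, float py)
    {
        float dx = px - b.x, dy = py - b.y;
        float d2 = (dx * dx + dy * dy) / (b.radius * b.radius);
        if (d2 >= 1.0f) return 0.0f;
        float t = 1.0f - d2;
        return t * t;                                   // classic (1 - r^2)^2 metaball falloff
    }

    int main()
    {
        const int W = 64, H = 32;
        const float threshold = 0.4f;                   // "cut" level for the blob surface
        std::vector<Ball> balls = { {20, 16, 14}, {40, 16, 14} };

        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                float field = 0.0f;
                for (const Ball& b : balls)             // accumulation step (one "sprite" per ball)
                    field += fieldContribution(b, (float)x, (float)y);
                putchar(field > threshold ? '#' : '.'); // thresholding step (the "fragment" pass)
            }
            putchar('\n');
        }
        return 0;
    }

The two blobs come out joined by a smooth neck even though neither ball’s own threshold contour reaches the midpoint, which is the kind of blending the ad picture shows.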

The image itself looks kinda 2D. I wonder if it can represent 3D metaballs with sharp edged self occlusion (I doubt it).

Ok, here’s a link to the image: http://www.sciencemeetsart.com/wade/temp/metaballs.jpg .

I’d be interested to hear some more ideas about how they got this smoothing to work.

– Zeno

I meant what dorbie said, essentially. Render the sprites to a float buffer (you only need two components really, but it doesn’t matter) and store the depth, which can be computed by summing the contributions of the individual “force fields” of the metaballs and cutting at some threshold (standard metaballs). At the same time compute the normals, which are easy to do analytically, and output these as well, but to another component/channel. Then render to the standard back buffer by reading the depth and normals from the float buffer using a fancy shiny envmap shader. This doesn’t sound very complex; maybe I’m missing something?
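A toy CPU version of the depth/normal part, extending the sketch above (the (1 - d2)^2 falloff, the way the normal’s z term is derived from the field, and the facing-ratio shading are all my own stand-ins, not a claim about their shader):

    // Sum the per-ball fields, build an analytic gradient alongside the sum, and
    // shade by how much the reconstructed normal faces the viewer. The mapping from
    // field value to the normal's z term is an illustrative assumption.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Ball { float x, y, radius; };

    float falloff(float d2) { float t = 1.0f - d2; return (d2 >= 1.0f) ? 0.0f : t * t; }

    int main()
    {
        const int W = 64, H = 32;
        const float threshold = 0.4f;
        std::vector<Ball> balls = { {20, 16, 14}, {40, 16, 14} };

        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                float f = 0.0f, gx = 0.0f, gy = 0.0f;
                for (const Ball& b : balls) {
                    float dx = (x - b.x) / b.radius, dy = (y - b.y) / b.radius;
                    float d2 = dx * dx + dy * dy;
                    f += falloff(d2);
                    if (d2 < 1.0f) {                    // analytic gradient of (1 - d2)^2
                        float g = -4.0f * (1.0f - d2) / b.radius;
                        gx += g * dx;
                        gy += g * dy;
                    }
                }
                if (f <= threshold) { putchar(' '); continue; }
                // z term grows with the field, so flat interior regions face the viewer
                // while pixels near the threshold contour tilt toward the silhouette.
                float nz = 2.0f * (f - threshold);
                float facing = nz / std::sqrt(gx * gx + gy * gy + nz * nz);
                putchar(facing > 0.8f ? '@' : (facing > 0.4f ? '+' : '.'));  // stands in for the envmap shading
            }
            putchar('\n');
        }
        return 0;
    }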

There is stuff like Game Developer magazine??? Well, here in Bulgaria I can only dream about it!

Ahh, I didn’t understand what Dorbie was saying at first, but I think I get the idea from the combination of your two posts.

Quick question, though - can you add values from multiple primitives into a float buffer? Wouldn’t that require blending, which I thought wasn’t supported on current gen hardware? Or are there floating point accumulation buffers?

Thanks for the ideas

Framebuffer blending isn’t supported, but fragment programs are. If you render to a texture and subsequently use that as a register source, you can apply fp operations.

I’m not saying that’s what’s going on in the shader; I don’t know for sure what it’s doing or the precision it’s using. Fixed-point or integer stuff is possible too, and you can get > 8-bit framebuffers.

Mmmm… how are they doing the individual colors for each ball?
Accumulating a field sounds simple and explains the melting-together look, but the colors?

Originally posted by dorbie:
Framebuffer blending isn’t supported, but fragment programs are. If you render to a texture and subsequently use that as a register source, you can apply fp operations.

Eek. Are you saying they maybe do each ball separately, and do a glCopyTexSubImage per ball? shiver

It must be an accumulation buffer trick…

Geez, no, I didn’t say that at all; you just said that. You also suggested fp, I didn’t.

Render to texture is not necessarily a copy.

Carmack suggests he does something like this to accumulate terms in Doom 3, in one of his codepaths.

It’s a D3D advert, not OpenGL.

Does D3D even have an accumulation buffer? I don’t think so.

When I said accumulation earlier I was using the term literally, not implying the use of any accumulation buffer.


No need to get upset. I just don’t understand how you are thinking that this “accumulation” would work and I was seeking clarification.

I would really appreciate it if you could explain, step by step, how you would accumulate 2D field values (presumably stored in a 2D float texture that gets billboarded) into a floating-point buffer without using copyteximage or making that buffer an accumulation buffer. What I mean by copyteximage is that you would have to either copyteximage or render-to-texture and then send that texture back through the pipe once per metaball.
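Something like this per-ball loop is what I’m picturing (bindFieldAccumProgram is just a placeholder for whatever fragment program would add ball i’s field to the running total; I’m not claiming this is what you mean, it’s only to make the question concrete):

    // Hypothetical per-ball accumulation loop: each pass reads the running total from
    // totalTex, adds one ball's field in the fragment stage, and the framebuffer is
    // copied back into totalTex so the next pass can read it -- one trip through the
    // pipe per metaball. Purely illustrative; not anyone's confirmed implementation.
    #include <GL/gl.h>

    void bindFieldAccumProgram(int ballIndex);   // placeholder: fragment program computing total + ball[i]

    void accumulateFieldPerBall(GLuint totalTex, int numBalls, int width, int height)
    {
        glBindTexture(GL_TEXTURE_2D, totalTex);
        for (int i = 0; i < numBalls; ++i)
        {
            bindFieldAccumProgram(i);            // samples totalTex, adds ball i's field
            glBegin(GL_QUADS);                   // fullscreen pass (could be just the ball's screen rect)
                glTexCoord2f(0, 0); glVertex2f(-1, -1);
                glTexCoord2f(1, 0); glVertex2f( 1, -1);
                glTexCoord2f(1, 1); glVertex2f( 1,  1);
                glTexCoord2f(0, 1); glVertex2f(-1,  1);
            glEnd();
            // "Send it back through the pipe": copy the result into totalTex for the next pass.
            glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
        }
    }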

Thanks

Originally posted by dorbie:
Render to texture is not necessarily a copy.

No, it’s something much much slower.

No, it’s something much much slower.

Only on an nVidia card

I’m not as annoyed as I perhaps appear :-) The potential difference between copy-to-texture and render-to-texture is clear; to take something I suggested and then say “please explain how, apart from A and B”, when one of them was what I explicitly stated (and certainly what I had in mind), seems less than fair. Once again, I did not insist on any kind of FB representation. Feel free to offer your own speculation w.r.t. how your suggestions are implemented, but I don’t see a point in being drawn on this.

I don’t see great value in drilling down into the details of how something is done when even the general outline of the algorithm is a guess, much less in getting adamant about some detail.

It’s enough fun for me to speculate that the primitives are used to accumulate some field (or more than one buffer, since such things are supported) that’s then fragment-processed. Beyond that we could speculate ad nauseam about the details. I wouldn’t even insist that anything proposed so far is the way, or the only way.

Sorry if this seems evasive but for me this is just a fun guessing game, idle speculation and throwing around interesting ideas. I’m not interested in taking a position, and I suppose therefore object to having a position thrust upon me.

The key observation for me is the limitation of accumulating a 2D screen-space field: it’s only really 2.5D if this is how it’s done, and sharp SELF occlusion is not possible, unlike with a true 3D metaball algorithm. Using primitives to apply image-based techniques seems to imply this inherent limitation to me, making this even more useless than typical metaball algorithms. If I have a position or an observation to offer then that’s it, rather than getting bogged down in implementation details.


Fair enough. I think I need to make some clarifications myself:

I wasn’t actually looking for someone to tell me exactly how Microsoft did that particular rendering, only a discussion on how it might be done.

The reason I kept pressing you on it is because it sounded like you had a pretty good idea about how one could do it. My confusion about the specifics is what led me to ask for clarification about whether you thought it could be done without repeated texture copying/render-to-texture. I didn’t mean to try to force you to come up with a new way that didn’t do that.

I completely agree with your assessment that this technique is probably 2D only (or 2.5D, as you say… it has apparent depth, but the balls probably can’t sharply occlude each other). If it were fully 3D, you’d think they would have put some depth in the screenshot to show that off… plus there might be some hype about it replacing the marching cubes algorithm.


I did a similar technique once (albeit without the fancy reflection mapping).

I accumulated a bunch of particles with a circular alpha-fade texture into the alpha channel of the framebuffer. Then I copied this to a texture and drew a fullscreen quad with alpha test enabled, set to something between 0 and 1 as the cut-off point. Definitely a 2.5D effect. It can be made to interact with a 3D environment by drawing the scene into the depth buffer first, then accumulating the particles, etc.
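In GL terms the two passes look roughly like this (a simplified sketch, not the exact code; drawParticleSprites stands in for the textured particle quads, and the 0.5 cut-off is arbitrary):

    // Pass 1 accumulates particle alphas in the framebuffer's alpha channel,
    // pass 2 copies that to a texture and redraws it with alpha test as the
    // metaball threshold. Assumes a GL context and that accumTex was created
    // at framebuffer size with glTexImage2D beforehand.
    #include <GL/gl.h>

    void drawParticleSprites();   // placeholder: quads textured with a circular alpha falloff

    void drawBlobOverlay(GLuint accumTex, int width, int height)
    {
        // Pass 1: additively accumulate the particles' alpha only.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);
        drawParticleSprites();
        glDisable(GL_BLEND);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        // Grab the accumulated buffer into a texture.
        glBindTexture(GL_TEXTURE_2D, accumTex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

        // Pass 2: fullscreen quad, alpha test acts as the threshold cut.
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);        // the cut-off point between 0 and 1
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
        glDisable(GL_ALPHA_TEST);
        glDisable(GL_TEXTURE_2D);
    }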

I tried experimenting with taking the accumulated alpha texture and using it as a heightmap to generate offsets into a fullscreen texture of the scene, to create a distortion/water effect, but there were some serious issues thanks to the 8-bit precision. I still have the demo lying around; if anybody wants a look, let me know.
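The offset step itself is simple; here is a CPU sketch of it just to show where the 8-bit gradients bite (the names and the strength factor are illustrative, not the demo’s actual code):

    // Treat the accumulated alpha as a heightmap; its finite-difference gradient
    // offsets lookups into a screen-sized scene image. With an 8-bit heightmap the
    // gradient only takes a handful of discrete values, hence the precision problems.
    #include <vector>

    struct Image {
        int w, h;
        std::vector<float> pix;                         // grayscale, one float per pixel
        float at(int x, int y) const { return pix[y * w + x]; }
    };

    Image distort(const Image& scene, const Image& height, float strength)
    {
        Image out{ scene.w, scene.h, std::vector<float>(scene.w * scene.h, 0.0f) };
        for (int y = 1; y < scene.h - 1; ++y) {
            for (int x = 1; x < scene.w - 1; ++x) {
                // Gradient of the heightmap drives how far the scene lookup is pushed.
                float gx = height.at(x + 1, y) - height.at(x - 1, y);
                float gy = height.at(x, y + 1) - height.at(x, y - 1);
                int sx = x + (int)(gx * strength);
                int sy = y + (int)(gy * strength);
                if (sx < 0) sx = 0; else if (sx >= scene.w) sx = scene.w - 1;
                if (sy < 0) sy = 0; else if (sy >= scene.h) sy = scene.h - 1;
                out.pix[y * out.w + x] = scene.at(sx, sy);
            }
        }
        return out;
    }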

j