questions - updates, particles, multimons etc.

I have several questions, I hope they’re advanced enough for this forum…

  1. An educated guess: will MS ever update VC’s headers/libs to support the later OpenGL implementations?

  2. If you’re doing particle systems, is there a more efficient way than manually pushing the quads around with glVertex*() calls - perhaps packing them into an array somehow, or using a vertex program?

  3. Can you have multiple monitors coming off a single graphics card - more precisely, what about multiple TFT flat screens (preferably through DVI outputs)?

  4. What would be an efficient way of managing surface decals - I see Half Life maintains a truckload of them all the time - any ideas how that might be done?

  5. Graphics card memory space for textures - if I have a 64MB GF3 card, do I get 64MB for textures (I mean, textures that won’t be stupidly slow coming from main memory…) or will I have less depending on the framebuffer settings?

Thanks for any hints/answers.

-Mezz

  1. Never

  2. Vertex arrays, definitely. These are properly explained in the spec, I think. Combine them with CVA (compiled vertex arrays), or DRE/VAR (draw range elements / NV_vertex_array_range), for higher throughput. As for vertex programs, I think they offer nothing too interesting for particles.

  3. Ask NVidia, ATI and Matrox. They all have multi monitor stuff.

  4. I’d probably try to soft render decals in memory and use them as texture overlays. No idea how that might perform though.

  5. Frame buffers, geometry buffers and textures obviously share the same memory. 64MB=64MB, no matter how you look at it.
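The vertex-array suggestion above can be sketched roughly like this - all struct and function names here are made up for illustration, and the GL draw calls appear only in comments since they need a live context:

```c
#include <assert.h>

/* One particle, expanded CPU-side into a camera-facing quad of four
 * interleaved vertices (s,t,x,y,z -- the GL_T2F_V3F layout), so the whole
 * system goes out in one glDrawArrays(GL_QUADS, ...) call instead of
 * thousands of glVertex*() calls. */
typedef struct { float x, y, z, size; } Particle;

static void pack_particles(const Particle *p, int count,
                           const float right[3], const float up[3],
                           float *out /* 20 floats per particle */)
{
    static const float cs[4] = { 0.f, 1.f, 1.f, 0.f };
    static const float ct[4] = { 0.f, 0.f, 1.f, 1.f };
    for (int i = 0; i < count; ++i)
        for (int c = 0; c < 4; ++c) {
            float sx = (cs[c] * 2.f - 1.f) * p[i].size * 0.5f;
            float sy = (ct[c] * 2.f - 1.f) * p[i].size * 0.5f;
            *out++ = cs[c];                               /* s */
            *out++ = ct[c];                               /* t */
            *out++ = p[i].x + sx * right[0] + sy * up[0]; /* x */
            *out++ = p[i].y + sx * right[1] + sy * up[1]; /* y */
            *out++ = p[i].z + sx * right[2] + sy * up[2]; /* z */
        }
    /* then, with a valid GL context:
     *   glInterleavedArrays(GL_T2F_V3F, 0, verts);
     *   glDrawArrays(GL_QUADS, 0, 4 * count);           */
}
```

`right` and `up` would be the camera’s right/up axes pulled from the modelview matrix - that’s also what keeps the quads screen-aligned.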

Thanks for the sharp reply, but I don’t understand what you mean by ‘soft render’ with regards to decals.

When you say geometry is shared in that memory too, do you mean GL vertex arrays or something obtained with one of the wglAllocateMemoryNV*() calls? or something else entirely?

-Mezz

Originally posted by Mezz:
[b]Thanks for the sharp reply, but I don’t understand what you mean by ‘soft render’ with regards to decals.

When you say geometry is shared in that memory too, do you mean GL vertex arrays or something obtained with one of the wglAllocateMemoryNV*() calls? or something else entirely?

-Mezz[/b]

  1. I think he was saying that if you want to apply a decal (or hundreds of them here and there in your scene), instead of using GL_QUADS or whatever for each decal, do the decaling directly onto the texture (wall covered in blood or alien juice) and this might speed up performance. Maybe the render to texture extension can help here.

  2. There are a lot of buffers in video memory: the desktop (front buffer), back buffers (one per OpenGL window, or shared among them), stencil, vertices, colors, normals, textures, …
    So you don’t have 64MB of texture memory!
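Just to put rough numbers on that point, here is the back-of-the-envelope arithmetic for a single full-screen GL window (exact layouts vary per driver, so treat this as illustrative only):

```c
#include <assert.h>

/* Bytes a framebuffer eats out of video memory: color buffers (front +
 * back) plus a combined depth/stencil buffer. */
static unsigned fb_bytes(unsigned w, unsigned h,
                         unsigned color_buffers,       /* 2 when double-buffered */
                         unsigned bytes_per_color,     /* 4 at 32 bpp            */
                         unsigned depth_stencil_bytes) /* 4 for 24/8             */
{
    return w * h * (color_buffers * bytes_per_color + depth_stencil_bytes);
}
```

At 1024x768, double-buffered 32 bpp with 24/8 depth/stencil, that is 1024 * 768 * 12 = 9437184 bytes, about 9MB - so on a 64MB card maybe 55MB remains for textures and geometry, before any driver overhead.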

V-man

About the particles: if you’re talking about billboards, then yes, you can store even those in a vertex array… and going further, you can use a vertex program to billboard them automatically. It’s sort of a fake, but it works (otherwise you should look at GL_NV_point_sprite, hopefully soon implemented in every driver - they’re sort of hardware-billboarded particles).

For the vertex program stuff: www.nutty.org
He has a demo online.

  1. I think he was saying that if you want to apply a decal (or hundreds of them here and there in your scene), instead of using GL_QUADS or whatever for each decal, do the decaling directly onto the texture (wall covered in blood or alien juice) and this might speed up performance. Maybe the render to texture extension can help here

What? Render into the lightmap texture, you mean? Otherwise, you’d have to have individual textures for every surface in your world! Maybe with the 128mb in the gf4…
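For what it’s worth, one reading of the ‘soft render’ idea is a plain CPU blit: alpha-blend a small RGBA decal into a system-memory copy of the surface texture, then re-upload just the touched rectangle. A minimal sketch, with all names made up and the GL upload only in a comment:

```c
#include <assert.h>

/* Blend an RGBA8 decal into an RGBA8 texture at (dst_x, dst_y) using
 * ordinary source-alpha blending. Caller guarantees the decal fits. */
static void blit_decal(unsigned char *tex, int tex_w,
                       const unsigned char *decal, int dw, int dh,
                       int dst_x, int dst_y)
{
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x) {
            const unsigned char *s = decal + 4 * (y * dw + x);
            unsigned char *d = tex + 4 * ((dst_y + y) * tex_w + (dst_x + x));
            unsigned a = s[3];
            for (int c = 0; c < 3; ++c)   /* blend RGB, keep dst alpha */
                d[c] = (unsigned char)((s[c] * a + d[c] * (255u - a)) / 255u);
        }
    /* then upload only the dirty rectangle, roughly:
     *   glPixelStorei(GL_UNPACK_ROW_LENGTH, tex_w);
     *   glTexSubImage2D(GL_TEXTURE_2D, 0, dst_x, dst_y, dw, dh,
     *       GL_RGBA, GL_UNSIGNED_BYTE, tex + 4*(dst_y*tex_w + dst_x)); */
}
```

That sidesteps needing a unique texture per surface only if you restrict it to surfaces that already have unique textures (lightmaps being the obvious candidate), which is exactly the objection above.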

Yes, I was talking about billboards - I think they’re pretty good for stuff like smoke/blood etc. which is what I’m doing - however it does annoy me when they go into walls…
I was trying to keep the particle setup fairly general (i.e. core OpenGL) so I’ll probably use vertex arrays, but I might possibly look into NV_point_sprite for future usage - is it good?

-Mezz

The reason why I said that vertex programs won’t help with particles:
They operate only on one vertex and can’t access others. They don’t affect primitive assembly and can’t create new vertices.

This means that you can’t use them to simplify your particle geometry at all. You still have to send properly aligned tris or quads. As VPs are agnostic of the primitives they work on, they can’t correct degenerates (would be nice if you could send the same vertex three times and have the HW automatically expand that into a small triangle), they can’t rotate the particles into eye plane alignment either. Normals are insufficient for this and are quite frankly overkill for particles.

For particles I’d recommend looking into point sprites or EXT_point_parameters if a bit less grooviness is acceptable.

Don’t know if this is relevant, but you can use the attribute arrays to pass in neighbouring vertices, if you use discrete triangles rather than indexed.

zeck,

You can pass in 4 vertices of the same coordinate, and use indirect addressing, or a secondary data array, to figure out which of the 4 vertices each vertex copy is intended for.

Then the vertex program can use the inverse modelview matrix to view-align the quads for you, with no intervention on the CPU. That’s a good thing. Alternatively, you can calculate the offsets once per particle system on the CPU, making the particles face the screen plane rather than the viewer exactly - that keeps the per-particle CPU work minimal even without a vertex program.
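Here is that expansion emulated in plain C, just to show the data layout: each center is submitted four times with a corner index in a spare attribute, and the per-vertex work (which the vertex program would otherwise do) offsets the copy along the camera’s right/up axes. For a rotation-only modelview the transpose equals the inverse, so those axes are simply the first two rows of the matrix. All names are illustrative:

```c
#include <assert.h>

/* mv is the column-major 4x4 modelview, as glGetFloatv(GL_MODELVIEW_MATRIX)
 * returns it; corner is 0..3 counter-clockwise from the lower-left. */
static void expand_corner(const float center[3], int corner, float half_size,
                          const float mv[16], float out[3])
{
    /* rows 0 and 1 of the upper 3x3: camera right and up in world space */
    float rx = mv[0], ry = mv[4], rz = mv[8];
    float ux = mv[1], uy = mv[5], uz = mv[9];
    float sx = (corner == 1 || corner == 2) ? half_size : -half_size;
    float sy = (corner >= 2)                ? half_size : -half_size;
    out[0] = center[0] + sx * rx + sy * ux;
    out[1] = center[1] + sx * ry + sy * uy;
    out[2] = center[2] + sx * rz + sy * uz;
}
```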

I stand corrected. Thanks for the lesson

Jwatte, I used the identity modelview matrix for the VP billboard demo, and it worked fine. No need for the inverse modelview.

Nutty