Translations within one batch

I am currently looking into ways to translate subsets of vertices within one draw batch, in order to reduce the number of batches.
But I haven’t found much useful documentation on this yet.
So it would be nice if you could share your sources (online, books, etc.) on this. Thanks!

I find it very frustrating that there is still no straightforward technology for this.
Especially considering that this is an increasingly fundamental performance problem.
Something like hardware- or API-handled translation trees would be extremely useful.
Using vertex programs for this is just a mess, in my opinion.

By the way, do you know of any plans by ATI/NV to implement 32-bit depth-buffer support?
It really sucks that we still have to use precision that is over 8 years old (!) and comparable to -100 dioptre eyesight.

But HEY! Why worry about things which really matter if we have cool sounding stuff like HDR!?

Originally posted by holdeWaldfee:
I am currently looking into ways to translate subsets of vertices within one draw batch, in order to reduce the number of batches.
Depending on the context, you could pre-transform this or add another stream containing compressed per-vertex displacement. What are you trying to do?
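The displacement-stream idea could be sketched CPU-side like this (all names here are hypothetical, and in GL the per-mesh scale would be handed to the vertex program as a parameter):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Quantize one displacement component into a signed byte, using the
// largest absolute displacement in the mesh as the per-mesh scale.
int8_t quantizeDisplacement(float value, float maxAbs) {
    return static_cast<int8_t>(std::lround(value / maxAbs * 127.0f));
}

// The reverse mapping, i.e. what the vertex program would compute with
// the scale supplied as a program parameter.
float dequantizeDisplacement(int8_t q, float maxAbs) {
    return q / 127.0f * maxAbs;
}
```

This keeps the extra stream at one byte per component, at the cost of a quantization error of up to maxAbs/127 per component.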

Originally posted by holdeWaldfee:
Especially considering that this is an increasingly fundamental performance problem.
Something like hardware- or API-handled translation trees would be extremely useful.
Using vertex programs for this is just a mess, in my opinion.

If you mean something like instancing, I believe it’s fundamentally different from your problem (though I’m still not sure what you’re doing).

Originally posted by holdeWaldfee:
By the way, do you know of any plans by ATI/NV to implement 32-bit depth-buffer support?
It really sucks that we still have to use precision that is over 8 years old (!) and comparable to -100 dioptre eyesight.

I agree a higher-precision Z buffer would be great, considering other parts of the pipe have been widely overhauled, but I fear this isn’t going to happen soon. It’s likely you’ll have 48-bit Z when GPUs start supporting double precision, but I don’t think that is on the horizon.

Originally posted by holdeWaldfee:
But HEY! Why worry about things which really matter if we have cool sounding stuff like HDR!?
In my opinion, HDR is one of the most important things introduced recently. I suggest you take a look at High Dynamic Range Imaging by Debevec and others. I believe HDR improves the user experience far more than a higher-precision Z buffer would.

To explain a bit better what I meant:

Right now I have to break a model up into a lot of batches, because I want to buffer the vertex data on the graphics card without updating modified vertices (for performance reasons).
This results in several tens of thousands of batches in my planned scenes, and that’s a huge performance problem, as you can imagine.

What I want is one draw call for a whole model.
Right now I would have to use a vertex program to make this possible.
But I haven’t found a good way to do this yet.
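For what it’s worth, the usual vertex-program trick could be sketched CPU-side like this (the naming is hypothetical, not any specific API): each vertex carries the index of the rigid part it belongs to, stored once in a static attribute stream, and per frame only the small table of part translations is uploaded (e.g. as program environment parameters) instead of touching the vertex buffer.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// What the vertex program would evaluate per vertex: look up the
// translation of the part this vertex belongs to and apply it.
Vec3 transformVertex(const Vec3& position, uint8_t partIndex,
                     const std::vector<Vec3>& partTranslations) {
    const Vec3& t = partTranslations[partIndex];
    return { position.x + t.x, position.y + t.y, position.z + t.z };
}
```

The whole model then goes out in one draw call; only the translation table is dynamic.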

In my opinion, HDR is one of the most important things introduced recently. I suggest you take a look at High Dynamic Range Imaging by Debevec and others. I believe HDR improves the user experience far more than a higher-precision Z buffer would.
Well, I disagree. :wink:

HDR only got a good reputation among users because the first implementations of bloom effects arrived together with HDR. Comparing HDR and non-HDR scenes with equal content would produce a lot of yawning users, in my opinion.

Originally posted by holdeWaldfee:
Well, I disagree. :wink:

HDR only got a good reputation among users because the first implementations of bloom effects arrived together with HDR. Comparing HDR and non-HDR scenes with equal content would produce a lot of yawning users, in my opinion.

We haven’t even begun to see the full potential of HDR yet.
And yes, with equal content the difference between HDR and non-HDR is not that great, but if you make HDR-only content and effects, then you will start seeing amazing stuff.

32-bit depth buffer support, don’t they already have that? Either way, unless you totally screw up the zfar/znear ratio, even a 1024-bit Z buffer won’t make it look better.

I don’t think there are >24 bit zbuffers (on consumer/gamer video cards).

32-bit depth buffer support, don’t they already have that?
They only have 24-bit depth + 8-bit stencil.

Either way, unless you totally screw up the zfar/znear ratio, even a 1024-bit Z buffer won’t make it look better.
Each additional bit doubles the precision, so 32 bits instead of 24 means 256 times finer depth resolution. It would matter.
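To put rough numbers on that, here is a sketch assuming a classic fixed-point Z buffer and a standard perspective projection (the plane values below are made up for illustration):

```cpp
#include <cassert>
#include <cmath>

// Approximate world-space depth resolution of a fixed-point Z buffer
// at eye distance z, with near/far planes n and f.  Window depth is
// z_w = (f / (f - n)) * (1 - n / z); one buffer step is
// 1 / (2^bits - 1), so the resolvable step in eye space is roughly:
double depthStep(double z, double n, double f, int bits) {
    double steps = std::pow(2.0, bits) - 1.0;
    return z * z * (f - n) / (f * n * steps);
}
```

With n = 0.1 and f = 10000, a 24-bit buffer resolves only about 0.6 units at a distance of 1000, while 32 bits would bring that down to a couple of millimetres, 256 times finer.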

32-bit depth precision would finally allow a natural view range on non-billboard objects without very noticeable z-fighting.

More than 24-bit Z isn’t very useful as long as the vertex pipe is only FP32: that’s a 23-bit mantissa.

And how much will a 32 bit zbuffer cost? You’re talking about a different architecture here, one that is likely more expensive to fabricate than the current one. Not usually a good thing for consumer boards.

Plus, you don’t have industry heavyweights lobbying for this, at least not as far as I can tell. You need to get big ISVs doing the one-legged 32-bit Z buffer rain dance (and if the price isn’t right, even that might not make any difference).

I seriously believe that although many of the recent advancements in in-game visuals look pretty nice, they are not an alternative to good gameplay. If HDR were used in some smart gameplay, like HL2 used physics to improve its gameplay (rather than just having dumb ragdolls lying all around), I would be much happier.
And yes, I want a 32-bit depth buffer, but I also agree that a 24-bit depth buffer is sufficient for more than 95% of tasks, especially on consumer-grade hardware.

The development of gameplay is clearly moving towards much larger scenes with longer view distances.
Try to draw an object with layered faces (like trees) at znear*2000 and you definitely get very noticeable z-fighting.
You can see countless games where artists ran into this problem and had to cut content because of it.

Anyway, can anyone help me out a bit on the translation issue?

Thanks!

Could the effect you need perhaps be accomplished using vertex_blend ?

You can’t compare HDR and non-HDR with equal content.

That’s the whole point with HDR, that you can have arbitrary content (in terms of light sources), while without HDR you have to design around the limitations (e.g. avoid saturation).

And content specifically designed for HDR (without the “old” limitations) looks a lot better. Especially, it looks a lot better near the camera. IMHO this should be more important than optimizing far away parts of the scene.

HDR is not about making existing scenes look better, it’s about removing design limitations, so the issue is not so different from increasing the depth buffer precision. Things like bloom or diffuse environment mapping are side effects.
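The saturation point can be made concrete with a tone-mapping sketch (using the simple Reinhard curve as a stand-in for whatever operator a renderer actually uses): scene luminance is unbounded, and instead of clamping at 1.0 the operator compresses everything into the displayable range.

```cpp
#include <cassert>

// Reinhard tone mapping: monotonic and never reaches 1.0, so two very
// bright luminances stay distinguishable instead of both clamping
// to white.
float reinhard(float luminance) {
    return luminance / (1.0f + luminance);
}
```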

Of course I’m not saying that we don’t need a higher-precision depth buffer, I’m just saying where the priorities should be. I’d rather see fast float MRT implementations (preferably 32-bit with blending), so that things like deferred shading (for extremely high light-source counts, again lifting restrictions on content designers) become feasible, than a better depth buffer, which will “just” increase possible view distance.

Originally posted by tamlin:
Could the effect you need perhaps be accomplished using vertex_blend ?
Thanks for bringing this back to my mind.

It seems, however, that this old extension has patchy IHV support.
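For reference, what vertex_blend does conceptually, reduced to a CPU-side sketch with the matrices replaced by pure translations for brevity: each vertex carries weights, and its transformed position is the weighted sum of several modelview transforms.

```cpp
#include <array>
#include <cassert>

struct Vec3 { float x, y, z; };

// Weighted blend of two translations, the degenerate case of blending
// two modelview matrices per vertex the way the extension does.
Vec3 blendVertex(const Vec3& v, const std::array<float, 2>& w,
                 const std::array<Vec3, 2>& t) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 2; ++i) {
        out.x += w[i] * (v.x + t[i].x);
        out.y += w[i] * (v.y + t[i].y);
        out.z += w[i] * (v.z + t[i].z);
    }
    return out;
}
```

With weights of 0 and 1 this degenerates into exactly the per-part translation I’m after, which is presumably why it was suggested.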

I am not saying that things like HDR are useless.
My opinion, however, is that there are much more important things to do right now.

You can retouch graphics with fragment programs and whatever else all you want; it doesn’t make bad things like z-fighting go away.
Look at new games like Oblivion, Battlefield 2, Crysis and so on. They all have really bad z-fighting problems, even though the artists already limited their content to reduce it.

Why not start with the fundamental things?

Do you have any screenshots of that? I’m not aware of any Z-fighting problems in any of those games. In fact, I can’t remember when I last saw a game have Z-fighting problems. But I’m sure you’ll be delighted about the DXGI_FORMAT_D32_FLOAT format that’s in DX10.

And yeah, HDR >> 32bit Z :slight_smile:
I’ve even started playing with HDR in photography. It’s wicked cool!

Originally posted by Humus:
Do you have any screenshots of that? I’m not aware of any Z-fighting problems in any of those games. In fact, I can’t remember when I last saw a game have Z-fighting problems.
For example in Battlefield2:
[screenshots of z-fighting in Battlefield 2]
It’s far worse in motion of course.

Try to draw things like detailed trees at high ranges.
You get horrible z-fighting.

It is a HUGE issue if you want to make a game with wide open scenes (and the market demands this more and more).
And it becomes even worse if you want to put more detail into the scenes.

But I’m sure you’ll be delighted about the DXGI_FORMAT_D32_FLOAT format that’s in DX10.
I don’t have much information about D3D10.
Does this really result in higher depth-buffer precision?
Will it be available for OGL too?

Those screenshots have one thing in common. Can you spot it, children?
There are tricks to avoid these problems. For example, if you’re going to render with a very narrow field of view, you can apply a uniform scale to the scene to bring distant objects into a higher-precision part of the depth range (seeing as any z-fighting on foreground objects isn’t going to be noticed at that point).
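The scale trick works because depth steps grow quadratically with distance while feature sizes only grow linearly, so scaling the scene down by s improves depth precision relative to feature size by roughly the factor s. A sketch (assuming a fixed-point Z buffer and a standard perspective projection; the plane values are made up):

```cpp
#include <cassert>
#include <cmath>

// Approximate eye-space depth step of a fixed-point Z buffer at
// distance z, for near/far planes n and f.
double depthStepAt(double z, double n, double f, int bits) {
    double steps = std::pow(2.0, bits) - 1.0;
    return z * z * (f - n) / (f * n * steps);
}

// Depth step relative to a feature spanning `size` world units;
// scaling the whole scene by s shrinks both z and size by s.
double relativeStep(double z, double size, double s,
                    double n, double f, int bits) {
    return depthStepAt(z * s, n, f, bits) / (size * s);
}
```

Scaling the scene by s = 0.1 about the camera leaves the projected image the same but makes the relative depth step 10 times smaller.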

Originally posted by knackered:
those screenshots have one thing in common, can you spot it children?
yes. they are all displayed two posts above this one. eh, eh, eeeehhh… :smiley:

c’mon don’t make it so thrilling, knackered—TELL ME WHAT IT IS!!!

they are all SNIPER ZOOOOOOOOMED.