sorting front to back

front == nearer to camera
we’ve read that sorting front to back offers a speed improvement (since the back polygons will often be covered up by polygons nearer to the camera)
ok this is logical, right.
but what happens if u lay down a depth pass first?
then my reasoning is that draw order is totally irrelevant.
is this assumption sound?
cheers zed
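For concreteness, here is a minimal sketch of the kind of front-to-back sort being discussed, in plain C; the `Object` struct, the camera position, and the `sort_front_to_back` helper are made-up names for illustration, not anything from the thread:

```c
#include <stdlib.h>

/* Hypothetical per-object record: just a center position and a label. */
typedef struct { float x, y, z; const char *name; } Object;

/* Camera position, assumed fixed for the frame being sorted. */
static const float cam[3] = { 0.0f, 0.0f, 0.0f };

/* Squared distance to the camera -- no sqrt needed just for ordering. */
static float dist_sq(const Object *o) {
    float dx = o->x - cam[0], dy = o->y - cam[1], dz = o->z - cam[2];
    return dx * dx + dy * dy + dz * dz;
}

/* qsort comparator: nearer objects first, i.e. front to back. */
static int cmp_front_to_back(const void *a, const void *b) {
    float da = dist_sq((const Object *)a);
    float db = dist_sq((const Object *)b);
    return (da > db) - (da < db);
}

/* Sort the draw list front to back before issuing draw calls. */
static void sort_front_to_back(Object *objs, size_t n) {
    qsort(objs, n, sizeof objs[0], cmp_front_to_back);
}
```

Drawing in this order lets the depth test reject occluded fragments before they are shaded; reversing the comparator gives the back-to-front order needed for blended geometry.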

To me that seems to make sense. Though I’d still think doing the sort for the depth pass would be a good idea. Wouldn’t it?

-SirKnight

Wouldn’t that end up sending the geometry down to the GPU twice? Even in the case of VBOs, where most of the data is cached on the graphics board, setting up the data pointers (texture, color and especially vertex) would cause more of a performance hit than a few distance checks before drawing once.
My $0.02

To me that seems to make sense. Though I’d still think doing the sort for the depth pass would be a good idea. Wouldn’t it?
true, but the depth pass is normally very cheap, eg colormask(0,0,0,0)

Wouldn’t that end up sending the geometry down to the GPU twice? Even in the case of VBOs, where most of the data is cached on the graphics board, setting up the data pointers (texture, color and especially vertex) would cause more of a performance hit than a few distance checks before drawing once.
are u talking about laying down an extra depth pass first?
then yes, there is extra drawing involved, and i wouldnt recommend it if youre doing just simple gl stuff. but if youre gonna be drawing multipass afterwards, eg lighting with shaders, then having a depth pass first does improve performance
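A sketch of the multipass structure being described, in GL-flavored pseudocode; the exact state calls and pass order shown are one common arrangement, not the only one:

```
// pass 1: depth only -- cheap, no color writes
colorMask(false, false, false, false)
depthFunc(LESS); depthMask(true)
draw scene, roughly sorted front to back

// later passes: early-z rejects hidden fragments before the shader runs
colorMask(true, true, true, true)
depthFunc(EQUAL); depthMask(false)
for each light:
    bind lighting shader + blend(ONE, ONE)   // accumulate contributions
    draw scene (order now irrelevant)
```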

Yeah I was talking about the extra depth pass drawn out first.

Originally posted by SirKnight:
[b]To me that seems to make sense. Though I’d still think doing the sort for the depth pass would be a good idea. Wouldn’t it?

-SirKnight[/b]
If I understand correctly what you all said, then I totally agree. Not sorting by depth could carry some cost.

When you do multipass lighting with an ambient pass, this pass would be rather simple, so this can be used instead of a depth-only pass…

When you do multipass lighting with an ambient pass, this pass would be rather simple, so this can be used instead of a depth-only pass
yeah this is how i used to do it,
until i realised ambient is the devils spawn
now its direct lighting all the way

I was expecting you initiated this thread Zed :slight_smile:

The Z-only first pass can be used to draw ambient, but it is used to draw emissive too. You’ll end up enabling your colormask someday, even for other purposes (post-processing effects, moving some heavy computations to the first pass and then reading them back, etc.)

Obviously, sorting has some interest. First, you can get a somewhat faster first Z pass; then, you can reverse the sorting to draw alpha-blended models back to front. It won’t bring any improvement as long as you don’t write anything in the Z-only pass and you don’t draw alpha-blended polygons.

This kind of sorting can tolerate a frame or two of lag, so you’re not forced to update it each frame. My personal digging showed that a very rough bin sort gives correct visual results.
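A rough bin sort of this kind might look like the following C sketch; `NBINS`, the `Item` struct, and `bin_sort` are illustrative names, not from the thread. It is a stable counting sort into a handful of coarse depth buckets, so it runs in O(n) with no comparisons, and the arbitrary order within a bucket is fine when only an approximate front-to-back order is needed:

```c
#include <stdlib.h>

#define NBINS 8

typedef struct { float dist; int id; } Item;

/* Map a camera distance to one of NBINS coarse buckets. */
static int bin_of(float dist, float max_dist) {
    int b = (int)(dist / max_dist * (float)NBINS);
    if (b < 0) b = 0;
    if (b >= NBINS) b = NBINS - 1;
    return b;
}

/* Stable counting sort into NBINS coarse depth buckets. */
static void bin_sort(Item *items, int n, float max_dist) {
    int count[NBINS + 1] = { 0 };
    Item *tmp = malloc((size_t)n * sizeof *tmp);
    /* histogram, shifted by one so the prefix sum gives start offsets */
    for (int i = 0; i < n; ++i)
        count[bin_of(items[i].dist, max_dist) + 1]++;
    for (int b = 0; b < NBINS; ++b)
        count[b + 1] += count[b];
    /* scatter each item to its bucket, preserving in-bucket order */
    for (int i = 0; i < n; ++i)
        tmp[count[bin_of(items[i].dist, max_dist)]++] = items[i];
    for (int i = 0; i < n; ++i)
        items[i] = tmp[i];
    free(tmp);
}
```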

You can implement an object BSP quite easily too, as long as your geometry stays static.

SeskaPeel.

thats an idea that i never thought of, sticking ambient in with the depth. though at first glance i feel just keeping all the depth stuff separate is better (shadows etc)
i have now implemented a new drawing method in my game which gives nice results for the fog (not the lighting, which i think will absolutely kill framerate)
though ive stuck a flag in the options so u can turn it on/off

imagine this horror scenario
alphablended fire-like particles giving off blended smoke
A/ set fire state
draw the furthest away particles until u hit a smoke particle
B/ set fog state + draw fogged fire
C/ set smoke state + draw smoke particles until u hit a fire particle
D/ set fog state + draw fogged smoke
goto A) + repeat many times

ie u end up with easily possible >200 state changes in a small area
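To put a number on that, here is a toy C model of the scenario; the `Particle` type and the two-state-changes-per-run cost (one material state plus one fog state, as in steps A-D above) are assumptions for illustration:

```c
typedef enum { FIRE, SMOKE } Kind;

/* A particle as drawn back to front; only its material kind matters here. */
typedef struct { float depth; Kind kind; } Particle;

/* Walk the back-to-front draw list and count state changes, assuming each
   run of same-kind particles costs one material state change plus one fog
   state change. */
static int count_state_changes(const Particle *p, int n) {
    if (n == 0) return 0;
    int changes = 2;                  /* first run: material + fog state */
    for (int i = 1; i < n; ++i)
        if (p[i].kind != p[i - 1].kind)
            changes += 2;             /* new run: material + fog state */
    return changes;
}
```

With fire and smoke fully interleaved in depth, every run is one particle long, so 200 particles cost about 400 state changes; drawing each system in its entirety instead collapses that to 4, at the cost of visual correctness.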

Yes, and how would you avoid those state changes?

Originally posted by knackered:
Yes, and how would you avoid those state changes?
you cant
i might change to a hack instead where i draw each particle system in its entirety + then fog it, then switch to the next closer particle system. visually it wont be correct but a lot less state changes.
damn this cursed fog,
im thinking about switching back to my old method where the game world was round (no fog required, the horizon does the hiding). visually it looked very cool as well, flying around an actual round planet.
the reason i changed back to a 2d world was for gameplay reasons, it took forever and a day to fly around the 3d surface (gameplay over graphics)

Originally posted by zed:
but what happens if u lay down a depth pass first?
then my reasoning is that draw order is totally irrelevant.
is this assumption sound?
cheers zed

For all passes after the depth pass it’s indeed irrelevant. However, for the depth pass it’s still going to matter. It may not always be a big deal, but if you can do it without spending a huge amount of time on it you should definitely sort it. Not on polygon level, but at least sectors and stuff. And always draw the skybox last. :slight_smile:
I’m working on an engine at the moment, and when I did a rough sort for my depth pass (ambient + emissive) I saw quite a decent speedup, like 5-10% or so, don’t recall exactly.

Originally posted by Java Cool Dude:
Wouldn’t that end up sending the geometry down to the GPU twice? Even in the case of VBOS where most of the data is cached onto the graphic board, calling data pointers (texture, color and especially vertex) would cause more of a performance hit than a few distance check before drawing once.
My $0.02

High-level object sorting won’t help with self-occlusion though. So a depth pass is certainly helpful even when you can easily sort objects. What’s best is to do a high-level sort for the depth pass, then do all the lighting in later passes (order irrelevant). Unless of course if you’re vertex shader bound.
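This can be illustrated with a toy single-pixel model in C; `shaded_fragments` and its inputs are made up for illustration, modeling a GL_LESS depth test for the single-pass case and a depth-only pass followed by a GL_EQUAL color pass for the prepass case:

```c
/* Count how many fragments at one pixel pass the depth test -- and hence run
   the expensive fragment shader -- given fragment depths in draw order. */
static int shaded_fragments(const float *depth, int n, int prepass) {
    float zbuf = 1.0f;                       /* cleared to the far plane */
    int shaded = 0;
    if (prepass) {
        for (int i = 0; i < n; ++i)          /* cheap depth-only pass */
            if (depth[i] < zbuf) zbuf = depth[i];
        for (int i = 0; i < n; ++i)          /* color pass: GL_EQUAL test */
            if (depth[i] == zbuf) shaded++;
        return shaded;
    }
    for (int i = 0; i < n; ++i)              /* single pass: GL_LESS test */
        if (depth[i] < zbuf) { shaded++; zbuf = depth[i]; }
    return shaded;
}
```

Back-to-front order without a prepass shades every overlapping layer; front-to-back order already shades only the nearest; with the prepass, draw order stops mattering.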

Originally posted by zed:
but what happens if u lay down a depth pass first?
then my reasoning is that draw order is totally irrelevant.
is this assumption sound?
cheers zed

This technique is recommended by Microsoft when programming for the xbox 360.

The cost of sending the geometry twice is usually insignificant compared to the time saved on fragment shaders. Also, the time saved will only increase as the shaders get more complicated.

Still need to handle the alpha-blended stuff separately though…

/A.B.

but what happens if u lay down a depth pass first?
then my reasoning is that draw order is totally irrelevant.

On NV hardware “z-fill only” is twice as fast as “z+color fill”. For the following passes sorting is irrelevant because of the “early-z” test.

yooyo

Originally posted by yooyo:
[b]but what happens if u lay down a depth pass first?
then my reasoning is that draw order is totally irrelevant.

On NV hardware “z-fill only” is twice as fast as “z+color fill”. For the following passes sorting is irrelevant because of the “early-z” test.

yooyo[/b]
I’ve read this a million times about NV hardware, but I do not recall reading what ATI does. Do they act the same way as NV cards do when doing z only? As in 2x speed render. I assume they do, though I don’t think I’ve seen this written anywhere.

-SirKnight

The cost of sending the geometry twice is usually insignificant compared to the time saved on fragment shaders. Also, the time saved will only increase as the shaders get more complicated.
Well, let’s put that in some context: the cost of sending the geometry twice is insignificant ONLY if you’re exceptionally GPU bound on the fragment shader, and you’re working with a scene dense enough that a front-to-back sort doesn’t really improve your rendering (ie the sort can’t determine which pixels will / won’t be redrawn).

For example, a standard RTS saves pretty much nothing with a Z-only pass. Even though the shaders can be just as complex (and fill up a huge amount of screen area), a z-only pass will really only save you on terrain pixels that will later be redrawn by a model. (In which case, rendering the models first in your pipeline will solve this, alpha objects aside.)

IMHO the z-only pass is like the new dx9 instancing - a great idea and quite powerful when used in a specific problem space. But in general, just know your pipeline.

~Main

Colt “MainRoach” McAnlis