Originally posted by dorbie:
[b]From hints given in personal correspondence, hardware implementations exploit the screen-space 2D interpolation linearity of a z-buffer (a consequence of its non-linearity w.r.t. true eye Z) for features like coarse z-buffer optimizations.
So the costs aren’t fully clear anymore; there was a time when folks (mainly at 3Dfx, I think) suggested that hardware was up to the task of using perspective correction for pathological choices of near and far clip. Now that may not be the case, because it complicates popular optimizations like coarse z.
I like the non-linearity of a z-buffer for reasonable clip plane values; it really only sucks when you crank the near plane in too far. There are other schemes besides w-buffering that try to correct for this too.[/b]
So I take it that w-buffering is not a commonly supported feature, even though D3D seems to have built-in functionality to facilitate it?
I read something about w-buffering in another forum here before; I’ll probably try to find it, even though it sounds like a lost cause. It seems like popular games utilize it, but they may’ve been games from the software rendering days.
My basic problem occurs not when the near plane is too small, but when the near-to-far ratio is too small, so 1/100 behaves the same as 10/1000.
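To convince myself of that, here’s a quick C sketch (my own numbers, assuming a standard GL-style perspective projection with depth mapped to [0,1]); the window depth works out to (f/(f-n)) * (1 - n/d), which only depends on the near:far ratio and where you are between them:

[code]
#include <stdio.h>

/* Window-space depth for an eye-space distance d, given near n and far f
   (standard GL-style perspective projection, depth remapped to [0,1]). */
static double window_depth(double n, double f, double d)
{
    return (f / (f - n)) * (1.0 - n / d);
}

int main(void)
{
    /* Same near:far ratio (1:100) at two absolute scales. */
    for (double t = 0.0; t <= 1.0; t += 0.25) {
        double d1 = 1.0  + t * (100.0  - 1.0);   /* near=1,  far=100  */
        double d2 = 10.0 + t * (1000.0 - 10.0);  /* near=10, far=1000 */
        printf("t=%.2f  depth(1/100)=%.10f  depth(10/1000)=%.10f\n",
               t, window_depth(1.0, 100.0, d1),
               window_depth(10.0, 1000.0, d2));
    }
    return 0;  /* both columns print identical values */
}
[/code]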
I’m working at the scale of a small planet, only the planet is cylindrical, so the horizon curves up, meaning you don’t get the benefit of distant land dropping below the horizon with curvature.
Basically the near plane is the point at which the camera clips through the scene… so it is the only non-relative judgement of scale.
I really need to drop the size of the near plane or increase the size of the environment so that the camera can go down to a human scale.
Around a near plane of 0.1, swimming seems to occur. If you push the near and far planes up at the same ratio, the swimming goes away, but there is a pretty well-defined point where depth values begin to merge, and then actually…
I’m actually seeing a weird effect I don’t understand; it must have something to do with the z-buffer, though. Basically, imagine rendering two cylinders, one nested inside the other, with relatively close diameters: one is terrain, the other a layer of atmosphere.
The atmosphere is the inner cylinder, and the camera is inside both. Now the crazy thing is that the further away the surfaces are along the z axis, the more the inner cylinder intersects the outer cylinder, until it actually goes behind it.
How is this possible?
It has nothing to do with the modelview matrix… if you move to the point where the cylinders invert, the effect just moves.
I really don’t understand how two identical geometries, one scaled slightly smaller, can totally intersect until the smaller is entirely outside the larger, increasingly along the z axis.
Any thoughts?
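For what it’s worth, here’s a sketch of what I suspect is the merging half of this (assuming a 24-bit integer depth buffer and a GL-style projection; the distances and planes are made up):

[code]
#include <stdio.h>

/* Window depth quantized to 24 bits (GL-style projection, depth in [0,1]). */
static unsigned depth24(double n, double f, double d)
{
    double zw = (f / (f - n)) * (1.0 - n / d);
    return (unsigned)(zw * 16777215.0 + 0.5);
}

int main(void)
{
    const double n = 0.1, f = 100000.0;
    /* Two nested surfaces 10 units apart (inner atmosphere wall just in
       front of the outer terrain wall), pushed further down the z axis. */
    for (double d = 100.0; d <= 100000.0; d *= 10.0) {
        unsigned inner = depth24(n, f, d);
        unsigned outer = depth24(n, f, d + 10.0);
        printf("d=%9.0f  inner=%8u  outer=%8u  %s\n", d, inner, outer,
               inner == outer ? "<- merged" : "distinct");
    }
    return 0;
}
[/code]

Once the two walls quantize to the same depth value, the z-test can no longer order them, and which one wins presumably comes down to draw order and per-triangle interpolation rounding, which might explain the apparent inversion.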
Originally posted by dorbie:
[b]From hints given in personal correspondence, hardware implementations exploit the screen-space 2D interpolation linearity of a z-buffer (a consequence of its non-linearity w.r.t. true eye Z) for features like coarse z-buffer optimizations.[/b]
Oh, so a w-buffer can’t be linearly interpolated in screen space? Still, in this day and age, it seems like dedicated hardware doing a per-pixel depth calculation could be fit in somewhere, couldn’t it?
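Here’s how I picture it (a toy 1D sketch in C, my own numbers): along a projected edge, true eye-space w is not affine in screen position, but 1/w is, so a perspective-correct w would take a per-pixel reciprocal, whereas the post-divide z value lerps directly:

[code]
#include <stdio.h>

int main(void)
{
    /* Edge from A (near) to B (far) in eye space.  For a standard
       perspective projection, clip-space w is the eye-space distance,
       and screen position is x / w.  (1D sketch, made-up numbers.) */
    double xA = -1.0, wA = 2.0;
    double xB =  1.0, wB = 20.0;
    double sA = xA / wA, sB = xB / wB;

    for (double t = 0.0; t <= 1.0; t += 0.25) {
        double x = xA + t * (xB - xA);    /* true point on the 3D edge */
        double w = wA + t * (wB - wA);
        double s = x / w;                 /* where it lands on screen   */
        double u = (s - sA) / (sB - sA);  /* screen-space fraction      */

        double w_lerp    = wA + u * (wB - wA);  /* naive screen lerp: wrong */
        double w_correct = 1.0 / (1.0 / wA + u * (1.0 / wB - 1.0 / wA));

        printf("u=%.3f  true w=%7.4f  screen lerp=%7.4f  via 1/w=%7.4f\n",
               u, w, w_lerp, w_correct);
    }
    return 0;
}
[/code]

So as far as I can tell, the extra cost isn’t really the depth calculation itself but the per-pixel divide in the depth path, plus losing the screen-space-linear tricks like coarse z that dorbie mentions.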
Just to show I don’t get out much… what does w.r.t. mean, btw?