Depth range equation

In keeping with the recent effort to ease transitions from DX, how about a user-defined depth range equation,

glDepthRangeEquation(GLclampf a, GLclampf b),

where the final Zw = a + b * Zd, with a + b = 1.

For DX, a = 0 and b = 1. For GL it's business as usual with a = (n + f)/2 and b = (f - n)/2 (with the default DepthRange(0, 1), that's a = 1/2 and b = 1/2).
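
To make the arithmetic concrete, here's a minimal sketch of the proposed transform with both sets of coefficients (plain C, no GL dependency; glDepthRangeEquation itself is of course hypothetical at this point):

    #include <stdio.h>

    /* The proposed transform: Zw = a + b * Zd. */
    static float depth_transform(float a, float b, float zd)
    {
        return a + b * zd;
    }

    int main(void)
    {
        /* DX convention: Zd in [0, 1], a = 0, b = 1, so Zw = Zd. */
        printf("DX: %f .. %f\n", depth_transform(0.0f, 1.0f, 0.0f),
                                 depth_transform(0.0f, 1.0f, 1.0f));

        /* GL convention with the default DepthRange(0, 1): Zd in [-1, 1],
         * a = (n + f)/2 = 0.5, b = (f - n)/2 = 0.5. */
        printf("GL: %f .. %f\n", depth_transform(0.5f, 0.5f, -1.0f),
                                 depth_transform(0.5f, 0.5f,  1.0f));
        return 0;
    }

Both conventions land on the same [0, 1] window range; only the assumed Zd interval differs.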

May need a GL_DEPTH_RANGE_EQUATION enable to change the sense of the parameters to the existing DepthRange, or to otherwise indicate the desired behaviour in the presence of the two coexisting APIs.

Hurl your finest leafy greens…

On further reflection… glDepthEquation is probably a better fit, as it decouples the equation from the range.

I suppose ideally we'd have a full-blown user-defined viewport transformation.

If further justification is necessary, consider that glFrustum and all the fixed-function transformation functions have been completely removed from the API as of 3.2. Thus it now seems somewhat inappropriate to make assumptions about quantities derived from them (i.e. that post-projective z is in the range [-1, 1]).
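
To see where the [-1, 1] assumption comes from, take the z row of the classic glFrustum matrix and evaluate it at the near and far planes. A quick check (plain C; n and f are arbitrary example planes):

    #include <stdio.h>

    /* glFrustum z row: z_clip = -(f+n)/(f-n) * z_eye - 2fn/(f-n) * w_eye,
     * with w_clip = -z_eye. NDC z is z_clip / w_clip. */
    static double ndc_z(double n, double f, double z_eye)
    {
        double z_clip = -(f + n) / (f - n) * z_eye - 2.0 * f * n / (f - n);
        double w_clip = -z_eye;
        return z_clip / w_clip;
    }

    int main(void)
    {
        double n = 0.1, f = 100.0;
        printf("near plane: %f\n", ndc_z(n, f, -n)); /* -> -1 */
        printf("far plane:  %f\n", ndc_z(n, f, -f)); /* -> +1 */
        return 0;
    }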

So long as rasterization is largely fixed-function, there will always be some basic assumptions that it makes about the input triangles.

Well, as a simplification, this could be reduced to a single enable/disable of GL_DEPTH_TRANSFORM, which would be enabled by default but, when disabled, would simply forgo the viewport z-range transformation.
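
As a sketch of the intended semantics (GL_DEPTH_TRANSFORM being the hypothetical token proposed here, not a real one):

    /* Hypothetical semantics of the proposed enable; GL_DEPTH_TRANSFORM
     * is not a real GL token. Enabled (the default) applies the viewport
     * z transform as today; disabled passes Zd straight through to the
     * depth buffer. */
    float window_z(int depth_transform_enabled, float a, float b, float zd)
    {
        return depth_transform_enabled ? (a + b * zd) : zd;
    }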

As far as I can tell there are no other dependencies on the existing range, spec wise.

I think you got that backwards. Clipping is defined as including anything between -Wc and Wc, and based on that definition glFrustum and similar functions need to generate a projection matrix with specific values, not the other way round.

To be compatible with DirectX it is not enough to simply redefine the viewport transformation; you actually need to change the definition of clip space. The clip-space depth range is [0, Wc] in DirectX and [-Wc, Wc] in OpenGL. There's actually a very good reason to prefer the DirectX way: floating-point precision is highest around 0, and the OpenGL viewport transformation, as it involves an addition, kills almost all precision benefits a float depth buffer has over a 24-bit fixed-point depth buffer.
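
The precision point is easy to demonstrate. Under the GL-style transform the result lands in the upper half of [0, 1], where the spacing between adjacent floats is about the same as one step of a 24-bit fixed-point buffer, whereas near 0 the float spacing keeps shrinking. A quick check (assumes only IEEE-754 single precision):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float near_one  = 0.99f;
        float near_zero = 1e-4f;
        /* Distance to the next representable float (the ULP). */
        printf("float ULP near 1.0: %g\n",
               nextafterf(near_one, 2.0f) - near_one);
        printf("float ULP near 0:   %g\n",
               nextafterf(near_zero, 1.0f) - near_zero);
        /* One step of a 24-bit fixed-point depth buffer. */
        printf("24-bit step:        %g\n", 1.0 / (double)(1 << 24));
        return 0;
    }

Near 1.0 the float buffer is no better than 24-bit fixed point; the DirectX convention keeps the dense part of the float range usable.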

Probably, and it would not be the first time.

I like your rationale even better, though in all honesty I'd be satisfied with any reasoning that brings this to fruition.

By god, we oughta at least have some agreement on basic boilerplate graphics math.