Whatever happened to the F-Buffer?

Did they figure out that it’s just a big register cache and move on? Did hardware developments like huge instruction counts make it obsolete? Or did they decide it doesn’t help with the interesting stuff, like app attributes and textures, without multipass support at least in the driver layer?
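
To be concrete about what I mean, here’s my own rough CPU-side sketch of what the F-buffer was supposed to buy you over plain screen-space multipass. Every name in it (Fragment, passA, passB) is made up for illustration; it’s not from the paper or any real driver.

```cpp
// Toy CPU-side illustration (not real driver code): splitting a "too long"
// shader into two passes, comparing a screen-space intermediate buffer with
// a rasterization-order FIFO (the F-buffer idea).
#include <cstdio>
#include <vector>

struct Fragment {
    int x, y;       // pixel the fragment lands on
    float value;    // some per-fragment input (e.g. an interpolated attribute)
};

// First half of the long shader: produces an intermediate result.
float passA(const Fragment& f) { return f.value * 0.5f; }

// Second half: consumes the intermediate and finishes the computation.
float passB(const Fragment& f, float intermediate) { return intermediate + f.value; }

int main() {
    // Two fragments covering the SAME pixel (think blended transparent layers).
    std::vector<Fragment> frags = { {3, 4, 1.0f}, {3, 4, 10.0f} };

    // Screen-space multipass: intermediates keyed by pixel position, so the
    // second fragment at (3,4) overwrites the first one's intermediate and
    // the first fragment's pass B reads the wrong value.
    float screenBuf[8][8] = {};
    for (const auto& f : frags) screenBuf[f.y][f.x] = passA(f);
    for (const auto& f : frags)
        std::printf("screen-space: %f\n", passB(f, screenBuf[f.y][f.x]));

    // F-buffer: intermediates stored in rasterization order (a FIFO), one
    // slot per fragment, so overlapping fragments keep their own values.
    std::vector<float> fbuffer;
    for (const auto& f : frags) fbuffer.push_back(passA(f));
    for (size_t i = 0; i < frags.size(); ++i)
        std::printf("f-buffer:     %f\n", passB(frags[i], fbuffer[i]));
}
```

The point being that the FIFO holds one entry per rasterized fragment rather than per pixel, which is what would let a driver split an over-long shader across passes without breaking blending of overlapping geometry.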

Or is it actually in there under the covers making ‘unlimited’ shaders possible?

Any News?

I have no idea what IHVs think about it, but IMHO there is no point. We have virtually unlimited shaders now (we are unlikely to hit the limits in practice), and emulating truly unlimited shaders would probably turn out too complicated to have much practical use. Large shaders mean slow performance anyway. I don’t expect any hardware vendor to implement such a feature.

Sorry for throwing in my 50 cents…

I was going to say it went the way of the Bachman’s Warbler, but I don’t think it ever had wings.

I really like the idea, or what I understand of it, except perhaps for the vagaries and hand-waving around the whole fast/generic/cheap solution to the memory spillage problem :wink: (I’ve only skimmed the SIGGRAPH paper.)
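
If I read it right, the spillage issue is just that the FIFO is a fixed size, so something (hardware or the driver) has to chop the fragment stream into chunks that fit and run the later passes on each chunk before resuming. Here’s a toy sketch of that control flow, with every name invented for the example and no claim that this is the paper’s actual mechanism:

```cpp
// Toy sketch of F-buffer overflow handling: when the fixed-size FIFO fills up,
// run the downstream pass on what has been buffered, empty it, and resume.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Intermediate { int x, y; float value; };

// Run the downstream pass over whatever has been buffered, then empty the FIFO.
void drainChunk(std::vector<Intermediate>& fbuffer) {
    for (const auto& i : fbuffer)
        std::printf("pass B on fragment (%d,%d): %f\n", i.x, i.y, i.value);
    fbuffer.clear();
}

int main() {
    const std::size_t kCapacity = 4;        // deliberately tiny for the example
    std::vector<Intermediate> fbuffer;
    fbuffer.reserve(kCapacity);

    // Pretend pass A emits ten fragments' worth of intermediates.
    for (int n = 0; n < 10; ++n) {
        if (fbuffer.size() == kCapacity)
            drainChunk(fbuffer);            // spill: finish pass B, then resume pass A
        fbuffer.push_back({n % 4, n / 4, float(n)});
    }
    drainChunk(fbuffer);                    // flush the tail
}
```

The hand-wavy part, as far as I can tell, is making that chunking cheap and transparent when the downstream passes depend on ordering or the geometry has to be replayed.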

By the by, is that Mesa implementation available somewhere? I’d like to see it in action, even if it is in software.

Wasn’t the 9800 supposed to have one of these, even though it was never exposed? It might not be useful on newer cards, but the 9800 had a pretty low instruction limit and could have stood to benefit. On a side note, what about the 1xxx series cards never getting OpenGL support for FP16 AA?

At a high level, I think the f-buffer is an interesting idea, but making it work robustly without a lot of software intervention is tricky.

The problems the f-buffer was supposed to address have been reasonably well solved in other ways, or the resources it proposed to virtualize turned out not to need full virtualization.

So that leaves a less compelling argument for implementing one today.