ATI says there is no trilinear filtering definition??

I always thought GL_LINEAR_MIPMAP_LINEAR was real trilinear, but ATI says they have made up a new trilinear which is better than GL_LINEAR_MIPMAP_LINEAR ( see pages 151/152 of the OpenGL 1.5 Spec ).

If you read this quote from a recent interview, they said their trilinear was better than normal trilinear:

gs.
When will ATI restore full trilinear so that review sites can actually rebench and retest your cards, since any previous review benchmarks is invalidated by this cheat/optimisation/whatever?

Andy/Raja
We have never removed “full trilinear”. We certainly do not believe that any benchmarks have been invalidated by our techniques. In all cases reviewed so far we believe that we have higher image quality than other implementations.
Is there any way to get this better trilinear on other cards??

ATI is talking about what appears to be some kind of adaptive LOD calculation, based on the image content, that adjusts the blend transition to save performance (AFAIK), not necessarily to look better.

Trilinear filtering has been mucked with on cards for some time. With textbook trilinear filtering, where the blend between MIP map levels is determined by the calculated MIP LOD for each texture fragment, almost all of your pixels end up as some blend between two texture levels of detail. Strictly speaking, that full blend isn’t needed to eliminate most aliasing artifacts while still giving a smooth transition between texture levels. This means you can shrink the transition region where you blend between two MIP LODs so it sits nearer the middle of each MIP transition. For example, instead of blending all the way from LOD 1.0 to 2.0, you blend only from 1.4 to 1.6 and do a simple single-LOD bilinear fetch everywhere else, increasing your texturing performance. This may still be considered trilinear MIP mapping, and it’s the kind of thing all PC card makers have been doing for years, especially on performance settings.
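The narrowed transition described above can be sketched as a toy blend-weight function (the 1.4–1.6 band is just the example figure from this post; real drivers pick their own band, and this is not any vendor's actual implementation):

```python
import math

def trilinear_weight(lod):
    """Standard trilinear (GL_LINEAR_MIPMAP_LINEAR): the blend between
    adjacent MIP levels is simply the fractional part of the LOD."""
    return lod - math.floor(lod)

def narrowed_weight(lod, lo=0.4, hi=0.6):
    """'Brilinear'-style sketch: blend only inside [lo, hi] of each MIP
    transition; everywhere else sample a single level (plain bilinear),
    which is cheaper because only one MIP level has to be fetched."""
    f = lod - math.floor(lod)
    if f <= lo:
        return 0.0                  # pure lower MIP level
    if f >= hi:
        return 1.0                  # pure upper MIP level
    return (f - lo) / (hi - lo)     # remapped blend inside the narrow band
```

With either weight `w`, the final color is `(1 - w) * sample(level) + w * sample(level + 1)`; with the narrowed weight, most fragments land at `w == 0.0` or `w == 1.0` and need only one bilinear sample.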

I think what ATI has implemented is a system that does something like the above automatically, based on the image contents of the MIP map levels, so that depending on the image the blend region can vary.

This has caused a PR problem, because some quality-test software uses flat primary-color images with completely different contents between MIP LODs, apparently defeating ATI’s technique.

I’m not 100% clear on what their optimization is, but trilinear is trilinear; all vendors try to cut corners and optimize it, and in the end it should be about what looks best and runs fastest.

Trilinear MIP mapping is not the best texture filter by any means; it’s just a popular and efficient one that is pervasive now. It should be possible to improve on it (anisotropic filtering does this, for example), but that doesn’t mean ATI has improved on it. It probably means they can apply optimizations more aggressively for certain textures at a given quality level without degrading the appearance noticeably, and it’s probably not a bad thing to attempt.

Yet it is somewhat puzzling how their press presentations seem to have been over 50% NVIDIA-bashing over brilinear filtering, promises of “real” trilinear at max quality, and even advice to use colored mipmaps or other tools to see the difference in quality (knowing that their own optimization won’t be “on” in those cases).

But then, who ever believed a single word coming from a company’s marketing people?

Maybe if they get enough complaints they will sooner or later add a switch to their driver settings to turn it off (let’s hope turning it off won’t just hide it better).