We know that anisotropic filtering isn't in core OpenGL for a reason: IP issues. We also know that this is slightly ridiculous, given that hardware support is ubiquitous. So how can we get anisotropic filtering into core? Simple.

Relax the specification so that GL_MAX_TEXTURE_MAX_ANISOTROPY is allowed to be 1.

Per the specification, setting GL_TEXTURE_MAX_ANISOTROPY to 1 disables anisotropic filtering; I quote:

"When the texture's value of TEXTURE_MAX_ANISOTROPY_EXT is equal to 1.0, the GL uses an isotropic texture filtering approach..."

So with GL_MAX_TEXTURE_MAX_ANISOTROPY allowed to be 1, the only legal value of GL_TEXTURE_MAX_ANISOTROPY on such an implementation is 1, which disables anisotropic filtering, and the IP requirements are satisfied.
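
To make that concrete, here is a minimal sketch of how an application would set anisotropy. It is written against the existing EXT tokens, since the unsuffixed names above are the proposed core ones, and the helper name and clamp-to-max approach are purely illustrative; on an implementation whose maximum is 1, the clamp leaves the value at 1 and filtering stays isotropic:

```c
#include <GL/gl.h>
#include <GL/glext.h>   /* for the *_EXT anisotropy tokens */
#include <math.h>

/* Assumes a current GL context and a 2D texture bound to GL_TEXTURE_2D. */
static void set_anisotropy(GLfloat requested)
{
    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);

    /* If the implementation reports a maximum of 1.0, the only value we can
       legally end up with is 1.0, which disables anisotropic filtering. */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                    fminf(requested, maxAniso));
}
```

The point is that application code like this keeps working unchanged whether the implementation genuinely supports anisotropic filtering or merely advertises a maximum of 1.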

It's ugly and hacky, yes, but there is precedent: GL_ARB_occlusion_query allows GL_QUERY_COUNTER_BITS_ARB to be 0 so that implementations which don't actually support occlusion queries can still expose the extension; see http://www.opengl.org/archives/about...003-06-10.html (look for "ARB_occlusion_query"). This proposal does the same thing for anisotropic filtering.
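
For comparison, here is a rough sketch (assuming a current context and Mesa-style headers that expose the extension prototypes) of what that precedent looks like from the application side: the extension can be present, but a counter width of zero tells you it does nothing useful. The same check-and-degrade pattern would apply to GL_MAX_TEXTURE_MAX_ANISOTROPY being 1.

```c
#define GL_GLEXT_PROTOTYPES 1  /* expose extension prototypes (Mesa-style headers) */
#include <GL/gl.h>
#include <GL/glext.h>

/* Returns nonzero if occlusion queries can actually count samples.
   An implementation is allowed to expose ARB_occlusion_query while
   reporting zero counter bits, in which case query results are always 0. */
static int occlusion_queries_usable(void)
{
    GLint counterBits = 0;
    glGetQueryivARB(GL_SAMPLES_PASSED_ARB, GL_QUERY_COUNTER_BITS_ARB,
                    &counterBits);
    return counterBits > 0;
}
```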

Is it right to propose something ugly and hacky? Probably not, but it seems to me that the benefit of finally getting anisotropic filtering into core would outweigh the downside.