What IS anisotropic filtering?!

What I’m asking is what type of filter it is, not the way it works… Is it another type like bilinear or trilinear, or more like compression? Can one use anisotropic with, for example, trilinear filtering? And what about anisotropic with compression? I believe this last one is not very bright, because if you want quality, you use aniso and forget compression, and on the other hand, if quality is not important, you should not use aniso and should compress the textures… Am I right?

What is the best way, the proper combinations, to use all these options together?

I can’t find anything in NVIDIA’s papers that helps…

Thanks…

Good question. I’d like to see info on all these things too. I can’t remember whether I saw something about aniso + trilinear filtering being redundant or something… dunno…

Maybe an FAQ from NVIDIA or ATI covering all of these, which combinations can be used, and how they affect performance would be nice.

Nutty

http://www.anandtech.com/guides/viewfaq.html?i=36

Originally posted by KRONOS:
And what about anisotropic with compression? I believe this last one is not very bright, because if you want quality, you use aniso and forget compression, and on the other hand, if quality is not important, you should not use aniso and should compress the textures… Am I right?

It can be quite useful to use compression and anisotropic filtering on the same texture. For example, you might use compressed textures for the ground of your landscape, and then activate anisotropic filtering to reduce unnecessary blurriness on the ground when your viewpoint is at low altitude (like when you are standing on the ground).

The improvements you get from the filtering are not negated by the texture compression.
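
In OpenGL all three are just texture state on the same texture object, so they combine freely. Here is a minimal sketch, assuming the driver exports EXT_texture_filter_anisotropic and ARB_texture_compression (check the extension string first) and that width/height/pixels come from your own image loader:

#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glext.h>   /* for the EXT/ARB tokens */

/* Sketch only: assumes EXT_texture_filter_anisotropic and
   ARB_texture_compression are available, and that the caller's image
   loader provides width, height and pixels. */
GLuint make_texture(GLsizei width, GLsizei height, const GLubyte *pixels)
{
    GLuint  tex;
    GLfloat maxAniso;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Trilinear filtering: linear blend between two mipmap levels. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Anisotropic filtering on top of that: ask for the card's maximum. */
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);

    /* Compression: request a generic compressed internal format and let
       the driver compress the data while building the mipmap chain. */
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_COMPRESSED_RGB_ARB,
                      width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    return tex;
}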

     Eero

there’s a good document somewhere
anyways, here’s the short version of anisotropic:
with mipmapping a texture will go 32x32 -> 16x16 -> 8x8 etc
but with anisotropic filtering you would have 32x32 -> 32x16 -> 32x8 etc
why is this useful?
take a book and place it in front of your eyes
slowly tilt it till it’s flat
notice how it still appears as wide as it was when it was parallel to your face, but when it’s flat the height has become less.

The problem with isotropic filtering is that it always samples from a 2x2 texel square region of your texture.

Look at this demo: http://developer.nvidia.com/view.asp?IO=Show_Footprint – it shows the real footprint of a block of screen pixels in the texture. You’ll notice that as soon as you start to rotate the textured surface, the footprint is sheared, and quickly loses its square shape.

An ideal texture filtering scheme would calculate the footprint of each pixel in the texture, and sample all of the texels inside the footprint. The standard isotropic filter, however, completely disregards this and sticks to the 2x2 texel square for sampling.

Anisotropic filtering, by comparison, takes a larger number of samples from the pixel’s footprint (I believe this is what the term “taps” refers to, i.e. 8-taps = 8 samples). The number of required samples in the ideal case is unbounded, but at least anisotropic filtering allows you to get a better approximation than the standard linear filter.
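
To make that concrete, here is a rough software sketch of the idea (purely an illustration, not how any particular chip works): estimate the footprint from the texcoord derivatives, then spread a number of bilinear taps along its longer axis and average them. The bilinear_sample() helper and the derivative arguments are made-up names for the sake of the example; real hardware would also pick the mipmap level from the shorter axis.

#include <math.h>

typedef struct { float r, g, b; } Color;

/* Hypothetical helper: an ordinary 2x2 bilinear lookup at (u, v). */
extern Color bilinear_sample(float u, float v);

/* Illustration only: N-tap anisotropic sampling in software.
   (du_dx, dv_dx) and (du_dy, dv_dy) are the pixel's texcoord derivatives
   with respect to screen x and y, i.e. they describe the footprint. */
Color anisotropic_sample(float u, float v,
                         float du_dx, float dv_dx,
                         float du_dy, float dv_dy,
                         int taps)
{
    /* Sample along the longer of the two footprint axes. */
    float len_x  = sqrtf(du_dx * du_dx + dv_dx * dv_dx);
    float len_y  = sqrtf(du_dy * du_dy + dv_dy * dv_dy);
    float axis_u = (len_x > len_y) ? du_dx : du_dy;
    float axis_v = (len_x > len_y) ? dv_dx : dv_dy;
    Color sum    = { 0.0f, 0.0f, 0.0f };
    int   i;

    for (i = 0; i < taps; i++)
    {
        /* Spread the taps evenly from one end of the footprint to the other. */
        float t = (i + 0.5f) / (float)taps - 0.5f;
        Color c = bilinear_sample(u + t * axis_u, v + t * axis_v);
        sum.r += c.r;
        sum.g += c.g;
        sum.b += c.b;
    }

    sum.r /= (float)taps;
    sum.g /= (float)taps;
    sum.b /= (float)taps;
    return sum;
}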

– Tom

[This message has been edited by Tom Nuydens (edited 04-16-2002).]

Originally posted by zed:
but with anisotropic filtering you would have 32x32 -> 32x16 -> 32x8 etc

That’s ripmapping. Ripmapping has an anisotropic-like effect, but it won’t work when viewing polygons at an angle close to 45 degrees in texture space.

thanks for the correction Humus

Ripmapping…teehee

From http://www.anandtech.com/guides/viewfaq.html?i=36

" If the ground is at an angle on your screen, anisotropic filters based on that angle. It works on the space the object occupies in the 3D scene."

How does it do that? I thought anisotropic filtering was based on screen space, meaning the filtering is done using the texels that fall on a specific pixel. Or maybe this is just another method among hundreds…

V-man

well… as the rasterizer has the derivatives for interpolating the x, y and z components, it can do this very easily:

take the midpoint of the pixel… say we want 4 surrounding samples… just add those texcoord derivatives in screen space. that will result in a screen-space quad… which will not be a quad in texture space except if it’s parallel to the screen…

=> the 4 texcoords we want to sample…

now how you place your samples is your problem…
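
roughly like this, as a sketch (all the names here are made up, it’s just to show the idea):

/* Sketch: build the 4 texcoords of the pixel's screen-space quad from the
   texcoord derivatives the rasterizer already has. (u, v) is the texcoord
   at the pixel midpoint; du_dx/dv_dx and du_dy/dv_dy say how the texcoords
   change per pixel in screen x and y. */
void footprint_corners(float u, float v,
                       float du_dx, float dv_dx,
                       float du_dy, float dv_dy,
                       float corner_u[4], float corner_v[4])
{
    /* half a pixel step in each screen direction */
    float hu_x = 0.5f * du_dx, hv_x = 0.5f * dv_dx;
    float hu_y = 0.5f * du_dy, hv_y = 0.5f * dv_dy;

    corner_u[0] = u - hu_x - hu_y;   corner_v[0] = v - hv_x - hv_y;
    corner_u[1] = u + hu_x - hu_y;   corner_v[1] = v + hv_x - hv_y;
    corner_u[2] = u - hu_x + hu_y;   corner_v[2] = v - hv_x + hv_y;
    corner_u[3] = u + hu_x + hu_y;   corner_v[3] = v + hv_x + hv_y;

    /* a square in screen space, but a general quad in texture space
       unless the surface is parallel to the screen */
}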

btw, using 32x aniso combined with 4x fsaa results in 128 samples per pixel… and each one of those gets calculated…

why did nvidia not implement looping in the pixel shader? they could do 128 loops instead (hint to the siggraph2002 raytracing-on-current-hardware paper)

Originally posted by davepermen:
why did nvidia not implement looping in the pixel shader? they could do 128 loops instead (hint to the siggraph2002 raytracing-on-current-hardware paper)

Hmmm, because loops kill Pipelining?

not really, they could simply put the result in some queue with a buffer for 128 colors (it would be 512 bytes, or 1 KB with stencil and Z)
and simply go through the queue several times (they can process this queue quite pipelined anyway, so no big problem… sure it NEEDS some changes, but i guess nvidia can do this quite fast )

zeck,

Looping does not kill pipelining, it just makes it harder. Perhaps they wanted to spend their transistors elsewhere for the current hardware generation.

As you define the language, you can make looping using primitives that predict and pipeline very well; you don’t necessarily need to support arbitrary if() statements. One example is a fixed number of iterations, specified in a per-pixel parameter which comes out of the vertex shader.
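
As a toy C sketch of that last idea (not any real shader model, just to show why it is pipeline-friendly): the iteration count is an ordinary interpolated input, so it is known before the loop starts, and nothing inside the loop ever has to decide whether to keep going.

/* Toy illustration, not any real shader model: the loop count arrives as a
   per-pixel parameter from the vertex shader, so it is fixed before the
   loop body runs and the hardware never branches on a value computed
   inside the loop. */
typedef struct { float r, g, b; } Color;

Color shade_pixel(int loop_count, const Color inputs[])
{
    Color result = { 0.0f, 0.0f, 0.0f };
    int   i;

    for (i = 0; i < loop_count; i++)   /* count known up front */
    {
        /* stand-in for the real per-iteration work */
        result.r += inputs[i].r;
        result.g += inputs[i].g;
        result.b += inputs[i].b;
    }
    return result;
}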

We’ll get there in the end, I’m sure.