Anisotropic Decimation



skynet
03-02-2006, 06:45 AM
Hello,

I'm trying to use anisotropic decimation in order to scale down an image. The downscaled image is used to calculate the average luminance in a scene. That's why the whole process needs to be as exact as possible. I have already implemented the downscaling for NPOT textures using the bilinear filter. This is done by repeatedly downsizing the image by 1/2 in each dimension.

Now I wanted to use anisotropic decimation at steps where one of the image dimensions is perfectly divisible by 4 and the other by 2.
This gives an anisotropy of 2, so the gfx card should take 2 texture samples for each output pixel.
It means you could downsize a 4x2 image to 1x1 in one go. And if I have understood anisotropic texture filtering correctly, the output pixel should carry the average of the source 4x2 pixels (which is what I need).
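
For reference, the exactness of the plain bilinear halving chain can be modeled on the CPU like this (a NumPy sketch, purely illustrative; it assumes each output pixel lands exactly on a 2x2 source block, as in the power-of-two case):

```python
import numpy as np

# Model of the repeated 1/2 x 1/2 bilinear downsampling chain:
# with exact texel-to-pixel alignment, each bilinear tap averages
# one 2x2 source block, so the final 1x1 result is the exact
# average of the whole image (up to float rounding).
def box_reduce(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.random((8, 8))

r = img
while r.size > 1:
    r = box_reduce(r)
# r[0, 0] now matches img.mean() up to float rounding
```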

Here's my problem: I have implemented code to check the render-to-texture generated average luminance against a value calculated on the CPU. In some situations the CPU-calculated luminance differs significantly from the GPU-calculated value.
If I turn off anisotropic decimation, everything is OK.
This leads me to the conclusion that the filtering does not give me the average texel value for the texture footprint of each pixel I render.
So, what's wrong?

I can assure that
1. the attempted downsizing is anisotropic
2. the texture min/mag filter is set to GL_LINEAR
3. the max. anisotropy for the textures is set to 4
4. the longer source image dimension is divisible by 4, the shorter by 2 (in order to have a perfect texel-to-pixel match)

I am using a GF6800GT with current drivers. The only thing I might have overlooked is that my gfx card might not support anisotropic texture filtering for FP16 textures???

Any hints are welcome.

dorbie
03-02-2006, 09:42 AM
Differences could be caused by color space conversion or plain gamma filtering on the card. That's the kind of quality issue you never get to see the mechanics of. You're obviously using linear physical values, but that card may be filtering under the assumption that the display is sRGB, to improve the visual quality.

I'm not saying that's definitely what's happening, but a really high-quality implementation would have to do this for correct visual weighting on most PC displays.

You could gamma correct your values on the way in and un-gamma them after the readback (actually I think it's the other way around, so that the card's correction places your values in your numerically linear space) to make the arithmetic come out right for you (assuming that's the problem).

dorbie
03-02-2006, 10:01 AM
P.S. It could be something completely different: if one axis is 1:1 and the other 2:1, you're going to get filtering on the other axis too, depending on the MIP filter.

You need to set anisotropy or force it in the driver. Then you probably want to set a NEAREST_MIPMAP_NEAREST min filter and a NEAREST mag filter to stop the probes filtering linearly or trilinearly. This should give you a purely anisotropic summation.

Of course you should also ensure that the texels on your quad are appropriately and exactly pixel aligned/scaled.

skynet
03-03-2006, 03:38 AM
I don't think the problem is directly related to any gamma correction.
My concern is that the problem shows up especially in certain "image configurations". For instance, if the image is basically divided into 3 vertical stripes of luminance: one bright stripe in the middle, two darker ones left and right of it. And since the anisotropic decimation steps are mainly done in the horizontal direction, this configuration seems to be problematic.

"P.S. It could be something completely different, if one axis is 1:1 and the other 2:1 you're going to get filtering on the other axis too depending on MIP filter."
I don't use mipmapping.

"You need to set anisotropy or force it in the driver."
I explicitly set the maximum anisotropy with EXT_texture_filter_anisotropic.

"Then you probably want to set NEAREST_MIPMAP_NEAREST min filter and NEAREST mag filter to stop the probes filtering linearly or trilinearly."
I use GL_LINEAR, since I intentionally want to make use of bilinear sampling.
The idea is: when you downsample 4x2 texels to exactly one pixel, the anisotropic sampling will kick in and turn this into 2 actual samples. The bilinear filter is then expected to average the 4 texels around each of the two sampling points. In the end you get the average of all 8 texels. At least in theory?!
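
To make that concrete, here is a small CPU-side model of one such step (NumPy, purely illustrative; it assumes an ideal implementation that places the two probes at the centers of the left and right 2x2 halves):

```python
import numpy as np

# Model of one 4x2 -> 1x1 anisotropic step: assume the hardware
# places 2 probes along the long axis, each probe being a bilinear
# tap that averages one 2x2 block of texels.
texels = np.arange(8, dtype=np.float64).reshape(2, 4)  # 2 rows, 4 columns

probe_left = texels[:, 0:2].mean()    # bilinear tap over the left 2x2
probe_right = texels[:, 2:4].mean()   # bilinear tap over the right 2x2
aniso_result = 0.5 * (probe_left + probe_right)

# aniso_result equals the plain average of all 8 texels
```

If the hardware places or weights its probes differently, or filters in a nonlinear (e.g. sRGB) space, this equality no longer holds, which would explain a mismatch against a CPU reference.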

dorbie
03-04-2006, 01:09 AM
I don't think you get anisotropic filtering without MIP mapping. I've done this for some very non-MIP-mapped textures, setting MIN LOD to allow me to do this on a single image, on other cards. It may depend on the hardware, but I don't see why any card would support this.

I think your overall plan is sound. Try LINEAR_MIPMAP_NEAREST and a LOD clamp to 0. Or you could just use LINEAR and do a texture coordinate shift with multipass, or multiple taps in a shader.
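
A CPU-side sketch of that last suggestion (NumPy, illustrative only): reduce by 4x horizontally and 2x vertically in one pass with two explicit bilinear taps per output pixel, instead of relying on the driver's anisotropic probes.

```python
import numpy as np

# Model of the "multiple taps in a shader" idea: each output pixel
# takes two bilinear taps, shifted along the wide axis, and
# averages them. With exact texel alignment this reproduces the
# ideal anisotropic 4x2 box average without driver involvement.
rng = np.random.default_rng(1)
src = rng.random((4, 8))          # H=4 rows, W=8 columns

H, W = src.shape
dst = np.empty((H // 2, W // 4))
for y in range(H // 2):
    for x in range(W // 4):
        block = src[2 * y:2 * y + 2, 4 * x:4 * x + 4]
        tap_a = block[:, 0:2].mean()   # bilinear tap over the left 2x2
        tap_b = block[:, 2:4].mean()   # bilinear tap over the right 2x2
        dst[y, x] = 0.5 * (tap_a + tap_b)

# each output pixel is the exact average of its 4x2 source footprint
```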