correct downscaling of images
04-27-2007, 08:42 AM
I need to correctly downsize an image from size m down to size n.
Unfortunately, bilinear filtering isn't going to help me, because in many cases m/n != 2.
Is there a "common" way of doing this on the gpu?
04-27-2007, 09:49 AM
The keyword is MIPMAP I assume.
04-27-2007, 10:09 AM
No, it isn't.
The correct wording seems to be "downsampling 2D signals" :-)
Imagine you wanted to downsample a 15x11 image down to 3x2. How would you do that quickly and almost correctly on the GPU?
04-27-2007, 10:51 AM
A bilinear filter (linear interpolation) is also one of the resampling methods. It may not be accurate for upsampling, but I think it is enough for downsampling.
The ideal sampling function in the time domain is the sinc function, sin(t)/t. You may try non-linear interpolations, such as cubic interpolation, Lanczos, etc.
Try searching for "Windowed Sinc Interpolation" by Gernot Hoffmann. You may find some code in the PDF file.
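To make the windowed-sinc idea concrete, here is a minimal CPU sketch in numpy (function names are mine, not from the Hoffmann PDF). The key point for *down*sampling is that the Lanczos kernel must be stretched by the scale factor m/n, so its cutoff matches the new, lower Nyquist rate:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    # Lanczos window: sinc(x) * sinc(x/a) for |x| < a, else 0.
    x = np.asarray(x, dtype=np.float64)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def downsample_1d(signal, n_out, a=3):
    """Resample `signal` (length m) to n_out samples with a Lanczos filter.
    The kernel is widened by scale = m/n_out so it low-passes correctly
    for arbitrary (non-power-of-two) reduction ratios."""
    m = len(signal)
    scale = m / n_out                      # > 1 when downsampling
    result = np.empty(n_out)
    for i in range(n_out):
        center = (i + 0.5) * scale - 0.5   # source position of output sample
        lo = int(np.floor(center - a * scale))
        hi = int(np.ceil(center + a * scale)) + 1
        idx = np.arange(lo, hi)
        w = lanczos_kernel((idx - center) / scale, a)
        idx = np.clip(idx, 0, m - 1)       # clamp-to-edge at the borders
        result[i] = np.dot(w, signal[idx]) / w.sum()
    return result
```

For a 2D image you would run this once per axis (the filter is separable). Normalizing by `w.sum()` keeps flat regions flat even though the sampled kernel taps don't sum to exactly one.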
04-29-2007, 03:49 AM
A really nifty trick that I used in a GL-based image viewer is to use aniso filtering to generate down-filtered versions of the image. It's not as "nice" as using a sinc function, but then again sinc has issues with ringing, due to the fact that you have to truncate to a finite basis. In practice it works well, even on text.
Just set up an FBO of size Nx by My, turn on max anisotropy and render from the source image into the FBO, then set up your target Nx by Ny FBO and render from the intermediate FBO into the final target. For better performance, filter the largest dimension first.
If you're going to be reducing an image by more than GL_TEXTURE_MAX_ANISOTROPY, you may want to get fancy and do the above in multiple steps, but in practice, for things like photo thumbnails, I haven't seen a need to do so on hardware which supports reasonably good aniso filtering (i.e. 16:1). Since each aniso filter sample is actually a bilerp, the real support for 16:1 is actually 32 pixels, so you can get away with 2*max_aniso in one step.
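The two-pass structure described above (reduce X into an Nx-by-My intermediate, then reduce Y) can be mimicked on the CPU with a separable box (area-average) filter. This is not the poster's GL code, just a numpy sketch with my own function names; the "largest dimension first" ordering only matters for GPU performance, since the two passes commute mathematically:

```python
import numpy as np

def box_reduce_axis(img, n_out, axis):
    # Area-average one axis from its current size m down to n_out.
    # Handles arbitrary (non-integer) ratios by splitting each source
    # texel proportionally between the output bins it overlaps.
    img = np.moveaxis(np.asarray(img, dtype=np.float64), axis, 0)
    m = img.shape[0]
    out = np.zeros((n_out,) + img.shape[1:])
    for i in range(m):
        # source texel i covers [i*n_out/m, (i+1)*n_out/m) in output space
        lo = i * n_out / m
        hi = (i + 1) * n_out / m
        j = int(lo)
        while j < n_out and j < hi:
            overlap = min(hi, j + 1) - max(lo, j)
            out[j] += overlap * img[i]   # overlaps per output bin sum to 1
            j += 1
    return np.moveaxis(out, 0, axis)

def downsample_2d(img, nx, ny):
    h, w = img.shape
    # reduce the axis with the larger ratio first (mirrors the FBO trick)
    if w / nx >= h / ny:
        return box_reduce_axis(box_reduce_axis(img, nx, axis=1), ny, axis=0)
    return box_reduce_axis(box_reduce_axis(img, ny, axis=0), nx, axis=1)
```

Because every output texel is a properly weighted average of the source texels it covers, the overall image mean is preserved exactly, which is the property the naive approaches below lose.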
05-02-2007, 07:11 AM
I actually tried anisotropic decimation as described here: Nvidia's Anisotropic Decimation whitepaper (http://http.download.nvidia.com/developer/SDK/Individual_Samples/DEMOS/Direct3D9/src/AnisoDecimation/docs/AnisoDecimation.pdf)
But I do not trust it to work correctly. I tried it once for calculating the average scene luminance, but it gave me computation errors (when comparing the result with calculating the luminance on the CPU). Instead, I stuck to the "usual" bilinear downsampling, where I decimate the image by half in each step (applying a correction for sizes that are not even). It is very reliable.
Unfortunately, this method only works if you want to scale all the way down to 1x1 and the intermediate images don't matter.
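The poster's GPU version applies corrected filter weights per halving step; as a CPU reference for the same idea, one can instead carry an explicit weight channel, padding odd dimensions with zero-weight texels so no step needs special-case taps. A hedged numpy sketch (names and structure are mine):

```python
import numpy as np

def halve_weighted(sums, wts):
    # Pad odd dimensions with zero-weight texels so every 2x2 block exists,
    # then sum each 2x2 block of (weighted sum, weight) pairs.
    if sums.shape[0] % 2:
        sums = np.vstack([sums, np.zeros((1, sums.shape[1]))])
        wts = np.vstack([wts, np.zeros((1, wts.shape[1]))])
    if sums.shape[1] % 2:
        sums = np.hstack([sums, np.zeros((sums.shape[0], 1))])
        wts = np.hstack([wts, np.zeros((wts.shape[0], 1))])
    s = sums[0::2, 0::2] + sums[1::2, 0::2] + sums[0::2, 1::2] + sums[1::2, 1::2]
    w = wts[0::2, 0::2] + wts[1::2, 0::2] + wts[0::2, 1::2] + wts[1::2, 1::2]
    return s, w

def average_luminance(img):
    # Halve down to 1x1; the zero-weight padding never skews the result,
    # so this returns the exact mean for any (non-power-of-two) size.
    sums = np.asarray(img, dtype=np.float64).copy()
    wts = np.ones_like(sums)
    while sums.size > 1:
        sums, wts = halve_weighted(sums, wts)
    return float(sums[0, 0] / wts[0, 0])
```

On the GPU the weight could live in the alpha channel of the intermediate render targets, with a final divide in the last pass.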
I've found a doc which describes a similar problem to mine:
NPOT mipmap creation (http://download.nvidia.com/developer/Papers/2005/NP2_Mipmapping/NP2_Mipmap_Creation.pdf)
But I guess my task is even more general :-)
05-02-2007, 10:20 AM
Imagine you wanted to downsample a 15x11 image down to 3x2. How would you do that quickly and almost correct on the GPU?
In the general case? Don't bother.
The CPU can handle it just as well, and without the round trip of uploading the image, rendering, and downloading the result. Plus you'll have more control over it: you can implement scaling features that a GPU can't handle, and you don't have to deal with GPU-defined limits.
05-02-2007, 11:02 AM
I really want to do it on the GPU. I need it for blooming in an HDR pipeline. Whenever you read something about it, the suggestion is to "use a downscaled version of the framebuffer, apply some brightness filter and then blur that".
(Something similar applies to determining the scene luminance.) But naively downscaling the framebuffer to 1/8 introduces really ugly artifacts. Suddenly sharp highlights start to flicker and the bloom around them goes on and off. If you use naive downscaling for the scene luminance, some screen regions suddenly become "more important" than others. If you are really unlucky, you lose half of your screen for the average scene luminance. So whenever someone suggests to "just use a downscaled version of the framebuffer", get suspicious ;-)
I don't need a 100% perfect downsampler (which doesn't exist anyway), but something which is not quite as bad as a bilinear filter :)
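The flicker complaint above is easy to reproduce in a toy numpy example (the frame size and highlight value here are made up for illustration): point-sampling a 1/8 downscale can miss a sharp highlight entirely, while an area-average keeps its energy.

```python
import numpy as np

# A mostly dark 16x16 HDR frame with one sharp highlight.
frame = np.zeros((16, 16))
frame[5, 5] = 100.0

# "Naive" 1/8 downscale: point-sample every 8th texel.
naive = frame[::8, ::8]

# Area-average (box) 1/8 downscale: mean over each 8x8 block.
box = frame.reshape(2, 8, 2, 8).mean(axis=(1, 3))
```

Here `naive.max()` is 0.0, i.e. the highlight vanished, while `box` retains it attenuated to 100/64 in the covering texel. If the highlight moves by one pixel between frames, the point-sampled version pops in and out, which is exactly the bloom flicker described above.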
05-16-2007, 07:11 AM
Actually, for blooming it is suggested that you use a combination of the downscaled-blurred texture and the original, based on the alpha channel. That way you get light "spilling" over neighboring pixels. So in theory, precision is not that necessary. However, since I haven't implemented this myself, I can't really give you an honest evaluation of the technique.