Dynamic Gaussian kernel

Hi, I’m trying to implement a shader that simulates a translucent window: everything behind it is blurred. I got some nice results, but I would like to implement a Gaussian blur whose size depends on the distance of the object.
So far I have used if() statements to decide whether or not to add more blur, but I don’t like this approach.

Do you have a better (and ideally still fast) algorithm to parametrize the Gaussian blur intensity?
Thank you!

cignox1,
I have some code that creates a 1-D Gaussian kernel for a given sigma value (standard deviation).

#include <cmath>       // for exp()
#include <iostream>    // for the debug printout below

using std::cout;
using std::endl;

///////////////////////////////////////////////////////////////////////////////
// generate 1D Gaussian kernel
// kernel size should be an odd number (3, 5, 7, 9, ...)
///////////////////////////////////////////////////////////////////////////////
void makeGaussianKernel(float sigma, float *kernel, int kernelSize)
{
    //const double PI = 3.14159265;       // PI

    int i, center;
    float sum = 0;                      // used for normalization
    double result = 0;                  // result of gaussian func

    // compute kernel elements normal distribution equation(Gaussian)
    // do only half(positive area) and mirror to negative side
    // because Gaussian is even function, symmetric to Y-axis.
    center = kernelSize / 2;   // center value of n-array(0 ~ n-1)

    if(sigma == 0)
    {
        for(i = 0; i <= center; ++i)
            kernel[center+i] = kernel[center-i] = 0;

        kernel[center] = 1.0f;
    }
    else
    {
        for(i = 0; i <= center; ++i)
        {
            //result = exp(-(i*i)/(double)(2*sigma*sigma)) / (sqrt(2*PI)*sigma);
            // NOTE: dividing by (sqrt(2*PI)*sigma) is not needed because we normalize the result later
            result = exp(-(i*i)/(double)(2*sigma*sigma));
            kernel[center+i] = kernel[center-i] = (float)result;
            sum += (float)result;
            if(i != 0) sum += (float)result;    // off-center weights appear twice in the kernel
        }

        // normalize kernel
        // make sum of all elements in kernel to 1
        for(i = 0; i <= center; ++i)
            kernel[center+i] = kernel[center-i] /= sum;
    }

    // DEBUG //
#if 0
    cout << "1-D Gaussian Kernel
";
    cout << "===================
";
    sum = 0;
    for(i = 0; i < kernelSize; ++i)
    {
        cout << i << ": " << kernel[i] << endl;
        sum += kernel[i];
    }
    cout << "Kernel Sum: " << sum << endl;
#endif
}

As sigma increases, the image gets blurrier, and the kernel size should grow with it. To determine the kernel size, I use the following formula:

// determine size of kernel (odd #)
// 0.0 <= sigma < 0.5 : 3
// 0.5 <= sigma < 1.0 : 5
// 1.0 <= sigma < 1.5 : 7
// 1.5 <= sigma < 2.0 : 9
// 2.0 <= sigma < 2.5 : 11
// 2.5 <= sigma < 3.0 : 13 ...
kernelSize = 2 * int(2*sigma) + 3;
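
In case a complete example helps, here is a minimal usage sketch putting the size rule and the kernel function together (sigma = 1.2 and the fixed kernel[16] buffer are just for illustration; 16 entries covers any sigma below 3.25):

#include <cstdio>

// Minimal usage sketch, assuming the makeGaussianKernel() above.
int main()
{
    float sigma = 1.2f;
    int kernelSize = 2 * int(2 * sigma) + 3;    // -> 7, per the table above

    float kernel[16];
    makeGaussianKernel(sigma, kernel, kernelSize);

    for(int i = 0; i < kernelSize; ++i)
        printf("kernel[%d] = %f\n", i, kernel[i]);
    return 0;
}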

Hope it helps.
==song==

Doing variable-width blurs is difficult on current graphics hardware.

This example does it by storing an atlas of differently sized kernels in a texture. This works, but you always pay the cost of the largest kernel:
http://developer.nvidia.com/object/convolution_filters.html

This paper describes a much cleverer method, but be warned it’s not simple!
http://graphics.pixar.com/DepthOfField/paper.pdf
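
Whichever of these you pick, you still need some rule mapping a fragment’s distance behind the window to a blur width. Purely as a hypothetical sketch (the linear mapping and all the names here are assumptions, not taken from either link):

// Hypothetical helper: map eye-space distance to a blur sigma.
// Linear growth behind the window plane, clamped so the kernel size
// (and therefore the cost) stays bounded.
float sigmaFromDistance(float distance, float windowDistance,
                        float blurScale, float maxSigma)
{
    float d = distance - windowDistance;    // how far behind the glass
    if(d < 0.0f) d = 0.0f;                  // in front of the glass: sharp
    float sigma = blurScale * d;
    return (sigma > maxSigma) ? maxSigma : sigma;
}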

Thank you both! Yes, currently I use up to 4x4 filters, and it can be really slow, but it would only be used for small windows, so it shouldn’t be a problem :slight_smile:

cignox1,
Note that the kernel size should be an odd number, for example 5x5 (not 4x4).

Also, you can improve performance by using separable convolution, because the Gaussian kernel is separable. The number of multiplications per pixel drops from M*N to M+N.

Please check this link about separable convolution; it also includes an example comparing the performance of 2D and separable convolution with a Gaussian filter:
www.songho.ca/dsp/convoluton.html
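
Roughly, the two-pass idea looks like this on the CPU (a sketch only; the single-channel float Image type is hypothetical, and it reuses makeGaussianKernel() from my earlier post):

#include <vector>

// Hypothetical single-channel float image, row-major storage.
struct Image
{
    int w, h;
    std::vector<float> px;          // w*h floats

    float at(int x, int y) const    // clamp-to-edge sampling
    {
        if(x < 0) x = 0; if(x >= w) x = w - 1;
        if(y < 0) y = 0; if(y >= h) y = h - 1;
        return px[y * w + x];
    }
};

void gaussianBlurSeparable(const Image& src, Image& dst, float sigma)
{
    int kernelSize = 2 * int(2 * sigma) + 3;    // size rule from above
    std::vector<float> kernel(kernelSize);
    makeGaussianKernel(sigma, kernel.data(), kernelSize);
    int center = kernelSize / 2;

    Image tmp = src;                // horizontal pass: src -> tmp
    for(int y = 0; y < src.h; ++y)
        for(int x = 0; x < src.w; ++x)
        {
            float s = 0;
            for(int k = 0; k < kernelSize; ++k)
                s += kernel[k] * src.at(x + k - center, y);
            tmp.px[y * src.w + x] = s;
        }

    dst = src;                      // vertical pass: tmp -> dst
    for(int y = 0; y < src.h; ++y)
        for(int x = 0; x < src.w; ++x)
        {
            float s = 0;
            for(int k = 0; k < kernelSize; ++k)
                s += kernel[k] * tmp.at(x, y + k - center);
            dst.px[y * src.w + x] = s;
        }
}

Each pixel now costs 2*kernelSize multiplications instead of kernelSize*kernelSize, which is where the M+N vs M*N saving comes from.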

Actually, I don’t require a true Gaussian blur… just something that blurs the pixel using a parameter that specifies the desired amount of blur…
