resizing a texture?

If I want to resize a texture's data from 512x512 to 256x256 (for example), how should I manipulate the data array? I know I should create another data array of size 256x256xbpp, but how do I move the data properly from the bigger array to the smaller one, if you know what I mean?

There’s really no single correct way of doing it. Resampling, especially downsampling, is actually a very difficult subject, and requires some knowledge of signal processing.

Basically, what it’s all about is that you should keep the frequencies in the image below a certain limit. When you have your large image, it consists of a wide range of frequencies. When you reduce the sample rate (that’s what you do when downsampling the image, fewer pixels per edge = fewer samples per edge) you must also remove the highest frequencies, or you will get an aliasing effect when the higher frequencies fold back in the frequency spectrum and reappear as lower frequencies. If you have some basic knowledge of signal processing I’m sure you have heard of the Nyquist theorem. If you haven’t, don’t worry about it.

To remove the highest frequencies, you need to apply a low-pass filter to the image. Choosing the right filter and performing the actual filtering is an art.

If all you’re after is an easy way to downsample an image by one half, you can get away with a simple 2x2 filter kernel. What that means is that you take a block of 2x2 pixels, calculate the average color, and put that color in the corresponding pixel in the new image. An example:

      A    B    C    D    E
   +----+----+----+----+----+
 1 |    |    |    |    |    |
   +----+----+----+----+----+
 2 |    |    |    |    |    |
   +----+----+----+----+----+
 3 |    |    |    |    |    |    old image
   +----+----+----+----+----+
 4 |    |    |    |    |    |
   +----+----+----+----+----+

      A    B
   +----+----+
 1 |    |    |
   +----+----+    new image
 2 |    |    |
   +----+----+

To calculate the pixel A1 in the new image, calculate the average color of the corresponding pixels in the old image: A1, A2, B1 and B2. For B2 in the new image, calculate the average of C3, C4, D3 and D4 in the old image.

The last post was right, this is the simplest approach. Look up mipmapping somewhere for an algorithm listing, and maybe even code.

gluScaleImage(…)

Oh, there’s gluScaleImage, it seems to work well, but it was good to know about scaling techniques. Thanks.
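For reference, a call for the 512x512 to 256x256 case might look like this (a sketch only; it needs a GLU implementation linked in, and `texIn`/`texOut` are assumed to be your own RGBA buffers, not anything from the posts above):

```c
#include <GL/glu.h>

/* texIn:  512x512 RGBA source buffer (yours)
   texOut: 256x256 RGBA destination buffer (yours) */
GLint err = gluScaleImage(GL_RGBA,
                          512, 512, GL_UNSIGNED_BYTE, texIn,
                          256, 256, GL_UNSIGNED_BYTE, texOut);
/* returns 0 on success, otherwise a GLU error code */
```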

Have a look at the ‘Titan’-Engine.
(http://talika.fie.us.es/~titan/titan/rnews.html)
They have some good scale code…

You might also get cheezy and not worry about aliasing (if it doesn’t bug you too much), in which case you can just plainly take every second pixel of your image and put it in a new image, like:

unsigned long oldImage[512*512],
              newImage[256*256];

/* Take every second pixel in both dimensions */
for (unsigned long y = 0; y < 256; y++)
    for (unsigned long x = 0; x < 256; x++)
        newImage[y*256 + x] = oldImage[(y*2)*512 + x*2];

Or you might do some linear filtering like:

unsigned long oldImage[512][512],
              newImage[256][256], /* Stored contiguously, so these can be passed directly to glTexImage2D */
              neighbour[9],
              filterAvg,
              filterTemp;

/* Assuming the data is in RGBA format: Red in the low 8 bits of the
   unsigned long, Green in the next 8 bits, Blue in the next 8, and Alpha
   in the top 8. Out-of-bounds neighbours are taken as black, which
   slightly darkens the border pixels. */
for (int i = 0; i < 256; i++)
    for (int j = 0; j < 256; j++) {
        memset(neighbour, 0, sizeof(unsigned long) * 9);
        // upper left neighbour
        neighbour[0] = (2*i - 1 < 0) ? 0 : ((2*j - 1 < 0) ? 0 : oldImage[2*i - 1][2*j - 1]);
        // upper neighbour
        neighbour[1] = (2*i - 1 < 0) ? 0 : oldImage[2*i - 1][2*j];
        // upper right neighbour
        neighbour[2] = (2*i - 1 < 0) ? 0 : ((2*j + 1 > 511) ? 0 : oldImage[2*i - 1][2*j + 1]);
        // left neighbour
        neighbour[3] = (2*j - 1 < 0) ? 0 : oldImage[2*i][2*j - 1];
        // itself
        neighbour[4] = oldImage[2*i][2*j];
        // right neighbour
        neighbour[5] = (2*j + 1 > 511) ? 0 : oldImage[2*i][2*j + 1];
        // lower left neighbour
        neighbour[6] = (2*i + 1 > 511) ? 0 : ((2*j - 1 < 0) ? 0 : oldImage[2*i + 1][2*j - 1]);
        // lower neighbour
        neighbour[7] = (2*i + 1 > 511) ? 0 : oldImage[2*i + 1][2*j];
        // lower right neighbour
        neighbour[8] = (2*i + 1 > 511) ? 0 : ((2*j + 1 > 511) ? 0 : oldImage[2*i + 1][2*j + 1]);
        filterAvg = 0;
        // Accumulate one ninth of each neighbour, per channel
        for (int k = 0; k < 9; k++) {
            filterTemp = 0;
            // Red
            filterTemp |= ((neighbour[k] & 0xFF) / 9) & 0xFF;
            // Green
            filterTemp |= ((neighbour[k] & 0xFF00) / 9) & 0xFF00;
            // Blue
            filterTemp |= ((neighbour[k] & 0xFF0000) / 9) & 0xFF0000;
            // Alpha
            filterTemp |= ((neighbour[k] & 0xFF000000) / 9) & 0xFF000000;
            filterAvg += filterTemp; // No carries will occur: each channel sum stays below 256
        }
        newImage[i][j] = filterAvg;
    }

But you can see how expensive that is, even for a relatively simple filter like this, using no floating point arithmetic.

[This message has been edited by Pa3PyX (edited 09-14-2002).]