um…
I think the size of the texture image should be:
width — The width of the texture image. Must be 2^n + 2(border) for some integer n.
height — The height of the texture image. Must be 2^m + 2(border) for some integer m.
Taking your code as an example, where you want to display an image of size 384*144, the size of the texture buffer should be 512*256. That is,
in order to texture-map the image, you should allocate a buffer of size 512*256 and put the image in it. Obviously, your image will not cover the whole buffer, since it is much smaller. So you must use ‘glTexCoord’ to map only the part of the buffer that contains your image.
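For instance, here is a rough, untested sketch of the idea (the ‘image’ pointer and the surrounding GL setup are placeholders, not code from the thread):

#include <stdlib.h>
#include <string.h>

/* 'image' holds the original 384*144 RGB pixels (placeholder name). */
GLubyte *buffer = calloc(512 * 256 * 3, 1);  /* zeroed power-of-two buffer */

/* Copy the image row by row into the lower-left corner of the buffer. */
for (int row = 0; row < 144; row++)
    memcpy(buffer + row * 512 * 3, image + row * 384 * 3, 384 * 3);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, buffer);

/* When drawing, stop the texture coordinates at the edge of the image:
   s = 384/512 = 0.75, t = 144/256 = 0.5625 */
glBegin(GL_QUADS);
    glTexCoord2f(0.0f,  0.0f);    glVertex2f(0.0f,   0.0f);
    glTexCoord2f(0.75f, 0.0f);    glVertex2f(384.0f, 0.0f);
    glTexCoord2f(0.75f, 0.5625f); glVertex2f(384.0f, 144.0f);
    glTexCoord2f(0.0f,  0.5625f); glVertex2f(0.0f,   144.0f);
glEnd();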
Originally posted by outRider:
The problem I have is that I’m trying to use a bitmap I’ve created through some GDI functions as a texture, but all I get is a white quad…
When I use glDrawPixels as such:
glDrawPixels(TextureWidth, TextureHeight, GL_RGB, GL_UNSIGNED_BYTE, pTexture);
it works fine, but using glTexImage2D to get a texture object like so:
Thanks for the info. The size of the bitmap is relative to the point size of the font I’m drawing on it, so I don’t decide the dimensions beforehand. How would I round up to the nearest acceptable texture size?
Take the base-2 log (or a quick approximation by repeated right shifting), take the whole part of the result, and raise 2 to that power. Then check whether the value is the same as it was previously; if not, multiply it by 2. Do that for each dimension to find the closest powers of 2.
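As a concrete sketch of that procedure (the function name is illustrative; assumes a positive dimension):

/* Round x up to the next power of two, as described above. */
unsigned int round_up_pow2(unsigned int x)
{
    unsigned int log2 = 0;
    unsigned int v = x;

    while (v > 1) {            /* repeated right shifts ~ floor(log2(x)) */
        v >>= 1;
        log2++;
    }

    unsigned int result = 1u << log2;  /* 2^floor(log2(x)) */

    if (result != x)           /* x was not already a power of two */
        result <<= 1;          /* so step up to the next one */

    return result;
}

For the 384*144 bitmap above this gives 512 and 256, matching the buffer size suggested earlier.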
Oh wait, I just noticed something about that old code of mine: it finds the closest power of two, whether less than, equal to, or greater than the input. You just want greater, so it should be as simple as:
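Something like this untested sketch (the name next_pow2_up is just for illustration):

/* Smallest power of two >= x; assumes x >= 1. */
unsigned int next_pow2_up(unsigned int x)
{
    unsigned int pow = 1;
    while (pow < x)
        pow <<= 1;     /* keep doubling until we reach or pass x */
    return pow;
}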