View Full Version : I find the Secret of openGL Texture.Am i right?

01-14-2001, 09:02 AM
I found that many images cannot be used as textures correctly. So I tried and tried, and found a tip,
as follows:
The width and the height of the image must be a power of 2, such as: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024. If a dimension is greater than 1024, OpenGL 1.1 will fail.

I hope my finding will help you, friends.

Chinese people believe in friendship. If we all share the tips we find instead of hiding them deep in our hearts, the road to becoming an OpenGL expert will be broad and straight. http://www.opengl.org/discussion_boards/ubb/wink.gif http://www.opengl.org/discussion_boards/ubb/smile.gif

01-14-2001, 09:19 AM
Yes, you are right, they must be a power of 2.

I'm sorry to say but... That's written everywhere!

01-14-2001, 10:08 AM
That's written everywhere!

[In the tone of Ace Ventura, Pet Detective] R-e-a-l-l-y~~~~~~?!

Well, but... er... It embarrasses me so much http://www.opengl.org/discussion_boards/ubb/smile.gif Where is a hole in the ground? Show me and I will crawl into it.

OK, maybe somebody will say: 1,300,000,000 people will be shamed by your silly action. ;(

01-14-2001, 10:46 AM
I just made a quick search in the OpenGL specification document. I could not find any place where it says a texture's dimensions have to be a power of two. I think this is a driver limitation, and not something OpenGL requires. I have heard NVIDIA is planning support for non-power-of-two sizes, but that should be classified as a rumor, i.e. it might very well be wrong.

So, theoretically, textures CAN have any size.

01-14-2001, 11:10 AM
Better look again, Bob http://www.opengl.org/discussion_boards/ubb/wink.gif, it is written in the spec. I just checked. It clearly states that each texture dimension must be of the form 2^n + 2*b, where 2^n is the extent of the source image in a given dimension and b is the number of texture border pixels. As most people don't use texture borders, that leaves just the image requirement 2^n. This means a 2D texture must be 2^n x 2^m. However, I also vaguely remember hearing about arbitrary texture sizes being supported by an extension.

[This message has been edited by DFrey (edited 01-14-2001).]

Michael Steinberg
01-14-2001, 11:22 AM
>1024 won't necessarily end up in an error. The maximum texture size in either direction is driver-dependent, and you can query it with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...), I think.

01-15-2001, 01:19 AM
DFrey: OK, whatever. As I said, it was a quick search. I read the section about glTexImage2D, where I expected to find it, but couldn't see anything about it.

By the way, the MSVC6 documentation says it has to be a power of two. But that documentation is not the best you can find, and I don't completely trust it as a source of facts. The OpenGL part, that is. I was reading the official 1.2.1 documentation, the PDF you can download from this site.

And in case you are right (which I believe you are), arbitrary texture sizes as an extension sound reasonable.

01-15-2001, 04:27 AM
Yes, that's the very pdf file I was reading too. http://www.opengl.org/discussion_boards/ubb/smile.gif Page 118 (pg 130 of pdf file). I pretty much ignore the MSDN OpenGL documentation. It has caused me one too many headaches.

[This message has been edited by DFrey (edited 01-15-2001).]

01-15-2001, 04:35 AM
Code snippet, might help...

// Ensure that the texture can be accommodated by the
// hardware accelerator
GLsizei width = m_ImageData->width();
GLsizei height = m_ImageData->height();

if (width == 0)
{
    // A width of 0 signals that the texture was rejected
    MString msg(
        "Texture size is larger than that supported by "
        "hardware accelerator or texture is sized improperly. ");
    msg << "Texture name was " << name();
    throw MOglException(msg, M_TRACEPOINT);
}

01-15-2001, 04:43 AM
DFrey: Ok, seen it too now http://www.opengl.org/discussion_boards/ubb/biggrin.gif

01-15-2001, 08:17 AM
I don't care what you argue; the only thing that matters is: if you don't use sizes that are powers of 2, you cannot get a correct result.

Maybe you like talking about the things above, but I think it is a waste of time. Please shed more light on the useful technique, OK?

01-15-2001, 08:22 AM
It seems that I started a little discussion here...

Well, I'm a bit of a newbie to OpenGL and I had already read that many times... that's why I posted that message...

Anyway, It was NOT my intention to make fun of you, Suvcon.

01-15-2001, 10:24 AM
It is in fact very, very common to fight with texture mapping, trying to load your 100x50 texture, only to find hours or days later that textures must be a power of two in size...

That's some kind of programmer syndrome: start coding without first reading the documentation (I suffer from this chronically).

Serge K
01-15-2001, 10:50 AM
By the way, in the MSVC6 documentation it says it has to be a power of two.

Uh, really???

from the VC6 help:
- The width of the texture image. Must be 2n + 2(border) for some integer n.
- The height of the texture image. Must be 2m + 2(border) for some integer m.

You can check it in the online MSDN library:
glTexImage2D (http://msdn.microsoft.com/library/default.asp?URL=/library/psdk/opengl/glfunc03_16jo.htm)

01-15-2001, 11:37 AM
Never mind. I understand what you mean now.

I am a fresh hand at OpenGL too; maybe we can teach each other http://www.opengl.org/discussion_boards/ubb/smile.gif You are welcome!
my mailbox: suvcon_cn@sina.com

01-16-2001, 01:00 AM
This is what my documentation says about glTexImage2D. I looked at the helpfile shipped with MSVC6.

The width of the texture image. Must be 2^n + 2(border) for some integer n.

The height of the texture image. Must be 2^m + 2(border) for some integer m.

But there you can see why I don't trust these documents very much: different statements about the same thing, in different places, from the same company.

01-16-2001, 07:22 AM

2^n or 2*n... that is the question. But just try it in code: you cannot use a 100x50 texture directly, but 64x128 works,
so I think it must be 2^n.

Practice gives us the truth. http://www.opengl.org/discussion_boards/ubb/smile.gif

01-16-2001, 07:53 AM
>>Practice gives us the truth.

Oh, that is so correct http://www.opengl.org/discussion_boards/ubb/smile.gif

Anyway, whether it's supposed to be 2^n or 2*n is no question: 2^n is definitely the correct one. I have known for a long time that textures have to be a power of two (no offense if you misunderstood it http://www.opengl.org/discussion_boards/ubb/smile.gif ).

But now that you know, I hope you never forget it, as I have done a few times, because it can cause some pain in the a$$ and a waste of time http://www.opengl.org/discussion_boards/ubb/biggrin.gif

01-17-2001, 09:05 PM
OK, from this power-of-2 business I'm taking it that textures must be

2,4,8,16,32,64,128,256..... and so on

and so on in both directions, but not necessarily the same in both x and y.

OK, I've found, at least with NVIDIA hardware, that the texture can have sizes of

2^n + 2^m

giving me texture sizes of

say 6x12 (2+4),(8+4)
or 80x96 (64+16),(64+32)

I'm not sure what the deal is here, but I was always under the impression that textures did have to be 2^n. A texture of size 7x13 will fail, but as you can see, you cannot express those figures as a sum of two powers of two no matter how you try.

01-18-2001, 03:42 AM
OK, I've found, at least with NVIDIA hardware, that the texture can have sizes of

2^n + 2^m

giving me texture sizes of

say 6x12 (2+4),(8+4)
or 80x96 (64+16),(64+32)

Are you certain of that? I've never noticed that. It would also seem to require an extension, since the OpenGL 1.2 spec clearly indicates that the dimensions of a 2D texture will be 2^n+2*b x 2^m+2*b, where 2*b is just the size added by the border texels. That leaves requiring the source image to be 2^n x 2^m.

01-18-2001, 05:44 AM
Absolutely positive, DFrey; I'm using texture sizes like these, following those guidelines, in my current project. Just to let you know, this is _not_ using the AUX functions that resize textures to the nearest power of 2. It is not using any extensions, only glTexImage2D(). It's quite possible that the driver is converting the texture size, but if so, it is unknown to me. Maybe someone from NVIDIA could verify this.

[This message has been edited by dans (edited 01-18-2001).]

01-18-2001, 05:51 AM
If you are not using gluBuild2DMipmaps or gluBuild1DMipmaps to implicitly resize the textures, then this is news to me. How did you learn of this? Did you discover it by trial and error, or did you read of it in some documentation? If it isn't in an official NVIDIA document I'd stay away from this "feature" as it may disappear in a future driver revision.

[This message has been edited by DFrey (edited 01-18-2001).]

01-18-2001, 06:32 AM
Yes, it was trial and error, or more like "wouldn't it be really cool if I could use this texture size instead". I tried it and it worked. I have seen no documentation stating this as fact.

01-18-2001, 10:23 AM
We absolutely do _not_ support these 2^n+2^m-sized textures. They should produce GL errors, and there is no way that our driver logic could conceivably work with such a texture. Remember that gluBuild2DMipmaps will rescale to a power of 2 automatically, making it look as though any size is supported.

- Matt

01-18-2001, 10:40 AM
I've gone back and looked at my texture wrapper; it seems that the textures I've been creating with 2^m + 2^n sizes have the gluBuild2DMipmaps flag set. I tested without the flag set and they DO NOT work. Sorry, my error.

Michael Steinberg
01-18-2001, 11:30 AM
Somebody mixed up the + with the *, I would guess.

Serge K
01-18-2001, 12:54 PM
How about GL_NV_texture_rectangle?

01-20-2001, 09:32 PM
OK, even though this thread might be dead already, I want to make an assumption http://www.opengl.org/discussion_boards/ubb/wink.gif

To retrieve an arbitrary texel of a given texture with width w and height h, at coordinate (x, y), the renderer would have to do

addr = startAddr + y*w + x

if you use 2^n textures, it can be

addr = startAddr + (y << n) + x

Now, I'm anything but sure about hardware implementations, but in my world a bitshift is still faster than a multiplication http://www.opengl.org/discussion_boards/ubb/wink.gif

The funny thing is, it should be implementation-dependent. Mesa3D, for example (and, being a hardcore coder for fun, I mean the software renderer here), builds a table with one longword entry for every y, containing the start address of each row of the texture, when glTexImage2D() is called... so for Mesa it's always one memory access, no matter what dimensions the texture has. I'm not sure why Brian did this, because Mesa still reports an error when you try to use a texture that doesn't have 2^n dimensions.

But, is it possible that the bitshift assumption is correct?

Serge K
01-22-2001, 12:16 AM
> Now, I'm anything else but sure about hardware implementations, but a in my world a bitshift is still faster than a multiplication

Multiplication is not a big deal.
It's enough to have a low-precision multiplier: 11 bits x 11 bits for textures up to 2048x2048.
With a tiled internal representation for textures (for example, 4x4 tiles) the requirements are even lower: 9x9 bits.

The real problem is GL_REPEAT.
For a 2^n texture it's the easiest thing, but for other sizes it's a pain...
I guess GL_NV_texture_rectangle allows only clamping.

01-23-2001, 03:50 AM
>Multiplication is not a big deal. It's
>enough to have low precision multiplier :
>(11bit x 11bit) for a texture upto 2048x2048

As long as you don't need subpixel accuracy, but you're right, texture interpolation would work with single-pixel, too.

>The real problem is GL_REPEAT.
>For a 2^n texture it's the easiest thing. But for other sizes - it's a pain...

True, I didn't even think of that, or of overflows... shame on me :P

01-24-2001, 09:15 AM
Okay, so am I to understand that textures cannot be something like 128x256? These are each powers of 2. But do textures have to have the same dimensions or can the x-dimension be different from the y-dimension as long as they both are powers of 2?

Michael Steinberg
01-24-2001, 09:21 AM
They CAN be ANY 2^n x 2^m where 2^n and 2^m <= GL_MAX_TEXTURE_SIZE.

Serge K
01-26-2001, 09:30 AM
Originally posted by Dodger:
As long as you don't need subpixel accuracy, but you're right, texture interpolation would work with single-pixel, too.

You cannot access a texture array with "subpixel accuracy". You have to use integer values for the row and column. http://www.opengl.org/discussion_boards/ubb/rolleyes.gif
(in the case of GL_NEAREST, these values = rounded texture coords).

02-07-2001, 01:14 PM
NeHe explains this in his tutorials. He also shows how to use gluBuild2DMipmaps to overcome the power-of-2 limitation.