Correct me if I’m wrong…

But there is no texture count maximum, right? My app is trying to load over 700 textures (all very small; 4.5 MB total for all of them). I’m testing the calls to glTexImage2D with GL_PROXY_TEXTURE_2D, and all of the calls pass, but a lot of the textures are coming up blank (white). I verified my texture files and my scene data, and a search on the advanced forum turned up nothing. If this were a real problem I’m sure someone would have complained by now, so I just want to verify that I am, after all, crazy…

Thanks…

John.

Right, textures are paged on and off the board as needed. You can get an estimate of how much memory the board has with a texture proxy though.

See “Testing Whether Textures Fit: The Texture Proxy Mechanism” … http://www.opengl.org/developers/documentation/OGL_userguide/OpenGLonWin-13.html#MARKER-9-10

You can hint which textures should stay resident with glTexParameterf(GL_TEXTURE_PRIORITY), and query whether a texture actually is resident with glAreTexturesResident().

That’s good; there must be something flaky somewhere in my code… I’d much rather it be my code (which I can change) than opengl (workarounds )…

Ya, proxy works great, I haven’t had a problem with it yet.

Thanks for the second opinion, I just needed to hear someone else verify it (I still may be a little crazy )…

John.

I know this may sound obvious, but when you say very small textures, they are all power of 2 dimensions right?

Right. Check that all texture dimensions are power of two. Check your minimum and maximum allowed texture sizes (base gl only requires 64x64 till 256x256 to be supported) with glGet*.
You can resize textures to valid dimensions with glu.

And also, make sure that your mipmap sets are complete (again, glu can do that, otherwise explicitly specify GL_LINEAR or GL_NEAREST as minification filters).

What’s your hardware? I’ve had problems with running out of resources and getting silent failures from GL a LOT on Intel Built-in Graphics solutions.

base gl only requires 64x64 till 256x256 to be supported

First, there’s no lower limit on the size of a texture. An OpenGL implementation must support textures all the way down to 1x1.
Second, the smallest required size to be supported is 64x64, not 256x256.

Yep, it was an app bug (there’s a surprise )… The problem showed itself while viewing levels, so I debugged the level textures (these were fine). I also have some system textures (black, white, …); the black texture was failing to load from disk and never made it to opengl.

Nvidia cards do some strange things when you bind to an invalid texture. I thought the app would throw an exception or use the last valid texture. Instead I was getting a purple color from the texture unit.

Oh well works now…

John.

Originally posted by john_at_kbs_is:
Nvidia cards do some strange things when you bind to an invalid texture. I thought the app would throw an exception or use the last valid texture. Instead I was getting a purple color from the texture unit.

OpenGL specifies that an invalid texture returns (1.0, 1.0, 1.0, 1.0), white.

No, an invalid texture is effectively disabled.

  • Matt

Strange, I too thought this was in the spec, that an incorrectly created texture produces a white fragment. I suppose with modulate the result is the same, but with replace it would make a significant difference.

[This message has been edited by dorbie (edited 09-15-2002).]

Originally posted by Bob:

base gl only requires 64x64 till 256x256 to be supported

First, there’s no lower limit on the size of a texture. An OpenGL implementation must support textures all the way down to 1x1.
Second, the smallest required size to be supported is 64x64, not 256x256.

I can’t find it anymore in the current specs, so I guess you’re right.

Anyway, that’s what NeHe taught me back in the day (applying only to non-mipmapped textures).

Hmm, had to look it up, and yup, turns out I was wrong. Invalid textures are effectively disabled. Not sure where I got the white fragment from.
But then again, I seldom use invalid textures

Well, out of interest, how many of you guys actually check the return code from basic gl calls? In my debug build, I have something like:

int Result = glDoSomethingNice ();
#ifdef _DEBUG
assert ( Result == GL_OK );
#endif

Anyone else bother with this kind of thing? Or do you only use it when you absolutely need to know

what return code?

void glTexImage2D( GLenum target,
GLint level,
GLint internalformat,
GLsizei width,
GLsizei height,
GLint border,
GLenum format,
GLenum type,
const GLvoid *pixels )

Maybe he meant something like

glDoSomethingNice ();
#ifdef _DEBUG
assert ( glGetError() == GL_NO_ERROR );
#endif

ahhhh…

sorry, never tried it…

This glGetError() error handling is really quite a mess. Hunting down the instruction that caused an “invalid value” can be pretty tedious (unless you wrap every GL command in #ifdef _DEBUG … glGetError … #endif). Some more verbose debug libs would help me quite a lot. Does anyone have good debugging tips?

Define a macro to do your error checking, and make that macro expand into something else (like empty) in release mode builds.

Put this macro after each block of GL calls, and at the beginning/end of each function that actually issues a GL call; that should be sufficient to find problems very quickly.

Note that if you make sure that your program immediately stops on an error in debug mode, this also forces you to always stay error free.