View Full Version : Resident Textures Not Working



tomjscott
06-24-2004, 12:08 PM
I am having problems getting textures to reside in video memory. I've reduced my code down to simply creating a blank 256x256 RGBA texture, and it still does not query as resident when I prioritize it. Here is my code:


GLuint mTexID;
glGenTextures(1, &mTexID);

int mWidth = 256;
int mHeight = 256;
int mDepth = 4;

unsigned char* data = (unsigned char*)malloc(mWidth * mHeight * mDepth);
memset(data, 0, mWidth * mHeight * mDepth);

glBindTexture(GL_TEXTURE_2D, mTexID);

glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

// trying to set priority here even though I re-prioritize later. seems to have
// no effect either way.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_PRIORITY, 1);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA,
GL_UNSIGNED_BYTE, data);


// prioritize the texture
GLclampf* priorities = new GLclampf[1];
for (int i = 0; i < 1; ++i)
{
priorities[i] = 1.0f;
}
glPrioritizeTextures(1, &mTexID, priorities);

GLboolean* flags = new GLboolean[1];
bool bAllResident = glAreTexturesResident(1, &mTexID, flags);
if (bAllResident)
{
cout << "is resident" << endl;
}
else
{
cout << "not resident" << endl;
}

delete [] priorities;
delete [] flags;
free(data); // data came from malloc, so free it rather than delete[]

I am running OpenGL on a Linux machine using the 5328 driver with a GeForce2 MX card that has 64MB of RAM.

dorbie
06-24-2004, 06:56 PM
glEnable the texture and try drawing a polygon with the texture on it, then test whether it's resident.
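That suggestion could be sketched in the fixed-function style of the code above. This is an assumption-laden fragment, not a complete program: it presumes a current GL context, a visible drawing surface, and the `mTexID` created in the first post.

```cpp
// Force the driver to actually upload the texture to video memory by
// drawing a small quad with it, then re-test residency.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexID);

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();

glFinish(); // ensure the upload has actually completed before querying

GLboolean flag;
if (glAreTexturesResident(1, &mTexID, &flag))
    cout << "is resident" << endl;
else
    cout << "not resident" << endl;
```

Because residency is driver-managed, even this only makes residency likely at that moment; the driver can evict the texture again at any time.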

tomjscott
06-25-2004, 07:41 AM
Thanks. That worked. However, do you know of any way to force it to be resident before I actually render it? I can easily render a polygon each time I bind a new texture, but I'd rather not have to do that.

tomjscott
06-25-2004, 09:50 AM
I think I spoke too soon. Although glAreTexturesResident now returns true, system memory is still being eaten up. I modified my example to create 128 textures at 256x256x4, and the function reports that they are all resident. However, system memory still drops by approximately 32MB.

Any ideas?

tomjscott
06-25-2004, 11:46 AM
I read a post on flipcode where someone stated that the driver may keep a backup copy of the texture in system RAM even after it has been made resident. Is this true? If so, is there any way to get around it?

dorbie
06-25-2004, 01:30 PM
Yes, that is true. It makes things faster: if you oversubscribe graphics memory, the driver doesn't have to copy textures back to system memory; it just deletes the resource on the card and re-uploads from the system-memory copy when needed.

I don't think you have any control over driver internals like that. Just make sure you delete your own system memory copy; that's about all you can do. Image buffer objects may give you more control over this in the future (I'm not sure).

tomjscott
06-25-2004, 02:06 PM
An nVidia person posted this in response to the same question:

"Yes, this is how it typically works, so the driver can efficiently swap textures from system->video in the case where video ram is insufficient. I don't think there is a way to avoid this behavior. The theory, at least in Windows, is that the virtual memory manager will swap unneeded system memory copies to disk anyway."

It looks like there is no way to free that system memory. I am careful to always free the original image data, but the driver's internal copy of the texture is still stuck in system RAM.