PDA

View Full Version : Texture and memory questions



glZzz
04-17-2010, 08:02 AM
Hi all, I have a problem with uploading and releasing 3D textures.
The first question is why, after a large 3D data set is uploaded with glTexSubImage3D, glGetError returns 1285 (out of memory), even though video memory and OS memory still have enough free space.
The second question is why the memory backing the 3D texture is not released when glDeleteTextures is invoked. How can I get this memory back? Is there a problem with this API, or something else?
So, can anyone explain how OpenGL manages texture memory?

Any answers will be appreciated, thanks all.

Alfonse Reinheart
04-17-2010, 11:16 AM
even though video memory and OS memory still have enough free space.

What are you using to determine this?


The second question is why the memory backing the 3D texture is not released when glDeleteTextures is invoked.

The driver will release memory when it feels that it should.

glZzz
04-18-2010, 05:52 PM
even though video memory and OS memory still have enough free space.

What are you using to determine this?


The second question is why the memory backing the 3D texture is not released when glDeleteTextures is invoked.

The driver will release memory when it feels that it should.

The tool is RivaTuner, which can monitor video memory, OS memory, and other hardware.

Dark Photon
04-19-2010, 05:07 AM
Hi all, I have a problem with uploading and releasing 3D textures.
The first question is why, after a large 3D data set is uploaded with glTexSubImage3D, glGetError returns 1285 (out of memory), even though video memory and OS memory still have enough free space.
GPU memory is used for other purposes besides texture. Namely the display system, window framebuffers for your application and others, off-screen render targets, etc. It's possible enough is already consumed to throw off your estimates as to what memory is actually free.

Perhaps others could help you if you described specifically your computations and method for determining the amount of memory you think the 3D texture will consume, and the amount of memory you think is free.

Also keep in mind that both a GPU and CPU copy of the data is maintained. So the failure to get sufficient memory could be a GPU-side thing or a CPU-side thing. You might also be exceeding the supported dimensions for a 3D texture on your GPU/driver.
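For what it's worth, you can at least ask the driver what it claims the per-dimension limit is. A quick sketch (not taken from your code):


GLint maxSize = 0;
glGetIntegerv( GL_MAX_3D_TEXTURE_SIZE, &maxSize );
printf( "Max 3D texture dimension: %d\n", maxSize );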


The second question is why the memory backing the 3D texture is not released when glDeleteTextures is invoked. How can I get this memory back? Is there a problem with this API, or something else?
How do you know that this memory isn't being released?

If you're talking CPU memory, what is possibly happening is that when you do the initial allocation, the memory is obtained from the OS and assigned to the application, added to the application heap, and then the application (driver) allocates that memory to maintain the texture data. When the texture is freed, the memory is merely put back on the application heap, but that total heap size is not "shrunk" by giving the memory back to the operating system.

...but this is all just a guess, because you haven't given us any details on your data-gathering method or your calculations, for GPU- or CPU-side available space and space consumption.

For GPU memory monitoring, one good bet is to use OpenGL extensions such as NVX_gpu_memory_info and ATI_meminfo to probe and see what's going on and how much memory things actually take. Using this, you might find a fault in your space estimate heuristics.
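For example, on NVidia a query along these lines works (sketch only; the enum values below come from the NVX_gpu_memory_info spec and may not be in your headers yet, so double-check them; the results are reported in KB):


#ifndef GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
#define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX   0x9048
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif

GLint totalKB = 0, freeKB = 0;
glGetIntegerv( GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalKB );
glGetIntegerv( GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &freeKB );
printf( "GPU memory: %d KB free of %d KB\n", freeKB, totalKB );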

Alfonse Reinheart
04-19-2010, 10:42 AM
The tool is RivaTuner, which can monitor video memory, OS memory, and other hardware.

If you allocate another texture of the same size as the previous one, does the client memory size increase?

glZzz
04-19-2010, 08:22 PM
Thanks a lot, friends. I have found the cause of question 2: the texture data format passed to glTexSubImage3D was not the same as the one used in the glTexImage3D call.
About question 1, I simply follow the order of creating the texture, uploading to it, and deleting it; several iterations later I can no longer create or upload the texture successfully. I'm really confused.
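Just to illustrate what I mean about question 2 (the sizes and pointer names here are only placeholders, not my real values), the allocation and the later uploads now use the same format/type pair:


// Allocate storage with a given internal format...
glTexImage3D(GL_TEXTURE_3D, 0, GL_ALPHA8, w, h, d, 0, GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
// ...and upload with the same external format/type.
glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, w, h, d, GL_ALPHA, GL_UNSIGNED_BYTE, pData);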

Dark Photon
04-20-2010, 05:47 AM
About question 1, I simply follow the order of creating the texture, uploading to it, and deleting it; several iterations later I can no longer create or upload the texture successfully. I'm really confused.

Post a short test program that illustrates the problem, so folks can review it, try it, and provide feedback/advice.

glZzz
04-20-2010, 06:40 PM
The following is my partial texture-related code:

Create texture:


GLenum err = GL_OUT_OF_MEMORY;
m_iDivident = 0;
glGenTextures(1, &m_iTex3D);
glBindTexture(GL_TEXTURE_3D, m_iTex3D);
// Keep shrinking the depth until glTexImage3D stops reporting GL_OUT_OF_MEMORY.
while (err == GL_OUT_OF_MEMORY)
{
    m_iDivident++;
    div_t div_layer = div(m_pData->numSlices[2], m_iDivident);
    m_pData->volTexSizes[2] = div_layer.quot + div_layer.rem;
    glTexImage3D(GL_TEXTURE_3D, 0, GL_COMPRESSED_ALPHA_ARB,
                 m_pData->volTexSizes[0], m_pData->volTexSizes[1],
                 m_pData->volTexSizes[2], 0, GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
    err = glGetError();
}
glTexEnvi(GL_TEXTURE_3D, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

Upload texture data:


unsigned char *pchData = NULL;
int di, idz, idy, idx, iNum;
short nCT = 0;
iNum = 1;
div_t divPiece;
// Allocate a staging buffer for one slab of the volume, shrinking it until calloc succeeds.
while (!pchData)
{
    iNum++;
    divPiece = div(m_pData->volTexSizes[2], iNum);
    pchData = (unsigned char*)calloc(m_pData->volTexSizes[0] *
                                     m_pData->volTexSizes[1] *
                                     (divPiece.quot + divPiece.rem),
                                     sizeof(unsigned char));
}

nCT = 0;

float fCon = 0.0625f;
NIL::CCubeCT* pCube = m_pData->volData;
GLenum err;
glBindTexture(GL_TEXTURE_3D, m_iTex3D);
for (int i = 0; i < iNum; i++)
{
    di = 0;
    int iLayerBegin = i*(divPiece.quot)*m_iDivident;
    int iLayerEnd = (i+1)*(divPiece.quot*m_iDivident+divPiece.rem);
    if (i == iNum-1)
    {
        iLayerEnd = m_pData->numSlices[2];
    }
    // Pack every m_iDivident-th slice of this slab into the staging buffer...
    for (idz = iLayerBegin; idz < iLayerEnd; idz += m_iDivident) {
        for (idy = 0; idy < m_pData->numSlices[1]; ++idy) {
            for (idx = 0; idx < m_pData->numSlices[0]; ++idx) {
                nCT = pCube->GetFast(idx, idy, idz);
                pchData[di] = (unsigned char)(nCT * fCon);
                di++;
            }
        }
    }
    // ...then upload the slab into the corresponding depth range of the 3D texture.
    glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, i*divPiece.quot,
                    m_pData->volTexSizes[0], m_pData->volTexSizes[1],
                    divPiece.quot+divPiece.rem, GL_ALPHA, GL_UNSIGNED_BYTE, pchData);
    err = glGetError();
}
free(pchData);

Delete texture:


if (m_iTex3D)
{
    glDeleteTextures(1, &m_iTex3D);
    m_iTex3D = 0;
}

Dark Photon
04-21-2010, 06:00 AM
No, I mean "really". Post a short test "program" that illustrates the problem, so folks can review it and try it (without having to waste the time cooking one themselves).

Here's a template to pop your code into and verify that it's still broken. It could very well be something else in your code that's causing the problem, and cooking a short test program eliminates that possibility.


#include <stdio.h>
#include <stdlib.h>
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

void checkGLErrors( const char *s )
{
  while ( 1 )
  {
    int x = glGetError() ;

    if ( x == GL_NO_ERROR )
      return;

    fprintf( stderr, "%s: OpenGL error: %s",
             s ? s : "", gluErrorString ( x ) ) ;
  }
}


void keybd ( unsigned char, int, int )
{
  exit ( 0 ) ;
}


void reshape(int wid, int ht)
{
  glViewport(0, 0, wid, ht);
}

void showGLerror ()
{
  GLenum err ;

  while ( (err = glGetError()) != GL_NO_ERROR )
    fprintf ( stderr, "OpenGL Error: %s\n", gluErrorString ( err ) ) ;
}


void display ( void )
{
  static float a = 0.0f ;

  a += 0.3f ;

  glMatrixMode ( GL_PROJECTION ) ;
  glLoadIdentity () ;
  glFrustum ( -1.0f, 1.0f,
              -1.0f / (640.0f/480.0f), 1.0f / (640.0f/480.0f),
              3.0f, 10.0f) ;

  glMatrixMode ( GL_MODELVIEW ) ;
  glLoadIdentity () ;
  glTranslatef ( 0.0, 0.0, -5.0 ) ;
  glRotatef ( a, 0.2, 0.7, 0 ) ;

  glEnable ( GL_DEPTH_TEST ) ;
  glEnable ( GL_CULL_FACE ) ;
  glCullFace ( GL_FRONT ) ;

  glClearColor ( 0.0f, 0.0f, 0.0f, 1.0f ) ;
  glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ) ;

  glutSolidTeapot ( 1.0f ) ;

  glutSwapBuffers () ;
  glutPostRedisplay () ;

  checkGLErrors ( "display" ) ;
}


int main ( int argc, char **argv )
{
  // Init GL context
  glutInit ( &argc, argv ) ;
  glutInitDisplayMode ( GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE ) ;
  glutInitWindowSize ( 500, 500 ) ;
  glutCreateWindow ( "GL Test" ) ;
  glutDisplayFunc ( display ) ;
  glutKeyboardFunc ( keybd ) ;
  glutReshapeFunc ( reshape ) ;

  // Put create/setup code here

  checkGLErrors ( "end of setup" ) ;

  // Draw with shader
  glutMainLoop () ;
  return 0 ;
}

I think if you do this, and sprinkle some checkGLErrors() calls around, you'll find that this:


glTexEnvi(GL_TEXTURE_3D, GL_TEXTURE_ENV_MODE, GL_REPLACE);

trips an invalid enumerant. The first argument to glTexEnvi must be GL_TEXTURE_ENV.
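That is, the call should read:


glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);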

glZzz
04-22-2010, 06:08 PM
No, I mean "really". Post a short test "program" that illustrates the problem, so folks can review it and try it (without having to waste the time cooking one themselves).

Here's a template to pop your code into and verify that it's still broken. It could very well be something else in your code that's causing the problem, and cooking a short test program eliminates that possibility.

I think if you do this, and sprinkle some checkGLErrors() calls around, you'll find that this:


glTexEnvi(GL_TEXTURE_3D, GL_TEXTURE_ENV_MODE, GL_REPLACE);

trips an invalid enumerant. The first argument to glTexEnvi must be GL_TEXTURE_ENV.


Thanks for your advice; however, I have already debugged the program with glGetError in the necessary places. No error is returned.

glZzz
04-22-2010, 07:03 PM
I forgot to give the details of my large data set.
Everything is OK when I upload a normal data set of around 512*512*512 bytes. In contrast, the program gets errors when I upload a large data set of 512*512*1624 bytes.

Alfonse Reinheart
04-22-2010, 08:56 PM
Everything is OK when I upload a normal data set of around 512*512*512 bytes. In contrast, the program gets errors when I upload a large data set of 512*512*1624 bytes.

Of course you got an error. GL_MAX_3D_TEXTURE_SIZE is 512 for most hardware. So you can't have a 512x512x1624 texture.

Also, 512*512*1624 is 406 megatexels. Even as a 16-bit texture, that is over 800 megabytes; at 24-bit or 32-bit it blows past 1GB for this one texture alone. One of the reasons most cards cap 3D texture sizes at 512 is the sheer quantity of space such textures take.
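If you want the driver's own verdict on a particular size/format before committing to it, you can ask the proxy target. Rough sketch (and note that many drivers only validate dimensions and format this way, not actual free memory):


GLint w = 0;
glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_ALPHA8, 512, 512, 1624, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &w);
if (w == 0)
    printf("Driver rejected this 3D texture size/format.\n");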

Ashenwraith
04-26-2010, 04:25 PM
Yeah, that's crazy to try 512*512*1624.

That's basically 1,624 512x512 textures all mapped to one mega texture.

I don't know what you are trying to do, but as with working with mega textures, you have to figure out how to break it up for selective use (at least on current hardware).
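For example, something along these lines (very rough sketch; W, H, D are assumed volume dimensions and volumeData an existing W*H*D byte array, and GL_ALPHA8 is only illustrative) splits the volume along Z into a stack of smaller 3D textures ("bricks") and uploads each one separately:


/* Split a W x H x D volume (one byte per voxel) into bricks of at most 256 slices. */
#define BRICK_DEPTH 256

int numBricks = (D + BRICK_DEPTH - 1) / BRICK_DEPTH;
GLuint bricks[64];                      /* assumes numBricks <= 64 */
glGenTextures(numBricks, bricks);

for (int b = 0; b < numBricks; ++b)
{
    int z0    = b * BRICK_DEPTH;
    int depth = (z0 + BRICK_DEPTH <= D) ? BRICK_DEPTH : (D - z0);

    glBindTexture(GL_TEXTURE_3D, bricks[b]);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_ALPHA8, W, H, depth, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, volumeData + (size_t)z0 * W * H);
}


The renderer then binds and draws only the bricks needed for the current view, so the whole data set never has to be resident in one texture at once.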

glZzz
04-26-2010, 05:47 PM
Everything is OK when I upload a normal data set of around 512*512*512 bytes. In contrast, the program gets errors when I upload a large data set of 512*512*1624 bytes.

Of course you got an error. GL_MAX_3D_TEXTURE_SIZE is 512 for most hardware. So you can't have a 512x512x1624 texture.

Also, 512*512*1624 is 406 megatexels. Even as a 16-bit texture, that is over 800 megabytes; at 24-bit or 32-bit it blows past 1GB for this one texture alone. One of the reasons most cards cap 3D texture sizes at 512 is the sheer quantity of space such textures take.

My video card is a 9800 GTX, and the maximum 3D texture size I queried is 2048.

glZzz
04-26-2010, 06:09 PM
Yeah, that's crazy to try 512*512*1624.

That's basically 1,624 512x512 textures all mapped to one mega texture.

I don't know what you are trying to do, but as with working with mega textures, you have to figure out how to break it up for selective use (at least on current hardware).

I am trying volume rendering on huge data. Sometimes the texture data is selected by skipping interleaved slices, but that reduces the image quality. How did you break up the texture for use?

Alfonse Reinheart
04-26-2010, 06:25 PM
My video card is a 9800 GTX, and the maximum 3D texture size I queried is 2048.

And what about the memory size? Just because you can allocate a texture with dimensions that large doesn't mean that memory limitations will permit it. As I pointed out, even at 16bpp, that's an 800MB texture. 24bpp is 1.2GB, which is probably more than a 9800GTX has space for.

glZzz
04-26-2010, 07:26 PM
My video card is a 9800 GTX, and the maximum 3D texture size I queried is 2048.

And what about the memory size? Just because you can allocate a texture with dimensions that large doesn't mean that memory limitations will permit it. As I pointed out, even at 16bpp, that's an 800MB texture. 24bpp is 1.2GB, which is probably more than a 9800GTX has space for.

My video memory size is 512MB. As I mentioned above, each pixel is stored as an unsigned byte, so the total texture size is around 406MB.

Ashenwraith
04-26-2010, 08:13 PM
Well, does 512*512*2048 work if you have enough memory?

What about 512*512*1024?

Maybe your card requires it to be power-of-two sized.

glZzz
04-26-2010, 10:00 PM
Well, does 512*512*2048 work if you have enough memory?

What about 512*512*1024?

Maybe your card requires it to be power-of-two sized.

No, neither of the two sizes you mentioned works when uploaded into the texture, but sizes such as 512*512*362, 512*512*461 and 512*512*520 are OK!

So, I am wondering whether the limitation that the current application thread can only access a maximum of 2GB of memory affects the OpenGL operation or not.

Alfonse Reinheart
04-26-2010, 10:12 PM
So, I am wondering whether the limitation that the current application thread can only access a maximum of 2GB of memory affects the OpenGL operation or not.

Threads all share memory; it is the process that is limited to 2GB (on 32-bit machines).

And yes, this affects OpenGL. However, what more than likely also affects OpenGL is trying to allocate 4/5ths of the GPU's total memory in one texture.

Just because something theoretically fits doesn't mean it actually fits.

Ashenwraith
04-26-2010, 10:36 PM
That's a good point.

Does OpenGL even have 64bit support?

I thought that was something coming out in OGL 4?

Dark Photon
04-27-2010, 06:23 AM
Threads all share memory; it is the process that is limited to 2GB (on 32-bit machines).
And just to clarify, this is a Windows CPU virtual memory (VM) limitation. 32-bit Windows by default splits VM 2GB/2GB user/kernel, so you get at most 2GB of VM to play with in your app (note this is virtual memory, not physical memory).

On Linux, process virtual memory is typically split 3GB/1GB user/kernel on a 32-bit machine, so you get 3GB tops by default per process. Though you can tweak this (e.g. 3.5GB/0.5GB).

This VM address space limit on 32-bit OSs/CPUs is an issue regardless of how much physical memory you have installed on the box.

Incidentally, this annoying limitation is what often pushes folks onto a 64-bit OS/CPU. All of this CPU VM limit nonsense just goes away. On a 64-bit box, your maximum VM space bumps up to over 1TB, well over the amount of physical memory folks have installed nowadays.


So, I suspect whether the limitation of current application thread, that only can access the maximum 2GB memory, affects OpenGL operation or not?
Yes, because OpenGL keeps a copy of your texture data. And lots of textures (or a few HUGE textures) can greatly increase the virtual memory address consumption of your process, along with other data, library data/code, program address space, etc.

If you're on Linux, run "top" to check out your process and see how much virtual memory it is consuming. Make sure it doesn't get anywhere near 3GB on 32-bit Linux. On Windows, run whatever the equivalent tool is (???) and make sure your app doesn't ever come close to 2GB. If it does...boom! Your app's history. If you haven't ever run such a tool, you'll likely be shocked at how much VM it's eating.
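If you want to check it from inside the app on Linux, a quick-and-dirty sketch like this (reads VmSize out of /proc/self/status; obviously not portable to Windows) works too:


#include <stdio.h>
#include <string.h>

/* Print this process's virtual memory size as the kernel reports it (Linux only). */
void printVmSize(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    while (fgets(line, sizeof(line), f))
        if (strncmp(line, "VmSize:", 7) == 0)
            fputs(line, stderr);
    fclose(f);
}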

Dark Photon
04-27-2010, 06:30 AM
Does OpenGL even have 64bit support?

Definitely! If you mean support for 64-bit OSs/CPUs (referring to Alfonse's 2GB CPU VM mention), we've been running NVidia OpenGL on 64-bit Linux for like 6 years now. ATI has had 64-bit Linux support for many years as well. This is not just old news, but ancient history now. 64-bit OSs/CPUs get rid of the 2-4GB CPU memory limitation per process.


I thought that was something coming out in OGL 4?
I think you're mixing up apples and oranges. "That" 64-bit support is support for doubles (float64 values) on the GPU.

glZzz
04-27-2010, 06:26 PM
Then how can I upload huge data into a texture normally?

Alfonse Reinheart
04-27-2010, 07:24 PM
Then how can I upload huge data into a texture normally?

You don't. You work around it in other ways.

glZzz
04-27-2010, 09:42 PM
Then how can I upload huge data into a texture normally?

You don't. You work around it in other ways.

I have thought about using texture compression. Maybe it is the only way to solve this problem, but I am wondering whether a data set whose vexels are stored as unsigned bytes can be compressed?

Ashenwraith
04-28-2010, 12:43 AM
I have thought about using texture compression. Maybe it is the only way to solve this problem, but I am wondering whether a data set whose vexels are stored as unsigned bytes can be compressed?

Forgive me if I'm mistaken, but are you sure you are not talking about voxels?

I thought vexels was for vector art rasterizations?



Does OpenGL even have 64bit support?

Definitely! If you mean support for 64-bit OSs/CPUs (referring to Alfonse's 2GB CPU VM mention), we've been running NVidia OpenGL on 64-bit Linux for like 6 years now. ATI has had 64-bit Linux support for many years as well. This is not just old news, but ancient history now. 64-bit OSs/CPUs get rid of the 2-4GB CPU memory limitation per process.

Thanks, I'm just getting back into OGL after 10+ years (remember the Super Bible with the planes on it?).

Actually, what I was wondering about was a possible limitation in OpenGL at 2-4 GB of VRAM. We're at the point with SLI/Crossfire where you are getting up there. OpenGL is supposed to be scalable, working with mainframes and whatnot, but I'm not sure if this applies locally.

glZzz
04-28-2010, 07:16 PM
Forgive me if I'm mistaken, but are you sure you are not talking about voxels?

I thought vexels was for vector art rasterizations?



Sorry, I typed the wrong word; it should be "voxel" for a 3D data set.