Hi!
I’m doing some simple tests to determine the size in memory
of a 3D texture.
I got these results from Video Memory Watcher.
An RGB 3D texture, 256×256×256 texels, using GL_RGB16F_ARB to store
it in video memory, should take:
256×256×256 × 3 (RGB) × 2 bytes (16 bits) = 96 MB
But when binding the FBO, the amount of free video memory decreases
by 254 MB (158 MB more than 96 MB)!
Does anybody know why?
I'm working on an NVIDIA 8800 GTX.
Thanks in advance
Here is a short piece of code that reproduces my problem:
GLuint fbo_id;
GLuint texture_id;
// Bind FBO.
glGenFramebuffersEXT (1, &fbo_id);
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, fbo_id);
// Texture creation.
glGenTextures (1, &texture_id);
glBindTexture (GL_TEXTURE_3D, texture_id);
glTexParameteri (GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage3D (GL_TEXTURE_3D, 0, GL_RGB16F_ARB, 256, 256, 256, 0, GL_RGB, GL_FLOAT, 0);
// Texture attachment.
glFramebufferTexture3DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                           GL_TEXTURE_3D, texture_id, /*mipmap_level*/0, /*slice*/0);
fbo_check_validity();
No other textures or FBOs are present in memory.
Maybe it's because of GL_RGB16F_ARB. The numbers kind of confirm this: it's always internally promoted to GL_RGBA16F_ARB.
Yes, I saw that, but even with RGBA at 16 bits it should take
256×256×256 × 4 (RGBA) × 2 bytes (16 bits) = 128 MB, not 254 MB.