
Question about texture mipmapping



andychen
05-18-2014, 09:29 AM
Excuse me, I have a question about texture mipmapping.
How can I read back a specific level of a mipmapped texture stored in GL_RGBA32UI format?

I generate a mipmapped texture manually using shaders, and I want to check whether the values stored in the texture are correct, so I need to read back each level of the texture.
I did a test with a texture in the GL_RGBA32F format using
glGetTexImage(GL_TEXTURE_2D, seeLevel, GL_RGBA, GL_FLOAT, myImage);
It works.

However, when I use a texture with the GL_RGBA32UI format, I cannot get the correct result...
glGetTexImage(GL_TEXTURE_2D, seeLevel, GL_RGBA_INTEGER, GL_UNSIGNED_INT, myImage);

GClements
05-18-2014, 12:50 PM
How can I read back a specific level of a mipmapped texture stored in GL_RGBA32UI format?


However, when I use a texture with the GL_RGBA32UI format, I cannot get the correct result...
You're aware that RGBA32UI is 32 bits per component, so 128 bits (16 bytes) per pixel, right? I've seen people assume that it meant 32 bits per pixel.


glGetTexImage(GL_TEXTURE_2D, seeLevel, GL_RGBA_INTEGER, GL_UNSIGNED_INT, myImage);

This should work. You could also try binding the texture level to a framebuffer then using glReadPixels().
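
For example, a minimal sketch of the framebuffer route might look like this (untested; tex and level stand in for your texture object and the mip level you want, myImage is the destination buffer you already allocate):

glBindTexture(GL_TEXTURE_2D, tex);
// query the size of the requested level so the buffer and the read match
GLint w = 0, h = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_HEIGHT, &h);

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
// attach exactly the level you want to read
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, level);
glReadBuffer(GL_COLOR_ATTACHMENT0);
// integer textures have to be read back with an *_INTEGER format
glReadPixels(0, 0, w, h, GL_RGBA_INTEGER, GL_UNSIGNED_INT, myImage);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);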

andychen
05-18-2014, 08:25 PM
You're aware that RGBA32UI is 32 bits per component, so 128 bits (16 bytes) per pixel, right? I've seen people assume that it meant 32 bits per pixel.



This should work. You could also try binding the texture level to a framebuffer then using glReadPixels().

Thanks for replying. I know that RGBA32UI is 32 bits per component. I have tried glReadPixels(), and it doesn't give me the correct result either...
When I change the texture to RGBA32F, everything works and I can read back every mipmap level with both glGetTexImage() and glReadPixels(). However, for RGBA32UI, it only works for level 0.
Has anyone else come across this problem?

Here is my code:
-----------------------------------------------------------------
RGBA32F version
float *data = (float *) malloc(32*32*sizeof(float)*4);
for (int i = 0; i < 32*32*4; i++) {
    data[i] = 10.0;
}
GLuint texMM;
glGenTextures(1, &texMM);
glBindTexture(GL_TEXTURE_2D, texMM);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 32, 32, 0, GL_RGBA, GL_FLOAT, data);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

glBindTexture(GL_TEXTURE_2D, texMM);
float *myImage = (float *) malloc(32*32*sizeof(float)*4);
glGetTexImage(GL_TEXTURE_2D, 1, GL_RGBA, GL_FLOAT, myImage); // read back level 1
FILE *fp = fopen("image.raw", "wb");
for (int i = 0; i < 32*32; i++) {
    fprintf(fp, "%f %f %f %f\n", myImage[i*4], myImage[i*4+1], myImage[i*4+2], myImage[i*4+3]);
}
fclose(fp);
glBindTexture(GL_TEXTURE_2D, 0);
exit(0);

image.raw is filled with 10.
-----------------------------------------------------------------
RGBA32UI version
unsigned int *data = (unsigned int *) malloc(32*32*sizeof(unsigned int)*4);
for (int i = 0; i < 32*32*4; i++) {
    data[i] = 10;
}
GLuint texMM;
glGenTextures(1, &texMM);
glBindTexture(GL_TEXTURE_2D, texMM);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, 32, 32, 0, GL_RGBA_INTEGER, GL_UNSIGNED_INT, data);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

glBindTexture(GL_TEXTURE_2D, texMM);
unsigned int *myImage = (unsigned int *) malloc(32*32*sizeof(unsigned int)*4);
glGetTexImage(GL_TEXTURE_2D, 1, GL_RGBA_INTEGER, GL_UNSIGNED_INT, myImage); // read back level 1
FILE *fp = fopen("image.raw", "wb");
for (int i = 0; i < 32*32; i++) {
    fprintf(fp, "%u %u %u %u\n", myImage[i*4], myImage[i*4+1], myImage[i*4+2], myImage[i*4+3]);
}
fclose(fp);
glBindTexture(GL_TEXTURE_2D, 0);
exit(0);

image.raw is filled with 3452816845....

Dan Bartlett
05-19-2014, 06:56 AM
3452816845 (0xCDCDCDCD) seems to be a magic number used by Visual C/C++ to mark uninitialized heap memory (http://en.wikipedia.org/wiki/Magic_number_(programming)#Magic_debug_values).

Are these values getting filled by the glGetTexImage call, or is glGetTexImage not returning anything and leaving the value untouched?

Reading back level 1 will only fill 1/4 of the buffer you created, since level 1 is only 16*16 and you allocate enough memory to read back 32*32, which means the rest will be uninitialized. Perhaps you are viewing part of the buffer that hasn't been filled?
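
A quick sketch of sizing the read-back buffer to the level you actually request (buf is just a placeholder name here):

int level = 1;
int w = 32 >> level, h = 32 >> level;  // 16 x 16 for level 1
// 4 components of 4 bytes each per texel
unsigned int *buf = (unsigned int *) malloc((size_t)w * h * 4 * sizeof(unsigned int));
glGetTexImage(GL_TEXTURE_2D, level, GL_RGBA_INTEGER, GL_UNSIGNED_INT, buf);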

andychen
05-19-2014, 07:38 AM
3452816845 (0xCDCDCDCD) seems to be a magic number used by Visual C/C++ to mark uninitialized heap memory (http://en.wikipedia.org/wiki/Magic_number_(programming)#Magic_debug_values).

Are these values getting filled by the glGetTexImage call, or is glGetTexImage not returning anything and leaving the value untouched?

Reading back level 1 will only fill 1/4 of the buffer you created, since level 1 is only 16*16 and you allocate enough memory to read back 32*32, which means the rest will be uninitialized. Perhaps you are viewing part of the buffer that hasn't been filled?

I think those values are uninitialized. When I use glGetTexImage to read level 0, it gives me the value 10, but for the other levels (n > 0) it fails. Is it possible that glGenerateMipmap() does not work on GL_RGBA32UI?

Or can anyone help me test this code fragment?
Thanks!
------------------------------------------------
unsigned int *data = (unsigned int *) malloc(32*32*sizeof(unsigned int)*4);
for (int i = 0; i < 32*32*4; i++) {
    data[i] = 10;
}
GLuint texMM;
glGenTextures(1, &texMM);
glBindTexture(GL_TEXTURE_2D, texMM);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, 32, 32, 0, GL_RGBA_INTEGER, GL_UNSIGNED_INT, data);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

glBindTexture(GL_TEXTURE_2D, texMM);
unsigned int *myImage = (unsigned int *) malloc(16*16*sizeof(unsigned int)*4);
glGetTexImage(GL_TEXTURE_2D, 1, GL_RGBA_INTEGER, GL_UNSIGNED_INT, myImage); // read back level 1
FILE *fp = fopen("image.raw", "wb");
for (int i = 0; i < 16*16; i++) {
    fprintf(fp, "%u %u %u %u\n", myImage[i*4], myImage[i*4+1], myImage[i*4+2], myImage[i*4+3]);
}
fclose(fp);
glBindTexture(GL_TEXTURE_2D, 0);
exit(0);

arekkusu
05-19-2014, 11:09 AM
Is it possible that glGenerateMipmap() does not work on GL_RGBA32UI?

Your code fragment "works" here (on OS X 10.9.3, Nvidia, Intel, and software renderers). It produces "10" for all mipmap texels.



I generate a mipmapped texture manually using shaders.

You should double-check what you expect to happen. Integer formats don't support any filtering, so the mipmap generation is going to effectively throw away 3/4 of the texels at each level (i.e. NEAREST filter reduction). How is this useful to your algorithm?
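
As a rough CPU illustration of that reduction (level0 and level1 here are hypothetical arrays holding a 32x32 and a 16x16 RGBA32UI image), each destination texel just ends up with the value of one texel from its 2x2 footprint rather than any average:

// which of the four source texels is kept is up to the implementation;
// this sketch simply takes the top-left one
for (int y = 0; y < 16; y++)
    for (int x = 0; x < 16; x++)
        for (int c = 0; c < 4; c++)
            level1[(y*16 + x)*4 + c] = level0[((2*y)*32 + 2*x)*4 + c];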

Dan Bartlett
05-19-2014, 11:51 AM
For me glGetTexImage returns all 0's when using GL_RGBA32UI (Win8, AMD Catalyst drivers, probably a few months old). Perhaps some part of the spec says using integer textures with glGenerateMipmap is undefined behavior, but I'm not sure where. Could equally be a driver bug.

As arekkusu mentioned, if you were to draw using integer textures you would need to use NEAREST/NEAREST_MIPMAP_NEAREST filtering, as the texture wouldn't be considered complete otherwise.
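
Something along these lines, with the texture bound to GL_TEXTURE_2D:

// integer textures only support non-filtering modes
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);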

andychen
05-19-2014, 08:13 PM
Your code fragment "works" here (on OS X 10.9.3, Nvidia, Intel, and software renderers). It produces "10" for all mipmap texels.




You should double-check what you expect to happen. Integer formats don't support any filtering, so the mipmap generation is going to effectively throw away 3/4 of the texels at each level (i.e. NEAREST filter reduction). How is this useful to your algorithm?

arekkusu, thank you for your help! The code listed above is just meant to illustrate my question about reading back RGBA32UI textures. What I actually want to do is manually generate a mipmapped texture (RGBA32UI) using shaders.
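
Roughly, the host-side setup I have in mind for writing one level looks like this (just a sketch; lvl and the reduction program are placeholders, and the fragment shader has to declare an unsigned integer (uvec4) output because the attachment is RGBA32UI):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// render into level lvl of the integer texture
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texMM, lvl);
glViewport(0, 0, 32 >> lvl, 32 >> lvl);
// restrict sampling to the previous level so the shader never reads the level being written
glBindTexture(GL_TEXTURE_2D, texMM);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, lvl - 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, lvl - 1);
// ... bind the reduction program (usampler2D + texelFetch) and draw a full-screen quad ...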

andychen
05-19-2014, 08:30 PM
For me glGetTexImage returns all 0's when using GL_RGBA32UI (Win8, AMD Catalyst drivers, probably a few months old). Perhaps some part of the spec says using integer textures with glGenerateMipmap is undefined behavior, but I'm not sure where. Could equally be a driver bug.

As arekkusu mentioned, if you were to draw using integer textures you would need to use NEAREST/NEAREST_MIPMAP_NEAREST filtering, as the texture wouldn't be considered complete otherwise.

Dan Bartlett, I tried initializing the level 1 mipmap with the value 20, and it is not affected by glGenerateMipmap(). Maybe, as you said, glGenerateMipmap() is undefined behavior for integer textures.
-----------------------------------------------------------
unsigned int *data = (unsigned int *) malloc(32*32*sizeof(unsigned int)*4);
for (int i = 0; i < 32*32*4; i++) {
    data[i] = 10;
}
unsigned int *data1 = (unsigned int *) malloc(16*16*sizeof(unsigned int)*4);
for (int i = 0; i < 16*16*4; i++) {
    data1[i] = 20;
}
GLuint texMM;
glGenTextures(1, &texMM);
glBindTexture(GL_TEXTURE_2D, texMM);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, 32, 32, 0, GL_RGBA_INTEGER, GL_UNSIGNED_INT, data);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA32UI, 16, 16, 0, GL_RGBA_INTEGER, GL_UNSIGNED_INT, data1);
glBindTexture(GL_TEXTURE_2D, 0);

glBindTexture(GL_TEXTURE_2D, texMM);
glGenerateMipmap(GL_TEXTURE_2D);
unsigned int *myImage = (unsigned int *) malloc(16*16*sizeof(unsigned int)*4);
glGetTexImage(GL_TEXTURE_2D, 1, GL_RGBA_INTEGER, GL_UNSIGNED_INT, myImage);
FILE *fp = fopen("image.raw", "wb");
for (int i = 0; i < 16*16; i++) {
    fprintf(fp, "%u %u %u %u\n", myImage[i*4], myImage[i*4+1], myImage[i*4+2], myImage[i*4+3]);
}
fclose(fp);
glBindTexture(GL_TEXTURE_2D, 0);
exit(0);