How to copy backbuffer contents into a texture?



glSun
08-26-2010, 05:00 AM
Hi,

I am trying to copy the contents of the backbuffer into a texture so I can reuse it later. However, all my attempts have failed, because I don't have enough know-how about capturing textures from the backbuffer.

Could someone please fill in or correct the code below? I need help with steps 4 and 6.


#define CX 144
#define CY 144

GLuint img;

// Step 1: create window
glutInitWindowSize (CX, CY);
glutInitDisplayMode ( GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutCreateWindow ("glut");
glClearColor(0.0f, 0.0f, 0.0f, 0.5f);

// Step 2: clear the scene
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Step 3: draw a red square
glColor3f(1, 0, 0);
glMatrixMode (GL_MODELVIEW);
glRectf (-0.2f, -0.2f, 0.2f, 0.2f);

// Step 4: save backbuffer contents into texture (save the whole visible area)
glGenTextures(1,&img);
glBindTexture(GL_TEXTURE_2D,img);
// And now? Use glCopyTexImage2D? Use glCopyTexSubImage2D? How does it work?

// Step 5: clear the scene again
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Step 6: draw a quad with the size of the whole screen using the captured texture
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,img);
glBegin(GL_QUADS);
glTexCoord2f(?,?); glVertex2i(-1,1);
glTexCoord2f(?,?); glVertex2i(1,1);
glTexCoord2f(?,?); glVertex2i(1,-1);
glTexCoord2f(?,?); glVertex2i(-1,-1);
glEnd();
glDisable(GL_TEXTURE_2D);

ZbuffeR
08-26-2010, 05:19 AM
Step 4: it works like this:
http://www.opengl.org/sdk/docs/man/xhtml/glCopyTexImage2D.xml

Step 6: depends on your projection, but with the default projection x and y will span [-1, 1] across the screen

glSun
08-26-2010, 06:37 AM
Step 4: it works like this:
http://www.opengl.org/sdk/docs/man/xhtml/glCopyTexImage2D.xml


Sorry, I should have mentioned that I have already passed the RTFM phase ;-) Seriously, I have read the manual, but I don't understand it. I am a complete beginner.
I have googled for examples and tried some code I found on the internet, but I just can't seem to make it work. I'm missing some fundamental understanding of OpenGL here.

Therefore I'd really appreciate it if someone could post some example code for steps 4 and 6 to help me better understand how it works.

mhagain
08-26-2010, 09:59 AM
Tell us about the texture you're copying into. What's its format, its width, its height? Also, can you show us the glCopyTexSubImage2D line that didn't work?

BionicBytes
08-26-2010, 10:04 AM
Step 4 - read back buffer into a texture is easy:

Bind (texture);
glCopyTexSubImage2D(GL_TEXTURE_2D,0,0,0, 0,0,texture.image.sizex, texture.image.sizey);


Step 6 - display the quad/texture
You can't do what you have done:
glBegin(GL_QUADS);
glTexCoord2f(?,?); glVertex2i(-1,1);
glTexCoord2f(?,?); glVertex2i(1,1);
glTexCoord2f(?,?); glVertex2i(1,-1);
glTexCoord2f(?,?); glVertex2i(-1,-1);
glEnd();

...because the vertex positions will get multiplied by the modelview and projection matrices in the fixed-function pipeline and/or shaders. Are you using the GL CORE profile or COMPATIBILITY?

Instead, you need to pass in the width and height of the quad you are drawing:

glBegin(GL_QUADS);
glTexCoord2f(0, 1); glVertex2f(x, y);         // top left
glTexCoord2f(1, 1); glVertex2f(x + w, y);     // top right
glTexCoord2f(1, 0); glVertex2f(x + w, y + h); // bottom right
glTexCoord2f(0, 0); glVertex2f(x, y + h);     // bottom left
glEnd();

glSun
08-26-2010, 10:43 AM
Thanks for your comments. I tried to apply what you said. Now I see something on the screen, but it is not what I had expected. My code now looks like this:


// Step 4: save backbuffer contents into texture (save the whole visible area)
glGenTextures(1,&img);
glBindTexture(GL_TEXTURE_2D,img);
glCopyTexSubImage2D(GL_TEXTURE_2D,0,0,0, 0,0,CX,CY);

// Step 5: clear the scene again
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Step 6: draw a quad with the size of the whole screen using the captured texture
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,img);
glBegin (GL_QUADS);
glTexCoord2f (0,1); glVertex2f (-1,1); //top left
glTexCoord2f (1,1); glVertex2f (1,1); //top right
glTexCoord2f (1,0); glVertex2f (1,-1); //bottom right
glTexCoord2f (0,0); glVertex2f (-1,-1); //bottom left
glEnd();
glDisable(GL_TEXTURE_2D);

glutSwapBuffers();

Since I want to capture the whole visible area of my window, I set the coordinates of glVertex2f accordingly (applying the width of the texture as advised).

However, the result is that the red square is stretched over the whole visible area. Actually I wanted to also capture the black part; that's why I set the width and height in glCopyTexSubImage2D to CX and CY (the width and height of the window).

See the attached images (I expected "before" and "after" to look identical)

Before the capture:
http://img842.imageshack.us/img842/2332/beforez.jpg

After rendering the captured texture:
http://img691.imageshack.us/img691/65/afterqb.jpg

Thanks in advance for any help!


EDIT - @BionicBytes: How can I tell whether I am using the GL CORE profile or COMPATIBILITY?

BionicBytes
08-26-2010, 03:05 PM
Compatibility or Core is an option set in the context creation code. You would have had to specify this in your initialisation. How did you do that?
I guess you have compatibility, because you don't appear to have shaders running and are using immediate mode (with quads). Those were removed from the 'core' profile!

mhagain
08-26-2010, 03:23 PM
What are your viewport x, y, width and height? And the projection matrix you're using when drawing? Also, the values of CX and CY? Also, the img texture should be created just once during startup (or when the window size changes), not every frame.

glSun
08-27-2010, 12:19 AM
Compatibility or Core is an option set in the context creation code. You would have had to specify this in your initialisation. How did you do that?

All the initialization I do is described in step 1. Do I have to do more than that?


// Step 1: create window
glutInitWindowSize (CX, CY);
glutInitDisplayMode ( GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutCreateWindow ("glut");
glClearColor(0.0f, 0.0f, 0.0f, 0.5f);



What are your viewport x, y, width and height? And the projection matrix you're using when drawing? Also, the values of CX and CY? Also, the img texture should be created just once during startup (or when the window size changes), not every frame.

Viewport: Not specified. I really use only the code posted above. Should I set the viewport to the visible area of the window?

Projection matrix: Not specified. What would be a useful value here?

CX, CY: Both are 144. That is also the width and height of the window. I want to capture the complete client area of the window.

Regarding the texture: I will change it so that the texture is created only once. What is the problem if it is created every frame? Memory leaks?

BionicBytes
08-27-2010, 04:46 AM
You're using GLUT, a helper package. I guess that sets up some defaults for you.
I suggest you google NeHe, which has some excellent beginner tutorials that will take you from the absolute basics to intermediate level. You are missing the fundamentals of the whole GL thing, and it would take a series of tutorials to explain what those things are!
I suggest you read up on the projection matrix, the modelview matrix and the viewport. Then you'll be able to answer your own questions!

glSun
08-27-2010, 04:47 AM
Meanwhile I found out that glCopyTexSubImage2D returns the error GL_INVALID_OPERATION. Checking the manual, I found "GL_INVALID_OPERATION is generated if the texture array has not been defined by a previous glTexImage2D or glCopyTexImage2D operation."

So I added glTexImage2D to step 1. That fixed the error message, but the output remains unchanged. To me this means that the screen data is copied into the texture, but the texture is not used. Instead the quad is rendered using the currently set color (red).

Any ideas why the texture is not used?

Code of step one looks like this now:


// Step 1: create window
glutInitWindowSize (CX, CY);
glutInitDisplayMode ( GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutCreateWindow ("glut");
glClearColor(0.0f, 0.0f, 0.0f, 0.5f);

glGenTextures(1, &img);
GLvoid* pTexBuf = (GLvoid*)malloc(CX*CY*4); /* initial pixel data; GL also accepts NULL here */
glBindTexture(GL_TEXTURE_2D, img);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, CX, CY, 0, GL_RGBA, GL_UNSIGNED_BYTE, pTexBuf);
free(pTexBuf); /* GL copies the data during the call, so the buffer can be freed */

ZbuffeR
08-27-2010, 10:54 AM
1) By default, GL textures need mipmaps. Here you only fill level 0, so the texture is considered incomplete and is sampled as solid white. Try adding these parameters to the img texture:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

2) In step 6, the current color is still red, which gets modulated by the solid white texture, ending up solid red. Call this before rendering the quad to see the real texture:
glColor3f(1, 1, 1);

glSun
08-28-2010, 12:14 AM
Thanks, ZbuffeR. I did what you said, and in addition I identified the two errors below:

1) Texture coordinates were not set correctly (I found a good example on the web and now understand how they are used)

2) glTexImage2D needs power-of-two width and height

Now it works! The new rendering code looks like this.


glPushMatrix();
glViewport ( 0, 0, CX, CY );
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, img);
glColor3f(1, 1, 1);
glBegin(GL_QUADS); // full screen square
glTexCoord2d ( 0, 1.0-texCYRatio ); glVertex2d ( -1, 1 );
glTexCoord2d ( 1.0-texCXRatio, 1.0-texCYRatio); glVertex2d ( 1, 1 );
glTexCoord2d ( 1.0-texCXRatio, 0 ); glVertex2d ( 1, -1 );
glTexCoord2d ( 0, 0 ); glVertex2d ( -1, -1 );
glEnd();
glDisable(GL_TEXTURE_2D);
glPopMatrix();
glFlush();

NOTE: the texCXRatio and texCYRatio values are used to compensate for the larger texture size caused by the power-of-two width and height. It works just fine with any window size I chose.

---------------------------------

Although the output now looks as expected, the performance is poor: I need 160ms to render an 800x600 pixel texture. I think that's because I have set the texture quality settings in the driver to highest quality.

But I still wonder: using a RAM(!) buffer and glReadPixels/glDrawPixels, it takes only 35ms to render the 800x600 image. That's four times faster than using the texture in GPU memory.

I already tried to optimize the texture settings as follows:


glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE /*GL_MODULATE*/ );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP );

But tweaking those parameters seems to have no effect on the performance; it remains at 159-160ms. I thought I'd benefit from the texture approach because the buffer is held in the GPU's memory. But now glDrawPixels is faster, even though it transfers the data from CPU to GPU every frame. How can that be?

ZbuffeR
08-28-2010, 12:28 AM
2) glTexImage2D needs power-of-two width and height

Only on old/basic hardware. That would explain the slow performance.

So, what is your hardware? GPU, VRAM, CPU, RAM, OS?

glSun
08-28-2010, 12:49 AM
GPU: ATI Mobility Radeon 9000
VRAM: 32MB
CPU: Intel Pentium Mobile, 1.4GHz
RAM: 1GB
OS: Windows XP SP3

Indeed, I am not using state-of-the-art hardware ;-) I understand that I cannot achieve world-class performance on this machine. But I would like to understand why glDrawPixels is faster than using a texture and a QUAD.

ZbuffeR
08-28-2010, 01:01 AM
Then there is the whole "how do you measure performance" topic, which is quite complex when a GPU and a CPU work together.
http://www.opengl.org/wiki/Performance

There may be a small performance hit when first using the texture, etc.

glSun
08-28-2010, 01:29 AM
Thanks. I took a look at the Wiki and I think I have the solution now. I tried to disable the fragment shader:


glDisable ( 0x8920 /*FRAGMENT_SHADER_ATI*/ );

That reduced the quality of the texture but pulled the speed up to 1ms. Then I removed that line of code, but... nothing changed. The texture quality remained bad. So I changed the driver settings again, and suddenly the texture quality is good and the speed stayed at 1ms!

Must have been some driver "hang" that was released by disabling the fragment shader once. Honestly, I have no idea ;-)