Rendering of a texture tooooo slow



Jabot
08-21-2009, 01:05 AM
Hi!

I'm working on a small video-processing program on Linux using Qt and OpenGL. It reads a video file and displays each input frame next to the processed frame (for the moment, the processed frame is just a copy of the input). To display the video I upload each frame into a texture, but the rendering is really slow: about 0.22 seconds per 640x480 frame shown in a 480x360 window. It doesn't seem to be a driver problem, since other OpenGL examples run fine, so it's more likely a problem with my ignorance of OpenGL :-) This is the code that I use:

For initialization:

// Dummy buffer used only to allocate the texture storage (RGBA, 1 byte per channel).
unsigned char *tmpData = new unsigned char[640 * 480 * 4];
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGenTextures(1, &originalTextureName);
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, tmpData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glShadeModel(GL_SMOOTH);
glDisable(GL_NORMALIZE);
glDisable(GL_LIGHTING);
glDisable(GL_BLEND);
delete [] tmpData;


And for showing each frame (this is the code that takes 0.22 seconds):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)originalImage->getGLBits());
glBegin(GL_TRIANGLE_STRIP);
glTexCoord2f(0.0, 0.0);
glVertex2f(-1.0, -1.0);
glTexCoord2f(1.0, 0.0);
glVertex2f(1.0, -1.0);
glTexCoord2f(0.0, 1.0);
glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0);
glVertex2f(1.0, 1.0);
glEnd();
glDisable(GL_TEXTURE_2D);


Thank you very much!!

k_szczech
08-21-2009, 02:08 AM
Two things you should look into:
I don't know what image library you're using (or whether it's your own class). Perhaps getGLBits does some conversion that takes time?

Another thing is that the texture is not stored in RGBA order on the GPU, so the driver converts it on every upload. You may want to try passing images in GL_BGR, GL_BGRA or GL_ABGR format.

If your source image is in GL_RGBA format, then at first glance it looks like you would need to convert it to GL_ABGR, but you could actually pass the GL_RGBA image and lie to the driver that it is a GL_ABGR image. Then, to use it in rendering, you would swizzle the components back in a fragment shader:

vec4 color = texture2D(myTexture, texCoord).abgr;
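For example, here is a minimal sketch of the upload call if your frames are (or can be produced in) BGRA byte order; GL_BGRA needs GL 1.2+, and whether it actually skips the conversion depends on the driver, so treat it as something to measure rather than a guarantee:

// Same upload as before, but with the data declared (and stored) as BGRA,
// which many drivers can copy without a per-pixel conversion.
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480,
                GL_BGRA, GL_UNSIGNED_BYTE, (GLvoid *)originalImage->getGLBits());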

Jabot
08-21-2009, 02:50 AM
Two things you should look into:
I don't know what image library you're using (or whether it's your own class). Perhaps getGLBits does some conversion that takes time?

getGLBits is a method of a class defined by me. It only returns a pointer to the image data, so the delay there is (or should be :-)) negligible.



Another thing is that the texture is not stored in RGBA order on the GPU, so the driver converts it on every upload. You may want to try passing images in GL_BGR, GL_BGRA or GL_ABGR format.

If your source image is in GL_RGBA format, then at first glance it looks like you would need to convert it to GL_ABGR, but you could actually pass the GL_RGBA image and lie to the driver that it is a GL_ABGR image. Then, to use it in rendering, you would swizzle the components back in a fragment shader:

vec4 color = texture2D(myTexture, texCoord).abgr;

OK, thanks a lot, I'll look at that!

todayman
08-21-2009, 08:46 AM
I would also recommend taking a look at Pixel Buffer Objects (PBO), since this kind of memory transfer is exactly what they are for. Essentially, a PBO lets the OpenGL driver move the data with less CPU involvement, and possibly while your code is busy elsewhere. This page (http://www.songho.ca/opengl/gl_pbo.html) gives a good description of PBOs and has some examples.
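For instance, a minimal sketch of streaming the upload through a PBO (assuming GL 2.1 / ARB_pixel_buffer_object entry points are available, e.g. through GLEW; the name pbo is illustrative):

// One-time setup: a PBO big enough for one 640x480 RGBA frame.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, 640 * 480 * 4, NULL, GL_STREAM_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

// Per frame: orphan the old contents, copy the new frame into the mapped
// buffer, then let glTexSubImage2D read from the bound PBO (its last
// argument becomes a byte offset into the buffer instead of a pointer).
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, 640 * 480 * 4, NULL, GL_STREAM_DRAW);
void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
if (ptr)
{
    memcpy(ptr, originalImage->getGLBits(), 640 * 480 * 4);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
}
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

The page linked above also shows the double-buffered variant (two PBOs, ping-ponged), which overlaps the memcpy with the previous frame's transfer.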

Jabot
08-27-2009, 03:13 AM
I'll take a look, thanks Todayman!