Rendering of a texture tooooo slow

Hi!

I’m working on a small program for video processing on Linux using Qt and OpenGL. The program reads a video file and shows each frame together with the processed frame, although for the moment the processed frame is just a copy of the input frame. To show the video I put each frame into a texture, but the rendering is really slow: about 0.22 seconds per 640x480 frame shown in a 480x360 window. In principle it isn’t a driver problem, because I can run OpenGL examples without trouble, so it’s more likely a problem with my ignorance of OpenGL :-) This is the code I use:

For initialization:

unsigned char *tmpData;
tmpData = (unsigned char *) new GLuint[640*480*4 * sizeof(unsigned char)];
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGenTextures(1, &originalTextureName);
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, tmpData);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glShadeModel(GL_SMOOTH);
glDisable(GL_NORMALIZE);
glDisable(GL_LIGHTING);
glDisable(GL_BLEND);
delete [] tmpData;

And for showing each frame (this is the code that takes 0.22 seconds):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,640,480,GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)originalImage->getGLBits());
glBegin(GL_TRIANGLE_STRIP);
glTexCoord2f(0.0, 0.0);
glVertex2f(-1.0, -1.0);
glTexCoord2f(1.0, 0.0);
glVertex2f(1.0, -1.0);
glTexCoord2f(0.0, 1.0);
glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0);
glVertex2f(1.0, 1.0);
glEnd();
glDisable(GL_TEXTURE_2D);

Thank you very much!!

Two things that you should look into:
I don’t know what image library you’re using (or is it your own class?). Perhaps getGLBits does some conversion that takes time?
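One rough way to check that (just a sketch, assuming a Qt build where QTime and qDebug are available; originalImage and getGLBits are your own class and method) is to time getGLBits() and the GL upload separately:

// Time getGLBits() and the texture upload separately to see where the 0.22 s goes.
QTime t;
t.start();
const GLvoid *bits = originalImage->getGLBits();
int msGetBits = t.elapsed();

t.restart();
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480,
                GL_RGBA, GL_UNSIGNED_BYTE, bits);
glFinish();   // make sure the upload has actually finished before reading the clock
int msUpload = t.elapsed();
qDebug() << "getGLBits:" << msGetBits << "ms  upload:" << msUpload << "ms";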

Another thing is that the texture is not stored in RGBA format on the GPU, so the driver converts it. You may want to try passing images in GL_BGR, GL_BGRA or GL_ABGR format.

If your source image is in GL_RGBA format, then at first glance it looks like you would need to convert it to GL_ABGR, but you could actually pass the GL_RGBA image and lie to the driver that it is really a GL_ABGR image.
Then, to use it in rendering, you would have to swizzle in a fragment shader:

vec4 color = texture2D(myTexture, texCoord).abgr;
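In case it helps, a rough sketch of that whole idea (untested; myTexture and texCoord are placeholder names, and GL_ABGR is exposed as GL_ABGR_EXT by the EXT_abgr extension). The upload keeps passing the same RGBA pointer but declares it as ABGR:

// Upload: same data pointer as before, declared as ABGR so the driver
// ideally copies it straight through instead of reordering channels.
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480,
                GL_ABGR_EXT, GL_UNSIGNED_BYTE,
                (GLvoid*)originalImage->getGLBits());

and the fragment shader puts the channels back in order when sampling:

uniform sampler2D myTexture;
varying vec2 texCoord;
void main()
{
    // Undo the pretend-ABGR ordering: .abgr yields R, G, B, A again.
    gl_FragColor = texture2D(myTexture, texCoord).abgr;
}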

getGLBits is a method of a class defined by me. It only returns a pointer to the image data, so the delay there is (or should be :-)) negligible.

OK, thanks a lot, I’ll look at that!

I would also recommend taking a look at Pixel Buffer Objects (PBOs), since this kind of memory transfer is exactly what they are for. Essentially, a PBO lets the OpenGL driver move the data without using the CPU as much, and perhaps while you are busy elsewhere. This page gives a good description of PBOs and has some examples.
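In case it's useful, here is a minimal sketch of how a PBO upload might look in this program (untested; assumes OpenGL 2.1 or the ARB_pixel_buffer_object extension, and pboId / frameSize are just illustrative names):

// One-time setup: a pixel-unpack PBO big enough for one 640x480 RGBA frame.
const int frameSize = 640 * 480 * 4;
GLuint pboId;
glGenBuffers(1, &pboId);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId);
glBufferData(GL_PIXEL_UNPACK_BUFFER, frameSize, NULL, GL_STREAM_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

// Per frame: copy the frame into the PBO, then update the texture from it.
// With a PBO bound, the last argument of glTexSubImage2D is a byte offset
// into the buffer, not a client-memory pointer.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId);
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
if (dst) {
    memcpy(dst, originalImage->getGLBits(), frameSize);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
}
glBindTexture(GL_TEXTURE_2D, originalTextureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480,
                GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

The point is that the copy from the buffer into the texture can then be done by the driver (often via DMA) while the CPU goes on with other work.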

I’ll take a look, thanks Todayman!