
View Full Version : using opengl to render video



voichinin
09-01-2011, 07:36 AM
I have some problems with my program that renders video stream using opengl and I hope someone here can help me.
My aim was to build a GUI for several video cameras which communicate with a desktop computer over the network. My problem was to find a way to render 3 streams (25 fps) of video frames with resolutions 1600×1200, 640×480 and 512×512. The required OS was Windows, and the target desktop computer has rather powerful hardware.
I implemented the GUI in Qt, which has OpenGL support. I decided to use OpenGL (maybe it was not the best option) for rendering.

The program has 2 threads:
1. for receiving images via the UDP protocol.
2. the main thread to handle the GUI and render video.
The 1st thread has higher priority and receives information from the network. In the GUI thread the render function is invoked 25 times a second to output the last received frame on the screen. Originally I rendered images in this way:

glPixelZoom( Xzoom, -Yzoom ); //zooming
glRasterPos2d(Xpos,WidgetHeight - Ypos);//screen position
glDrawPixels(Width,Height,GL_LUMINANCE, GL_UNSIGNED_SHORT, image_buffer); //draw image
glFlush();

As expected, this way of rendering turned out to be hardware dependent, though the target computer was powerful enough to render the 512×512 and 640×480 video streams at 25 fps. I didn't test it with the high resolution camera as it isn't ready yet. I found that frames had random black lines when rendering at full speed (25 fps), which did not appear when I set the timer period to 500 ms (2 fps). I think the reason is that the glDrawPixels call blocks the CPU for some time and causes packet loss. Frame transfer over the network never stops, and CPU delays cause the incoming buffer of the network adapter to overflow.
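One mitigation for that overflow, independent of the rendering path, is to enlarge the socket's kernel-side receive buffer so the receiving thread has more slack before datagrams are dropped. A minimal sketch, assuming POSIX sockets (the Winsock calls are analogous apart from headers and WSAStartup(); makeVideoSocket is a hypothetical name, not from my program):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Create a UDP socket and enlarge its kernel receive buffer, so that a
// short stall in the receiving thread does not immediately drop packets.
int makeVideoSocket(int bufferBytes)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return -1;
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   &bufferBytes, sizeof(bufferBytes)) != 0) {
        close(s);
        return -1;
    }
    return s;
}
```

Note the kernel may cap or double the requested size; getsockopt with SO_RCVBUF reports what was actually granted.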

I used a Pixel Buffer Object (http://www.songho.ca/opengl/gl_pbo.html) to avoid wasting CPU cycles on copying data to video card memory.
After applying the described technique, rendering is performed this way:
1. First I create two buffer objects for every video stream (I use double buffering):


glGenBuffersARB(2, pboIds);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[0]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, frame_size, 0, GL_STREAM_DRAW_ARB);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[1]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, frame_size, 0, GL_STREAM_DRAW_ARB);

2. Every frame is rendered in this way


glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[index]);
glPixelZoom( zoomX, -zoomY );
glRasterPos2d(Xpos,WidgetHeight - Ypos);
glDrawPixels(width,height,GL_LUMINANCE, GL_UNSIGNED_SHORT,0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[nextIndex]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, frame_size, 0, GL_STREAM_DRAW_ARB);
GLuint* ptr = (GLuint*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
if(ptr)
{
    // update data directly on the mapped buffer
    memcpy(ptr, Outbuffer, frame_size);
    glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB); // release pointer to mapped buffer
}
glFlush();

This code works, but in my case it gave no significant improvement :( .
Currently I avoid gaps in video frames by filling them with the content of the previous frame, though that is not a good way to do it. Since the screen refreshes 25 times a second, normally the user doesn't notice anything wrong. But he could if an object moves rapidly in front of the camera lens. I tried rendering textures (glTexSubImage2D) instead of pixel arrays (glDrawPixels), but that also gave no better result. I think the main problem is the time needed to copy image data from system memory to video card memory, not the way it is rendered afterwards. Now I'm confused :confused: and have no idea except to reimplement the rendering function without OpenGL, but that would take a lot of time. Maybe someone has an idea for improving my program.
Previously I spent a lot of time finding information to achieve this result, so someone may find this discussion helpful.

Ilian Dinev
09-01-2011, 11:36 AM
glPixelZoom( zoomX, -zoomY ); // software
glRasterPos2d(Xpos,WidgetHeight - Ypos); // software
glDrawPixels(width,height,GL_LUMINANCE, GL_UNSIGNED_SHORT,0); // software

Ouch.

Upload to a texture, draw a quad with that texture.
But try to follow the PBO demo
http://developer.download.nvidia.com/SDK/9.5/Samples/samples.html#TexturePerformancePBO
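Roughly, the texture path looks like this (a sketch only; assumes the texture was allocated once at the stream's size, and that GL_LUMINANCE16 storage plus immediate-mode quads are acceptable on your target):

```cpp
// one-time setup: allocate texture storage matching the stream format
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, NULL);

// per frame: upload the new pixels, then let the GPU do the scaling
glBindTexture(GL_TEXTURE_2D, texId);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_LUMINANCE, GL_UNSIGNED_SHORT, image_buffer);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);                      // scaling happens here, in hardware
glTexCoord2f(0, 0); glVertex2f(x,     y);
glTexCoord2f(1, 0); glVertex2f(x + w, y);
glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
glTexCoord2f(0, 1); glVertex2f(x,     y + h);
glEnd();
glDisable(GL_TEXTURE_2D);
```

The quad's size on screen replaces glPixelZoom, and the raster position goes away entirely.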

ZbuffeR
09-01-2011, 11:42 AM
They didn't tell you at school that UDP is bad :) ?


random black lines when rendering full speed (25fps), that do not appeared when I set a timer period for 500 ms (2fps)
To me this sounds more like a sync problem between your threads, or maybe you are using a single buffer instead of a double buffer on the GL side?

voichinin
09-02-2011, 03:01 AM
Hi, Ilian Dinev. You mean that glDrawPixels needs CPU cycles while glTexSubImage2D doesn't? Please provide a link, because I haven't found such information.

Upload to a texture, draw a quad with that texture.
As I mentioned above, I tried using textures and they have the same problems. They really seem to work very similarly. Textures are more powerful, but I don't need their functionality.

And I used PBO + textures as well. I followed the link, and the sample looks just like my code:


glBindTexture(GL_TEXTURE_2D, this->frames[i].textureId);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, this->frames[i].pboIds[this->frames[i].pboIndex]);
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,this->frames[i].p->width,
this->frames[i].p->height,
this->frames[i].format,this->frames[i].type,0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, this->frames[i].pboIds[nextIndex]);

glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, this->frames[i].bytesize(), 0, GL_STREAM_DRAW_ARB);
GLuint* ptr = (GLuint*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
if(ptr)
{
    // update data directly on the mapped buffer
    memcpy(ptr, Outbuffer, this->frames[i].bytesize());
    glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB); // release pointer to mapped buffer
}

voichinin
09-02-2011, 03:44 AM
Hi.

They didn't tell you at school that UDP is bad :) ?
:) Actually, it is simple, and it didn't depend on me.


To me this sounds more like sync problem between your threads, or maybe using single buffer instead of double buffer on GL side ?
I don't think there is a sync problem. Each packet contains one line of the image and the number of that line. The listening thread pastes the packet into the correct place in the frame. When line '0' is received, the next buffer starts filling. If the receiving function is not invoked in time, it may cause buffer overflow. When the render function is invoked, it outputs the last received frame to the screen. The interesting thing: a program that uses OpenCV (Windows functions) for rendering works even better, though that program has only one thread and should have longer blocking periods.

ZbuffeR
09-02-2011, 04:16 AM
And why do you need threads anyway ?

"When line '0' is received the next buffer starts filling" -> what about packet order, which is not guaranteed in UDP? You would switch to a new buffer, then still receive older lines? The same in the reverse order: you could start receiving lines for the next frame before receiving the new '0' line.

Without seeing the whole picture, it does seem you are overcomplicating things.
If you still want to use OpenGL (if you do not need any scaling or manipulation of the video image, GL does not sound very interesting), I would try a single-threaded system, and/or glTexSubImage2D for each line as it comes in from a UDP packet, with frame numbering if possible.
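Roughly like this (a sketch; assumes the texture already exists at full frame size with a matching format, and that each packet gives you the line number and its pixels):

```cpp
// as each UDP packet arrives: copy one line straight into the texture
glBindTexture(GL_TEXTURE_2D, texId);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, lineNo,             // x offset 0, y offset = line number
                width, 1,              // exactly one line of pixels
                GL_LUMINANCE, GL_UNSIGNED_SHORT, linePixels);
```

Then the render timer only draws the textured quad; no full-frame copy ever happens in one blocking call.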

voichinin
09-05-2011, 02:06 AM
what about packet order which is not guaranteed in UDP? You would switch to a new buffer, then still receive older lines?

Maybe you're right.


I would try a single threaded system, and/or with glTexSubImage2D for each line as it comes from udp packet, with frame numbering if possible

If I use 1 thread, I would waste CPU time while receiving packets. I think rendering each line independently would not be good, as at one moment the screen could contain lines from different frames.
But frame numbering is a good idea. Thank you.
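For what it's worth, a minimal sketch of what a frame-numbered reassembly buffer could look like (plain C++, no networking or GL; all identifiers are hypothetical, not my actual code). A line from an older frame is dropped, so a late packet can no longer corrupt the frame currently being filled:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Reassembles frames from UDP line packets tagged (frameNo, lineNo).
class FrameAssembler {
public:
    FrameAssembler(int width, int height)
        : width_(width), height_(height),
          pixels_(static_cast<size_t>(width) * height, 0),
          received_(height, false) {}

    // Returns true when this line completes the current frame.
    bool addLine(uint32_t frameNo, int lineNo, const uint16_t* line) {
        if (frameNo < currentFrame_) return false;        // stale packet: drop it
        if (frameNo > currentFrame_) startFrame(frameNo); // a newer frame arrived
        if (lineNo < 0 || lineNo >= height_ || received_[lineNo]) return false;
        std::memcpy(&pixels_[static_cast<size_t>(lineNo) * width_],
                    line, width_ * sizeof(uint16_t));
        received_[lineNo] = true;
        return ++linesFilled_ == height_;                 // frame complete?
    }

    const std::vector<uint16_t>& pixels() const { return pixels_; }

private:
    void startFrame(uint32_t frameNo) {
        currentFrame_ = frameNo;
        linesFilled_ = 0;
        std::fill(received_.begin(), received_.end(), false);
    }

    int width_, height_;
    uint32_t currentFrame_ = 0;
    int linesFilled_ = 0;
    std::vector<uint16_t> pixels_;
    std::vector<bool> received_;
};
```

Out-of-order lines within a frame are handled naturally, and switching buffers no longer depends on seeing line '0' first.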

PS. Actually I do need scaling. I am using glPixelZoom for that purpose. I want to use OpenGL because:
1. I don't know any other good way to render pixel arrays :(
2. I already have a program that somehow works :)

ZbuffeR
09-05-2011, 05:50 AM
glTexSubImage2D renders nothing unless you actually draw something with it.
I believe glPixelZoom is less efficient than drawing a textured quad.