I have some problems with my program that renders a video stream using OpenGL, and I hope someone here can help me.
My aim was to build a GUI for several video cameras that communicate with a desktop computer over the network. The challenge was to render 3 video streams (25 fps each) with resolutions of 1600×1200, 640×480, and 512×512. The target OS is Windows, and the desktop computer has fairly powerful hardware.
I implemented the GUI in Qt, which has OpenGL support, and decided to use OpenGL (maybe it was not the best option) for rendering.
The program has 2 threads:
1. a thread for receiving images via the UDP protocol;
2. the main thread, which handles the GUI and renders video.
The 1st thread has higher priority and receives data from the network. In the GUI thread, a render function is invoked 25 times a second to output the last received frame on the screen. Originally I rendered images this way:
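The handoff between the two threads can be sketched like this (a minimal sketch of what I described, assuming a mutex-protected "latest frame" slot; the `FrameSlot` name and the 16-bit luminance buffer are my own illustration, not the actual code):

```cpp
#include <cstdint>
#include <cstring>
#include <mutex>
#include <vector>

// Hypothetical single-slot handoff: the network thread overwrites the
// latest complete frame, the GUI thread copies it out 25 times a second.
class FrameSlot {
public:
    explicit FrameSlot(std::size_t pixels) : data_(pixels, 0), fresh_(false) {}

    // Called from the receiver thread for every complete frame.
    void publish(const std::uint16_t* src, std::size_t pixels) {
        std::lock_guard<std::mutex> lock(m_);
        std::memcpy(data_.data(), src, pixels * sizeof(std::uint16_t));
        fresh_ = true;
    }

    // Called from the render timer; returns false if no new frame arrived,
    // in which case the caller simply re-draws the previous frame.
    bool fetch(std::vector<std::uint16_t>& dst) {
        std::lock_guard<std::mutex> lock(m_);
        if (!fresh_) return false;
        dst = data_;
        fresh_ = false;
        return true;
    }

private:
    std::mutex m_;
    std::vector<std::uint16_t> data_;
    bool fresh_;
};
```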
glPixelZoom(Xzoom, -Yzoom); // zooming
glRasterPos2d(Xpos, WidgetHeight - Ypos); // screen position
glDrawPixels(Width, Height, GL_LUMINANCE, GL_UNSIGNED_SHORT, image_buffer); // draw image
glFlush();
As expected, this way of rendering turned out to be hardware dependent. The target computer was powerful enough to render the two 512×512 and 640×480 video streams at 25 fps; I haven't tested the high-resolution camera yet, as it isn't ready. I found that frames showed random black lines when rendering at full speed (25 fps), which did not appear when I set the timer period to 500 ms (2 fps). I think the reason is that glDrawPixels blocks the CPU for some time and causes packet loss: frame transfer over the network never stops, and CPU stalls overflow the incoming buffer of the network adapter.
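If the root cause really is the receive path losing packets while glDrawPixels blocks, one mitigation is to buffer a few complete frames between the threads instead of keeping only the last one, so a short render stall drops whole old frames rather than losing UDP packets mid-frame. A sketch under that assumption (the `FrameQueue` name and the queue depth are my own illustration):

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// Hypothetical bounded frame queue: the receiver thread keeps pushing while
// the render thread is stalled; only when the queue is full is the oldest
// complete frame discarded.
class FrameQueue {
public:
    explicit FrameQueue(std::size_t depth) : depth_(depth) {}

    // Receiver thread: store a complete frame, dropping the oldest if full.
    void push(std::vector<std::uint16_t> frame) {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.size() == depth_) q_.pop_front();
        q_.push_back(std::move(frame));
    }

    // Render thread: take the newest frame and discard any older backlog.
    bool pop_latest(std::vector<std::uint16_t>& out) {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return false;
        out = std::move(q_.back());
        q_.clear();
        return true;
    }

private:
    std::mutex m_;
    std::deque<std::vector<std::uint16_t>> q_;
    std::size_t depth_;
};
```

Enlarging the socket receive buffer (SO_RCVBUF) on the UDP socket can also buy headroom against short stalls, though it does not remove them.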
I switched to a Pixel Buffer Object (PBO) to avoid wasting CPU cycles on copying data to video card memory.
After applying this technique, rendering is performed as follows:
- First I create two buffer objects for every video stream to render (I use double buffering):
glGenBuffersARB(2, pboIds);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[0]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, frame_size, 0, GL_STREAM_DRAW_ARB);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[1]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, frame_size, 0, GL_STREAM_DRAW_ARB);
- Every frame is rendered this way:
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[index]);
glPixelZoom( zoomX, -zoomY );
glRasterPos2d(Xpos,WidgetHeight - Ypos);
glDrawPixels(width,height,GL_LUMINANCE, GL_UNSIGNED_SHORT,0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[nextIndex]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, frame_size, 0, GL_STREAM_DRAW_ARB);
GLuint* ptr = (GLuint*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
if(ptr)
{
    // update data directly in the mapped buffer
    memcpy(ptr, Outbuffer, frame_size);
    glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB); // release the mapped buffer
}
glFlush();
This code works, but for some reason in my case it gave no significant improvement.
Currently I hide gaps in the video by filling them with the content of the previous frame, though that is not a good solution. Since the screen refreshes 25 times a second, the user normally doesn't notice anything wrong, but he could if an object moves rapidly in front of the camera lens.

I also tried rendering textures (glTexSubImage2D) instead of pixel arrays (glDrawPixels), but it gave no better result. I think the main problem is the time needed to copy image data from system memory to video card memory, not the way it is rendered afterwards. Now I'm confused and have no idea except to reimplement the rendering function without OpenGL, but that would take a lot of time. Maybe someone has an idea for improving my program.
I spent a lot of time searching for information to get this far, so someone may find this discussion helpful.