Live video and texture2D

Hi,

My application uploads live video (720x576 PAL, interlaced) to a texture which is applied to a screen-sized quad. I used texture2D instead of texture2DRect because I figured that with non-power-of-two support available, why bother.

It worked quite well and I thought everything was OK. However, when the live video was a news channel with the usual right-to-left scrolling text at the bottom, the scrolling movement was definitely less smooth than the original.

Is it possible that by using texture coordinates between 0 and 1 there is a rounding error when accessing the texture? If you have had a similar experience, do you think I can do anything to avoid it with a texture2D, or is the only solution to use the video as a texture2DRect with texture coordinates based on its pixel dimensions?

The reason I am asking whether anyone has had this problem before is that changing from texture2D to texture2DRect is not at all trivial. A lot of work would be involved, and it would be very helpful to know whether it is worth the effort.
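
The shader-side difference alone would look something like this (a minimal sketch, assuming a 720x576 source and a fragment shader with a varying uv running from 0 to 1 across the quad):

#extension GL_ARB_texture_rectangle : enable

// with texture2D the coordinates are normalized:
//    gl_FragColor = texture2D(videoTex, uv);
// with texture2DRect they are in pixels instead:
uniform sampler2DRect videoTexRect;
varying vec2 uv;   // 0..1 across the quad
void main()
{
   // scale the normalized coordinates up to pixel coordinates
   gl_FragColor = texture2DRect(videoTexRect, uv * vec2(720.0, 576.0));
}

And on the application side every texture target and coordinate would have to change as well, which is where most of the work is.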

Thanks.

Man, just out of curiosity, how are you streaming video of that size? I've worked with FFmpeg and the conversion from yuv420p to RGB is very slow. Are you using a special library?

I'm using GL_TEXTURE_RECTANGLE_ARB… but I have never tried to view a news video. :-)
What do you mean by "not smooth"? Is it slower?
Do you also stream the audio? Is the audio synchronized with the video?

Did you use a standard texture or a PBO?
For a PBO tutorial look here:
http://developer.download.nvidia.com/SDK/9.5/Samples/samples.html#TexturePerformancePBO

@RGHP: My library (with FFmpeg) runs smoothly at 800x600 (on my quad-core) and I still have a lot of optimization to do (I have to add PBO support).
Actually I'm uploading the pictures with a simple glTexSubImage2D:


void VideoTexture::active()
{
   glBindTexture(GL_TEXTURE_RECTANGLE_ARB, m_textureName);
   if(m_changed)
   {
      // the decoder thread writes m_buffer, so hold the lock while uploading
      pthread_mutex_lock(&m_bufferMutex);
      // copy the buffer into the texture
      glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, m_width, m_height,
                      GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*)m_buffer);
      m_changed = false;
      pthread_mutex_unlock(&m_bufferMutex);
   }
}

In the decode function (on another thread) I simply decode the frame into m_buffer and set the m_changed flag to true:

// Decode the next video frame
avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size);
// Did we get a complete video frame?
if(frameFinished)
{
   pthread_mutex_lock(&instance->m_bufferMutex);
   // Convert the image from its native format to RGB;
   // data and stride describe the destination buffer (instance->m_buffer)
   sws_scale(instance->m_imageConvert, pFrame->data, pFrame->linesize,
             0, instance->m_height, data, stride);
   instance->m_changed = true;
   pthread_mutex_unlock(&instance->m_bufferMutex);
}

I also want to try decoding the image with a shader, i.e. doing the YUV-to-RGB conversion on the GPU.
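
What I have in mind is roughly the following (just a sketch, assuming the three yuv420p planes are uploaded as separate GL_LUMINANCE textures; the BT.601 constants may need tweaking):

uniform sampler2D texY;   // full-resolution luma plane
uniform sampler2D texU;   // chroma planes (half resolution, but
uniform sampler2D texV;   // normalized coordinates hide that)
varying vec2 uv;
void main()
{
   // fetch YUV and shift chroma into the -0.5..0.5 range
   float y = texture2D(texY, uv).r;
   float u = texture2D(texU, uv).r - 0.5;
   float v = texture2D(texV, uv).r - 0.5;
   // BT.601 YUV -> RGB conversion
   gl_FragColor = vec4(y + 1.402 * v,
                       y - 0.344 * u - 0.714 * v,
                       y + 1.772 * u,
                       1.0);
}

That way the expensive per-pixel conversion moves to the GPU and sws_scale would only have to copy planes, if it is needed at all.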

I also tried with TEXTURE_2D and I didn't find any difference (apart from the UV coordinates).

When I finish, if it works well, I can post it in an open-source repository. :slight_smile:

This is live video, not compressed. I upload it as 24-bit RGB using alternating PBOs.
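
For reference, the alternating-PBO upload is essentially the following (a simplified sketch of the technique, not my exact code; the target texture is assumed to be bound already):

// two PBOs used in ping-pong fashion: while one is being uploaded
// to the texture, the new frame is copied into the other
GLuint m_pbo[2];                          // created with glGenBuffersARB
int    m_frame = 0;
const int FRAME_BYTES = 720 * 576 * 3;    // 24-bit RGB

void uploadFrame(const unsigned char* src)
{
   int fill   = m_frame & 1;              // PBO we fill this frame
   int upload = 1 - fill;                 // PBO filled last frame

   // start the texture update from last frame's PBO (asynchronous;
   // the pointer argument is an offset into the bound PBO)
   glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, m_pbo[upload]);
   glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 720, 576,
                   GL_RGB, GL_UNSIGNED_BYTE, 0);

   // meanwhile copy the new frame into the other PBO
   glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, m_pbo[fill]);
   glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, FRAME_BYTES, 0, GL_STREAM_DRAW_ARB);
   unsigned char* dst = (unsigned char*)
      glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
   if(dst)
   {
      memcpy(dst, src, FRAME_BYTES);
      glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);
   }
   glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
   ++m_frame;
}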

Not smooth means there are very small jumps in the playback (unnoticeable in almost anything but the continuous scrolling I described). This has absolutely nothing to do with the upload, which works perfectly.

These jumps could be visible because of a refresh-rate difference. Your monitor is 60 Hz but the source is 25i. If you render at full speed (60 FPS), each source frame covers 60/25 = 2.4 refreshes on average, so some frames are shown three times and the others only twice, which reads as judder on a continuous scroll.

Do you use any filtering? With linear filtering, for example, the interlaced image gets filtered and you can see ghosts on screen, very noticeable on horizontal scrolls. Also, any downscaling of an interlaced image will produce artifacts.

BTW… how do you grab the live video? Using some TV card? Any DirectShow stuff involved? If yes, did you fix the sync source?

Hi,

No, the output is 50 Hz; I do not sync to the VGA monitor at all. I read back from the GPU at 50 fps (twice the required speed), interlace, then transfer the finished image back to the video card. I use various cards, all of them broadcast-quality capture boards, e.g. the Decklink from Blackmagic. The whole rendering is synced to this video card and not to the GPU.

I tried every filtering method. I cannot see any difference between nearest and linear, at least not when the video is applied to a screen-sized quad (which I think is logical: neither minification nor magnification takes place).

Once again, I am quite certain there is no sync problem. Everything else I render with OpenGL and then read back looks perfectly smooth in motion.

(BTW, DirectShow: I asked for help about AVI playback using DirectShow a few months ago. I remember that I got a very good tip from you then and it is working perfectly. Thanks once again.)

the output is 50Hz, I do not sync at all to the VGA monitor

You mean your VGA monitor is at 50 Hz?
If not, it should be.
And you should vsync too.

Anything else will show slight temporal jerkiness, especially visible on scrolling.

Hmm, I think I did not describe the situation very well. I also have an output on the VGA, but it is only for preview; I do not even care how it looks. The real output appears on the video card, and the rendering is synced to that.

On this video card, playback is fine and smooth. Any graphics the application itself creates and renders using OpenGL, even continuously scrolling elements, move perfectly. If I display the live video being captured from the input of the card, upload it as a texture and apply it to a full-screen quad, I should see the video input on the output without any change in any of the pixels. But I don't. The change is hardly noticeable, but it is definitely there, because the scrolling text of a news channel shows it. I suspect it is due to a rounding error caused by the normalized texture coordinates, and I wanted to know whether there is a remedy or whether I can expect texture2DRect to solve the problem.
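
To make the suspicion concrete: for an exact 1:1 mapping, screen pixel (i, j) has to sample the texel center at ((i + 0.5)/W, (j + 0.5)/H), and my worry is that something in this chain does not land exactly on those centers. The setup I mean is roughly this (a sketch, assuming a W x H viewport with glOrtho(0, W, 0, H, -1, 1)):

// full-screen quad with normalized texture coordinates;
// rasterization generates fragments at pixel centers (i + 0.5, j + 0.5),
// so the interpolated coordinates should hit texel centers exactly
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,     0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f((float)W, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f((float)W, (float)H);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,     (float)H);
glEnd();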

> rounding error because of normalized texture coordinates
Very unlikely.
Generate a 1-pixel-wide black/white checkerboard on the output; you will see immediately if there is a coordinate problem.
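
Something like this is enough to build the test pattern (a quick sketch, assuming a 720x576 RGB texture is already allocated):

// fill a 720x576 RGB buffer with a 1-pixel black/white checkerboard
const int W = 720, H = 576;
std::vector<unsigned char> checker(W * H * 3);
for(int y = 0; y < H; ++y)
{
   for(int x = 0; x < W; ++x)
   {
      unsigned char c = ((x + y) & 1) ? 255 : 0;
      unsigned char* p = &checker[(y * W + x) * 3];
      p[0] = p[1] = p[2] = c;
   }
}
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                GL_RGB, GL_UNSIGNED_BYTE, &checker[0]);
// any sampling offset or filtering shows up immediately
// as grey pixels instead of pure black and white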

Screenshot please.

OK, I will try the checkerboard, although it will be impossible to check visually. Graphics like that in an interlaced video signal (i.e. on a TV) will look like one big flicker.

Thanks for the tip and for making me suspect that I was wrong in the first place. I tried the checkerboard, and it looked perfect. In the end I found out that there were not one but two problems:

An ordinary bug that made me think it was a texture coordinate issue, and the real culprit: a shader I made to improve hardware antialiasing.