Part of the Khronos Group
OpenGL.org


Thread: Streaming video to OpenGl texture

  1. #11
    Junior Member Regular Contributor
    Join Date
    Feb 2007
    Location
    Hungary
    Posts
    168

    Re: Streaming video to OpenGl texture

    Sorry, another quick question:

    I read somewhere that if I have a PBO and a mapped buffer, I can copy to it in another thread. Is this correct?

    Thanks.

  2. #12
    Advanced Member Frequent Contributor yooyo's Avatar
    Join Date
    Apr 2003
    Location
    Belgrade, Serbia
    Posts
    872

    Re: Streaming video to OpenGl texture

Yes... create a PBO pool and map all the PBO buffers. From the decoder thread, pick one of the mapped pointers, copy the decoded frame into it, and notify the pool. In the render loop, on the next frame, check whether any PBO is full; if so, unmap it and upload the texture data. After that, mark the PBO as free and map it again on the next frame.

    Using this technique, you can stream several video feeds at the same time.
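    A minimal sketch of the pool hand-off described above, assuming a producer (decoder) thread and a consumer (GL) thread. The GL calls themselves (glMapBuffer, glUnmapBuffer, glTexSubImage2D) are only referenced in comments, since they must run on the GL context thread; the `Pbo` and `PboPool` names are hypothetical, and a plain byte vector stands in for the mapped pointer so that only the threading pattern is shown:

    ```cpp
    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <queue>
    #include <vector>

    // Stand-in for a mapped PBO: in real code the data pointer would come
    // from glMapBuffer(), called on the GL context thread.
    struct Pbo {
        std::vector<unsigned char> data;
    };

    // Pool of mapped PBOs shared between the decoder thread (producer)
    // and the GL/render thread (consumer).
    class PboPool {
    public:
        PboPool(std::size_t count, std::size_t frameBytes) {
            for (std::size_t i = 0; i < count; ++i)
                free_.push(new Pbo{std::vector<unsigned char>(frameBytes)});
        }
        ~PboPool() {
            while (!free_.empty()) { delete free_.front(); free_.pop(); }
            while (!full_.empty()) { delete full_.front(); full_.pop(); }
        }

        // Decoder thread: grab a free mapped buffer (blocks until one exists).
        Pbo* acquireFree() {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !free_.empty(); });
            Pbo* p = free_.front(); free_.pop();
            return p;
        }

        // Decoder thread: frame copied, notify the pool that it is full.
        void submitFull(Pbo* p) {
            { std::lock_guard<std::mutex> lk(m_); full_.push(p); }
            cv_.notify_all();
        }

        // Render thread: take a full buffer if one is ready (non-blocking,
        // so the render loop never stalls waiting on the decoder).
        Pbo* tryTakeFull() {
            std::lock_guard<std::mutex> lk(m_);
            if (full_.empty()) return nullptr;
            Pbo* p = full_.front(); full_.pop();
            return p;
        }

        // Render thread: after glUnmapBuffer + glTexSubImage2D, recycle
        // the buffer (and re-map it) so the decoder can fill it again.
        void release(Pbo* p) {
            { std::lock_guard<std::mutex> lk(m_); free_.push(p); }
            cv_.notify_all();
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<Pbo*> free_, full_;
    };
    ```

    All map/unmap calls stay on the GL thread; the mutex only guards the queue hand-off, so the decoder never touches a pointer after the GL thread has unmapped it.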

  3. #13
    Senior Member OpenGL Guru knackered's Avatar
    Join Date
    Aug 2001
    Location
    UK
    Posts
    2,833

    Re: Streaming video to OpenGl texture

    Yes, mapping a buffer gives you a memory pointer and you can use it in whatever thread you like. The only restrictions are that you should only map/unmap it in the GL context thread, and that you use plain old mutexes to ensure your two threads don't interfere with each other (once you unmap it in the GL thread, that pointer becomes invalid).

  4. #14
    Junior Member Regular Contributor
    Join Date
    Feb 2007
    Location
    Hungary
    Posts
    168

    Re: Streaming video to OpenGl texture

    I have not implemented the texture renderer yet, but I thought moving the upload to the PBO into another thread was too interesting not to try first.

    It works very well. I could not measure it exactly, but even watching the task manager, it visibly lightened the load on the CPU core that handles the GL thread.

    But there is one thing I cannot explain. It could be a GLSL problem and off topic here, but since you were very kind in giving me excellent advice, I thought I would ask here:

    I recently rewrote my renderer as a shader. Each of the up to eight lights can be per-vertex or per-pixel and any of the usual three types: directional, point, or spot.

    It is not surprising that a per-pixel spot light is the slowest to render. However, I cannot explain why, with only one spot light and maybe 80,000 polygons, changing the light from per-vertex to per-pixel adds significantly (~20-25%) to the load of the CPU core handling the GL thread.

    What has the CPU got to do with a more complicated fragment shader?
