-
Junior Member
Regular Contributor
Re: Streaming video to OpenGL texture
Sorry, another quick question:
I read somewhere that if I have a PBO and a mapped buffer, I can copy to it in another thread. Is this correct?
Thanks.
-
Advanced Member
Frequent Contributor
Re: Streaming video to OpenGL texture
yes... create a PBO pool and map all of the PBO buffers. From the decoder thread, select one of the mapped pointers, copy the decoded frame into it, and notify the PBO pool. In the render loop, on the next frame, check whether any PBO is full; if so, unmap it and upload the texture data. After that, mark the PBO as free and map it again on the next frame.
Using this technique, you can stream several video feeds at the same time.
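Something like this, as a rough sketch of the pool (all names, state labels, sizes, and formats here are placeholders; it assumes a GL context current on the render thread and a loader such as GLEW already initialized):

// A rough sketch of the PBO pool described above. All names, counts,
// and formats are illustrative placeholders.
#include <GL/glew.h>
#include <cstring>
#include <mutex>
#include <vector>

enum class SlotState { Mapped, Writing, Full };

struct PboSlot {
    GLuint    buf   = 0;
    void*     ptr   = nullptr;        // only valid while the PBO is mapped
    SlotState state = SlotState::Mapped;
};

struct PboPool {
    std::vector<PboSlot> slots;
    std::mutex mtx;                   // guards the slot state transitions
    size_t frameBytes = 0;

    // GL thread: create and map every PBO up front.
    void init(size_t count, size_t bytes) {
        frameBytes = bytes;
        slots.resize(count);
        for (auto& s : slots) {
            glGenBuffers(1, &s.buf);
            glBindBuffer(GL_PIXEL_UNPACK_BUFFER, s.buf);
            glBufferData(GL_PIXEL_UNPACK_BUFFER, bytes, nullptr, GL_STREAM_DRAW);
            s.ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
        }
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }

    // Decoder thread: claim a mapped slot, copy the frame, mark it full.
    // No GL calls here -- only the plain memory pointer is used.
    bool submitFrame(const void* frame) {
        PboSlot* slot = nullptr;
        {
            std::lock_guard<std::mutex> lock(mtx);
            for (auto& s : slots) {
                if (s.state == SlotState::Mapped) {
                    s.state = SlotState::Writing;
                    slot = &s;
                    break;
                }
            }
        }
        if (!slot) return false;      // pool exhausted: drop this frame
        std::memcpy(slot->ptr, frame, frameBytes);
        std::lock_guard<std::mutex> lock(mtx);
        slot->state = SlotState::Full;
        return true;
    }

    // GL thread, once per frame: unmap one full slot, upload it to the
    // texture, then re-map it so the decoder can reuse it.
    void uploadFull(GLuint tex, GLsizei w, GLsizei h) {
        std::lock_guard<std::mutex> lock(mtx);
        for (auto& s : slots) {
            if (s.state != SlotState::Full) continue;
            glBindBuffer(GL_PIXEL_UNPACK_BUFFER, s.buf);
            glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);      // s.ptr is now invalid
            glBindTexture(GL_TEXTURE_2D, tex);
            // With a PBO bound, the last argument is an offset, not a pointer.
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                            GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            s.ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
            s.state = SlotState::Mapped;
            break;                                      // one upload per frame
        }
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }
};

One note on the re-map: mapping again immediately after the upload can stall until the transfer finishes, so orphaning the buffer with glBufferData(..., nullptr, GL_STREAM_DRAW) before re-mapping is a common mitigation.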
-
Senior Member
OpenGL Guru
Re: Streaming video to OpenGL texture
yes, mapping a buffer gives you a memory pointer, and you can use it in whatever thread you like. The only restriction is that you should only map/unmap it in the GL context thread, and use plain old mutexes to ensure your two threads don't interfere with each other (once you unmap it in the GL thread, that pointer becomes invalid).
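In a bare-bones form (the globals and function names are just for illustration), that rule looks like this:

// Map/unmap only on the GL thread; hand the pointer to the worker under
// a mutex so the unmap can never pull it away mid-copy.
#include <GL/glew.h>
#include <cstring>
#include <mutex>

std::mutex g_mtx;
void* g_mappedPtr = nullptr;          // shared between GL thread and worker

// GL thread: map the PBO and publish the pointer.
void glThreadMap(GLuint pbo) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    void* p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    std::lock_guard<std::mutex> lock(g_mtx);
    g_mappedPtr = p;
}

// Worker thread: copy a frame if the PBO is currently mapped. Holding the
// mutex for the whole copy keeps the unmap from invalidating the pointer
// underneath us.
bool workerCopy(const void* frame, size_t bytes) {
    std::lock_guard<std::mutex> lock(g_mtx);
    if (!g_mappedPtr) return false;   // not mapped right now: drop the frame
    std::memcpy(g_mappedPtr, frame, bytes);
    return true;
}

// GL thread: retract the pointer first, then unmap.
void glThreadUnmap(GLuint pbo) {
    {
        std::lock_guard<std::mutex> lock(g_mtx);
        g_mappedPtr = nullptr;        // worker can no longer see it
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);  // the old pointer is now invalid
}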
-
Junior Member
Regular Contributor
Re: Streaming video to OpenGL texture
I have not yet implemented the texture renderer, but moving the upload to the PBO into another thread seemed too interesting not to try first.
It works very well. I could not measure it exactly, but even watching the task manager, it visibly lightened the load on the CPU core that handles the GL thread.
But there is one thing I cannot explain. It may be a GLSL problem and off topic here, but since you have been so generous with excellent advice, I thought I would ask it here:
I recently rewrote my renderer as a shader. Every light (out of a possible 8) can be per-vertex or per-pixel, and any of the usual three types: directional, point, or spot.
It is not surprising that a per-pixel spot light is the slowest to render. What I cannot explain is why, with only one spot light and maybe 80,000 polygons, changing that light from per-vertex to per-pixel adds significantly (~20-25%) to the load of the CPU core handling the GL thread.
What has the CPU got to do with a more complicated fragment shader?