I'm an avid graphics programmer.
I have written a simple screensaver-style application: a cube textured on all six faces, with rotational and translational motion applied so it moves around inside the screen boundaries, including a boundary collision check.
Now I'm trying to render a video on all six faces (the same video) instead of a static texture.
How do I accomplish this?
I searched the web, but couldn't find any tutorial on it.
Can you guys help me out?
It would be great to get this working.
Waiting for your response.
You can use the library of your choice to decode the video, then upload each frame's data to video memory through a PBO (pixel buffer object), which is the most elegant and performant solution IMO. Then you just map the texture to the cube faces as you already do.
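A minimal sketch of that per-frame upload path, assuming a current OpenGL context with the GL 1.5+ entry points available (e.g. via GLEW or GLAD), RGBA8 frames, and a hypothetical `decode_next_frame()` hook standing in for whatever decoding library you pick:

```c
#include <GL/glew.h>   /* assumed loader for glBindBuffer, glMapBuffer, ... */

/* Hypothetical decoder hook: writes one RGBA frame into dst. */
extern void decode_next_frame(unsigned char *dst, int width, int height);

void upload_frame(GLuint tex, GLuint pbo, int width, int height)
{
    GLsizeiptr size = (GLsizeiptr)width * height * 4; /* RGBA8 */

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    /* Orphan the old storage so the driver need not stall on the GPU. */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);

    unsigned char *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (ptr) {
        decode_next_frame(ptr, width, height); /* fill sequentially */
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }

    /* With a PBO bound, the data pointer is an offset into the buffer. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```

Since the same texture is bound for all six faces, nothing else in your existing cube-drawing code needs to change; you just call this once per decoded frame.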
That's because it's simple; you use the same commands you use to fill any buffer object. You can use glBufferSubData, or you can map the buffer and have the decoder write directly into the mapped pointer. I wouldn't suggest the latter unless you know for a fact that the decompression routine will not write to the buffer in a random-access pattern (you should generally assume that mapped pointers should be filled sequentially).
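The safer glBufferSubData variant might look like this: decode into a CPU-side staging buffer first, then copy the whole frame into the PBO in one call. (Same assumptions as before: a current GL context with a loader, and a hypothetical `decode_next_frame()` decoder hook.)

```c
#include <GL/glew.h>   /* assumed loader for the buffer-object entry points */
#include <stdlib.h>

/* Hypothetical decoder hook: writes one RGBA frame into dst. */
extern void decode_next_frame(unsigned char *dst, int width, int height);

/* Safe even if the decoder writes non-sequentially: it only ever
 * touches ordinary CPU memory, never a mapped driver pointer. */
void upload_frame_staged(GLuint pbo, int width, int height)
{
    size_t size = (size_t)width * height * 4; /* RGBA8 */
    unsigned char *staging = malloc(size);
    if (!staging)
        return;

    decode_next_frame(staging, width, height);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, (GLsizeiptr)size, staging);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    free(staging);
}
```

The trade-off is one extra CPU-side copy per frame; in exchange, the driver never exposes a mapped pointer to the decoder, so a decoder that seeks around in the frame can't trigger the slow paths (or undefined behaviour) that random writes through a mapped pointer can.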