Hello to you all,
Like a lot of people, I'm a complete newbie in OpenGL, and I need your help to point me in the right direction. I'm working on software that combines multiple streams (video or images) and displays them on screen.
Until now I used OpenCL for all the image processing (scaling, filtering, etc.) and composited all the images in a final kernel that takes two images, combines them, and feeds the result back as an input in a loop.
Basically, say my kernel name is "overlap", and I call it n times:
1. Image1 + Image2 = Result
2. Result + Image3 = Result
3. Result + Image4 = Result
…
n. Result + Imagen = Result
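For reference, that accumulation is just a left fold over the list of images. A minimal sketch in Python (the `overlap` body is a hypothetical stand-in for the OpenCL kernel, blending source-over on premultiplied-alpha pixels; the names and pixel format are assumptions):

```python
from functools import reduce

def overlap(base, layer):
    """Toy stand-in for the OpenCL "overlap" kernel: images are lists
    of premultiplied-alpha (value, alpha) pixels, blended source-over."""
    return [(lv + bv * (1.0 - la), la + ba * (1.0 - la))
            for (bv, ba), (lv, la) in zip(base, layer)]

images = [
    [(0.8, 1.0), (0.2, 1.0)],  # Image1: opaque background
    [(0.5, 0.5), (0.0, 0.0)],  # Image2: half-transparent in pixel 0 only
    [(0.0, 0.0), (0.9, 1.0)],  # Image3: opaque in pixel 1 only
]

# Image1+Image2=Result, Result+Image3=Result, ... as one fold:
result = reduce(overlap, images)
```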
Now I want to migrate all the code into 3D space to add some effects, which is why I started using OpenGL. So far I have learned how to combine two images using a simple vertex + fragment shader pair.
My question is: how can I iterate this process and use the output of the fragment shader as an input to the same process? What I mean is rendering the output into a texture, assigning it to a quad, and using it as the input for the next iteration.
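If it helps to see the control flow: the common OpenGL answer to this is "ping-pong" rendering. You create two framebuffer objects (FBOs), each with a color texture attached; each pass binds one texture as the shader input, draws a full-screen quad into the other FBO, and then the two swap roles. A sketch of the pattern (Python, with the actual GL calls reduced to comments; this is the shape of the loop, not a full GL program):

```python
def composite_pingpong(images, blend):
    """Ping-pong pattern: 'blend' plays the role of one fragment-shader
    pass over a full-screen quad; 'targets' stand in for the two FBO
    color textures that alternate between draw target and input."""
    src = images[0]                 # first pass samples Image1 directly
    targets = [None, None]
    write = 0                       # index of the FBO we draw into
    for layer in images[1:]:
        # glBindFramebuffer(... targets[write] ...); bind src and layer
        # as textures; draw the quad -- modeled here as one blend() call:
        targets[write] = blend(src, layer)
        src = targets[write]        # this pass's output is the next input
        write = 1 - write           # swap the draw/read targets
    return src
```

With `blend` set to the same two-image combine you already have, this reproduces the OpenCL loop; the swap exists because sampling a texture while rendering into it is undefined in GL, so you always read from one target and write to the other.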
Alternatively, could I have vertex + fragment shaders that take a variable number of images as input? I would actually prefer that over the approach I'm using right now.
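For a modest, bounded layer count, that variant is possible too: a single fragment shader can loop over an array of samplers, with a uniform telling it how many are active. A sketch (the names, `MAX_LAYERS`, and the blend are assumptions; note that `MAX_LAYERS` is capped by the hardware texture-unit limit, and that before GLSL 4.00 sampler arrays may only be indexed with constant expressions, hence the 4.00 version directive here):

```glsl
#version 400 core
#define MAX_LAYERS 8            // must stay under the texture-unit limit

uniform sampler2D u_layers[MAX_LAYERS];
uniform int u_layerCount;       // how many layers are active this frame

in vec2 v_uv;
out vec4 fragColor;

void main() {
    vec4 acc = texture(u_layers[0], v_uv);
    for (int i = 1; i < u_layerCount; ++i) {
        vec4 src = texture(u_layers[i], v_uv);
        // source-over blend, same as a two-image "overlap" pass
        acc = src + acc * (1.0 - src.a);
    }
    fragColor = acc;
}
```

Adding or deleting a layer then just means rebinding the textures and updating `u_layerCount`; for an unbounded layer count, the ping-pong approach remains the fallback.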
My end goal is to have the option to dynamically add or delete a "layer".
What am I missing? How can this be done in OpenGL?