I posted this topic (slightly changed) on gpgpu.org, but people there referred me to opengl.org to post again, so maybe you can help me.
I'm relatively new to computer graphics programming, but I'm trying to learn fast and need some information. Maybe you can help me or give me a hint on my particular problem:
I'm using OpenGL with Cg v1.4, and I want to use one vertex and one fragment shader.
That seems quite normal, but here comes the interesting part: I want to render my scene to texture (no screen output is needed), and I want to render it from 8 destination cameras.
In the end that means I need 8 textures that the scene is rendered into, and I need both color and depth information.
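For the color-plus-depth part, one common trick on MRT-capable hardware (ARB_draw_buffers) is to write the fragment's depth into a second color attachment alongside the normal color output; the GL-side FBO setup then gives you one color texture and one depth texture per camera. A rough Cg sketch of such a fragment program, assuming an MRT-capable profile (the entry-point name `main_fp` is just a placeholder):

```cg
// Hypothetical sketch: write shaded color to attachment 0 and the
// window-space depth (WPOS.z) to attachment 1, so both end up in
// textures after the render-to-texture pass.
struct FragOut {
    float4 color : COLOR0;   // first color attachment (color texture)
    float4 depth : COLOR1;   // second color attachment (depth stored as color)
};

FragOut main_fp(float4 color : COLOR,
                float4 wpos  : WPOS)   // window position; .z is the depth value
{
    FragOut o;
    o.color = color;
    o.depth = float4(wpos.z, wpos.z, wpos.z, 1.0);
    return o;
}
```

Alternatively, if your hardware supports depth textures as FBO attachments, you can attach a depth texture directly and skip the second color target entirely.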
The next point is performance: I don't want to do 8 truly sequential rendering passes, going back to the CPU each time to read in the vertices. It would be much nicer if I could feed the modelview-projection matrices for all 8 cameras into one vertex shader and pass all the information in parallel to the fragment shader, so it can process all 8 cameras at once.
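As far as I understand, a vertex program of that generation can only emit one clip-space position per vertex, so a single VS/FS pair can't truly rasterize into 8 targets at once; but the expensive part (re-reading vertices on the CPU) can still be avoided by keeping the geometry in a VBO and uploading all 8 matrices once as a uniform array, switching only a small index uniform between passes. A rough Cg sketch under those assumptions (`main_vp`, `mvp`, and `camIndex` are hypothetical names):

```cg
// Hypothetical sketch: all 8 camera matrices live in one uniform array;
// a per-pass index picks the active camera, so no vertex data travels
// back to the CPU between the 8 render-to-texture passes.
void main_vp(float4 position : POSITION,
             float4 color    : COLOR,
             uniform float4x4 mvp[8],   // one modelview-projection per camera
             uniform float camIndex,    // set to 0..7 before each pass
             out float4 oPos   : POSITION,
             out float4 oColor : COLOR)
{
    oPos   = mul(mvp[(int)camIndex], position);  // transform for the active camera
    oColor = color;
}
```

Between passes you would then only bind the next FBO/texture and update `camIndex`, which is cheap compared to resubmitting geometry.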
What do you think would be best for me? Do you have any hints?
Thanks a lot!