Rendering to more than one destination camera into textures

Hello,

I posted this topic, slightly changed, on gpgpu.org, but people there referred me to opengl.org to post again. So maybe you can help me. :slight_smile:

I'm relatively new to computer graphics programming, but I'm trying to learn fast and need some information. Maybe you can help me or give me a hint on my particular issue:

I'm using OpenGL with Cg v1.4, and I want to use one vertex and one fragment shader.
That seems quite normal, but now comes the interesting part: I want to render my scene to texture (no screen output is needed), and I want to render it for 8 destination cameras.
That means that in the end I need 8 textures with the scene rendered into them, and I need both color and depth information.
The next point is performance, so I don't want to use 8 truly sequential rendering passes, always going back to the CPU to read in the vertices. It would be nicer if I could feed the model-view-projection matrices for all 8 cameras into one vertex shader and pass all the information in parallel to the fragment shader, to be processed for all 8 cameras.
What do you think is best for me? Do you have any hints?

Thanks a lot!
chris

You could look into the GL_ARB_draw_buffers extension.

I have no idea how or if it works with Cg; try GLSL instead. However, you will have to do multiple passes, because you need 16 render targets (color plus depth for 8 views), but you can minimize the number of passes.
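Roughly like this (untested sketch: it assumes GL_EXT_framebuffer_object is also available, the extension entry points are loaded via GLEW or similar, and `colorTex[0..1]` are placeholder names for textures you created yourself):

```c
#include <GL/glew.h>

/* Bind two color textures as simultaneous render targets via
 * GL_EXT_framebuffer_object + GL_ARB_draw_buffers (no error checks). */
void bindTwoTargets(GLuint fbo, const GLuint colorTex[2])
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, colorTex[0], 0);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                              GL_TEXTURE_2D, colorTex[1], 0);

    /* Route the shader's COLOR0/COLOR1 outputs to the two attachments. */
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0_EXT,
                             GL_COLOR_ATTACHMENT1_EXT };
    glDrawBuffersARB(2, bufs);
}
```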

Hi!

Thanks for the fast reply.

I rendered my scene into texture memory and then displayed the texture on a simple quad, to test that it was actually rendered.
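The test looked roughly like this (simplified sketch; `tex`, `W` and `H` are placeholders for my actual texture and viewport size):

```c
/* Copy the just-rendered frame into 'tex', then show it on a quad. */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, W, H);

glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);
```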

I found out that in the fragment shader I can render into COLOR0, COLOR1, COLOR2 and COLOR3 at the same time, but in the vertex shader I can only use the COLOR0 and COLOR1 resources.
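In Cg runtime terms, that fragment shader part looks roughly like this (untested sketch; it assumes an MRT-capable profile such as CG_PROFILE_FP40, since plain arbfp1 only has a single color output, and `cgContext` is created elsewhere):

```c
#include <Cg/cgGL.h>

/* A Cg fragment program writing two color targets at once via an
 * output struct with COLOR0/COLOR1 semantics. */
static const char *fpSrc =
    "struct FragOut {                       \n"
    "    float4 col0 : COLOR0;              \n"
    "    float4 col1 : COLOR1;              \n"
    "};                                     \n"
    "FragOut main(float4 c : COLOR)         \n"
    "{                                      \n"
    "    FragOut o;                         \n"
    "    o.col0 = c;        // target 0     \n"
    "    o.col1 = 1.0 - c;  // target 1     \n"
    "    return o;                          \n"
    "}                                      \n";

/* fp40 (or another MRT-capable profile) is required for more than
 * one color output. */
CGprogram fp = cgCreateProgram(cgContext, CG_SOURCE, fpSrc,
                               CG_PROFILE_FP40, "main", NULL);
cgGLLoadProgram(fp);
```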

And I didn't test for depth values.

But I'm thinking about the possibility of binding resources other than COLORx for storing the values, as long as the pipeline treats them the same way it would treat COLORx.
I just need to project the vertices into 8 cameras in the vertex shader and pass the colors and homogeneous clip-space coordinates forward. In the fragment shader, I need to access the 8 colors and the related depth for each fragment in order to generate 8 output textures that encode color and depth information (the encoding is not specified here) for each output pixel/texel.

So, in any case, I just need to do everything in one pass, not multipass. And it doesn't matter to me in which registers the values are stored, as long as they are treated correctly.

Can you follow me :wink:
and help me?

Thanks a lot,
chris

In OpenGL you cannot render 8 different views simultaneously, because the fragment shader cannot change the position of the fragment being rendered. While you can pass several homogeneous coordinates to the fragment shader using the other interpolators, you can only use them to calculate several different colors for that fragment, not a different position for each color buffer.

Hi Chris, everything all right in Osaka? Well, to your question:

It's possible to transform your object with different MVP matrices in the vertex shader, but because you cannot generate 8 vertices at different locations out of 1 input vertex, your idea will not work that way (vertex shaders are not able to create new vertices!). You have to render the vertices 8 times if you want to transform and rasterize them from 8 different views. You are able to write the results of the fragment shader into up to 4 different textures in parallel (more is not possible with current hardware, as far as I know).

Just use VBOs or simple display lists if you don't want to transfer your data from the CPU to the GPU for every view. If your object doesn't have too many vertices (let's say much fewer than 1,000,000), you should be able to render all 8 passes in real time.
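As a rough sketch of what I mean (untested; all the names are illustrative, and it assumes an FBO-style render-to-texture setup with one color and one depth texture per view):

```c
#include <GL/glew.h>
#include <Cg/cgGL.h>

/* One FBO, 8 color/depth texture pairs, geometry stored once on the
 * GPU as a display list. Each pass only swaps the attachments and
 * the per-view matrix; the vertices never travel over the bus again. */
void renderAllViews(GLuint fbo, GLuint sceneList,
                    const GLuint colorTex[8], const GLuint depthTex[8],
                    CGparameter mvpParam, const float mvp[8][16])
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    for (int i = 0; i < 8; ++i) {
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, colorTex[i], 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                  GL_TEXTURE_2D, depthTex[i], 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* The view matrix is the only per-pass vertex shader input. */
        cgGLSetMatrixParameterfr(mvpParam, mvp[i]);

        glCallList(sceneList);
    }
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
```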

Cheers, and greetings to Osaka

anselm

@anselm:
Anselm, thank you very much.
Nice to hear from you this way :slight_smile:
Osaka is great, so I'm staying half a year longer!
Come and visit me!!

Back to the topic:

Yes, I see. I had thought of that; things are as bad as you state.

I don't need to generate new vertices; I just have to do some vertex processing and then render the scene from 8 distinct cameras. After that, the scene doesn't change anymore and just has to be rendered into 8 views.

I can hand the 8 model-view-projection matrices to the vertex shader, and I'm doing that already, BUT I can only output to the POSITION binding semantic, and POSITION is used for rasterization and z-culling for just one camera.
The conclusion is that if I pass the data for all 8 cameras to the fragment shader in different binding semantics (like TEXCOORD0-7), it would still be interpolated and z-culled using the information in POSITION, i.e. according to that one particular camera.
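To make that concrete, such a vertex program would look roughly like this (sketch, with only 3 of the 8 cameras written out):

```c
/* All 8 MVP matrices go in as uniforms, but only the POSITION output
 * drives rasterization and z-culling; the other projections just ride
 * along and get interpolated like any other TEXCOORD. */
static const char *vpSrc =
    "struct VertOut {                                          \n"
    "    float4 pos  : POSITION;   // rasterized: camera 0 only\n"
    "    float4 cam1 : TEXCOORD0;  // camera 1, interpolated   \n"
    "    float4 cam2 : TEXCOORD1;  // camera 2, interpolated   \n"
    "    // ...and so on up to TEXCOORD6 for camera 7          \n"
    "};                                                        \n"
    "VertOut main(float4 p : POSITION,                         \n"
    "             uniform float4x4 mvp[8])                     \n"
    "{                                                         \n"
    "    VertOut o;                                            \n"
    "    o.pos  = mul(mvp[0], p);                              \n"
    "    o.cam1 = mul(mvp[1], p);                              \n"
    "    o.cam2 = mul(mvp[2], p);                              \n"
    "    return o;                                             \n"
    "}                                                         \n";
```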

Is there a way to do multipass but split the vertex shader into a first pass and n later passes, so that the first pass is done only once and its output is stored as vertex buffer objects (VBOs) in texture memory and used as input for the later rendering passes?
Or do I have to somehow emulate the rasterizer and z-culling, or find another trick?

Do you have a hint for me?
Performance is my only issue!

chris

At the moment it is not possible to save the output of the vertex shader, at least as far as I know. That technique is known as “render to vertex array” and is supposed to be supported in the “near” future.

At the moment you are stuck with doing the full 8 passes.

However, vertex shader performance is quite good, so that “should” not be a big problem. Of course, I don't know how complex your calculations are in practice.

Maybe you can scale some of the outputs down, so that you save some pixel processing? Maybe we can help you more if you tell us what exactly you are doing.

And, as already said, use VBOs to store your data on the GPU; that should eliminate one big problem.
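For example (untested sketch; `verts` and `numVerts` stand for your actual data):

```c
#include <GL/glew.h>

/* Upload the vertex data once at startup... */
GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, numVerts * 3 * sizeof(float),
                verts, GL_STATIC_DRAW_ARB);

/* ...then per pass: bind and draw, no CPU->GPU transfer involved. */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
glDrawArrays(GL_TRIANGLES, 0, numVerts);
```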

Jan.