I would like to render some static geometry from several hundred views. My strategy was this:
- upload geometry to video ram
- instantiate n pbuffers
- for each pbuffer:
  a. set camera
  b. render geometry
  c. glReadPixels
Would this work well if each iteration of the "for each" were run in a separate thread (probably limiting the number of threads in flight to avoid excessive context switching)? The goal is to keep the glReadPixels call from blocking the rendering calls.
Also, is there anything special I would need to do to share the uploaded geometry between contexts? Or is the video-memory pointer valid across contexts? I suppose I could just share display lists, but I want to guarantee that the data stays on the card so that the AGP bus is free for transferring pixels.
Is there a better way to be doing this?
On a tangentially related note: on a modern GPU, is the cost of geometry setup and transform (the work that would be preserved across multiple views, i.e. everything but clipping and the perspective/viewport transforms) basically negligible? The vertices need to be pushed down the same pipeline regardless, right?
-Won