distributed rendering (on hardware) - realizable ??

hi @ all.

Yes, I know this idea might sound a bit silly, but today I was thinking about a way to realize "some kind of distributed" rendering on more than one PC, connected over a 100/1000 Mbit network.
But the problem, in my opinion, is that there seems to be no way to use hardware-accelerated OpenGL for that ?!
I came to the point where I thought it could be possible to render different "parts" of the scene, save the final color & depth buffers of the partially rendered images, and then "blend" these picture pieces from the different machines together into one final image.
As you can guess, I'd like to use OpenGL for this and not a self-written software rasterizer.
The main problem with using OpenGL for a purpose like this is getting the calculated data back from the graphics board (reading the framebuffer back, e.g. with glReadPixels, is quite slow) so it can be sent over the network for further processing.
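The "blend" step described above amounts to a per-pixel depth test done on the CPU after each machine has read back its color and depth buffers. Here is a minimal sketch of that compositing step; the buffer layout and names are my own assumptions for illustration, not taken from any library:

```c
#include <stdint.h>

/* Each machine renders its part of the scene, then reads back the color
 * buffer (e.g. glReadPixels with GL_RGBA) and the depth buffer
 * (GL_DEPTH_COMPONENT) and ships both over the network. The compositor
 * keeps, per pixel, the fragment with the smaller depth value -- the
 * same decision the hardware depth test makes in a single-GPU pass. */

typedef struct {
    uint32_t color; /* packed RGBA */
    float    depth; /* window-space depth in [0,1]; smaller = closer */
} Pixel;

/* Composite partial image 'src' into 'dst' in place (both w*h pixels). */
void depth_composite(Pixel *dst, const Pixel *src, int w, int h)
{
    for (int i = 0; i < w * h; ++i) {
        if (src[i].depth < dst[i].depth) {
            dst[i] = src[i]; /* src fragment is closer: take it */
        }
    }
}
```

Because the merge is order-independent for opaque geometry, partial images can arrive from the network in any order and be folded into the running result one at a time.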

What do you think of this idea? Has anyone heard of a website or project where this idea has been turned into working code?

[This message has been edited by DJSnow (edited 07-23-2003).]

The MPK (multipipe) library on SGIs is fully functional and does exactly that, except it's a single system with many graphics cards (so there is no need to use the network to synchronize). It can even composite an image in weird ways.

Y.

You can render sub-parts of a scene graph on various computers, then create an IBR object from each depth buffer, and combine all the IBR objects on a standalone computer.
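One ingredient of building such an IBR object is recovering eye-space geometry from the stored depth buffer. A minimal sketch of that unprojection for a standard OpenGL perspective projection (the near/far values in the comments are illustrative assumptions):

```c
/* Convert a window-space depth value d in [0,1] (as read back from the
 * depth buffer) into an eye-space z coordinate, assuming a standard
 * OpenGL perspective projection with near plane n and far plane f.
 * This is the first step toward turning a depth image into the 3D
 * points of an image-based-rendering (IBR) object. */
float depth_to_eye_z(float d, float n, float f)
{
    float z_ndc = 2.0f * d - 1.0f; /* map [0,1] to NDC range [-1,1] */
    return 2.0f * f * n / ((f - n) * z_ndc - (f + n));
}
```

As a sanity check, d = 0 gives -n (a point on the near plane) and d = 1 gives -f, matching the OpenGL convention that visible eye-space z is negative.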

Hi,

Take a look at Chromium. This open source project supports many different ways of "splitting" the rendering across a number of PCs in a cluster. You'll find information on the project on SourceForge.

Best regards,

Niels

@Ysaneye / ToolTechn / Niels… :

hey, thanks.
I will check the information you mentioned.
bye