Just to add … the “engine” allowed for a number of different pipelines … so for the most part the work was done using the “workbox” notion, but you could also switch to a “particle generator” pipeline,
which would render the particles into the current scene, and a third method was to be able to render a hierarchical structure of objects. You decide the order of pipelines (and which ones) …
Of course, each object had a simple flag to say render or not, and update or not … so once the pipeline(s) were rendering the scene you could still switch objects on and off (of course, how you decided to switch on/off was up to you … for me it was an “artistic”
decision, for others it might be a test to see if the object’s bounding box was onscreen or not, etc.).
My app was quite well bounded, so the engine was not required to be totally generic. I always knew what was to be onscreen or not.
The hard part was designing the “object store”, writing access methods to the object
data.
All in C though (sorry C++ folks).
So, you had things like:
getObjectVisibility( id );
and
setObjectVisibility( id, status );
and used thus:
setObjectVisibility( id, !getObjectVisibility( id ) );
Of course, using different “id’s” means you can simply flip one object state based on another very easily!
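A minimal sketch of what such an object store and its access methods might look like in C (the array size, field names, and use of `bool` are my assumptions, not the original code):

```c
#include <stdbool.h>

#define MAX_OBJECTS 256  /* assumed capacity */

/* Each object carries simple render/update flags. */
typedef struct {
    bool visible;  /* render or not */
    bool active;   /* update or not */
} Object;

static Object objectStore[MAX_OBJECTS];

bool getObjectVisibility(int id)
{
    return objectStore[id].visible;
}

void setObjectVisibility(int id, bool status)
{
    objectStore[id].visible = status;
}
```

And flipping one object’s state based on another is then just:

```c
setObjectVisibility(id1, !getObjectVisibility(id2));
```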
Mickey!
Not really. What I mean is that I have a generic texture store, and then can assign a set of textures to objects that need texturing, and then can select textures from this list. Either as, “current texture”, or cycle through the list (changing at “n” frames, default == 1), and then either restart or bounce. It sort of looks like:
beginWork();
addObjectToWork( id1 );
addObjectToWork( id2 );
…
…
addObjectToWork( control1 ); /* e.g. depth off */
addObjectToWork( id1 );
endWork();
Object id1 and object id2, though, could have the same render function (e.g. a house),
but have different texture lists!
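A rough sketch of the per-object texture list with the cycle/restart/bounce behaviour described above (the struct layout and all names here are my own guesses at the idea, not the original code):

```c
#define MAX_TEXTURES 8  /* assumed list capacity */

typedef enum { CYCLE_RESTART, CYCLE_BOUNCE } CycleMode;

typedef struct {
    int textures[MAX_TEXTURES]; /* ids into the generic texture store */
    int count;
    int current;      /* index of the "current texture" */
    int framesPerTex; /* change at "n" frames, default == 1 */
    int frameCount;
    int direction;    /* +1 or -1, used when bouncing */
    CycleMode mode;
} TextureList;

/* Advance the list by one frame; returns the texture id to render with. */
int nextTexture(TextureList *tl)
{
    if (++tl->frameCount < tl->framesPerTex)
        return tl->textures[tl->current];
    tl->frameCount = 0;

    if (tl->mode == CYCLE_RESTART) {
        tl->current = (tl->current + 1) % tl->count;
    } else { /* bounce: reverse direction at either end */
        int next = tl->current + tl->direction;
        if (next < 0 || next >= tl->count) {
            tl->direction = -tl->direction;
            next = tl->current + tl->direction;
        }
        if (next >= 0 && next < tl->count)
            tl->current = next;
    }
    return tl->textures[tl->current];
}
```

Two houses could then share one render function but each own a different `TextureList`.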
In my context I only needed to use quite simple objects … some 2D and some 3D. Of course movement was in 3D.
However, I see no reason why the scheme would not work by simply writing another render function which rendered a more complex model.
If this model needed more careful placing and assignment of textures, then some new code would be needed to control this (definitely!).
For my application, it worked fine, and allowed complex scenes and graphics to be done easily.
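One way to sketch the idea of objects sharing render functions, where a more complex model just means writing another such function (the function-pointer field and all names are my assumptions):

```c
typedef struct Object Object;

struct Object {
    int   id;
    void (*render)(const Object *self); /* shared per "kind" of object */
    const int *textureList;             /* per-instance texture ids */
    int   textureCount;
};

/* One render function can serve many objects (e.g. every house)... */
static void renderHouse(const Object *self)
{
    (void)self; /* ...each drawing with its own texture list; drawing elided */
}
```

Supporting a more complex model is then a matter of adding a new render function and pointing the relevant objects at it.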
Another feature of each object was an “event schedule list”, so you could e.g. attach an event to an object to occur “n frames” later. All object manipulations were covered by an event code. However, you could schedule an event (or series of events!) on one object to control another.

The generic scheduler also passed parameters, so you could e.g. fire off an event on one object which passed its own colour data to another object, which was then rendered in the same colour as the first object (and then extend this notion to all other object attributes). Of course, you could hide events on “invisible objects”! If (for some reason) we missed an event “time”, it was popped off the event stack for that object.
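A bare-bones sketch of that per-object event schedule, with parameter passing and missed events popped off without firing (the structs, queue size, and dispatch callback are my own simplification, not the original code):

```c
#define MAX_EVENTS 16  /* assumed per-object queue size */

typedef struct {
    int fireFrame;  /* absolute frame the event should fire on */
    int code;       /* event code covering some object manipulation */
    int targetId;   /* object the event acts upon */
    int params[4];  /* e.g. colour data passed from the source object */
} Event;

typedef struct {
    Event events[MAX_EVENTS];
    int   count;
} EventSchedule;

/* Simple recording dispatcher, just for illustration. */
static int lastCode = 0, lastTarget = 0;
static void recordDispatch(int code, int targetId, const int *params)
{
    (void)params;
    lastCode = code;
    lastTarget = targetId;
}

/* Called once per object per frame: fire events due on this frame,
 * and pop (without firing) any whose time was somehow missed. */
void runEvents(EventSchedule *es, int frame,
               void (*dispatch)(int code, int targetId, const int *params))
{
    int i = 0;
    while (i < es->count) {
        if (es->events[i].fireFrame <= frame) {
            if (es->events[i].fireFrame == frame)
                dispatch(es->events[i].code,
                         es->events[i].targetId,
                         es->events[i].params);
            /* pop: overwrite with the last queued event */
            es->events[i] = es->events[--es->count];
        } else {
            i++;
        }
    }
}
```

Since events carry a target id, scheduling an event on one object to control another falls out naturally.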
A special event type was the ability to call up external routines bound into the system, to do something the engine and support environment didn’t provide! Although this was not used much in the end, as these routines generally made it into a revised spec (ye olde engine rewrite, again, again!).
Sorry for the long post. But I’m happy to share ideas which someone may find useful.
The system was not built for speed, only decent smooth animation. Not for games!
But, was it art??? Who knows.
Rob.