Renderer design

I am building a simple renderer for my engine, so all rendering code lives there. I do not want to get into scene graphs and such; I am trying to make an easy-to-build but fast renderer. This is how I am thinking of it:

The Renderer class would let you pass in an array of vertices for use with vertex arrays. I would be able to specify NormalPointer, TexcoordPointer, VertexPointer, etc. A function called Render would draw the vertices. It sounds simple and powerful to me.

The mesh class will have an instance of the renderer, just to keep it simple. For example, I load the model, put all the vertices into one big array, and pass that array to the Renderer class. When I need to render my mesh, I call renderer.Render();
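Roughly what I have in mind, just as a sketch (the function names and details aren't final, I'm mostly guessing at the shape of it):

#include <GL/gl.h>

class Renderer
{
public:
    Renderer() : m_verts(0), m_normals(0), m_texcoords(0) {}

    void VertexPointer(const float* verts)    { m_verts = verts; }
    void NormalPointer(const float* normals)  { m_normals = normals; }
    void TexcoordPointer(const float* coords) { m_texcoords = coords; }

    // Draw 'count' vertices as triangles from the arrays set above.
    void Render(int count) const
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, m_verts);

        if (m_normals)
        {
            glEnableClientState(GL_NORMAL_ARRAY);
            glNormalPointer(GL_FLOAT, 0, m_normals);
        }
        if (m_texcoords)
        {
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glTexCoordPointer(2, GL_FLOAT, 0, m_texcoords);
        }

        glDrawArrays(GL_TRIANGLES, 0, count);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

private:
    const float* m_verts;
    const float* m_normals;
    const float* m_texcoords;
};

Then the mesh would just set its arrays once and call renderer.Render(numVerts) whenever it needs to draw.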

Would this work? Because even if it is simple, it needs lots of work and studying before I have something working. Any suggestions that would make it better?
What I am trying to do here is avoid something my previous engine had: lots of glVertex3f calls (and display lists, OK, but no vertex arrays) - they were everywhere in the engine! I'm trying to make it more compact… You get the idea.

Hi.

In my experience it has proven much easier to have models wrapped in their respective classes with a Render() method. I also implemented a PreRender() method to draw any model-specific draw lists. That way you handle models as individual objects, and all your engine has to do is sort the objects if needed and draw them one by one.

However, I always tried to keep the models in one bulk memory space to gain from memory caching. Therefore all object data should, in my opinion, be placed in a single structure with methods to manipulate the data.

How to sort what to draw first is a matter of scene complexity. I often tried to render groups of objects sorted by their respective lights, and within each group I would render subgroups of objects with the same materials, keeping material-change calls to a minimum.

Also, by inheriting from certain classes you can always perform model-specific methods. Bear in mind that you will often need this, as generic object handling is not that efficient.

So you suggest I keep on putting rendering code here and there?

No, you should try to get your head around the concept of ‘shaders’. For every object in your scene, you should associate it with a particular ‘shader’. A shader is a ‘method’ of drawing some geometry (examples: bump mapped, environment mapped, cartoon style, furry, or any weird effect you want).

After you’ve ascertained which objects are visible, by whatever culling method you choose, you’d do something like this:

for every visible object…
{
    ascertain which lights influence the object, and add the light references to the object.

    add ‘this’ object to its shader IN PLACE (i.e. use a linked list to insert the object based on its distance-from-eye, followed by texture ID(s), followed by material).
}

then…

for every shader in the system…
{
    set the shader’s render states

    for every object in the shader’s linked list…
    {
        shader->render(object)
    }
}

The shader->render method should deal with the lights. If you’re using per-pixel lighting, then changing lights involves nothing more than changing a couple of constants.
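In rough C++ terms it might look something like this (only a sketch - all the class names are made up, and the sorted insert is just one way of doing the ‘in place’ bit):

#include <list>
#include <vector>

struct Light;   // whatever your light representation is

struct Object
{
    float distanceFromEye;
    int   textureID;
    int   materialID;
    std::vector<const Light*> lights;   // filled in during the visibility pass
    // ... plus geometry streams, transform, etc.
};

class Shader
{
public:
    // Insert the object in place: sorted by distance, then texture, then material.
    void Insert(Object* obj)
    {
        std::list<Object*>::iterator it = m_objects.begin();
        while (it != m_objects.end() && Before(*it, obj))
            ++it;
        m_objects.insert(it, obj);
    }

    // Called once per frame, after every visible object has been inserted.
    void RenderAll()
    {
        SetRenderStates();   // states common to everything this shader draws
        for (std::list<Object*>::iterator it = m_objects.begin(); it != m_objects.end(); ++it)
            Render(*it);     // per-object work: bind its textures, set its light constants, draw
        m_objects.clear();
    }

private:
    static bool Before(const Object* a, const Object* b)
    {
        if (a->distanceFromEye != b->distanceFromEye) return a->distanceFromEye < b->distanceFromEye;
        if (a->textureID != b->textureID)             return a->textureID < b->textureID;
        return a->materialID < b->materialID;
    }

    void SetRenderStates() {}   // set up textures/programs/blending for this shader
    void Render(Object*) {}     // issue the draw call for one object
    std::list<Object*> m_objects;
};

The per-frame loop is then nothing more than: for every shader, shader->RenderAll().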

This is the most future-proof design - in other words, you won’t have to do much rework when technologies change. It also gives you a lot of freedom to change your mind.



Ok, I will try to play with that.



Hi.

Originally posted by knackered:
No, you should try to get your head around the concept of ‘shaders’. For every object in your scene, you should associate it with a particular ‘shader’.

I’m thinking about implementing a similar (not exactly the same) renderer pipeline, but I still can’t start coding because of some problems with that method. Since you seem to have solved them all before, I have a few questions:

Say you load a Quake 3 map (ignoring PVS and BSP, just the geometry).

What would you understand by the term “object” then (bearing in mind that each object has only one shader) - the whole map or a single face?

And how would you deal with things like lightmaps (shared between many faces, while faces with the same shader can have different lightmaps)?

Or is it that a SHADER is not a Quake 3-style shader, but rather a SHADER that encompasses all of the Quake 3 map’s shaders and is treated as one shader?

What are your experiences with it?

Err, I don’t have any experience with Quake maps. I’m loosely aware of their format, but never bothered with them.
A shader should contain a method of drawing something, whether it be a triangle or a list of triangles. How you organise those triangles is up to you, but when it comes to rendering a list of triangles you simply call the render method of the associated shader, with parameters such as texture IDs, streams (vertex, normal, texcoord0, texcoord1, indices, etc.), materials, and constant values (say, a time constant which may get used by a vertex shader to deform the vertices). You don’t store this information in the shader itself; you pass it in as parameters.
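In other words, something vaguely like this (the parameter block here is made up - use whatever your engine actually needs):

// Just a sketch -- the parameter block is made up.
struct RenderParams
{
    const unsigned int* textureIDs;   // texture object IDs, one per unit
    int                 numTextures;
    const float*        vertices;
    const float*        normals;
    const float*        texcoord0;
    const float*        texcoord1;
    const unsigned int* indices;
    int                 numIndices;
    const float*        material;     // ambient/diffuse/specular, however you pack it
    float               time;         // e.g. fed to a vertex program to deform the vertices
};

class Shader
{
public:
    virtual ~Shader() {}
    // The shader knows HOW to draw; the caller supplies WHAT to draw.
    virtual void Render(const RenderParams& params) = 0;
};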

So, to test whether I’ve got it right, here’s my summary (with slight modifications to what counts as an object and a model):

We have models, objects and shaders.

Models contain all geometry - verts, faces, textures.

Objects have pointers to their models (so we can have many objects sharing the same model in our scene).
Each object also has a pointer to a shader, which is applied to its model (so many objects with the same model can be rendered in different ways).

Shaders are objects. They have functions like
PushToBuffer(object) and
Flush().

After the whole scene has been “pushed” into the shaders’ buffers (after visibility testing etc., somehow), we could just go through all the shaders (in a certain order) and call Flush() on each.
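In rough code the relationships would be something like this (the names are just placeholders):

// Rough sketch of the relationships.
class Model { /* verts, faces, textures */ };

class Object;   // forward declaration

class Shader
{
public:
    void PushToBuffer(Object* obj);   // queue an object that uses this shader
    void Flush();                     // render everything queued, then empty the buffer
};

class Object
{
public:
    Model*  model;    // many objects can share the same model
    Shader* shader;   // how this particular object should be drawn
};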

Does that look reasonable?


Looks good to me. Go for it!

Talking from my own experience…

Having external shaders is not a good solution. To have maximum flexibility and inheritability for the models, wrap them in a C++ class or structure. Then simply, for each object, do:

ObjectList[i]->Render();

The Render() method should decide whether to draw the model or not, and HOW to draw it. That way the code is more readable.
Again, this approach allows for INHERITANCE of behaviour.

For example, let’s say you have a TSphere class that renders a sphere. You could also have TRubberSphere and TMetalSphere, both inheriting the Render() of their base class. BUT they can have their own rendering methods which change material and light parameters, like this:

void TMetalSphere::Render()
{
    SetUpMetallicMaterial();   // switch material/light state to metal
    TSphere::Render();         // then draw the sphere geometry as usual
}

Share the code among models. Have as few calls as possible. Let models decide their own drawing methods and conditions.

You could also setup an Object list class:

class TObjectList
{
public:
    void SortFrontToBack();
    void SortBackToFront();
    void DrawScene();
    void PredrawObjects();
    void TurnLights(int l, bool v);
};

In the end, you’ll find the implementation turns out less simple than that.

A “shader” is actually a collection of some number of texture maps, some set of input data/parameters, some set of necessary data streams (binormals, anisotropy vectors, whatnot), some number of vertex/fragment programs, and some set of rendering strategies (multi-pass, degradation, etc).

Unfortunately, lights interact with the shaders; at a minimum, a shader needs to know whether it’s supposed to draw where there’s stencil information or not.

Then we get into the whole sorted-transparency thing, where sorting by distance is more important than sorting by shader.

Then there’s data that drives the shader inputs: animations to determine bone matrices; particle system iterated functions to determine color, orientation and texture coordinates; etc.

More concretely, to tie back to your initial question: when the goal is “fast but simple to implement”, that’s not so hard. Just ignore all of the neat features and support only diffuse texture + vertex lighting (and/or shadow maps). Speed is mostly about buffer management and making sure you submit your data to the hardware in the optimal format. Typically, the underlying renderer (D3D, ATIGL, NVGL or whatnot) would be responsible for buffer management for optimal performance.

Given your initial suggestion, you’d then allocate buffers for vertex data and index data off your renderer, and push data into these buffers. You would then submit these buffers to the renderer, along with state information for shading. This could be another object, like a Shader, that knows how to configure the renderer, or that the renderer knows how to configure itself from. The shader, in turn, could also be allocated off the renderer, and configured by the client (app).

You’ll end up with one big renderer, which is both factory and machine dealing with smaller objects encapsulating different kinds of state. You can then write model loaders (etc) as strategies that know how to take a renderer (or object allocated by the renderer) and configure it according to the data file you throw at it.
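In outline it might look something like this (the interface names are invented purely for illustration):

// Outline only -- the interface names are invented for illustration.
class VertexBuffer { /* ... */ };
class IndexBuffer  { /* ... */ };
class Shader       { /* ... */ };

class Renderer
{
public:
    // The renderer owns buffer management, so it can pick the optimal format/placement.
    VertexBuffer* CreateVertexBuffer(const void* data, int bytes);
    IndexBuffer*  CreateIndexBuffer(const void* data, int bytes);

    // A Shader here is a bundle of state the renderer knows how to apply to itself.
    Shader*       CreateShader(const char* description);

    // Draw: buffers plus the state object that configures the renderer for them.
    void Submit(VertexBuffer* vb, IndexBuffer* ib, Shader* shader);
};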

I believe there are several books about just this kind of design work. Hard to figure out whether any of them don’t suck, though :)

Please note: I left out game object state management, collision/physics, and all the other non-graphics subsystems that will also have tendrils into your game objects and probably share some information with the basic meshes.

VladK,

Inheritance of behavior has pretty much been shown to be a mistake in most modern programming literature.

If I wanted a metal sphere and a rubber sphere, I’d apply a Metal shader to a Sphere geometry, or a Rubber shader to a Sphere geometry. That way, I don’t have to sub-class to get different kinds of effects on the same geometry, and I also don’t have to copy/paste code, or use multiple inheritance, to create a Metal Box or a Metal Dodecahedron.

If we want 10 different geometries, and 10 different materials, then to get all different combinations, the system I suggest requires that you write 20 implementations; the system you suggest requires that you write 100 implementations.
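In other words, something along these lines (a sketch only - the names are made up):

// Sketch: composition instead of subclassing.
class Geometry { /* vertex/index data for a sphere, box, llama, ... */ };

class Material   // the "shader" in the sense used above: Metal, Rubber, ...
{
public:
    virtual ~Material() {}
    virtual void Apply() = 0;   // set textures, programs, material constants
};

void Render(Geometry& geom, Material& mat)
{
    mat.Apply();
    // ... then submit geom's vertices/indices with whatever streams mat needs
}

// 10 Geometry implementations + 10 Material implementations = 20 classes,
// yet every one of the 100 combinations is available just by pairing them.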

Of course, I’d probably just write a TriMesh object and let 3dsmax deal with whether it’s a pyramid, sphere or a llama :) RenderMonkey is starting to let me do the same thing for shading information, although it’s not quite mature yet.

You also don’t need an object list class; a regular STL vector<> sorted with std::sort will do just fine, using any predicate you care to construct.

Originally posted by vladk:
Talking from my own experience…

Having external shaders is not a good solution. To have maximum flexibility and inheritability for the models, wrap them in a C++ class or structure. Then simply, for each object, do:

ObjectList[i]->Render();

This is an out-of-date approach. As jwatte said, it only results in copy/paste code and heavy use of virtuals, which is only going to slow you down.
Couple that with your inability to query the state changes a particular render method makes, and you end up with a hugely inefficient system. It does not make for more readable code, either - having your actual rendering code sprawled out over numerous methods in numerous classes does not make readable code, which will slow down your debugging. However, it makes sense to have methods such as Collide() overridden for the different primitives, because detecting collisions between different primitives can be specialised for speed. But a mesh is just a mesh, regardless of whether it’s a sphere or a horse - you can specialise the generation of a sphere mesh, by using tristrips or whatever, but the rendering of that mesh is totally generic.

With an external script-type shader, coupled with proper state management in your renderer class, you can simply print out the state changes that are happening, along with the name of the object and shader causing them… you can then fix bugs at runtime.
Take a look at the DX fx files (or CgFX files) to see how nice shaders can be. All these files can be compiled down to fast state-change flags at runtime.

jwatte, I don’t believe that storing textures in the shader is the best way of going about things - you simply need to store a slot name (such as diffusetex)…and leave it up to the geometry itself to supply the textures that fit into those slots.
e.g.
texbind 0 <diffusetex>
texbind 1 <bumptex>
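In code terms, something roughly like this (a sketch - a map keyed by slot name is just one way of resolving the slots):

#include <GL/gl.h>
#include <map>
#include <string>

// Sketch: the shader script stores slot NAMES; the geometry supplies the actual textures.
class Mesh
{
public:
    // e.g. textures["diffusetex"] = someTextureObject; textures["bumptex"] = anotherOne;
    std::map<std::string, GLuint> textures;
};

// Part of applying a shader: bind whatever the geometry put into the named slot.
// (Select the right texture unit first, via glActiveTextureARB, if you're multitexturing.)
void BindSlot(const Mesh& mesh, const std::string& slotName)
{
    std::map<std::string, GLuint>::const_iterator it = mesh.textures.find(slotName);
    if (it != mesh.textures.end())
        glBindTexture(GL_TEXTURE_2D, it->second);
}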


So how are you guys handling shaders that are multipass?

I guess the Shader->Render() function could take care of the multipass by itself. I think there might be some gotcha I’m missing in this architecture though.
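Something like this, maybe (a rough sketch - the virtuals are just one way of structuring it):

// Rough sketch: the shader hides its own pass count from the rest of the renderer.
class Renderable;

class Shader
{
public:
    virtual ~Shader() {}

    void Render(const Renderable& r)
    {
        for (int pass = 0; pass < NumPasses(); ++pass)
        {
            SetPassStates(pass);   // blend mode, textures, programs for this pass
            Draw(r);               // the same geometry gets submitted each pass
        }
    }

protected:
    virtual int  NumPasses() const = 0;
    virtual void SetPassStates(int pass) = 0;
    virtual void Draw(const Renderable& r) = 0;
};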

Originally posted by vladk:
ObjectList[i]->Render();

The Render() method should decide whether to draw the model or not, and HOW to draw it. That way the code is more readable.
Again, this approach allows for INHERITANCE of behaviour.

I don’t think I like that solution. I prefer to keep my culling and rendering separate. The culler should worry about culling; the renderer should worry about rendering.

My culler decides what is visible and passes it to the renderer. The renderer decides which bin to place each renderable object in, and walks through the bins when Flush() is called. The bins are actually a vector<CShader>, and each of these bins has a vector<CRenderable>.
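Roughly like this (a simplified sketch - I actually keep the shaders by value; pointers here are just to keep the sketch short):

#include <cstddef>
#include <vector>

class CRenderable { /* mesh, transform, the lights the culler picked, ... */ };

class CShader
{
public:
    void Add(CRenderable* r) { m_bin.push_back(r); }

    void Flush()
    {
        // set this shader's render states once...
        for (std::size_t i = 0; i < m_bin.size(); ++i)
        {
            // ...then draw m_bin[i]
        }
        m_bin.clear();
    }

private:
    std::vector<CRenderable*> m_bin;
};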

My culler also decides which light is important for each renderable. I’m not sure if I completely like this design.

My dynamic objects are somewhat of a hack. How are you guys dealing with skinned meshes, skeletons, md2 interpolation, etc?

Hi again.

It’s pretty difficult to have all your objects rendered, culled, skinned and boned (is that English?) the same way.

I’d say it very much depends on what kind of application you’re going to have.
In games the highest priority is not this or that object-oriented ideology, but speed (well, apart from looking good).
In serious games you probably won’t have problems with 100 different ways of rendering spheres.

In first-person shooters etc. you have specific types of culling: BSPs, PVSes, beam trees. You have to make some exceptions in those cases or it will be very inefficient.

Originally posted by PK:
My dynamic objects are somewhat of a hack. How are you guys dealing with skinned meshes, skeletons, md2 interpolation, etc?

Maybe the solution is to invent your own unified model format that will handle “every” case.
I wrote a little class which loads MD2 as well as MD3 and stores them in the same arrays (although they’re quite different).
At present this model class has its own Render() (this will probably change soon) and Animate() methods.
Whether there’s only one bone or many, it doesn’t matter.
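Animate() basically just blends between two keyframes, something like this (simplified - the member names here are made up for the sketch):

#include <cstddef>
#include <vector>

// Simplified: blend vertex positions between two keyframes (MD2-style).
struct Keyframe
{
    std::vector<float> verts;   // x,y,z per vertex
};

class Model
{
public:
    // t runs from 0 to 1 between keyframe a and keyframe b; the result goes into m_current.
    void Animate(int a, int b, float t)
    {
        const std::vector<float>& va = m_frames[a].verts;
        const std::vector<float>& vb = m_frames[b].verts;
        m_current.resize(va.size());
        for (std::size_t i = 0; i < va.size(); ++i)
            m_current[i] = va[i] + (vb[i] - va[i]) * t;
    }

    // Render() submits m_current through the vertex array, same path for MD2 and MD3.
    const float* CurrentVerts() const { return m_current.empty() ? 0 : &m_current[0]; }

private:
    std::vector<Keyframe> m_frames;
    std::vector<float>    m_current;
};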

Originally posted by knackered:
No, you should try to get your head around the concept of ‘shaders’. For every object in your scene, you should associate it with a particular ‘shader’. A shader is a ‘method’ of drawing some geometry (examples: bump mapped, environment mapped, cartoon style, furry, or any weird effect you want).

OK, but say I’ve got a cube and two shaders - one renders geometry with bump mapping, the second with environment mapping. Where do I put the required texture objects? Does the geometry have to anticipate possible shaders and include all these textures?

Regards.

Here’s what I do (simplified):

struct shader
{
    int* materialIDs;     // the materials (passes) this shader uses
    int  num_materials;
};

struct mesh
{
    float* verts;
    float* texcoords1;
    float* texcoords2;
    // etc.
};

struct Material_and_meshes
{
    int    materialID;
    mesh** meshes;        // all meshes rendered with this material
};

Note: I never use shaders in-game, only materials (a material being a rendering pass).

PLUSES: heaps
MINUSES: can’t think of any (if there are, I’d like to hear them)

MichaelK> Does the geometry have to anticipate possible shaders and include all these textures?

Geometry should not come with textures. At work, geometry comes coupled with default materials, and materials have textures; at home, I just keep them entirely separate and let the configuration of the engine object which needs the geometry also configure what material to use with it.

However, geometry MAY need to include every possible vertex stream. This is especially true for things like tangent spaces, fur shells, anisotropy coefficients, etc. Once you get a sufficiently advanced system, you’ll have to do run-time checking, and display an error saying “you configured a fur material but there is no fur_shell sub-mesh” if a wrong match is made. If you have a material selection UI, then you should just dim out shaders that require geometry streams that aren’t available.
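The check itself is trivial - something like this (a sketch only; representing stream names as strings is just one way to do it):

#include <cstdio>
#include <set>
#include <string>

// Sketch: verify the geometry provides every stream the material requires.
bool ValidateStreams(const std::set<std::string>& required,
                     const std::set<std::string>& available,
                     const char* materialName)
{
    bool ok = true;
    for (std::set<std::string>::const_iterator it = required.begin(); it != required.end(); ++it)
    {
        if (available.find(*it) == available.end())
        {
            std::printf("material '%s' needs stream '%s' but the mesh doesn't supply it\n",
                        materialName, it->c_str());
            ok = false;
        }
    }
    return ok;
}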

knackered> jwatte, I don’t believe that storing textures in the shader is the best way of going about things

When I say “shader” I really mean “material”, which consists of references to textures as well as references to vertex and pixel programs. Shaders are applied to geometry to generate pixels on screen. Also, different pieces of the system I describe exist in at-work code and at-home code; I’m not at that perfect nirvana in either place yet (and doubt I’ll ever be :) )

An object can pick meshes, and pick material settings for each sub-mesh. I.e., in a configuration file it might look like this (pseudo-code):

mesh {
    file basichuman
    materials {
        hair {
            fragmentprogram hair2.fp
            diffuse redhair.dds
            anisotropy strands.tga
        }
        skin {
            color #e0d4aa
            modulate freckles.dds
            specular freckles.dds
        }
        trousers {
            bump coarse_jeans.tga
            color red_tab.dds
        }
    }
}

Where the material names and default properties are specific to the mesh (but if you have good production tools, they’ll be consistently named). The actual parameter names would be parameters to the shaders. Yes, this involves actually doing linking (resolution) of materials, textures and meshes at load time.

Actually, at work, we build meshes by aggregating and parameterizing skinnable meshes, which gets hairy quickly when you need to make sure they all match up and LOD together reasonably and all that - nothing is ever simple :)

To deal with deformable meshes, there’s a prerender step which gets called on everything that’s going to be rendered; that’s the ideal time to form your pose for this time step, etc. Then you just re-submit the geometry for each render pass that needs it.

Oh, and this set-up STILL isn’t actually complete enough, as there are some bits and pieces that can’t be fully data-driven, such as the “render strategy” - used to make transparency render far-to-close and all that. For now, that’s all hard-coded to a few specific strategies. It’s unclear to me how to actually make it data-driven, unless you consider full-out scripts to be “data” and are prepared to take that hit. I don’t, and I’m not.

Oh, and if you have the luxury of working with good artists, they really deserve in-window fold-out property inspectors with pop-up menus, spinner wheels, sliders and drag-and-drop browsers for these things, rather than editing some text file with NOTEPAD.EXE :)