Games Industry Programming.

Ok, this question is more or less directed at people actually in the games industry.
When you are writing the graphics routines for a game, do you generally work through a wrapper library, thus not directly interfacing with the 3D API (D3D or OpenGL), or do you actually work with the underlying API? The answer will probably be different depending on the developer…
If you don't understand me, do you use the following:
the actual API: glBegin(…);
or a wrapper: quakeDrawTriangles(…);
where quakeDrawTriangles is an in-house wrapper.
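
Just to make the distinction concrete, here is a minimal sketch of what such an in-house wrapper might look like; quakeDrawTriangles is the hypothetical name from the question, not a real function:

```cpp
// Hypothetical in-house wrapper around immediate-mode OpenGL.
// quakeDrawTriangles is an invented name (from the question above).
#include <GL/gl.h>

struct Vertex { float x, y, z; };

// The rest of the game calls this; the direct API calls live only here.
void quakeDrawTriangles(const Vertex* verts, int count)
{
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < count; ++i)
        glVertex3f(verts[i].x, verts[i].y, verts[i].z);
    glEnd();
}
```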

Depends on whether there is anything to gain by wrapping things, and on what you mean by wrapping.

For example, if you are writing a game that can use both OpenGL and DirectX, then you might wrap the calls so the rest of the code doesn’t care.

OpenGL doesn’t handle scene management, so you have to write a scene manager, which talks directly to OpenGL at some level. Most of the code goes through the scene manager, though.

If you are writing for multiple platforms, you must wrap the graphics because the APIs are different on different platforms (unless they all support OpenGL, of course), and you don’t want to have to rewrite the game for every platform.
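
A minimal sketch of that kind of wrapper, with invented names (IRenderer, GLRenderer, D3DRenderer are assumptions, not any particular engine's API):

```cpp
// Hypothetical wrapper layer: only the concrete backends ever include the
// OpenGL or Direct3D headers; everything else is written against IRenderer.
struct Mesh;      // assumed mesh data type
struct Matrix4;   // assumed math type

class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual void beginFrame() = 0;
    virtual void drawMesh(const Mesh& mesh, const Matrix4& world) = 0;
    virtual void endFrame() = 0;
};

class GLRenderer  : public IRenderer { /* OpenGL calls live here */ };
class D3DRenderer : public IRenderer { /* Direct3D calls live here */ };

// The scene manager is written once against IRenderer, so the rest of the
// code doesn't care which backend was created at startup.
```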

A lot of people who do rendering engines for games find themselves making the actual renderer one layer, the occlusion/visibility coding another layer (or the same one), and an interface (API) on top of it all. It's more than quakeDrawTriangles() but less than glBegin() (although that too is supported). The reason is that an engine can handle animation, rendering, etc., all by itself, and be portable and fast, without being tangled up in tons of game logic too.
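
Roughly like this, with illustrative names only (RenderDevice, VisibilitySystem and Engine are my labels for the layers described above):

```cpp
// Hypothetical layering sketch; names are invented for illustration.
class RenderDevice {             // thin layer over the graphics API
public:
    void drawTriangles() { /* the glBegin()/Direct3D calls live here */ }
};

class VisibilitySystem {         // occlusion / visibility layer
public:
    bool isVisible() const { return true; } // decide what reaches the device
};

class Engine {                   // the interface the game actually uses:
public:                          // higher level than drawTriangles(),
    void renderScene() {         // but free of any game logic
        if (visibility.isVisible())
            device.drawTriangles();
    }
private:
    RenderDevice     device;
    VisibilitySystem visibility;
};
```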

Generally, cohesion is a good quality to strive for in software development. Each component (or class or module or whatever you want to call it) should do one clearly defined thing, and nothing else. The GUI code should be pure presentation, not mixed up in the application logic, etc. So having a renderer class that abstracts away OpenGL is a good thing. However, another good principle is that the responsibility for doing something should fall on the class which has the relevant information. Which means objects should draw themselves: bad cohesion! :slight_smile: You usually end up with some sort of compromise, like having a general MeshInstance referenced from each game object and drawn by the renderer.
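
A small sketch of that compromise, with assumed names around the MeshInstance idea (nothing here is from a real engine):

```cpp
// Hypothetical sketch: the game object only references renderable data;
// the renderer owns all the drawing code.
#include <vector>

struct MeshInstance {
    int   meshId;          // which mesh to draw
    float world[16];       // transform for this instance
};

class GameObject {         // gameplay logic lives here, no drawing code
public:
    MeshInstance* visual = nullptr;   // what the renderer draws for us
    void update(float dt) { /* gameplay only */ }
};

class Renderer {           // drawing lives here, no gameplay logic
public:
    void drawAll(const std::vector<MeshInstance*>& instances) {
        for (const MeshInstance* instance : instances)
            submit(*instance);
    }
private:
    void submit(const MeshInstance&) { /* issue the actual API calls */ }
};
```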

Shoe… well, I've been working for a German game development firm for some years now… and it works the following way:

First of all, you either buy or build yourself a graphics interface. Here in our firm we actually have two: a bought one which is D3D only, and one of our own for OpenGL and D3D.

In the code of the game itself you normally NEVER use any direct API calls. That would be unclean, and above all it would make it impossible to ever “switch” to another, better engine.

So it would be like
AphGlVertex3f(1.f,1.f,1.f), whereby AphGlVertex3f would do exactly the same as glVertex3f. But of course nowadays I wouldn't use any of these commands (glBegin, glVertex…) even in my worst nightmares. We have our own, API-independent vertex buffer, our own index buffer and so on. And for each API (Direct3D, OpenGL HW T&L, OpenGL SW T&L…) we have an interface class. Each of these classes inherits from C09Base3DInterface, just as an example. We take care that everything this class offers is supported by all APIs. And for things which are very hardware dependent… vertex shaders, for example… we write special workarounds.
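
A rough sketch of that arrangement; C09Base3DInterface is the class name mentioned above, but the buffer types and method names are my own assumptions:

```cpp
// Sketch only: C09Base3DInterface comes from the post; VertexBuffer,
// IndexBuffer and the methods are invented for illustration.
#include <cstdint>
#include <vector>

struct VertexBuffer { std::vector<float>    data; };   // API-independent
struct IndexBuffer  { std::vector<uint16_t> data; };   // API-independent

class C09Base3DInterface {          // one subclass per API backend
public:
    virtual ~C09Base3DInterface() {}
    // only features supported by every backend are exposed here
    virtual void uploadVertices(const VertexBuffer& vb) = 0;
    virtual void uploadIndices(const IndexBuffer& ib)   = 0;
    virtual void drawIndexed(int indexCount)            = 0;
};

class C09D3DInterface      : public C09Base3DInterface { /* Direct3D      */ };
class C09OpenGLHWInterface : public C09Base3DInterface { /* OpenGL HW T&L */ };
class C09OpenGLSWInterface : public C09Base3DInterface { /* OpenGL SW T&L */ };

// Very hardware-dependent features (e.g. vertex shaders) get special-cased
// workarounds inside the backend that supports them.
```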

Most firms buy an API interface and then extend it themselves. Others buy a complete package and just use it as it is. We used a bought one for a long time too, but are now moving over more and more to our own engine. That really depends heavily on the firm, but Direct3D and OpenGL knowledge is really VERY valuable if you want to start working at a professional firm. Mainly Direct3D of course, but there are still many firms which are OpenGL only. So all in all you have to learn both APIs anyway.

BlackJack