Speed of Texturing vs Rendering

Does OpenGL draw a textured polygon faster than a rendered polygon? I’ve been fooling around with rendered polygons but my frame rate is dismal (3 - 12 FPS). I know this is partly because it’s a P/300 without an OpenGL card, but I’ve seen a number of games etc. give better frame rates with greater scene complexity.

Which brings me to another point: is DirectX faster than OpenGL, or does it depend on the code? I would absolutely hate to have to switch to DirectX and lose cross-platform compatibility, etc.

I think texturing is always faster than rendering, so whenever we need speed it’s better to go for texturing. - Pors

Originally posted by Furrage:
[b]Does OpenGL draw a textured polygon faster than a rendered polygon? I’ve been fooling around with rendered polygons but my frame rate is dismal (3 - 12 FPS). I know this is partly because it’s a P/300 without an OpenGL card, but I’ve seen a number of games etc. give better frame rates with greater scene complexity.

Which brings me to another point: is DirectX faster than OpenGL, or does it depend on the code? I would absolutely hate to have to switch to DirectX and lose cross-platform compatibility, etc. [/b]

Re software rasterizers: why even think about that? Buy some cheap 3d accelerator and include that in your target specs as well.

As for texture application (in hardware), it consumes a little bandwidth, but apart from that single texturing is actually free. At least I couldn’t produce a scenario where the performance difference could be measured with very small textures.

Is DirectX faster than OpenGL, or does it depend on the code?

It depends both on your code and on your driver. The API itself has almost no impact on performance at all. A badly written OpenGL app can be way slower than a well written Direct3D app, and vice versa.

And by the way: please do not compare OpenGL with DirectX. DirectX is a collection of components for 2D and 3D graphics, sound, music, input, networking, you name it. If you need to compare, at least compare with Direct3D, and not the whole suite.

Originally posted by zeckensack:
Re software rasterizers: why even think about that? Buy some cheap 3d accelerator and include that in your target specs as well.

One problem: my target market typically buys machines with integrated motherboards with non-OpenGL-compliant cards. So changing the target specs is out.

Originally posted by zeckensack:
As for texture application (in hardware), it consumes a little bandwidth, but apart from that single texturing is actually free. At least I couldn’t produce a scenario where the performance difference could be measured with very small textures.

Whoa! I’m not too well versed in texture jargon. Consumed bandwidth means slower? And what is the significance of single texturing? Or maybe I should read the Red Book and some tutorials some more.

Originally posted by Bob:
And by the way: please do not compare OpenGL with DirectX. DirectX is a collection of components for 2D and 3D graphics, sound, music, input, networking, you name it. If you need to compare, at least compare with Direct3D, and not the whole suite.

Sorry. I guess from a layman’s point of view they are the same. Anyway, I was referring to Direct3D.

Originally posted by Furrage:
One problem: my target market typically buys machines with integrated motherboards with non-OpenGL-compliant cards. So changing the target specs is out.

If you’re talking about i810, that would be sufficient for basic texturing. Neither blazingly fast nor versatile, but at least it works. If you’re talking about big iron (RageXL), then you’re right.

Whoa! I’m not too well versed in texture jargon. Consumed bandwidth means slower? And what is the significance of single texturing? Or maybe I should read the Red Book and some tutorials some more.
Single texturing as opposed to multitexturing, i.e. you don’t blend several textures together for the final result. Extra bandwidth consumption means slower, yes. If you use some of your memory bandwidth for texture reads, you have less remaining for other stuff.
[sidenote]
What most people don’t take into account is that there’s basically a cap on the amount of texels (and associated bandwidth) required for a scene. It’s (screen resolution) × overdraw × x, where x is 1 for point sampling, 4 for bilinear filtering and 8 for trilinear. In performance terms, this is very similar to the cost of going from a 16-bit to a 32-bit color buffer.
[/sidenote]
If you have hardware acceleration at all, textures are not much of a performance issue, especially since even crappy chips include some kind of texture caching.

If you have to do strictly software rendering, texturing will come at a hefty performance cost. Filtering in particular will burn lots of cycles. You can improve speed by going to point sampled (unfiltered) textures, but image quality may drop below acceptable levels.
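If you do go the point sampling route, it boils down to something like this. A minimal sketch; texId stands in for a texture object you have already created and uploaded:

[code]
#include <windows.h>   /* needed before gl.h on Windows */
#include <GL/gl.h>

/* Switch an existing texture object to point sampling (GL_NEAREST),
 * which skips the per-pixel filtering math entirely.
 * "texId" is a placeholder for a name from glGenTextures/glTexImage2D. */
void use_point_sampling(GLuint texId)
{
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
[/code]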

After all, the proof is in the pudding, so give it a try.

Yesterday became really disappointing for me when I loaded my models into a game I was creating and watched my frame rate get bombed to death. True, I knew I would get some slowdown, but this practically killed my will to code. I work a full time job, have a wife and responsibilities at home, and thus I typically have only 4 to 12 hours a week to do any programming. So when I spend a couple of weeks trying stuff and have it fall through at the end, it can be a severe setback: both the time wasted and the time it takes to learn something new and correct the problem.

Oh well, life goes on. Back to the drawing board and see what will happen next time.

The software OpenGL renderer is very slow. It’d be better for you to write your own, or, at the very least, try out Mesa.

Okay, new question, and hopefully not too far off topic. Most of the people I was building the game for have mainboard-type motherboards with SiS 6326-type video cards. I know it’s not fully OpenGL compliant, but it should support some features. How do I know which features it, or any card, supports? And is there a simple way to access those features?

Also, re pixel formats: the way I do it now is to try getting one format and, if it fails, use the default format. Are there tutorials on how to test for supported pixel formats and use the best one available?
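For illustration, a rough sketch of that fallback approach, assuming a standard Win32/wgl setup (the descriptor values are just placeholders, not a recommendation):

[code]
#include <windows.h>

/* Sketch of "ask for one format, fall back to the default" on Win32.
 * The fields filled in below are placeholder values. */
int pick_pixel_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int format;

    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 16;
    pfd.cDepthBits = 16;

    format = ChoosePixelFormat(hdc, &pfd);  /* closest match, or 0 on failure */
    if (format == 0)
        format = 1;                         /* fall back to the default format */
    DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);
    return SetPixelFormat(hdc, format, &pfd) ? format : 0;
}
[/code]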

In short, I want the simplest way to use any hardware features available to speed up my frame rate. I’m also going to start researching culling and quadtree techniques to see what they can do as well.

For supported features, just check what glGetString returns when calling it with GL_EXTENSIONS and GL_VERSION. All features from the reported version must be supported (check the spec to see what’s supported), and all the extensions can be used also.
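Something along these lines, as a minimal sketch (call it only after a rendering context has been created and made current, otherwise glGetString returns NULL):

[code]
#include <stdio.h>
#include <windows.h>   /* needed before gl.h on Windows */
#include <GL/gl.h>

/* Dump what the driver reports about itself. */
void print_gl_caps(void)
{
    printf("GL_VENDOR:     %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER:   %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:    %s\n", (const char *)glGetString(GL_VERSION));
    printf("GL_EXTENSIONS: %s\n", (const char *)glGetString(GL_EXTENSIONS));
}
[/code]

Any extension whose name shows up in the GL_EXTENSIONS string can be used on that card/driver.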

Just did a quick search on that chipset. You get single texturing, alpha blending and fog.

You won’t have hardware T&L so try to stay away from OpenGL lighting, and maybe do some of the transformation work yourself, i.e. load an identity modelview matrix; the driver should be able to recognize that and optimize the transformation away.
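As a sketch of that idea (verts and numVerts are placeholders for your own pre-transformed vertex data):

[code]
#include <windows.h>   /* needed before gl.h on Windows */
#include <GL/gl.h>

/* Keep the modelview matrix at identity and feed OpenGL vertices that
 * have already been transformed on the CPU. */
void draw_pretransformed(const GLfloat *verts, int numVerts)
{
    int i;

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();              /* the driver can skip the transform */

    glBegin(GL_TRIANGLES);
    for (i = 0; i < numVerts; i++)
        glVertex3fv(&verts[i * 3]);
    glEnd();
}
[/code]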

As for texturing, this is a low end chip with little memory, but you should be fine using small textures. Just don’t exceed your vidcard RAM limit. The original concept behind texturing is that it allows you to add detail without increasing geometry load. Take advantage of that.

You need to follow some of the pre-T&L optimization guidelines, which I can’t recall completely right now, but for a start: aggressively manage geometry on your side, since vertices are relatively expensive without transform hardware. Backface culling will make a difference. If you can manage it with your scenery, don’t clear buffers; instead draw backgrounds without depth testing enabled. This can save you some unnecessary memory bandwidth consumption.
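Roughly along these lines, as a sketch (draw_background and draw_scene stand in for your own drawing code):

[code]
#include <windows.h>   /* needed before gl.h on Windows */
#include <GL/gl.h>

void draw_background(void);   /* placeholders for your own code */
void draw_scene(void);

/* Sketch of the last two tips: cull back faces, and skip the color
 * clear by letting a full-screen background overwrite the old frame. */
void render_frame(void)
{
    glEnable(GL_CULL_FACE);        /* reject back-facing triangles */
    glCullFace(GL_BACK);

    /* No glClear(GL_COLOR_BUFFER_BIT): the background covers every pixel. */
    glDisable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);         /* don't write depth for the backdrop */
    draw_background();

    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glClear(GL_DEPTH_BUFFER_BIT);  /* depth still needs clearing here,
                                      unless you use further tricks */
    draw_scene();
}
[/code]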

Originally posted by Bob:
check what glGetString returns when calling it with GL_EXTENSIONS and GL_VERSION. All features from the reported version must be supported

Not familiar with glGetString. I’ll do some reading tonight and see what I make of it.

Originally posted by zeckensack:
You won’t have hardware T&L so try to stay away from OpenGL lighting

T&L = Texturing and lighting. It’s not much of a problem because I can always work around lighting (thank God for those maths classes).

Originally posted by zeckensack:
i.e. load an identity modelview matrix; the driver should be able to recognize that and optimize the transformation away.

That means skip the glTranslates, glRotates, etc.? That I can do. I actually used to do that, then learnt about glTranslate, glRotate, etc. By the way, are matrix multiplications done in hardware or in software? The way I used to code was slightly faster than matrix mults for single transforms, but it did not fit well into OpenGL’s matrix scheme.

Originally posted by zeckensack:
If you can manage it with your scenery, don’t clear buffers; instead draw backgrounds without depth testing enabled

That shouldn’t be a problem either. I was actually wondering if that could be done, so I could skip the glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT). I figured it could, since I draw the background to the entire screen.

Err… Any answers to the pixel format question?

Originally posted by Furrage:
Also, re pixel formats: the way I do it now is to try getting one format and, if it fails, use the default format. Are there tutorials on how to test for supported pixel formats and use the best one available?

T&L = transform and lighting,
hardware T&L = the transform and lighting are done on the GPU.

-Lev

Thanks for all the help, everyone. I’ll see if I can lick this FPS problem using the various suggestions. I really wish someone could answer the pixel format question, but I guess I’ll have to look elsewhere.