Writing for different graphics cards

I’ve written cyberloonies for my 3rd year project. It worked fine on my Voodoo 3 except that it ran too slowly. I’ve just found out that this is because the maximum texture size it supports is 256x256 and one of my textures is 1024x256.

Anyway, when I had to demonstrate it I was marked down because I had to run it on a GeForce 2, and although it ran very quickly, it had graphical glitches on all the moving objects. By that I mean that when one object is in front of another and the front object is moving (it never happens when it’s stationary), not every polygon of the front object gets rendered to the screen. The polygons that don’t get rendered seem to be chosen at random. Does anybody know why this is the case?

Also, can someone direct me to a site that lists the limitations of every current graphics card, as I want to get it working well on all of them?

What are the general rules of thumb for the best texture mapping performance? For example, I could split the 1024x256 texture into four 256x256 textures, but are there any graphics cards that display the former faster than the latter? Also, are there any graphics cards that render a quad quicker than its equivalent two triangles, or is it always faster to triangulate everything?

Originally posted by XoSkely10:
…when one object is in front of another and the front object is moving, not every polygon of the front object gets rendered to the screen. The polygons that don’t get rendered seem to be chosen at random. Does anybody know why this is the case?

Is GL_DEPTH_TEST enabled?

Oh, I forgot about that. I did have it enabled, but then I noticed that all I got was a blank (black) screen, and after trying lots of things I found that, bizarrely, the depth test was causing it, and there was nothing I could do to fix it other than turn the depth test off. That was when I was using my Voodoo 3. I suspect it’ll work fine with the depth test on using my new GeForce 2, but it’s still a complete mystery to me why it kept giving a blank screen when the depth test was on.

Were you clearing the depth buffer?

– Zeno
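In case it helps, here is a minimal sketch of the depth-buffer setup Zeno is hinting at (GLUT is only an assumption here; any context that actually allocates depth bits will do). If the window has no depth buffer, or the depth buffer is never cleared, enabling GL_DEPTH_TEST can give exactly that kind of blank screen:

/* Sketch only: assumes GLUT. The key points are requesting depth bits
   and clearing the depth buffer every frame. */
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);  /* ask for a depth buffer */
glutCreateWindow("cyberloonies");

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);   /* the default, shown for clarity */

/* at the start of every frame: */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);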

To deal with the problem of textures being too large, I just resize the image if necessary before uploading it:

int maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

/* clamp the image to the largest size the driver reports */
int newWidth  = texWidth;
int newHeight = texHeight;
if (texWidth > maxSize)
    newWidth = maxSize;
if (texHeight > maxSize)
    newHeight = maxSize;
if (newWidth != texWidth || newHeight != texHeight)
    rescale(newWidth, newHeight);
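For what it’s worth, a possible rescale() along those lines can be built on gluScaleImage (the helper name, the pixels pointer and the GL_RGBA format are just assumptions for the sketch):

/* Hypothetical helper: scale the original RGBA image down to
   newWidth x newHeight and upload it. 'pixels' is the source image.
   Needs <stdlib.h>, <GL/gl.h> and <GL/glu.h>. */
void rescale(int newWidth, int newHeight)
{
    unsigned char *scaled = malloc(newWidth * newHeight * 4);

    gluScaleImage(GL_RGBA,
                  texWidth,  texHeight,  GL_UNSIGNED_BYTE, pixels,
                  newWidth,  newHeight,  GL_UNSIGNED_BYTE, scaled);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, scaled);

    free(scaled);
}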

> For example I could split the 1024x256 texture into 4 256x256 textures but are there any graphics cards that display the former faster than the latter?
If a card supports the bigger texture (most recent cards support up to 1024x1024 or more), then it will be faster to just use one quad (fewer vertices to transform).
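If you do have to fall back to splitting on older cards, one way is to upload the four 256x256 tiles straight out of the original 1024x256 image using the unpack state (tileTex and pixels are assumed names in this sketch):

/* Sketch: cut a 1024x256 RGBA image into four 256x256 textures.
   'pixels' is the full image, 'tileTex' holds four texture names. */
glPixelStorei(GL_UNPACK_ROW_LENGTH, 1024);           /* rows span the full image */
for (int i = 0; i < 4; i++)
{
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, i * 256);   /* left edge of this tile */
    glBindTexture(GL_TEXTURE_2D, tileTex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);              /* restore defaults */
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);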

> Also are there any graphics cards that render a quad quicker than its equivalent 2 triangles?
One quad is usually faster than two independent triangles simply because a quad sends 4 vertices while two separate triangles send 6 (with a triangle strip or indexed triangles the count is 4 either way, so the gap disappears), so triangulating everything costs you little in practice.
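As a concrete immediate-mode sketch of the two forms (coordinates are illustrative only), both send just four vertices:

/* the rectangle as a single quad */
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();

/* the same rectangle as two triangles in a strip -- still four vertices */
glBegin(GL_TRIANGLE_STRIP);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
glEnd();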