How is Quake so smooth and good??????

ok… this is mind boggling. Whenever I make an OpenGL app in Win32 and just draw a few simple triangles and stuff (less than 1000 polygons), I get a horrendous frame rate of around 20 or less. Now, all these -real- games can draw a whole lot more polygons with texture maps and lighting at around 60-80 fps, in the same video mode. I got the WinQuake source code (which is old, btw, and could still whip my code's ass) and I saw that it doesn't really rely much on other APIs to do rendering and seems to do a lot of low-level stuff, using some library called MGL or something.
So… to actually make a smooth-running app, what sort of exotic low-level optimization am I supposed to do? I think I'm taking completely the wrong approach to OpenGL right now but can't figure out what. Am I supposed to do something special when setting up video modes, or am I actually supposed to toggle bits in video memory itself, or what? How do all you -real- programmers do it?

Games like Quake don't display many polygons in a single frame - no matter how big the level is, only the polygons that can be seen are processed each frame, which makes the rendering very fast. Quake uses BSP trees and a PVS (potentially visible set), which are culling tricks that quickly cut down the amount of drawing needed. Even so, you may be right about your rendering speed - have you checked other people's OpenGL demos?
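Roughly, the idea looks like this (just a sketch - the leaf structure, the decompressed PVS bitfield, and draw_leaf_polys() are made up for illustration, not Quake's actual data layout):

/* Sketch of PVS-style culling: the camera's leaf carries a bitfield
   saying which other leaves could possibly be visible from it.      */
typedef struct
{
    float mins[3], maxs[3];      /* bounding box of the leaf                */
    unsigned char *pvs;          /* bit i set => leaf i is possibly visible */
    int first_poly, num_polys;   /* polygons stored in this leaf            */
} leaf_t;

void draw_leaf_polys(leaf_t *leaf);   /* placeholder: renders one leaf's polygons */

void draw_visible(leaf_t *leaves, int num_leaves, int camera_leaf)
{
    unsigned char *pvs = leaves[camera_leaf].pvs;
    int i;

    for (i = 0; i < num_leaves; i++)
    {
        /* Whole leaves the precomputed PVS rules out are never touched. */
        if (!(pvs[i >> 3] & (1 << (i & 7))))
            continue;

        /* (A frustum test against leaves[i].mins/maxs would go here too.) */
        draw_leaf_polys(&leaves[i]);
    }
}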

Another thing to consider is that Quake3 has an optimized rendering pipeline… it essentially groups tris with the same texture, blend modes, etc., then renders them as a batch without doing any state changes in between. Also, if you noclip out of the world, you'll notice some really unusual visual artifacts. This happens because many of these games assume that you are rendering over the whole screen every frame, so clearing the color buffer is not necessary.
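To give a feel for what that grouping looks like, here's a sketch (not Quake3's actual code - tri_t and draw_tris() are invented for illustration): sort the triangles by texture first, then only change GL state when the texture really changes.

/* Sketch of grouping by texture to avoid redundant state changes. */
#include <stdlib.h>
#include <GL/gl.h>

typedef struct
{
    GLuint texture;     /* OpenGL texture object this triangle uses */
    GLfloat verts[9];   /* three xyz vertices                       */
} tri_t;

static int by_texture(const void *a, const void *b)
{
    GLuint ta = ((const tri_t *)a)->texture;
    GLuint tb = ((const tri_t *)b)->texture;
    return (ta > tb) - (ta < tb);
}

void draw_tris(tri_t *tris, int count)
{
    GLuint bound = 0;
    int i;

    /* Sort so that triangles sharing a texture end up next to each other. */
    qsort(tris, count, sizeof(tri_t), by_texture);

    glBegin(GL_TRIANGLES);
    for (i = 0; i < count; i++)
    {
        if (tris[i].texture != bound)
        {
            /* Only touch GL state when the texture actually changes. */
            glEnd();
            glBindTexture(GL_TEXTURE_2D, tris[i].texture);
            bound = tris[i].texture;
            glBegin(GL_TRIANGLES);
        }
        glVertex3fv(&tris[i].verts[0]);
        glVertex3fv(&tris[i].verts[3]);
        glVertex3fv(&tris[i].verts[6]);
    }
    glEnd();
}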

hmm… so in a 3D game, roughly how many polygons are on screen at any one time?

See, I'm not sure whether I should write the code to render polygons individually, or render an object as a whole (i.e. a display list). I wonder which one is actually more efficient.
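What I mean by the display list option is something like this (just a rough sketch - drawObject() stands for whatever immediate-mode calls actually draw the object):

GLuint buildObjectList(void)
{
    /* Record the object's geometry once at load time. */
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    drawObject();
    glEndList();
    return list;          /* later, each frame: glCallList(list); */
}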

Anyway, should I just be doing the standard stuff (getting an RC for a window, doing all the other WGL stuff, bla bla bla), or is there some low-level trick out there for real optimization?

Finally, what is the most efficient way of changing video modes? (My regular desktop is 1024x768 at 24-bit, but I have a stupid ATI Rage that can't do much, so I would like to test my programs at 640x480 in 16-bit. I've noticed a major difference in performance when the resolution changes, so this is important to me.)

There is no trick to getting a suitable RC, as long as you choose a pixelformat that uses your video card's OpenGL implementation rather than the Microsoft generic implementation.

To change display modes I use the following code:

BOOL FullScreen(int width, int height, int bpp)
{
    DEVMODE devmode;
    BOOL modeswitch = FALSE;
    LONG result;
    UINT i = 0;

    bFullscreen = FALSE;           /* global flag used elsewhere in my code */

    /* EnumDisplaySettings requires dmSize (and dmDriverExtra) to be set. */
    ZeroMemory(&devmode, sizeof(devmode));
    devmode.dmSize = sizeof(DEVMODE);

    /* Walk the list of modes the driver reports until one matches. */
    do
    {
        modeswitch = EnumDisplaySettings(NULL, i, &devmode);
        i++;
    }
    while (((devmode.dmBitsPerPel  != (DWORD)bpp)   ||
            (devmode.dmPelsWidth   != (DWORD)width) ||
            (devmode.dmPelsHeight  != (DWORD)height)) &&
           (i < 50));

    if (!modeswitch)
    {
        return FALSE;
    }
    else
    {
        result = ChangeDisplaySettings(&devmode, 0);
        if (result != DISP_CHANGE_SUCCESSFUL)
        {
            /* Might be running in Windows 95, let's try without the hertz change. */
            devmode.dmBitsPerPel = (DWORD)bpp;
            devmode.dmPelsWidth  = (DWORD)width;
            devmode.dmPelsHeight = (DWORD)height;
            devmode.dmFields     = DM_BITSPERPEL | DM_PELSWIDTH | DM_PELSHEIGHT;
            result = ChangeDisplaySettings(&devmode, 0);
            if (result != DISP_CHANGE_SUCCESSFUL)
            {
                return FALSE;
            }
        }
    }
    return TRUE;
}
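If it helps, a typical call (before creating the OpenGL window) looks something like this; error handling is up to you:

/* Switch to 640x480x16 before creating the window: */
if (!FullScreen(640, 480, 16))
{
    MessageBox(NULL, "Could not switch display mode.", "Error", MB_OK);
    return FALSE;
}

/* ...and when the app shuts down, put the desktop back: */
ChangeDisplaySettings(NULL, 0);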

hmm… I think that's the problem there. My program (I checked, using

const GLubyte *vendor = glGetString(GL_VENDOR);
const GLubyte *renderer = glGetString(GL_RENDERER);

) tells me Microsoft and GDI Generic. Should I do something to specifically use my hardware acceleration, or does opengl32.dll automatically use my hardware, as I had previously understood? Or is it just that the ATI Rage does not actually accelerate OpenGL?

According to your earlier posts, it clearly appears your ATI card does accelerate OpenGL, as witnessed by the good framerate in Quake.
It's all about choosing a proper pixelformat in your case. You need to test, or rather enumerate, each suitable pixelformat and see which OpenGL implementation becomes current for each one. Then you simply choose a pixelformat that allowed the ATI OpenGL implementation to be used. You can let your app do this tedious process once and store the result, then only repeat it on a later startup if the stored strings don't match the current vendor and version strings, if the stored pixelformat database is missing or corrupt, or if the user has requested the enumeration.
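As a starting point, here's a rough shortcut that catches the common case (not the full create-a-context-per-format-and-check-the-strings test described above, just a sketch; the function name and simplified criteria are only for illustration): ask DescribePixelFormat about each format and skip the ones flagged as the Microsoft generic software implementation.

/* Sketch: find a pixelformat that is NOT the Microsoft generic renderer. */
#include <windows.h>
#include <GL/gl.h>

int FindAcceleratedPixelFormat(HDC hdc, int bpp)
{
    PIXELFORMATDESCRIPTOR pfd;
    int i, count;

    /* DescribePixelFormat returns the highest pixelformat index. */
    count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);
    for (i = 1; i <= count; i++)
    {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

        if (!(pfd.dwFlags & PFD_DRAW_TO_WINDOW)) continue;
        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL)) continue;
        if (!(pfd.dwFlags & PFD_DOUBLEBUFFER))   continue;
        if (pfd.cColorBits != bpp)               continue;

        /* PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED means the
           Microsoft software renderer - exactly what we want to avoid. */
        if ((pfd.dwFlags & PFD_GENERIC_FORMAT) &&
            !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
            continue;

        return i;   /* first format the ICD (or an MCD) accelerates */
    }
    return 0;       /* nothing suitable found */
}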

Originally posted by wannabe |-|4x0r:
…it tells me Microsoft and GDI Generic. Should I do something to specifically use my hardware acceleration, or does opengl32.dll automatically use my hardware, as I had previously understood? Or is it just that the ATI Rage does not actually accelerate OpenGL?

Hi,
I’m new to this but if you run your app in full screen mode and then call your commands:

>> const GLubyte *vendor = glGetString(GL_VENDOR);

>> const GLubyte *renderer = glGetString(GL_RENDERER);

then it should automatically switch to hw accel mode and you should see different results instead of Microsoft and GDI Generic.

hope that helps,
-Quan

DFrey is right on the money, as usual.

Quan, there is nothing “automatic” about switching to hardware acceleration. It’s very intentional, I assure you; fullscreen or not. The app is only running “fullscreen” because the desktop resolution is resized to match that of the rendering view (except in some cases where the app queries the desktop resolution and creates its view to match the current registry values, but that’s neither here nor there).

Wannabe |-|4x0r, check for the latest drivers from http://www.ati.com to see if you can fix your lack of hardware acceleration. AFAIK, that chipset should be ready to rock. And, to answer your question about doing the “low level” stuff such as wgl calls, yes. Yes, absolutely. I avoided learning MFC and pixel format selection for weeks while I clung to glut and aux. And now I can’t believe I ever tried creating a GL app without specifying (and knowing) what my DC/RC and PFD were.
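For reference, the bare-bones sequence looks roughly like this (a sketch with most error handling left out; hWnd is your already-created window, and the pixelformat values are just examples):

/* Minimal WGL setup sketch. */
#include <windows.h>
#include <GL/gl.h>

HDC   hDC;
HGLRC hRC;

BOOL InitGL(HWND hWnd)
{
    PIXELFORMATDESCRIPTOR pfd;
    int format;

    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 16;    /* match your display mode's bit depth */
    pfd.cDepthBits = 16;

    hDC    = GetDC(hWnd);
    format = ChoosePixelFormat(hDC, &pfd);  /* may still hand back the generic format */
    if (!format || !SetPixelFormat(hDC, format, &pfd))
        return FALSE;

    hRC = wglCreateContext(hDC);
    if (!hRC || !wglMakeCurrent(hDC, hRC))
        return FALSE;

    /* Now glGetString(GL_RENDERER) tells you which implementation you really got. */
    return TRUE;
}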

Just my 2 cents.

Glossifah

oops… thanks for correcting me, Glossifah! Looks like I still have a lot more reading to do.
thanx.

Glad you didn’t take that as a flame, it wasn’t my intention.

Glossifah

A few facts I have read about the Quake3 engine:
-it uses no display lists (everything is immediate mode)
-it makes full use of glDrawElements/glDrawArrays to optimize triangle throughput (see the small sketch after this list)
-it renders something around 10K triangles per frame
-it uses a lot of extensions (if available), like multitexturing, compiled vertex arrays, etc., that optimize the code even further
-it uses potentially visible surfaces, in order to not draw parts of the world that are behind walls and such
-it uses level of detail to reduce detail in items and bezier surfaces in the distance
-it does not use the OpenGL lighting pipeline; world lighting is precalculated into lightmap textures
-it was written by one of the most experienced and talented groups of programmers in the field (doing it since Wolfenstein)
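To show what the glDrawElements point means in practice, here is a tiny sketch (not Quake3 code) that draws one quad as two indexed triangles from a vertex array:

/* One call submits all six indices instead of six glVertex3f calls. */
#include <GL/gl.h>

static const GLfloat verts[] = {
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    1.0f, 1.0f, 0.0f,
    0.0f, 1.0f, 0.0f
};
static const GLuint indices[] = { 0, 1, 2,  0, 2, 3 };

void drawQuad(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);

    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, indices);

    glDisableClientState(GL_VERTEX_ARRAY);
}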

Hello, I just want to complete coco's message by saying that "one of the most experienced and talented groups of programmers in the field (doing it since Wolfenstein)" also had the help of Michael "god" Abrash when Quake 1 was in the development stage.
Just so we don't forget that John C. is not the only programmer who does cool things; some others on other game teams are good too.

Hello!

Wannabe |-|4x0r, you must set your desktop bit depth to 16 bits if you want your ATI Rage Pro to accelerate OpenGL, and remember not to use a resolution higher than 800x600 if you have a 4 MB board.

Osku


I’m a beginner myself, but I wanted to say thank you for asking this question. It’s a good one, and you saved me having to ask it myself.

Osku, while it may be the case that his video card needs that particular color depth and resolution to allow accelerated OpenGL, the average person will not know this. Therefore, it only makes sense to enumerate the pixelformats (and associated rendering contexts) so your program can see which ones it should use. This way your program is compatible with more hardware and relieves the user of needing any special knowledge.


Hello!

Yes, you are right, DFrey, but wannabe |-|4x0r said that he usually has his desktop bit depth at 24 bits and noticed a major speedup when he changed to 16 bits. That was what I meant. You should of course always check which pixelformats are supported in hardware…