OpenGL performance

Hi,

I have a program that renders a room (walls, ceiling and floor) with mapped textures. On my computer (AMD 500 MHz, GeForce2 GTS) it runs at 85 fps (capped by vsync at 85 Hz), but on a Celeron 800 MHz with an Intel 82815 graphics chipset it only gets 15 to 35 fps! Why is it so slow? I know that graphics operations are handled by the GPU, but Quake 3 Arena on that second computer runs at 50 fps or more!
I’m using no GL extensions and no display lists, and I only render a few polygons…

Hello glYaro,

I don’t know if this will help, but if you would like to view the Intel Graphics Gaming Problems, click here

Edit: I have done a little bit more research, click here

Please let me know if this helps; if not, I will look into this matter further.

  • VC6-OGL

[This message has been edited by VC6-OGL (edited 01-31-2003).]

You may be falling back to software rendering on the Intel machine. Try calling glGetString(GL_VENDOR) and glGetString(GL_RENDERER). If you are in software, expect to see “Microsoft” and “GDI” or similar.

Steve
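
For what it’s worth, a minimal sketch of that check (call it after the GL context has been made current; printing to the console is just one option):

// Sketch: report who is actually driving the GL context. On the software path
// you will typically see something like "Microsoft Corporation" / "GDI Generic".
const GLubyte *vendor   = glGetString(GL_VENDOR);
const GLubyte *renderer = glGetString(GL_RENDERER);
printf("GL_VENDOR   = %s\n", vendor);
printf("GL_RENDERER = %s\n", renderer);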

You should definitely be able to get more than 15-35 fps on an i815 with only a few polygons.

soconnor’s suggestion to make sure you’re not getting a generic pixel format is a good one. If you get a generic pixel format, your program is using the Microsoft OpenGL driver, not the Intel OpenGL driver. The Microsoft driver doesn’t use any hardware acceleration, so your performance will be terrible.

Can you send some more info about your app? Are you requesting a stencil buffer? Are you using any weird blending or texenv modes?

– Ben
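
One way to test for a generic (unaccelerated) pixel format is to inspect the flags of the format your window actually ended up with. A rough sketch, assuming <windows.h> is included and hDC is your window’s device context:

// Sketch: check whether the pixel format in use is the generic Microsoft one.
PIXELFORMATDESCRIPTOR pfd;
int format = GetPixelFormat(hDC);
DescribePixelFormat(hDC, format, sizeof(PIXELFORMATDESCRIPTOR), &pfd);

if ((pfd.dwFlags & PFD_GENERIC_FORMAT) && !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
    MessageBox(NULL, "Generic pixel format - no hardware acceleration", "Info", MB_OK);
else
    MessageBox(NULL, "Hardware accelerated pixel format", "Info", MB_OK);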

There might be more than one reason for your program running slowly. Are you doing a lot of calculations every frame? Some error could be part of the problem too. He can’t be running on a generic (software) implementation, because Quake III would not run at over 50 fps on that system. The problem is somewhere else. Post some code!

I think it’s pretty obvious. You are using one or more OpenGL lights, right? (A quick way to test this is sketched below the list.)

a.) the i815 only accelerates 16-bit modes
b.) the i815 lacks fillrate
c.) last but not least, the i815 has no TnL
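
A rough test sketch, if you want to see whether lighting is the bottleneck (lightingTestOff is a hypothetical flag you could toggle with a key press; DrawGLScene() stands for whatever call renders your scene):

// Render with lighting disabled and compare the frame rate. Without hardware
// TnL, per-vertex lighting runs on the CPU, so a big jump in fps with lighting
// off points at lighting as the bottleneck.
if (lightingTestOff)            // hypothetical flag, e.g. toggled by a key press
    glDisable(GL_LIGHTING);
else
    glEnable(GL_LIGHTING);
DrawGLScene();                  // whatever call draws your scene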

Use a timer in sections of your code so you can see where bottlenecks are.

I throttle the fps to 100 in case vsync is disabled, because in the simplest scenes the frame rate exceeds 500 fps on my GeForce4 Ti 4200. That is a waste of CPU and GPU processing power, not to mention a source of unwanted heat.

This timer uses winmm.lib and is accurate to 1 millisecond. I could use QueryPerformanceCounter for microsecond precision, but it has issues on certain chipsets that cause it to jump 1 or 2 seconds under certain loads, and in any case I don’t need it to measure fps. I store the true time taken to render the frame in renderTime so I can display the true fps even when it gets throttled to 100 fps. Display renderTime in your app to see how long each frame actually takes to render.

Below is a snippet from my program:

#pragma comment (lib, "Winmm.lib")
#include <windows.h>

// constants
const float timeScale = 0.001f;      // milliseconds -> seconds
const BYTE  throttle  = 10;          // 10 ms = 100 fps limit

// variables
DWORD  startTime = 0, stopTime = 0;
double elapsedTime = 0, renderTime = 0;

// main loop
timeBeginPeriod(1);                  // ensures millisecond precision on NT/2000/XP systems
startTime = timeGetTime();
ProcessUserInput();                  // input and physics
frames++;
DrawGLScene();
SwapBuffers(hDC);
//Sleep(9);
stopTime = timeGetTime();
elapsedTime = stopTime - startTime;
renderTime = elapsedTime;            // keep the true render time in case we throttle

if (throttle > elapsedTime)          // keeps the frame rate below 100 fps
{                                    // when vsync is disabled
    Sleep((DWORD)(throttle - elapsedTime));
    elapsedTime = throttle * timeScale;   // convert to seconds
}

updateFps(renderTime * timeScale);

timeEndPeriod(1);                    // you must call this to restore the timer when you are done!

Hi,

My program is very simple:
no complicated equations, 1 light source,
while(1) loop with RenderScene() inside.

But I think I found the problem. I set the polygon mode to glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) and got 80 fps! Then I checked the texture files: one of the textures I’m using is a 1024x1024 bmp file! I think that texture slows down my program, but I’m not sure it’s the only reason. I am sure the problem is in the texture mapping, so I should experiment with the texture parameters and texture environment. Maybe something is wrong with how I load the textures and it slows the program down later? I don’t know, which is why I’m posting my loading code. If somebody can tell me whether anything is wrong with it, I’ll be grateful.

AUX_RGBImageRec* LoadBMP(char *path)
{
    FILE *file = NULL;
    char errInfo[100];

    file = fopen(path, "r");
    if (!file)                                          // test the FILE*, not the path pointer
    {
        strcpy(errInfo, "Nie mozna odnalezc pliku: ");  // "Cannot find file: "
        strcat(errInfo, path);
        MessageBox(NULL, errInfo, "Blad", MB_OK);       // "Blad" = "Error"
        return NULL;
    }

    // the file exists, so close the test handle and let the aux library load it
    fclose(file);
    AUX_RGBImageRec *image = auxDIBImageLoad(path);
    if (!image)
    {
        strcpy(errInfo, "Nie mozna zaladowac pliku: "); // "Cannot load file: "
        strcat(errInfo, path);
        MessageBox(NULL, errInfo, "Blad", MB_OK);
    }
    return image;
}

// loads a texture BMP
AUX_RGBImageRec* LoadTexBMP(char *FileName)
{
    char path[100];
    strcpy(path, TEX_PATH);
    strcat(path, FileName);
    return LoadBMP(path);
}

void LoadTextures()
{
    int i;
    AUX_RGBImageRec* TextureImage[MAX_TEXTURES];
    memset(TextureImage, 0, sizeof(void *) * MAX_TEXTURES);   // start with all slots NULL

    TextureImage[TEX_FLOOR_OUT] = LoadTexBMP("floor001.bmp");
    TextureImage[TEX_FLOOR_IN]  = LoadTexBMP("floor01.bmp");
    TextureImage[TEX_CEIL_OUT]  = LoadTexBMP("ceil02.bmp");
    TextureImage[TEX_WALL_OUT]  = LoadTexBMP("wall02.bmp");
    TextureImage[TEX_WIN]       = LoadTexBMP("win01.bmp");

    // load fonts
    TextureImage[TEX_FONT_01] = LoadFontBMP("font01.bmp");

    glGenTextures(MAX_TEXTURES, &textureArray[0]);

    for (i = 0; i < MAX_TEXTURES; i++)
    {
        // guard: if a texture failed to load, skip it so the program doesn't crash
        if (TextureImage[i])
        {
            glBindTexture(GL_TEXTURE_2D, textureArray[i]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TextureImage[i]->sizeX, TextureImage[i]->sizeY,
                         0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[i]->data);
        }
    }

    // free the system-memory copies now that OpenGL has its own copy
    for (i = 0; i < MAX_TEXTURES; i++)
    {
        if (TextureImage[i])
        {
            if (TextureImage[i]->data)
                free(TextureImage[i]->data);
            free(TextureImage[i]);
        }
    }
}

Before even looking at your texture mapping code, can you verify that your program is using a hardware-accelerated pixel format?

Just do this:

const GLubyte *vendor;
vendor = glGetString(GL_VENDOR);
printf("Vendor = %s\n", vendor);

If the vendor is “Intel”, you’re using a hardware accelerated pixel format. I suspect, however, that the vendor will be “Microsoft Corporation”, which means that everything is drawn with the Microsoft software renderer.

– Ben

And make sure that you enable mipmapping when using images as large as 1024x1024.
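
For example, here is a minimal sketch of what the texture upload in the LoadTextures() code above could look like with mipmaps. It uses gluBuild2DMipmaps, so you also need <GL/glu.h> and glu32.lib; the TextureImage/textureArray names are from the posted code:

// Sketch: build a full mipmap chain and use a mipmapped minification filter
// in place of the plain glTexImage2D call.
glBindTexture(GL_TEXTURE_2D, textureArray[i]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB,
                  TextureImage[i]->sizeX, TextureImage[i]->sizeY,
                  GL_RGB, GL_UNSIGNED_BYTE, TextureImage[i]->data);
// Note: gluBuild2DMipmaps rescales images whose sizes are not powers of two,
// and non-square sizes such as 64x128 are fine as long as each dimension is
// a power of two.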

Hi,

I’ve checked the vendor; it’s Intel.
Now I’m sure it’s the texture mapping.
I used a small texture (128x128) instead of the big one (1024x1024) and the program runs at about 50 fps.

How do I enable mipmapping? Do I have to set it with glTexParameter? And do the texture width and height have to be powers of 2 (16x16, 128x128, etc.), or can I, for example, read a texture which is 64x128?

thanks
yaro