FPS Problem....

Hi,
I recently checked out the code in here dealing with FPS calculation, as well as the code in the OpenGL SuperBible that covers the same topic. Both suggest using the QueryPerformanceCounter and QueryPerformanceFrequency functions to get timings, such as…

LARGE_INTEGER blah;                 // counter frequency
LARGE_INTEGER startblah, endblah;
float fps;

void init()
{
    // other stuff
    QueryPerformanceFrequency(&blah);
    // other stuff
}

then…

void display()
{
    QueryPerformanceCounter(&startblah);
    render_scene();
    glutSwapBuffers();
    QueryPerformanceCounter(&endblah);
    fps = (float)(endblah.QuadPart - startblah.QuadPart) / (float)blah.QuadPart;
}

So every time GLUT calls display, it computes the FPS for that frame. When I print that value out, I get around 10 frames per second for a 6400-face scene, and 10 frames per second for a 3200-face scene, full screen or not… When I render a 12-face cube, I get in the realm of 290 frames per second…

I’m using a Voodoo Banshee with 16 MB… I read somewhere that the Voodoo cards don’t like to accelerate unless they’re in full-screen mode, hence that test as well. I’m using the GLSetup program to get the opengl32.dll file I need, so I assumed it would be up to date.

I see a lot of animations at work along with their frame rates, and my renderings seem a lot smoother than other 10 frame/sec renderings I’ve seen.

Am I calculating the frame rate wrong? Am I not getting hardware acceleration? What’s going on here??


I don’t know about the performance counter, I’ve never used it; I use GetTickCount(). I’m sure it’s very similar though, and you can probably figure out how to switch it over. In any case, here’s what I do:

DWORD startTime;
int   framesDrawn;
int   lastFps;    // this will contain the FPS count, which you can display

void init()
{
    startTime   = GetTickCount();
    framesDrawn = 0;
    lastFps     = 0;
}

void display()
{
    <render, render, render>

    // update number of frames drawn since the last FPS update
    framesDrawn++;

    if( ((GetTickCount() - startTime) / 1000) > 0 )
    {
        // update FPS count
        lastFps = framesDrawn / ((GetTickCount() - startTime) / 1000);
        // set new start time for the next update
        startTime = GetTickCount();
        // reset number of frames drawn
        framesDrawn = 0;
    }
}

Originally posted by BwB:
I don’t know about the performance counter, I’ve never used it; I use GetTickCount(). I’m sure it’s very similar though, and you can probably figure out how to switch it over. In any case, here’s what I do:

That’s what I was doing with my routine before, but I noticed that it would only update when display was called, and since that’s a callback in GLUT, I couldn’t be sure it would be called in a timely manner.

I’m starting to think that maybe I’m not getting the hardware acceleration I thought I was. I did, however, get a rise of about 4 fps when I enabled culling. Not that exciting, but hey, it’s 4.

I know this gets into the realm of “how do you know when you are getting hardware acceleration,” but… how do you? I would think that at 6400 faces it should be running FASTER than 30 fps… at least. Shouldn’t it?

If display() isn’t being called enough, use glutPostRedisplay() (I think; I don’t use GLUT). The method I posted above will give you the actual number of times your scene has been drawn in one second (or as close as it can get). I don’t know how many polys a good card can handle, though… I’ve never had more than a thousand or two inside the view frustum, and it seems like they are taking way too long. I’ve been having quite a few problems with the “speed” issue. If you find anything out, let me know.
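For what it’s worth, a minimal GLUT sketch of that approach might look like the following. The idle callback, the window setup, and the trivial display body are assumptions on my part, not code from this thread; glutIdleFunc and glutPostRedisplay are the standard GLUT calls.

#include <GL/glut.h>

/* Hypothetical display callback standing in for the real scene rendering. */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... render the scene here ... */
    glutSwapBuffers();
}

/* Idle callback: ask GLUT to redraw whenever it has nothing else to do,
 * so display() runs continuously instead of only on window events. */
void idle(void)
{
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("fps test");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}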

There appears to be something funny with your fps calculation, Richelieu. You have:
fps = (float)(endblah.QuadPart - startblah.QuadPart) / (float)blah.QuadPart;
which is basically (ticks per frame) / (ticks per second), and those units work out to seconds per frame. What you need to end up with is frames per second, so use:
fps = (float)blah.QuadPart / (float)(endblah.QuadPart - startblah.QuadPart);
where the difference endblah.QuadPart - startblah.QuadPart is the number of ticks spent on the frame.

Or you could follow BwB’s example above; that appears to be fine and similar to what I use myself. I have it optimized a bit by removing the division by 1000 and incrementing my frame counter by 1000 instead of 1. But I also check the fps every frame; if you instead check it every few frames, that optimization isn’t critical at all.
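In code, that counter tweak might look roughly like this. It is only a sketch: the variable names follow BwB’s post above, and the updateFps helper is an invented name, not DFrey’s actual code.

#include <windows.h>

DWORD startTime;      /* set to GetTickCount() in init(), as in BwB's post */
DWORD framesDrawn;    /* bumped by 1000 per frame instead of by 1 */
DWORD lastFps;

/* Hypothetical helper: call once per frame, after rendering. */
void updateFps(void)
{
    DWORD elapsedMs;

    framesDrawn += 1000;                    /* 1000 per frame, so no /1000 below */
    elapsedMs = GetTickCount() - startTime;
    if (elapsedMs > 0)
    {
        lastFps     = framesDrawn / elapsedMs;  /* (frames * 1000) / ms = frames/sec */
        startTime   = GetTickCount();
        framesDrawn = 0;
    }
}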


Originally posted by DFrey:
There appears to be something funny with your fps calculation, Richelieu. You have:
fps = (float)(endblah.QuadPart - startblah.QuadPart) / (float)blah.QuadPart;
which is basically (ticks per frame) / (ticks per second), and those units work out to seconds per frame. What you need to end up with is frames per second, so use:
fps = (float)blah.QuadPart / (float)(endblah.QuadPart - startblah.QuadPart);
where the difference endblah.QuadPart - startblah.QuadPart is the number of ticks spent on the frame.

Or you could follow BwB’s example above; that appears to be fine and similar to what I use myself. I have it optimized a bit by removing the division by 1000 and incrementing my frame counter by 1000 instead of 1. But I also check the fps every frame; if you instead check it every few frames, that optimization isn’t critical at all.

I actually tried dividing 1 by that whole value (1 / (everything else)) and I got something like 0.1. I thought that was RADICALLY wrong, so I kept it the other way. I’ll check it out, but in any event, if I’m getting 15 fps or somewhere close to that, isn’t that low for a 3D card and 6400 faces?

Originally posted by Richelieu:
I actually tried dividing 1 by that whole value (1 / (everything else)) and I got something like 0.1. I thought that was RADICALLY wrong, so I kept it the other way. I’ll check it out, but in any event, if I’m getting 15 fps or somewhere close to that, isn’t that low for a 3D card and 6400 faces?

I was just messing around with this tonight and thought of something. I was using one of NeHe’s tutorials (#6, in case anyone is curious) and noticed the 3dfx splash screen displayed when it ran, which makes me think it was doing hardware rendering. Now whenever I run my own program, I never get that splash screen. Could this be a hint that I’m only getting software rendering?

Or does GLUT window creation usually just not allow that type of thing to happen? (The tutorial uses native Win32 window calls; I use GLUT.) Any thoughts?

If you aren’t seeing the splash screen, then that basically means one of two things: either OpenGL is not using hardware acceleration, or the program has turned off the splash screen by setting the FX_GLIDE_NO_SPLASH environment variable to 1 (or it was already set to 1).
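A quick way to rule the second case out is to print that variable from your own process. This is just a sketch using standard C’s getenv; it only reports what your program’s environment contains.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Report whether the splash-suppression variable is set for this process. */
    const char *v = getenv("FX_GLIDE_NO_SPLASH");
    printf("FX_GLIDE_NO_SPLASH = %s\n", v ? v : "(not set)");
    return 0;
}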


Originally posted by DFrey:
If you aren’t seeing the splash screen, then that basically means one of two things: either OpenGL is not using hardware acceleration, or the program has turned off the splash screen by setting the FX_GLIDE_NO_SPLASH environment variable to 1 (or it was already set to 1).

Aha! The plot thickens, then. OK, unless Visual C++ 5.0 turns that off without me knowing it (I wrote the program entirely myself, so I know what variables I set and so forth), then I’m NOT using hardware acceleration. I guess the next question is… why? Is there a way to check which OpenGL implementation I compiled against, to see if the one the tutorial uses is different from the one I’m “using”?

OK, to see which OpenGL implementation is being used, use glGetString to get the GL_RENDERER and GL_VENDOR strings. You of course need to query them after the rendering context has been created and made current.
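A minimal sketch of that check follows; the printGlInfo name is just an example, and GL_VERSION is an extra, optional query alongside the two strings mentioned above.

#include <windows.h>
#include <stdio.h>
#include <GL/gl.h>

/* Call after the rendering context is current, e.g. right after
 * glutCreateWindow() or at the top of the display callback. */
void printGlInfo(void)
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}

If GL_VENDOR comes back as Microsoft and GL_RENDERER as GDI Generic, you are on the software renderer rather than the 3dfx driver.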

Originally posted by DFrey:
OK, to see which OpenGL implementation is being used, use glGetString to get the GL_RENDERER and GL_VENDOR strings. You of course need to query them after the rendering context has been created and made current.

OK, I did a check on both of those strings, and both report the 3Dfx drivers.

So, where can I go to find out which OpenGL commands or functions are not hardware accelerated by the OpenGL driver for a Voodoo Banshee?