Frame Rate Puzzle ?

Recently, one of our developers noticed a substantial improvement in frame rate occurring at the highest screen resolution our systems support.

After some more experimentation it turned out to be erratic … sometimes there was no speed improvement at all … (we opted for a frame rate controlled application and it’s limited to a maximum of 30 frames/sec.)

From our tests, we got the following information:

GeForce2 GTS
NVidia 43.45
Windows 98
Pentium 3 550MHz

Res.        Draw Time   Frame Rate
640x480        10 ms       30/s
800x600        10 ms       30/s
1024x768       39 ms       25/s
1280x1024      46 ms       21/s
1600x1200      70 ms       14/s
but …
1600x1200      10 ms       30/s

The draw time and frame rate figures are derived directly from our software …

The tests were performed with 4x antialiasing and 2x anisotropic filtering. (Other settings were tried, which improved the performance of the mid-range resolutions, but the sudden improvement at the highest resolution persisted.)

As you can see, the performance drop-off is roughly what you’d expect … until 1600x1200!

At the moment we’re completely at a loss to explain this effect … and we’re obviously keen to get this improvement 100% of the time, and hopefully to improve the speed of the lower resolutions as well.

Has anyone got any ideas?

Thanks

Andrew

Are you running full-screen or windowed?

If windowed, what’s your desktop resolution?

Personally I think you’ve got a bug in your timing code.

Also:

1600x1200 70 ms 14/s
but …
1600x1200 10 ms 30/s

You have 2 results for 1600x1200???

Possibly the card doesn’t have enough memory for 4x AA at 1600x1200 and automatically disables AA or lowers its quality.

Hmm … a timing bug? Yes, we thought that might be the case, but we checked the draw time by bracketing the draw routine with an accurate timing routine based on the CPU timers … and the two correlate.
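(For what it’s worth, the bracketing is nothing more elaborate than this sort of thing … DrawScene is just a stand-in for our actual draw routine:)

    #include <windows.h>
    #include <cstdio>

    extern void DrawScene();   // placeholder for our actual draw routine

    // Bracket the draw routine with the CPU's high-resolution counter and compare
    // the result against the figure reported by our frame rate controller.
    void TimeDraw()
    {
        LARGE_INTEGER freq, start, stop;
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&start);
        DrawScene();
        QueryPerformanceCounter(&stop);

        double drawMs = 1000.0 * (double)(stop.QuadPart - start.QuadPart)
                               / (double)freq.QuadPart;
        std::printf("draw time: %.2f ms\n", drawMs);
    }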

… it’s running fullscreen …

Card … auto-disabling features? … yes, I agree, but it must be doing something else as well … we get the same effect even when all the card’s features are turned off.

I think I might have to take this one up with NVidia. I was wrong … we only get this effect when we enable 4x antialiasing … I’d guess this is beyond the card’s capabilities and it gets disabled … but in doing so, something else gets disabled too and it’s off like a rocket!

Originally posted by Andrew Jameson:
(we opted for a frame rate controlled application and it’s limited to a maximum of 30 frames/sec.)
Why in the world do people occasionally think that’s a good idea? Completely beyond me …

Fixed frame rates suck, hang that over your bed if you like.

Why in the world do people occasionally think that’s a good idea?

Because it is a good idea.

It is generally better to play a game at a constant 30 than at a fluctuating 60. Why? Because the player can quickly tell the difference between the game running at 60 and at 30, and will be able to feel the framerate change. The game will feel more unstable than if you simply ran at a constant 30.

Fixed frame rates suck, hang that over your bed if you like.

Oh, yes. Because you said it, it must be right. You don’t have to actually make a case for an opinion or anything.

Originally posted by Korval:
Oh, yes. Because you said it, it must be right. You don’t have to actually make a case for an opinion or anything.
Oh my …

1) Performance scaling is a good thing. It’s nothing you need to fight against, so why would you? Because you can? Fluctuating frame rates may or may not be a problem, but they are not curable with fps caps anyway. What if the system is slower than yours and fluctuates between 12 and 25 fps? Cap it to 12 fps?
What if the system is mildly faster and ‘fluctuates’ between 45 and 50 fps? Why even bother?
This is the same old “How can I detect if this and that is hardware accelerated and disable it if it’s not?” thing.
Answer part 1: You don’t want to know.
Answer part 2: Give users the control to adjust performance to their needs and just leave it alone.

If you really want to achieve (<= important word) constant frames, then by all means, implement fps driven automatic LOD, but make sure users can turn it off.

2) It appears we’re not talking about VSync, which can at least eliminate double-buffered tearing. A 30 fps cap is not going to eliminate tearing. On the contrary, if you don’t simultaneously use VSync (and completely kill performance in the process), an fps cap may make tearing even more pronounced. Note that VSync does not violate point #1, because IHVs do offer means to turn it off completely. At least VSync has a purpose.

3) Fps measurement is always late. One frame of delay is the minimum; three is a more practical value. Putting the brakes on an application to cap it at a given framerate obviously creates a latency floor for graphics output (and for input, sound, and network, depending on how you glue your subsystems together). A ~100 ms delay between action and reaction is nothing short of irritating. A ‘normal’ user approach would be to reduce detail to get higher responsiveness. ‘Thanks’ to the fps cap, that approach won’t even work.

4) Fps limiters occasionally get applied as a lame excuse for not doing frame rate independent animation, i.e. a ‘lazy slopjob’ (see the sketch after this list). And, oh joy, if the user’s system is too slow to constantly reach the cap, it will not even work.

5) Every commercial game I’ve heard of that came with a frame limiter spawned user interest in how to deactivate it. Last one: GTA3. Coincidence?
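(Re point 4, frame rate independent animation is nothing fancy. A minimal sketch, with made-up position/velocity names, assuming a Win32 timer:)

    #include <windows.h>

    // Minimal sketch of frame rate independent animation: movement is scaled by the
    // measured frame time, so an object covers the same distance per second whether
    // the game runs at 25 fps or 125 fps. Position/velocity names are made up.
    void Animate(float& positionX, float unitsPerSecond)
    {
        static LARGE_INTEGER freq, last;
        static bool first = true;
        if (first)
        {
            QueryPerformanceFrequency(&freq);
            QueryPerformanceCounter(&last);
            first = false;
        }

        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        float dt = (float)(now.QuadPart - last.QuadPart) / (float)freq.QuadPart;
        last = now;

        positionX += unitsPerSecond * dt;   // advance by elapsed time, not by frame count
    }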

Korval, it’s just that there’s no obvious argument for fps caps. “We’ve opted for a 30 fps cap” to me sounds a little like “I’ve put racing holes in my car”. Harshness was applied on purpose, to get the point across as fast as possible.

I don’t need the ‘kids off the block’ on my case … and thanks Korval …

We needed to adopt a complex series of threaded routines within our application, and we are fully aware of how to implement both forms of graphics velocity control. In order to prevent the graphics display routines from hogging CPU time, we opted in this case for fps capping, and it works very well.
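(Conceptually the cap is nothing more exotic than this … a heavily simplified single-threaded sketch, not our actual code:)

    #include <windows.h>

    extern void DrawFrameAndSwap();   // placeholder for the real display routine

    // Heavily simplified idea of the 30 fps cap: draw, then sleep away whatever is
    // left of the ~33 ms frame budget so the display routine doesn't hog the CPU.
    // (GetTickCount is coarse; this is a sketch, not production code.)
    void CappedFrame()
    {
        const DWORD frameBudgetMs = 1000 / 30;    // ~33 ms per frame
        DWORD start = GetTickCount();

        DrawFrameAndSwap();

        DWORD elapsed = GetTickCount() - start;
        if (elapsed < frameBudgetMs)
            Sleep(frameBudgetMs - elapsed);       // hand the remainder to other threads
    }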

Anyway, if we all ignore the self-opinionated comments that have added nothing to the original problem … I now believe that the GeForce2 is unable to deal with anti-aliasing beyond 1280x1024 and as such it gets disabled … but in doing so something else happens, and there’s a performance increase beyond what would have been achieved had anti-aliasing been disabled in the first place … the reason why is probably buried deep inside the card’s firmware or its driver.

The effect has nothing to do with our frame rate control as that can be disabled and we can measure the draw time duration independently.

And what happens if you don’t use AA at all?

BTW, 30 fps is a little low for me.

Originally posted by Korval:
Oh, yes. Because you said it, it must be right. You don’t have to actually make a case for an opinion or anything.

I’d be pretty p*#@ed off if I saw that an app was only running at 30 fps and upgraded my PC to improve its crap performance, only to find it didn’t change.

I’d also say limited framerates suck :)

Personal opinions aside, I think the correct explanation is color buffer memory usage. A GF2 GTS has 32 or 64 MB of video memory, right? At 1600x1200 with 4x AA and a 32-bit color buffer, that takes around 30 MB. And that’s only the front color buffer: I assume you also have the back buffer (double buffering?), the Z-buffer, and maybe textures. It seems likely the card drops AA to free more memory, in which case 1600x1200 with no AA is as fast as, or faster than, say, 800x600 with 4x AA.
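Rough numbers, assuming the driver implements 4x AA as 2x2 supersampling with a 32-bit depth buffer (an assumption on my part, not something I’ve confirmed):

    1600 x 1200 x 4 bytes               ~  7.3 MB   (one plain 32-bit color buffer)
    1600 x 1200 x 4 bytes x 4 samples   ~ 29.3 MB   (supersampled color buffer)
    1600 x 1200 x 4 bytes x 4 samples   ~ 29.3 MB   (supersampled depth buffer)
                                        -----------
                                        ~ 66 MB before a single texture is uploaded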

I’d suggest taking screenshots to compare the quality, and see if your 1600x1200 test is AAed or not.
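Something as crude as this would do for grabbing the frames (a sketch only; the raw RGB dump can be opened in anything that reads headerless image data):

    #include <windows.h>
    #include <GL/gl.h>
    #include <cstdio>
    #include <vector>

    // Rough sketch: read back what is actually on screen and dump it to a raw RGB
    // file, so the AA'd and (suspected) non-AA'd 1600x1200 frames can be compared
    // offline.
    void DumpFrontBuffer(int width, int height, const char* path)
    {
        std::vector<unsigned char> pixels(width * height * 3);

        glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
        glReadBuffer(GL_FRONT);                // the buffer that was just displayed
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

        std::FILE* f = std::fopen(path, "wb");
        if (f)
        {
            std::fwrite(&pixels[0], 1, pixels.size(), f);
            std::fclose(f);
        }
    }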

Y.

Nope. If I got this right, frame rates are higher with ‘enabled’ (but non-functional) AA at 1600x1200 than without. Something for nothing, sort of.

Wild theory: might be some subpixel precision thing. The card sees AA and adjusts subpixel precision for it. The pixel iterator is most likely some sort of finite precision integer thingy and would run out of bits for a sufficiently large frame buffer, so it has to scale back.

Then it disables AA (because it’s out of memory) but it keeps the spp adjustment. I remember early 3d cards having control panel knobs for spp, and dropping it usually got you a little extra performance.

/wild theory

The funniest thing is that 25 fps is sometimes more acceptable than 125.
All you have to bother about is your camera!
Just search the net for myths about fps and you’ll get proof. Or take a look at Operation Flashpoint.

Andrew, try to check if you’re really seeing a performance increase, and not just a timing anomaly. I had a similar problem recently where maximizing my app’s window would make the framerate increase instead of decrease, and my app started to stutter somewhat.

I didn’t have FSAA or AF switched on, and the cards I noticed it on had 64 MB and 128 MB respectively, so I doubt lack of memory and/or automatic feature disabling had anything to do with it. What I eventually found out is that doing an explicit glFinish() at the end of every frame solved the problem.

What’s important to note here is that it was my framerate counter that went up, not the actual performance of my app! I could see the visuals get jerky at high resolutions, yet the framerate counter went up instead of down. I’ve been using the same timing code for ages, so I’m pretty confident that it’s okay…
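In code terms, the change was basically this (a sketch, with the surrounding timer details omitted):

    #include <windows.h>
    #include <GL/gl.h>

    // The key change: force the GPU to finish the frame before reading the clock.
    // Without the glFinish() you mostly measure how fast the driver buffers
    // commands, which can look faster than the frame really is.
    void EndFrame(HDC hdc, LARGE_INTEGER* frameEnd)
    {
        glFinish();                          // wait until the GPU has really finished drawing
        QueryPerformanceCounter(frameEnd);   // only now is the measured draw time meaningful
        SwapBuffers(hdc);                    // then present the frame
    }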

– Tom

Thanks Tom,
(Apologies for cross-posting onto your site too!)
At first we all got kind of excited about the sudden ‘improvement’, but soon realized it was the card failing to AA at high res … and up went the performance … but still higher than if we’d disabled AA before starting … so I think the previous response might well be the reason. Anyway, it’s not a feature we can build on!
Yes … we’ve had the same experience with timing, but we use a QueryPerformanceCounter around the draw routine as a secondary debug reference and it has always concurred with our frame rate control routine.

Andrew

Fixed frame rates suck, do they? You’d better tell that to your monitor, because it refreshes at a fixed rate regardless of your rendered frame rate.

Fixed frame rates are a good thing, particularly if the frame rate matches your monitor refresh rate. If not, a frame time that is some multiple of the monitor’s refresh period is desirable.

Fixed frame rates buy you consistent latency and timing. If you have a fixed frame rate you know WHEN the next frame will be displayed, if your frame rate varies with scene complexity or whatever then your animations can be off because you don’t know the elapsed time until it is too late and you’ve already got to start drawing the next one.

In integrated systems with multiple components fixed frame rates and syncs are useful tools to tie components together without visible anomalies.

Why in the world? Because some people care about quality issues, temporal aliasing and system level integration. Don’t blame them, by all means do your own thing but other people have differing priorities.
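Locking swaps to the refresh is exactly what the swap control extension is for. A sketch, assuming the driver actually exports WGL_EXT_swap_control:

    #include <windows.h>

    // Sync buffer swaps to the monitor refresh via WGL_EXT_swap_control, if the
    // driver exposes it. Requires a current OpenGL rendering context.
    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

    void LockSwapsToRefresh(int interval)
    {
        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

        if (wglSwapIntervalEXT)
            wglSwapIntervalEXT(interval);   // 1 = every refresh, 2 = every other, ...
    }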

[This message has been edited by dorbie (edited 05-05-2003).]

You can increase the performance when you set the projection (glOrtho …). When you set the range of the space to be less than the current one, your speed will be higher but the quality lower (it is not the same if you have a sphere with radius 50 and clip planes at -55 and +55, versus a sphere with r=500 and clip planes at -550 and +550, even if your viewport is the same). All parameters such as shading, textures, etc. are calculated before projection, and you will have many more calculations if the space between the clipping planes is larger. Your accelerator probably draws the 3D space into the buffer at a fixed set of “resolutions”, or maybe at one resolution, and simply scales the image. In that case you can’t know the optimal resolution (and refresh rate) of any particular video card, and the driver programmer may also use algorithms with non-linear performance, n·log(n) (for AA, for example) or something else, on top of the current performance of the operating system, the bus, etc. If you stop trying to speed up your program now and just finish it, by the time it reaches the market there will be much better accelerators and you can get one of them.

Regarding frame rate limiting:

I believe the optimal way to handle this is to limit the rate at which geometry/animations are updated. The rendering (frame) rate should absolutely not be limited to an arbitrary value. The end user should have the final say – he knows more about his system’s performance than you, the developer, do. If he doesn’t mind tearing, he might want a faster frame rate than the monitor refresh. Conversely, if his system can’t even reach your arbitrary frame cap, what’s the point? And later, if he upgrades his system, he’s going to want to see some higher frame rates, right?

In a nutshell, keep it all scalable (rough sketch below).
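Roughly what I mean, as a sketch only; UpdateWorld/RenderWorld are placeholders for whatever the app actually does:

    #include <windows.h>

    extern void UpdateWorld(float dtSeconds);   // fixed-rate simulation step (placeholder)
    extern void RenderWorld();                  // draws as fast as the hardware allows (placeholder)

    // Sketch of decoupling update from rendering: the simulation advances in fixed
    // 1/30 s steps, while rendering runs uncapped and simply shows the latest state.
    void RunLoop(volatile bool& quit)
    {
        const float step = 1.0f / 30.0f;
        LARGE_INTEGER freq, prev, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&prev);
        float accumulator = 0.0f;

        while (!quit)
        {
            QueryPerformanceCounter(&now);
            accumulator += (float)(now.QuadPart - prev.QuadPart) / (float)freq.QuadPart;
            prev = now;

            while (accumulator >= step)   // catch up in fixed steps
            {
                UpdateWorld(step);
                accumulator -= step;
            }
            RenderWorld();                // frame rate is whatever the card can manage
        }
    }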

Oh, and one final point, unless you have a vision impairment of some sort, 60 fps is noticeably smoother than 30 fps. 'Nuf said. =P

Originally posted by dorbie:
You’d better tell that to your monitor, because it refreshes at a fixed rate regardless of your rendered frame rate.

Hands up all those with a monitor running at 30Hz? Hands up all those with a monitor that ONLY does 60Hz (or is it 50 in the US)? Hands up all those who are fans of Shutter glasses? (Try running them at 30fps)

I can think of one situation where a fixed low frame rate (i.e. not letting your application run as fast as possible) might be useful: an LCD.