Slowing down a GeForce4!

I was thinking about buying a GeForce4 card for my computer (both for development and games).

If I write applications using OpenGL on my computer with a GeForce4, it's going to be (hopefully) quite fast. Obviously, not everyone is going to have a GeForce4 or an equivalent card. How do you test the performance of your application to see how it runs on, for example, a GeForce2, or even a TNT2 card, without actually owning the card?

Is there a way of slowing down your GeForce4 or disabling some of its features?!

Create a demo of your work which spits out statistical info (or sends it directly to a web page), put it on a web page, and post the link, for example here, and you'll see how it runs…
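For example, something as small as this would give you per-machine numbers to collect (just a rough sketch; the stats.txt file name and the once-per-second interval are placeholders, not anything standard):

#include <stdio.h>
#include <GL/glut.h>

static int frames = 0;
static int lastReport = 0;

void count_frame(void)                       /* call once at the end of each frame */
{
    int now = glutGet(GLUT_ELAPSED_TIME);    /* milliseconds since glutInit */
    frames++;
    if (now - lastReport >= 1000) {          /* report roughly once per second */
        FILE *f = fopen("stats.txt", "a");   /* placeholder output file */
        if (f) {
            fprintf(f, "%d fps on %s\n", frames,
                    (const char *)glGetString(GL_RENDERER));
            fclose(f);
        }
        frames = 0;
        lastReport = now;
    }
}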

Just wait a little before buying a GF4; the R300 from ATI is out soon, and NVIDIA is planning to put out the NV30 as well. Possibly we'll see some prices fall next week or so, dunno…

Any word on the release date for nv30 based products?

Prolly just before Christmas.

Probably more than 6 months; from what I read somewhere, they haven't even started production.

We'll see, but the VERY closed statements from NVIDIA so far about anything detailed are, well… unnerving…

I wouldn't believe everything you read on the net. Apart from lots of gossip and rumour-mongering, there's nothing to suggest they won't be ready by the time they officially stated.

What time did they officially state? I don't see any statement of a release date on their page…

It's www.tomshardware.com, a well-known page that has proven to be quite good. I don't read pages like nvnews and the like; they are so one-sided, and you know, one-sidedness leads to stupid religious/racist wars. (Not really here, but even buying an ATI or an NVIDIA card means money to other people, and if one company dies it costs jobs, which hurts people… so it's sort of important to be fair even here instead of fanatical.)

see ya.

And it was just a statement… (an NVIDIA guy at SIGGRAPH stated they are nowhere near any imminent release… and I mean, between December and August is quite some distance… so December could well be it… but I don't know an official date…)

Last I read, Mr. Huang said he expected the NV30 to be ready for Christmas. Even though he also said that as of this past July, it hadn’t been taped out yet.

Don't slow down your GeForce4, write better code! What I mean is, your animations should be keyed by time, not by framerate, so your games will run the same on any system that exceeds the minimum specs. Just remember to hard-code maximum frame and animation speeds so you don't munch excessive clock cycles on really fast machines. While I don't know if it is computationally feasible to mimic a lower fill rate in software, you can emulate a slower card by waiting an extra number of milliseconds after every frame is drawn…

#include <GL/glut.h>

/* CARD_SPEED_FACTOR = how many times slower the card you want to emulate is */
void render(void)
{
    int start = glutGet(GLUT_ELAPSED_TIME);            /* milliseconds */
    do_render();                                       /* your actual drawing code */
    int elapsed = glutGet(GLUT_ELAPSED_TIME) - start;
    /* pad the frame so it lasts about as long as it would on the slower card */
    while (glutGet(GLUT_ELAPSED_TIME) < start + elapsed * CARD_SPEED_FACTOR)
        ;                                              /* busy-wait out the difference */
}
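And for the "keyed by time" point above, a rough sketch of time-based movement (player_x and PLAYER_SPEED are just made-up names for whatever you animate):

static int lastTime = 0;

void update_animation(void)
{
    int now = glutGet(GLUT_ELAPSED_TIME);    /* milliseconds */
    float dt = (now - lastTime) / 1000.0f;   /* seconds since the last update */
    lastTime = now;
    if (dt > 0.1f)
        dt = 0.1f;                           /* clamp huge steps (example cap) */
    player_x += PLAYER_SPEED * dt;           /* units per second, not per frame */
}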

The solution is to ask someone who does have the card to test it for you. That’s the only accurate way to do it.

I agree, test it ON the card to see if it works the way you want. ALL my animations are based on frames (a lot easier to manipulate data per frame than per clock cycle when working with different OSs, in my opinion anyway).

Also, I simply tell GLUT not to call the screen update until 16 milliseconds have passed since the last update. This way I get roughly 60 FPS on cards that can put it out. I've also incorporated a method that tells the program, if the timer fires before the scene is done being drawn, to simply wait for the scene, then turn off some of the higher-level effects for a few frames, and if that helps, leave them off.
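Something roughly like this, as a sketch (draw_scene, detailLevel and the 16 ms target are placeholders for whatever your program actually does):

static int detailLevel = 2;                  /* placeholder: 2 = all effects on */
static int lastFrameMs = 0;

void display(void)
{
    int start = glutGet(GLUT_ELAPSED_TIME);
    draw_scene(detailLevel);                 /* placeholder scene drawer */
    glutSwapBuffers();
    lastFrameMs = glutGet(GLUT_ELAPSED_TIME) - start;
}

void timer(int value)
{
    if (lastFrameMs > 16 && detailLevel > 0)
        detailLevel--;                       /* frame too slow, drop an effect */
    glutPostRedisplay();                     /* schedule the next redraw */
    glutTimerFunc(16, timer, 0);             /* fire again in ~16 ms */
}

/* register once, e.g. glutTimerFunc(16, timer, 0); in main() */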

This kind of method works for me. It automatically sets things to an optimum level during game execution so it runs on ALL cards (within reason). It may lose some visual clarity for players, BUT at least the program is playable for ALL players.