EEK! 100fps->40fps!



TheGecko
03-25-2001, 07:17 AM
Ok, ouch! I installed Win2K about 2 days ago and all my OpenGL apps dropped from 100fps down to ~55fps! Is there something special I have to do in order to program OGL apps for Win2K? (Does it have anything to do with the cursor shadow or something?)

If any Win2K developers here can give me an answer, it would be most appreciated.

davepermen
03-25-2001, 07:29 AM
Possibly you have vsync enabled... and it depends on which drivers you use, too.

TheGecko
03-25-2001, 07:32 AM
Hmm... I'm using the latest Detonator drivers for Win2K from nVidia.

How do I disable vsync?

davepermen
03-25-2001, 08:08 AM
Go to the Windows display settings, where you choose the resolution/color depth, then Advanced, then the GeForce2 MX tab (or whatever YOU have :) ), then Additional Properties, then OpenGL Settings, and there disable Vertical Sync.

Tom Nuydens
03-25-2001, 08:43 AM
I've heard that the cursor drop shadow can interfere with 3D apps as well. I'd disable it to make sure.

- Tom

TheGecko
03-25-2001, 11:23 AM
Wow, this is really weird...

Without VSync, my blank OGL app runs at ~500fps (is that even possible on a PII 450 with a GeForce2 GTS, Win2K, 256MB RAM?) And with VSync turned on, I get about 56fps. Needless to say, this is a huge gap. I can't help but feel something else is going on.

Before I installed Win2K on my machine, I had WinME with VSync on, and one test app ran at about 100fps while the other ran at 91fps. Now compare the difference:

Win2K - VSync ON
App1: 56fps
App2: 56fps (both apps ran at the same speed)

Win2K - VSync OFF
App1: ~530fps
App2: ~250fps

WinME - VSync ON
App1: ~100fps
App2: ~92fps

In case you're wondering, App1 does nothing but draw a grid of lines while I move the camera around with the keyboard and mouse. App2 is my test app that streams an AVI file onto a texture (using glTexSubImage2D()) and draws a quad with that texture map.
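The per-frame upload in App2 is basically just this (trimmed right down for the sketch - the AVI decoding and the initial glTexImage2D() call happen elsewhere, and the 256x256 GL_RGB frame size is just an assumption here):

#include <windows.h>
#include <GL/gl.h>

// Trimmed sketch of App2's texture streaming. Assumes the texture was created
// once with glTexImage2D() at 256x256, GL_RGB, and that 'pixels' already holds
// the next decoded AVI frame in RGB byte order.
void UploadAviFrame(GLuint texture, const unsigned char *pixels)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D,      // target
                    0,                  // mip level
                    0, 0,               // x/y offset into the existing texture
                    256, 256,           // width/height of the update
                    GL_RGB, GL_UNSIGNED_BYTE,
                    pixels);
    // ...then the quad gets drawn with this texture bound.
}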

So, does anybody know what's going on?

BwB
03-25-2001, 11:41 AM
Is it possible that in WinME you were using a different monitor refresh rate? The way VSync works is it waits until your monitor does a vertical refresh (when the electron beam is travelling from the bottom right back to the top left of the screen). If your refresh rate is at 60Hz you'll get approx. 60fps. If your refresh is at 100Hz you'll get approx. 100fps. This should be settable somewhere in your display properties as well. Also, the DEVMODE structure you pass to ChangeDisplaySettings has the member "dmDisplayFrequency"; you can use this to set which frequency you want (make sure you don't go over your monitor's limit though). I've recently added this into my own app. I have a utility that enumerates all the display modes (including refresh frequency) and allows me to pick which one I want my app to use.
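A rough sketch of what I mean (error handling trimmed, and the 1024x768x32 @ 75Hz values are only an example - enumerate the modes with EnumDisplaySettings first so you stay within your monitor's limits):

#include <windows.h>

// Request a specific resolution, colour depth AND refresh rate in one go.
// Returns TRUE if the driver accepted the mode.
BOOL SetDisplayMode(int width, int height, int bpp, int hz)
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize             = sizeof(dm);
    dm.dmPelsWidth        = width;      // e.g. 1024
    dm.dmPelsHeight       = height;     // e.g. 768
    dm.dmBitsPerPel       = bpp;        // e.g. 32
    dm.dmDisplayFrequency = hz;         // e.g. 75
    dm.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL | DM_DISPLAYFREQUENCY;

    return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
}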

TheGecko
03-25-2001, 07:29 PM
In WinME I had the same refresh rate (75Hz) as well as VSync on, and everything was working fine (see stats above).

Well, let's look at it another way. Programmatically speaking, what kind of frame rate am I supposed to expect on my machine if I just wrote an OpenGL app (no GLUT) that does nothing (just draws a blank screen) in the main loop?

My machine specs are as follows:
Win2000 Pro
256MB RAM
GeForce2 GTS 32MB DDR RAM with latest Detonator drivers for Win2K
19" A90 monitor with a refresh of 75Hz (There is no way in hell I'm going below this because my eyes start to hurt. I can sort of see my monitor refreshing)
VSync ON

Any ideas?

zed
03-25-2001, 07:43 PM
hmmm with vsync on, how about 75fps? ;^)
I believe your program's timer is incorrect.
Disable the vsync option (in display properties) + try the win swap interval extension program on my site (URL in profile). With nVidia cards on Win2000 it syncs to refresh in a window. Does this hold true for Win9x as well, or is it just fullscreen? Anyone?
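for reference, you can also set it from code through the WGL_EXT_swap_control extension - roughly like this (only a sketch, and it assumes the driver actually exports the extension):

#include <windows.h>

typedef BOOL (APIENTRY *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// interval 0 = vsync off, 1 = sync to every vertical retrace.
// Needs a current OpenGL rendering context; if the driver doesn't expose
// wglSwapIntervalEXT, wglGetProcAddress() returns NULL and we just bail out.
void SetSwapInterval(int interval)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(interval);
}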

prekipik
03-26-2001, 12:45 AM
windows 2000 implies a loss of performance in 3d graphics, by comparison with 98/ME.

It has been verified in all 3d benches, in Direct3d AND in OpenGL.
Sorry about that, but if you want to have the best perf (but not stability), just use winME.

LordKronos
03-26-2001, 03:18 AM
Originally posted by zed:
I believe your program's timer is incorrect.

Yes, I would agree with you, it must be incorrect, based on the following 2 quotes:


Originally posted by TheGecko:
WinME - VSync ON
App1: ~100fps
App2: ~92fps


Originally posted by TheGecko:
In WinME I had the same refresh rate (75Hz) as well as VSync on

There is no way you could have a refresh of 75Hz with vsync on and get a frame rate of 100fps.


[This message has been edited by LordKronos (edited 03-26-2001).]

Nutty
03-26-2001, 04:52 AM
I believe the performance issue is due to AGP being buggered under 2K..

Someone told me that Service Pack 1 fixes this.

Although according to nVidia you can get near 30 million tris a sec on GeForce 1 under 98... I can only manage about 5 million under 2K SP1...

Nutty

MarcusL
03-28-2001, 02:05 PM
30 million tris/sec? .. it is 15 IIRC .. and I've achieved ~7 Mtris/sec when _NOT_ using display lists or VAR. (Reasonably small vertex arrays.)

It could be my CPU that stops at that rate as well, since the app I tested with tessellates B-spline surfaces in realtime.

On Win2k, that is.

I seriously doubt that Win2k vs. WinME can cut performance in half. A wee bit maybe, but not _that_ much.

Win98 is equal to Win2k on my machine at least.

Auto
03-29-2001, 12:09 PM
I had a similar problem - when I was running 98 I could get my GF1 to do 14 million tris/sec - and the vertex throughput was like 400MB/sec (this was Benmark 5 - DX).

Under Win2K I can get a max of 7 million tris with the same settings - and vertex bandwidth is like 200MB/sec.

Pretty dodgy if you ask me - but then the instability of 9x does my head in more than the performance loss in Win2K ;)

mcraighead
03-29-2001, 01:02 PM
Originally posted by prekipik:
windows 2000 implies a loss of performance in 3d graphics, by comparison with 98/ME.

It has been verified in all 3d benches, in Direct3d AND in OpenGL.
Sorry about that, but if you want to have the best perf (but not stability), just use winME.



Completely and utterly false. OpenGL is often faster on Win2K than on Win9x.

- Matt

TheGecko
03-29-2001, 03:58 PM
OK, I get all that. But what's still confusing me is, I ran the EXACT same app (nothing was changed in the code) and I got that 40% FPS drop. Like I said, it ran on WinME at 100fps and then on Win2K at ~60fps. Now, whether my counter code is wrong or not, this still implies a big frame rate drop. I just want a simple answer as to why this happened. I changed NOTHING in the code.

And another thing (and maybe I don't understand this point): if you guys are telling me that the max frame rate I can get at a 75Hz refresh rate is 75FPS when drawing NOTHING on screen in OGL, then what the hell did I spend $514 CAN on a GeForce2 for!?

And that brings me to another point. How come all the DX demo apps that come with the SDK run at really high frame rates (~200+fps)? I'm going to go out on a limb here and assume that the Microsoft counter code is correct ;)

I just need lots of clarification here.

[This message has been edited by TheGecko (edited 03-29-2001).]

Eric
03-29-2001, 10:56 PM
Originally posted by TheGecko:
And another thing (and maybe I don't understand this point): if you guys are telling me that the max frame rate I can get at a 75Hz refresh rate is 75FPS when drawing NOTHING on screen in OGL, then what the hell did I spend $514 CAN on a GeForce2 for!?

And that brings me to another point. How come all the DX demo apps that come with the SDK run at really high frame rates (~200+fps)? I'm going to go out on a limb here and assume that the Microsoft counter code is correct ;)

OK, I'll try to explain this one...

Usually, how do we calculate our FPS ???

1) We take current time t1.
2) We render the scene.
3) We call glFinish.
4) We take current time t2.
5) We call wglSwapBuffers.
6) Go back to 1)

Then, one frame took t2-t1 to be drawn. So you can have 1/(t2-t1) frames per second (if t1 and t2 are in seconds of course! Otherwise, simply convert them!).

Actually, you would probably average these rendering times over 10 frames rather than doing it for only 1 but that's up to you...

Now, let's try to explain why you can only reach 75FPS at 75Hz (or xxxFPS at xxxHz for that matter !).

If your VSYNC is ON (which means the wglSwapBuffers command will WAIT for the VBLANK signal to be issued before swapping the buffers !), your rendering will at least take ONE frame (over the 75 available if you run at 75Hz !). Another way to understand that is to say:

A) Say I am at stage 1) in my rendering and my monitor is on frame 1.
B) When I reach stage 5) (wglSwapBuffers), the monitor will go to frame 2 (because wglSwapBuffers will wait for that) ! Remember that this supposes that your rendering did not take you to frame 2 already !
C) When I arrive to stage 6) and go back to 1) for rendering again, I am, in the best case, on frame 2 !!!!

What does it mean??? One rendering loop = at least 1 frame for your monitor (cannot be less, but can be more!).
At 75Hz, you have 75 frames available so your max speed will be 75FPS !!!!!!!!!!!!

Now, if VSYNC is OFF, wglSwapBuffers will NOT WAIT for the VBLANK signal so your app can reach 200FPS+ (like the DX8 demos you mention).

I hope this is clear... Don't hesitate to "reply with quote" if some (all? :) ) parts of this message are unclear...

Best regards.

Eric

stephenwilley
03-29-2001, 11:41 PM
When you say you're using the latest dets from nVIDIA, are you really??

Are you using the 7.68s that are just out and approved by the WHQL??

Also, if your app is coded using GLUT it will favour 9x/Me as it does not take advantage of the FASTER double/triple buffering support provided by 2k. It is important to have SP1, and if you have a motherboard with a VIA chipset then you should install the fix on the MS Win2k website. As for 2k slowing you down; that just shouldn't happen.

I have a small program (it doesn't use GLUT, I do the windowing code myself) that just renders a simple BSP map and I get 2300fps with VSYNC off under 2k and 1600fps under Me with the 7.68 dets on a GeForce 256 DDR, so your blank app is running with some massive overhead - this is (I'm afraid) down to the very inefficient way in which GLUT works.

Go to http://nehe.gamedev.net and rip his windowing code. Also, use DirectX for the input as it's more responsive and again has less overhead than GLUT.

Oh, and install DirectX 8.0 if you haven't already.

Hope it helps,

Stephen

Asshen Shugar
03-29-2001, 11:57 PM
Actually, the fact that you can't get more than 60fps on Win2K with vsync on is a driver bug. I found a fix somewhere for Detonator 3 v6.31, but don't remember where...
Anyway, the new driver will be available soon; they'll probably have fixed it!

Eric
03-30-2001, 02:37 AM
Originally posted by Asshen Shugar:
Actually, the fact that you can't get more than 60fps on Win2K with vsync on is a driver bug

Well, if you were running at 75Hz and you can only reach 60FPS, I agree, it's a bug... But if you are running at 60Hz, YOU CANNOT HAVE MORE THAN 60FPS WITH VSYNC ON! And if you manage to have more, that is when there is a bug! :)

Regards.

Eric

LordKronos
03-30-2001, 03:43 AM
Originally posted by Eric:
OK, I'll try to explain this one...

Usually, how do we calculate our FPS ???

1) We take current time t1.
2) We render the scene.
3) We call glFinish.
4) We take current time t2.
5) We call wglSwapBuffers.
6) Go back to 1)

Then, one frame took t2-t1 to be drawn. So you can have 1/(t2-t1) frames per second (if t1 and t2 are in seconds of course! Otherwise, simply convert them!).


That's a very bad way of calculating FPS (and a very commonly made mistake, I must add), as you are going to get inaccurate readings. What's wrong with this picture? Let's look at a theoretical example here (numbers pulled off the top of my head).

>1) We take current time t1.
Let's say this happens at time 0ms.
t1 = 0
>2) We render the scene.
>3) We call glFinish.
Let's say these 2 steps take 250ms.
>4) We take current time t2.
t2 = 250
>5) We call wglSwapBuffers.
Let's say we "just missed" our vsync, so this takes 250ms (yes, we're going to pretend our refresh rate is set to 4Hz. It may make you blind, but it makes my calculations easy).
>6) Go back to 1)

t2-t1 = 250-0 = 250ms = 0.25 seconds

>1/(t2-t1) frames per seconds
1/0.25 = 4FPS.

So, we calculated that your app is running at 4 frames per second. But WAIT!!! Each rendering loop takes 250ms to render, plus 250ms to flip. That's 500ms, or 1/2 second per frame. You are actually only getting 2FPS, but your faulty counter code just told you DOUBLE the actual value. See how much the way you time makes a difference? This is why TheGecko was completely off base when he said:

Now, whether my counter code is wrong or not,this still implies a big frame rate drop.

As soon as your counter code is the slightest bit wrong, your readings begin to mean NOTHING!!!! Also note that in the above scenario, you aren't taking into account anything other than graphics. If you have AI, physics, network code, etc... and you place them OUTSIDE of your counter loop (like you did with the swap) you are going to exaggerate your FPS even more. What's the moral of the story? The moral is that every single instruction that executes each frame needs to happen INSIDE of the counter loop. The only true way to do that is to make your counter code span from one frame to the next.

The correct way to time your app is:

lastFrameEnd =0
BEGIN RENDER LOOP
frameEnd = NOW
FPS = 1/(frameEnd-lastFrameEnd)
lastFrameEnd = frameEnd
RENDER SCENE (and draw your FPS counter)
SWAP BUFFERS
END RENDER LOOP

Now this will give you truly accurate results (except for your first frame obviously, but it's meaningless in the first frame anyway).
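In Win32 terms that loop comes out roughly like this (just a sketch - RenderScene() and the device context are placeholders for whatever your app really does, and as noted the very first reading is meaningless):

#include <windows.h>

void RenderScene(double fps);   // placeholder for your own drawing code

void RenderLoop(HDC hDC)
{
    LARGE_INTEGER freq, last, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&last);

    for (;;)    // a real app would also pump the message queue and test for quit
    {
        // Frame-to-frame delta: measured at the SAME point every frame, so the
        // swap (and anything else in the loop) is included in the timing.
        QueryPerformanceCounter(&now);
        double fps = (double)freq.QuadPart / (double)(now.QuadPart - last.QuadPart);
        last = now;

        RenderScene(fps);   // render the scene and draw the FPS counter
        SwapBuffers(hDC);   // with vsync on, this is where the waiting happens
    }
}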

Eric
03-30-2001, 03:52 AM
Oooooooppppppppsssssssssssss...........
Sorry...
I was wrong....

As a matter of fact, I do it properly in my programs but I guess I wasn't too awake when I posted the message!

Thanks for pointing out the mistake!

Regards.

Eric

LordKronos
03-30-2001, 03:59 AM
Of course I forgot to mention another common, and probably the worst, timing mistake you can make. After rendering a frame, you have your start time (t1) and end time (t2) in milliseconds:

FPS = (t2-t1)

Then when you do some optimizing, you remove a whole bunch of useless code, and suddenly your app drops from 85FPS to 50FPS. Then you go "WHAT??? I MADE IT FASTER, BUT IT'S RUNNING SLOWER???? WTF!!!"

Then someone tells you "uhhhh, you forgot to invert your timer and convert milliseconds to seconds"


[This message has been edited by LordKronos (edited 03-30-2001).]

TheGecko
03-30-2001, 06:50 AM
OK, that clears up a lot of things :)

I'm going to post up my timing code for you guys to take a look at. I got this timer class from a very reliable source.

As to that whole driver bug thing, I guess that makes sense. For those of you who are wondering what my app is doing, the answer is: NOTHING! And I'm not using GLUT. I wrote the whole window code myself and I have a rendering loop that just keeps swapping buffers and writing out the FPS info. And, like I said, all I'm getting is ~60fps. Now, if what Asshen Shugar said is true, then I don't have much to worry about :) (And yes, I do have the latest Detonator drivers from nVidia.)

Anyway, here's my timer code. If any of you do spot a bug, please let me know!




#include "EnigTimer.h"

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

//-----------------------------------------------------------------------------
// CTimer()
//-----------------------------------------------------------------------------
CTimer::CTimer()
{
// We need to know how often the clock is updated
if( !QueryPerformanceFrequency((LARGE_INTEGER *)&m_TicksPerSecond) )
m_TicksPerSecond = 1000;

m_fFps = 0;
m_bRunning = false;
}


//-----------------------------------------------------------------------------
// Start()
// Reset counter and start the timer
//-----------------------------------------------------------------------------
void CTimer::Start()
{
// Get the current time so we know when we started
if( !QueryPerformanceCounter((LARGE_INTEGER *)&m_BaseTicks) )
{
m_BaseTicks = (UINT64)timeGetTime();
}

m_bRunning = true;

m_fLastUpdate = 0;
m_dwNumFrames = 0;

m_fFrameTime = 0;
m_fDeltaTime = 0;
}


//-----------------------------------------------------------------------------
// Stop()
// Stop the timer
//-----------------------------------------------------------------------------
void CTimer::Stop()
{
if( m_bRunning )
{
// Remember when we stopped so we can know how long we have been paused
if( !QueryPerformanceCounter((LARGE_INTEGER *)&m_StopTicks) )
{
m_StopTicks = (UINT64)timeGetTime();
}

m_bRunning = false;
}
}

//-----------------------------------------------------------------------------
// Continue()
// Start the timer without resetting
//-----------------------------------------------------------------------------
void CTimer::Continue()
{
if( !m_bRunning )
{
UINT64 Ticks;

// Get the current time
if( !QueryPerformanceCounter((LARGE_INTEGER *)&Ticks) )
{
Ticks = (UINT64)timeGetTime();
}

// Increase baseticks to reflect the time we were paused
m_BaseTicks += Ticks - m_StopTicks;

m_bRunning = true;
}
}


//-----------------------------------------------------------------------------
// GetTime()
// Get the current time
//-----------------------------------------------------------------------------
float CTimer::GetTime()
{
UINT64 Ticks;

if( m_bRunning )
{
if( !QueryPerformanceCounter((LARGE_INTEGER *)&Ticks) )
{
Ticks = (UINT64)timeGetTime();
}
}
else
Ticks = m_StopTicks;

// Subtract the time when we started to get
// the time our timer has been running
Ticks -= m_BaseTicks;

return (float)(__int64)Ticks/(float)(__int64)m_TicksPerSecond;
}

//-----------------------------------------------------------------------------
// Frame()
// Call this once per frame
//-----------------------------------------------------------------------------
void CTimer::Frame()
{
m_fDeltaTime = GetTime() - m_fFrameTime;
m_fFrameTime += m_fDeltaTime;

// Update frames per second counter
m_dwNumFrames++;
if( m_fFrameTime - m_fLastUpdate > FPS_INTERVAL )
{
m_fFps = m_dwNumFrames / (m_fFrameTime - m_fLastUpdate);
m_dwNumFrames = 0;
m_fLastUpdate = m_fFrameTime;
}
}


//-----------------------------------------------------------------------------
// GetFps()
//-----------------------------------------------------------------------------
float CTimer::GetFps()
{
return m_fFps;
}


//-----------------------------------------------------------------------------
// GetFrameTime()
// This is the time when Frame() was called last
//-----------------------------------------------------------------------------
float CTimer::GetFrameTime()
{
return m_fFrameTime;
}

//-----------------------------------------------------------------------------
// GetDeltaTime()
// This is the time that passed between the last to calls to Frame()
//-----------------------------------------------------------------------------
float CTimer::GetDeltaTime()
{
return m_fDeltaTime;
}

//-----------------------------------------------------------------------------
// IsRunning()
//-----------------------------------------------------------------------------
bool CTimer::IsRunning()
{
return m_bRunning;
}
//-----------------------------------------------------------------------------


[This message has been edited by TheGecko (edited 03-30-2001).]

thewizard75
03-31-2001, 10:40 AM
Yes, there is a bug with Nvidia and w2k that prevents opengl apps from using more than a 60Hz refresh rate (unless a workaround is used). So yes, this is the difference

LordKronos
03-31-2001, 11:52 AM
Originally posted by thewizard75:
Yes, there is a bug with Nvidia and w2k that prevents opengl apps from using more than a 60Hz refresh rate (unless a workaround is used). So yes, this is the difference

That's bull, because my app has always run at 75FPS with 75Hz refresh & vsync enabled on Win2K with a GeForce2 GTS 64MB and any of the 6 or 8 drivers I have tested so far.

stephenwilley
04-02-2001, 03:44 AM
I have to agree with LordKronos; that is an absolute pile. My OpenGL app runs at over 2000fps on 7.68 nVIDIA reference drivers under 2k now. It runs that fast as it does practically nothing, not 'coz I'm a great coder or hack drivers or even 'coz I know what VSYNC is...

The nVIDIA guys would have sorted that out immediately. It's really quite simple:

1) Get the 7.68 Detonators...
2) Install DirectX 8.0a... (won't help OGL performance but will stabilise the 7.xx drivers as they're optimised for DX 8, not 7)
3) If you've got a VIA board, get the patch from MS...

I'm sorry to say that a lot of people have been posting utter rubbish in response to this problem. 75 fps with 60Hz VSYNC! I don't think so!!

If you want more information on any of the above then post onto the video cards forum @ http://www.hardwarecentral.com

Stephen

Thaellin
04-03-2001, 04:28 AM
Just a quick note in regards to the "If I can't get more than 60fps why did I buy this card?" comment:

If your program is capable of rendering at 200+ fps with vsync disabled, then you can add a lot of functionality. Say you add so much functionality that the frame-rate halves. You now have an application which can display 100fps - with no visual difference in performance. The monitor still refreshes at 60Hz, you still have 60fps.

If you wish to /see/ more frames per second, you need to do two things:
1. Turn up the refresh rate for your monitor (75Hz, perhaps?)
2. Get a bionic eye, because standard-issue human eyes have difficulty distinguishing the performance when you get to this level.

Also note that there is a reason 'vsync' exists. The monitor redraws the entire screen 'X' times per second (where 'X' is the refresh rate we've been touting about). When you throw data into the frame buffer in the middle of a refresh, the image on one part of the monitor will not correspond to the image on the rest of the monitor. 'vsync' waits for the moment when the electron gun in your CRT is moving from the end of one refresh back to the start of the next, and updates memory in that window of time. The monitor gets the whole image without discontinuity this way.

-- Thae

thewizard75
04-03-2001, 05:42 AM
May I direct your attention to http://www.geforcefaq.com/#sw:drv:refresh

Yes, the Nvidia drivers do have refresh rate problems in w2k.

LordKronos
04-03-2001, 06:45 AM
Originally posted by thewizard75:
May I direct your attention to http://www.geforcefaq.com/#sw:drv:refresh

Yes, the Nvidia drivers do have refresh rate problems in w2k.

Well, let's see... your original post is:


there is a bug with Nvidia and w2k that prevents opengl apps from using more than a 60Hz refresh rate (unless a workaround is used). So yes, this is the difference

You make it sound as if NO OpenGL program on ANY Win2K system will EVER get more than 60FPS UNLESS a workaround is used. What this link mentions is a particular compatibility problem with a specific piece of hardware (and by the way, it says 75Hz for this bug, not 60Hz).

The rest of the bullet points for that question have nothing to do with nVidia or OpenGL.

TheGecko
04-03-2001, 10:29 AM
Well then, where is the problem? I'm running an OGL app (no GLUT) that does nothing except draw the FPS on screen, and all I'm getting is 60fps with VSYNC on at a 75Hz refresh rate. Where is my other 15fps going? I just need a simple answer. (By the way, I don't think my timer code is wrong. I've posted it above.)

TheGecko
04-03-2001, 10:31 AM
Oh, I almost forgot. I'm using whatever drivers I got from nVidia's website for Win2K. I don't wish to use any "leaked" drivers :)

LordKronos
04-03-2001, 01:37 PM
Originally posted by TheGecko:
Well then, where is the problem? I'm running an OGL app (no GLUT) that does nothing except draw the FPS on screen, and all I'm getting is 60fps with VSYNC on at a 75Hz refresh rate. Where is my other 15fps going?

Question...are you changing the screen resolution? If so, you might be changing it to 60Hz without even realizing it. Can you post the app and source somewhere where I can download it and try, because it seems fine to me.

TheGecko
04-03-2001, 02:21 PM
Hmm... I am actually. My desktop is at 1280x1024 but my app resizes to 1024x768. I'll post a link to my app here as soon as I get back from work.

Thaellin
04-03-2001, 05:24 PM
If you manually change your desktop resolution (in Windows) to the resolution your application runs at, and then modify your refresh rate (or modify the target resolution's refresh rate with a handy application such as comes with the ASUS GeForce2 boards), then you can be sure of the refresh at that resolution...

It seems strange that the default rate for 'all' resolutions is not more easily or intuitively modified than it is... *shrug* MS doesn't expect people to muss with system settings very often I guess.

-- Jeff

Warrior
04-03-2001, 06:32 PM
How do you get 2000+ fps??
In 640x480, fullscreen, no vsync, rendering nothing but my fps, I get 1130fps.
That's with a GeForce2 Ultra.
What did you do to get 2000+?
One more thing: I'm using the latest leaked nVidia drivers, 11.1 I believe.

I'd also like to point out that on my configuration, if I'm in 640x480, vsync has no effect. Does anyone know why?

thanks - Dave



[This message has been edited by Warrior (edited 04-03-2001).]

stephenwilley
04-04-2001, 04:19 AM
Probably coz I'm not using GLUT.

I've written my own interaction loop so that I don't have to use GLUT's incredibly slow one.

Also, I have built a very cut down window class. I dunno what machine spec your GF2 is running on but mine's just a 256 DDR on a 1GHz Athlon (nothing overclocked).

Stephen

stephenwilley
04-04-2001, 04:20 AM
And another thing...

The 7.68s are official drivers; they've been approved by Microsoft and are to be released very soon. I get them early as I'm a registered nVIDIA developer so they are not illegal!

Stephen

Warrior
04-04-2001, 01:21 PM
Nah, I'm not using GLUT either, but you've got DDR RAM, so that's probably why it's so fast, I'm guessing.


Does "Microsoft approved" drivers just mean they have the Microsoft digital signature? If so, where do you download these approved drivers, 'cause the reference drivers on the nVidia site don't have a Microsoft digital signature.


thanks- Dave

[This message has been edited by Warrior (edited 04-04-2001).]

stephenwilley
04-05-2001, 04:00 AM
They've been signed by the WHQL so make of that what you will.

I've just checked nVIDIA's developer driver list and can't find them there; as I said in an earlier post, they're due out pretty soon. So I probably got them from hardwarecentral. Go to the video card discussion forum and just do a search for 7.68 and you'll find them

Hope it helps,

Stephen

TheGecko
04-07-2001, 11:36 AM
Hmm... this is interesting. Apparently there is a fix for the 60Hz problem for nVidia drivers and Win2K. Here's the link. I'm downloading it right now. I'll tell you guys what happens.
http://www.tweakfiles.com/video/nvidiarefreshratefix.html

TheGecko
04-07-2001, 11:47 AM
Woohoo!!! That program did the trick! Now I get the full 75fps with VSync on!!!!