OT: Timing difference between Win9x and Win2k/XP

Does Win9x handle some real-time events better than the NT-kernel OSes?

timeGetTime is used here instead of QueryPerformanceCounter because I am only testing Sleep, which needs no better than 1-millisecond resolution.

I have the bare minimum of services running on Win2k Pro and no other apps running.

Code below:

/* Tests the resolution of Sleep(1).
   On Windows ME a consistent 1 millisecond was observed, but on Windows 2000
   the average leans towards 2 ms.

   If compiling this, link against winmm.lib.
*/

#include <windows.h>
#include <mmsystem.h>
#include <iostream>
#include <cstdlib>

using std::cout;
using std::endl;

int main()
{
    unsigned int elapsedTime = 0, startTime = 0, stopTime = 0, i = 0, j = 0;

    timeBeginPeriod(1); // request 1 ms timer resolution
    Sleep(100); // if this isn't included then the first couple of results will be 15-16 ms

    for( i = 0; i < 10; i++ )
    {
        for( j = 0; j < 20; j++ )
        {
            startTime = timeGetTime();
            Sleep(1);
            stopTime = timeGetTime();
            elapsedTime = stopTime - startTime;
            cout << elapsedTime << " ";
        }
        cout << endl;
    }

    timeEndPeriod(1); // restore the previous timer resolution
    system("pause");
    return 0;
}

Because I was curious myself, I went off and did your research for you:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/multimed/mmfunc_2q3p.asp

Yes, the default precision for timeGetTime is lower on Win2K, but steps are described therein to change the precision.

Perhaps you weren’t wearing your glasses and failed to see timeBeginPeriod(1) and timeEndPeriod(1) in that code.

* sound of whip cracking *

Ouch. Well, that’s the last time I try to offer a suggestion.