OpenGL timer

How would I make a timer that counts down a certain amount of time and then draws a quad? I know how to draw the quad, but I’m not sure how to make the timer.
I have a font system already set up, so all I need is the timer…

The font system is set up like this:

void Printtext(long shadow, float x, float y, float a, char *text, float fvalue)

shadow is the text shadowing;
x is the x position from the horizontal midpoint;
y is the y position from the vertical midpoint;
a is the alpha for the already-chosen text color;
char *text is the text to be written to the screen;
fvalue is the value substituted into char *text if %i is included.

If you use GLUT, glutTimerFunc() is what you’re looking for.
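
A minimal sketch of how that could look (the flag and function names here are just placeholders, not anything GLUT requires):

#include <GL/glut.h>

static int showQuad = 0;

void timerCallback(int value)          /* called once, ~1000 ms after registration */
{
    showQuad = 1;
    glutPostRedisplay();               /* ask GLUT to redraw */
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    if (showQuad) {
        glBegin(GL_QUADS);             /* the quad you already know how to draw */
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.5f,  0.5f);
        glVertex2f(-0.5f,  0.5f);
        glEnd();
    }
    glutSwapBuffers();
}

/* somewhere in your init code: */
glutTimerFunc(1000, timerCallback, 0); /* fire once, after 1000 ms */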

Wolfgang

Or you can use the timeGetTime() function, which returns the time in milliseconds since the system started.

Then you can just do something like this:

if (timeGetTime() - startTime > wait) draw the quad

where you put the following somewhere in your application’s initialization function:

startTime = timeGetTime();
wait = 1000; // wait one second before drawing a quad
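
Tying that to your font system, the render-loop side could sketch out like this (drawQuad() stands in for your existing quad code, and I’m assuming your Printtext() handles the %i substitution internally; timeGetTime() lives in winmm.lib, so remember to link against it):

void Render(void)
{
    DWORD elapsed = timeGetTime() - startTime;

    if (elapsed > wait) {
        drawQuad();  /* your existing quad-drawing code */
    } else {
        /* show the remaining milliseconds with your font system */
        Printtext(1, 0.0f, 0.0f, 1.0f, "Time left: %i", (float)(wait - elapsed));
    }
}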

You can do this in a million other ways; this is the simplest one.
But I hope it helped.

Take care,

Rados

For Win32, I really recommend using the so-called “performance counter”. It’s somewhat slower than other methods, but the increased precision is very useful, especially when developing “prototype” programs, which may reach very high frame rates.
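
For reference, a minimal wrapper around the performance counter looks something like this (GetSeconds is just a name I made up):

#include <windows.h>

double GetSeconds(void)
{
    static LARGE_INTEGER freq = { 0 };
    LARGE_INTEGER now;

    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);   /* ticks per second, fixed at boot */

    QueryPerformanceCounter(&now);          /* current tick count */
    return (double)now.QuadPart / (double)freq.QuadPart;
}

You can then use exactly the same start/wait comparison as above, just in seconds instead of milliseconds.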

If you care about portability, it’s very sad, but the only solution that is as accurate as the perf counter is to use RDTSC and CPUID. This requires some asm, but it’s not so bad.
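
For the curious, the classic pairing looks roughly like this with 32-bit MSVC-style inline asm (CPUID is there only to serialize the pipeline so RDTSC isn’t reordered):

unsigned __int64 ReadTSC(void)
{
    unsigned int lo, hi;
    __asm {
        xor   eax, eax    ; select CPUID function 0
        cpuid             ; serializing instruction: no reordering past this
        rdtsc             ; time-stamp counter -> EDX:EAX
        mov   lo, eax
        mov   hi, edx
    }
    return ((unsigned __int64)hi << 32) | lo;
}

Keep in mind this returns raw CPU ticks, so you have to calibrate it against a known clock to convert to seconds.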

It looks like the Win32 perf counter is based on RDTSC. It’s a shame other OSes don’t have similar functionality, since I’ve heard most CPUs actually provide an RDTSC-like instruction.

I haven’t looked into it, but this seems to be an option if you are running Linux:

http://www.scl.ameslab.gov/Projects/Rabbit/menu.html

This seems to be related to performance counters, which are somewhat different from the time-stamp counter. In some cases you could use the performance counters for time measurements, but the whole subject is much more complicated.
After an admittedly short look at the library, it seems far more involved than what most people need.
I’m not sure using that library pays off in terms of portability and time saved.

GLFW has a very simple, precise, and portable timer (the glfwGetTime() function returns a 64-bit float with an accuracy that is better than 1 us on most systems).

It uses RDTSC on Pentium CPUs (regardless of OS), and falls back to QueryPerformanceCounter or timeGetTime under Windows if necessary. Under IRIX it uses a hardware counter with ~30 ns accuracy. Under Solaris it uses gethrtime. The fallback timer under Un*x/Linux is gettimeofday, which gives pretty good accuracy on most systems. Under AmigaOS (if you care) the timer accuracy is about 1 us. Under MacOS X it currently uses GetCurrentEventTime, but in theory it could use the PowerPC CPU counter.
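
With GLFW the countdown from the original question reduces to a few lines, e.g. (assuming GLFW is already initialized, with drawQuad() again standing in for your own quad code):

double startTime = glfwGetTime();   /* seconds since glfwInit(), high resolution */
double wait = 1.0;                  /* count down one second */

/* in the render loop: */
if (glfwGetTime() - startTime > wait)
    drawQuad();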

On Win32/Win64 you could use a waitable timer (see CreateWaitableTimer, SetWaitableTimer).

This lets you specify a timeout period after which an event is signalled. You can also attach a callback to be invoked when the timer is signalled, if you wish.

The advantage of this is that you won’t have to write a polling loop to wait for a specified amount of time, which may (depending upon how you write it) thrash the CPU.

It is only accurate to about 1 millisecond, though (and potentially worse than that).
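
A blocking sketch of that (fine for a quick test; in a real render loop you’d rather poll the handle with a zero timeout than block):

#include <windows.h>

HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);  /* manual-reset timer */
LARGE_INTEGER due;
due.QuadPart = -10000000;     /* negative = relative time; 1 s in 100-ns units */

SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE);
WaitForSingleObject(timer, INFINITE);   /* sleeps here until the timer fires */
drawQuad();                             /* your existing quad code */
CloseHandle(timer);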

Matt