vertical sync

I have two machines: one running Windows NT with 512 MB RAM, a P3 800 MHz, and a GeForce 4, the other running Linux with the same specs and a GeForce 2. I am programming an application that must update at every refresh, and only at every refresh. I am only drawing a few squares. I have vsync on, and anti-aliasing and filtering off in hardware (control panel on Windows, environment variables on Linux). At 100 Hz I miss one update every few seconds. If I try 60 Hz I lose fewer frames, but still some. I am using GLUT game mode. This happens on both machines. If I turn off vsync, my 15-second application runs in about 5 seconds, so I don't think I am trying to do too much per frame. Is there more information I can provide? I need to be sure that absolutely no refreshes are missed. Thank you.

nick lesica
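
A minimal sketch of the kind of timing check that exposes a missed refresh: time each buffer swap and flag any interval noticeably longer than one refresh period. This is not the original application; REFRESH_HZ, the window setup, and the 1.5-period threshold are illustrative assumptions, and display() stands in for the drawing described above.

#include <GL/glut.h>
#include <stdio.h>
#include <sys/time.h>

#define REFRESH_HZ 100.0              /* assumed display refresh rate */

static double last_swap = 0.0;

static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

static void display(void)
{
    double t, interval;

    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw the few squares here ... */
    glutSwapBuffers();                /* with vsync on, this blocks until the retrace */

    t = now_seconds();
    if (last_swap > 0.0) {
        interval = t - last_swap;
        /* anything much longer than one refresh period means a retrace was skipped */
        if (interval > 1.5 / REFRESH_HZ)
            fprintf(stderr, "missed refresh: %.2f ms between swaps\n", interval * 1e3);
    }
    last_swap = t;
    glutPostRedisplay();              /* keep rendering every frame */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("refresh check");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}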

You are putting real-time demands on your system. Neither Windows nor Linux is that great at real-time work, unfortunately.

You can try making your process SCHED_FIFO under Linux (requires root), and REALTIME_PRIORITY (requires Administrator) under Windows, which will help some. Warning though: if these threads go CPU bound, the NMI key is your only debugging help. (Thank God for WinDbg!)
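
A rough sketch of those two settings (go_realtime is just an illustrative name, error handling is omitted, and both calls need the elevated privileges noted above); call it once at startup, before the render loop:

#ifdef _WIN32
#include <windows.h>

static void go_realtime(void)              /* requires Administrator */
{
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}
#else
#include <sched.h>

static void go_realtime(void)              /* requires root */
{
    struct sched_param sp;
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
    sched_setscheduler(0, SCHED_FIFO, &sp); /* 0 = the calling process */
}
#endif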

Of course, that still doesn’t help if the scheduler suddenly decides that some system process at the same priority needs a 100 ms time slice (the actual time slice quantum under Linux) or some driver decides to stay with interrupts turned off for 50 milliseconds (some sound cards under Windows, say).

If you really have to hit each deadline, every time, may I suggest an OS suited to the task, such as QNX or VxWorks?

Originally posted by jwatte:
You are putting real-time demands on your system. Neither Windows nor Linux is that great at real-time work, unfortunately.

Allow me to politely disagree. See below.


… needs a 100 ms time slice (the actual time slice quantum under Linux)

This is simply wrong. The default time slice the Linux kernel is compiled with is 10 ms (not 100), the same as Windows 2000. But you can easily change it (requires a kernel recompile) to a much smaller slice. I use 0.5 ms, and with modern CPUs you get excellent real-time behaviour.
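
For reference, on the 2.4-era x86 kernels of the time the tick rate is the compile-time HZ constant in include/asm-i386/param.h (the exact path and default vary by architecture and kernel version); a value of 2000 gives roughly the 0.5 ms granularity mentioned above, after rebuilding and rebooting the kernel:

/* include/asm-i386/param.h -- illustrative; stock kernels ship with HZ 100 */
#ifndef HZ
#define HZ 2000
#endif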


or some driver decides to stay with interrupts turned off for 50 milliseconds (some sound cards under Windows, say).

That requires a mighty blunder on the driver writer’s side. I’ve never seen such driver behaviour on Linux.


If you really have to hit each deadline, every time, may I suggest an OS suited to the task, such as QNX or VxWorks?

Sorry to disagree again. You can achieve that with plain Linux (maybe with a recompiled kernel for a smaller time slice) and some sifting of the h/w and s/w that you use. If you have zero control over the h/w and s/w in the system, then you are really in trouble.

Ah, another Linux zealot. Why do I even bother, I wonder? The TIME SLICE is different from the QUANTUM.

Just because HZ is 100 (an insanely low number on today’s systems, btw) doesn’t mean that the TIME SLICE is 10 ms. If two equal-priority processes are waiting to run, the scheduler will run one of them for 100 milliseconds before actually switching over to the other.

A similar thing happens on Windows. Actually, under NT kernels, you can set these slices, so that the kernel gives a longer time slice for the process with UI focus.

Personally, I think that using a 100 Hz timer in this day and age is insane. The kernel should program the CPU cycle counter to wake up and re-schedule exactly when whatever sleeping thread needs it. Also, interrupt handlers should have the ability to cause re-schedule (which happens somewhat under most OSes, including Linux, these days). Releasing a semaphore should have the ability to cause an immediate re-schedule/wake-up – which doesn’t always happen today.

Anyway, Linux is not capable of hard real-time response, in any of the major distributions. That’s fine, because it was designed as a UNIX kernel, and it’s a mighty fine UNIX kernel. That doesn’t make it an RTOS.

Try the real-time kernel patches; they help a lot. You have to take a holistic approach to your system, though: expecting to be able to just throw any hardware and drivers in there and get away with it isn't going to work.

The excellent article at the following URL is probably more helpful to you than all the rhetoric and definitions of soft vs hard real time.
http://www.linuxdevices.com/sponsors/SP6145213175-AT8906594941.html

The graphs are screwed up; they've been resized by someone who didn't know what they were doing. Here's a PDF of the same material with the original graphs:
http://www.linuxdevices.com/files/article027/rh-rtpaper.pdf

GLUT isn't going to do much for you either; you'll still need to set your process priority and so on. I just wouldn't trust my event loop to GLUT. At the very least, grab Steve Baker's patch to crack open the loop and take control, as in the sketch below. Good luck. You need to be familiar with how your own app handles incoming events, and you need to be intimately familiar with any timers in the code, for obvious reasons. Using the GLUT main loop or anything opaque is just asking for it.
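
A minimal sketch of what owning the loop looks like, assuming freeglut's glutMainLoopEvent() (the same crack-open-the-loop idea as Steve Baker's patch); render_frame() is a placeholder for your own display-and-swap code:

#include <GL/freeglut.h>

extern void render_frame(void);   /* placeholder: draws and calls glutSwapBuffers() */

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("owned loop");

    for (;;) {
        glutMainLoopEvent();      /* pump pending events once, then return */
        render_frame();           /* exactly one frame per iteration, on your own schedule */
    }
    return 0;                     /* never reached */
}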

As you can see from the graphs, it looks like the author got the worst-case jitter down to 1.5 ms over an extended run that lasted hours, and even that was a single-sample aberration. That's pretty good for what you want to do.

[This message has been edited by dorbie (edited 08-13-2002).]

You can get respectable real-time performance on Linux and Windows without patches. I have an article and source code (C++, not GLUT) at http://www.cs.dal.ca/~macinnwj
The Linux full-screen mode is still a little sloppy, but I've been using the Windows code in experiments for years.
Joe

jwatte: this behaviour can be changed in two ways.
You can give your important processes the SCHED_RR scheduling policy. This makes the scheduler use a different algorithm for them, which lets them switch much faster than 100 ms (provided, of course, that the scheduler runs at a faster rate, which is achieved by reducing the "quantum", i.e. increasing HZ).
The behaviour you mention will indeed happen for competing processes with the default scheduling policy. This is a consequence of the aging algorithm the scheduler uses.
Which brings up the other solution: recompile with a lower DEF_PRIORITY value. This "ages" default-policy processes faster, effectively making the "TIME SLICE" lower than 100 ms. I successfully verified a 5 ms time slice (for competing, non-blocking, default-policy processes) using this method (high HZ, low DEF_PRIORITY).

BTW, I didn't appreciate that "another Linux zealot, why do I even bother" comment.

[This message has been edited by Moshe Nissim (edited 08-14-2002).]
