frame-by-frame ("realtime") OpenGL

I am creating an open source library for creating visual stimuli used in vision research experiments. (If you’re interested, it is called the Vision Egg, at http://www.visionegg.org/ )

The most important task of this library is the realtime creation of graphics. The library is cross-platform OpenGL code written in Python; anything platform-specific is in C (not much). Anything that runs too slowly in Python could also be moved to C, although that hasn't been necessary. The library runs on just about any PC, although due to limited time and resources, most of my development and testing has been on relatively recent Athlon-based computers with nVidia GeForce 2 and 3 graphics cards under Linux.

The code is already quite mature and works well for experiments we're doing at monitor refresh rates of up to 200 Hz, but before the momentous 1.0 release gets too close to allow changes to the code, I wanted to ask some questions to which I have not found answers. I have tried to make my thought process clear rather than asking a few quick questions, which I think would be hard to answer without the context.

Perhaps the most critical unresolved issue is that of potentially skipped frames and frame-by-frame control. In many experiments, the researcher wants to control exactly what happens on each and every frame.

Many applications (such as games) have a physics engine that calculates an instantaneous value for a variable (such as the position of an object on the screen) from the current time t. However, in my experience there is always variable latency between this time t and when the frame is actually displayed on the monitor. This effect is minimized by using double buffering and synchronizing buffer swaps with the vertical retrace/blanking signal, because the application is typically released from waiting just after the buffer swap/vertical retrace, and thus the latency before the next frame is drawn is relatively constant. But if buffer swapping and vertical retrace are synchronized anyway, it is natural to count frames drawn and compute a time t by a simple "t = frame_number / frame_rate_hz". In my experience, this gives the least temporal jitter when calculating values for display, so this is the method the library uses. There is one small catch: frames could get skipped, although in practice that is very rare. Of course, it would be best to be able to say frames are never, ever skipped, and that is the point of this email.
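To make the skip problem concrete, here is a minimal sketch of the frame-counting approach with a crude skip detector. The names draw_stimulus() and swap_buffers() are hypothetical placeholders for application drawing code and a vsync-synchronized buffer swap, not the library's actual API; the 1.5-period threshold is arbitrary.

```c
/* Sketch: frame-count-based timing with a crude skipped-frame check.
 * draw_stimulus() and swap_buffers() are hypothetical stand-ins for the
 * application's drawing code and a blocking, vsync-locked buffer swap. */
#include <stdio.h>
#include <sys/time.h>

#define FRAME_RATE_HZ 200.0

extern void draw_stimulus(double t);   /* application-specific */
extern void swap_buffers(void);        /* returns just after vertical retrace */

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

void run_trial(long n_frames)
{
    const double period = 1.0 / FRAME_RATE_HZ;
    double last_swap = now();
    for (long frame = 0; frame < n_frames; frame++) {
        double t = frame / FRAME_RATE_HZ;  /* time from frame count, not the clock */
        draw_stimulus(t);
        swap_buffers();
        double this_swap = now();
        /* If the interval between swaps is much longer than one refresh
         * period, at least one display update was missed. */
        if (this_swap - last_swap > 1.5 * period)
            fprintf(stderr, "frame %ld: possible skipped frame (%.1f ms)\n",
                    frame, (this_swap - last_swap) * 1e3);
        last_swap = this_swap;
    }
}
```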

The root of the problem seems to be that the kernel of a pre-emptive operating system is, by definition, able to take control of the CPU from an application and use it for some time before returning control to that application. As long as the application is given enough time to draw a frame before the next display update, everything is fine. Using the method described above, at least 99% of the frames are drawn without skipping on both Linux and Windows. (Mac OS X on an 867 MHz G4 with a GeForce 2 MX skips quite frequently, though.) However, there is still the possibility that some other process, or the kernel itself, keeps the CPU away from the application long enough that a frame cannot be drawn before the next display update. Under Linux I've tried the low-latency kernel patch for kernel 2.4.12, setting maximum FIFO priority, and disabling memory paging with mlockall(MCL_CURRENT|MCL_FUTURE), but an application may still skip an occasional frame.
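For concreteness, the priority and paging settings I mentioned boil down to something like the following (standard POSIX/Linux calls, minimal error handling, must be run as root); this is a sketch rather than the library's exact code:

```c
/* Sketch: raise the process to maximum SCHED_FIFO priority and lock all
 * current and future pages in RAM (Linux/POSIX; requires root). */
#include <sched.h>
#include <sys/mman.h>
#include <stdio.h>

int go_realtime(void)
{
    struct sched_param sp;
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return -1;
    }
    /* Prevent the process's memory from being paged out to disk. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
        perror("mlockall");
        return -1;
    }
    return 0;
}
```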

One question I have is whether this is exactly what a realtime OS is designed to do. I think it is, but then my question becomes: can I have an OpenGL application that runs in realtime? Can I trust that the video (and any other needed) drivers exist in the realtime environment? (Also, can I trust Python? Could I trust C?)

Another idea I had was that with a dual-CPU machine, I could run the application at high priority on one CPU while the kernel and any other processes run on the other. I've tried this under Linux, but it certainly doesn't work automatically. I've found various resources on the internet about restricting which CPUs a process can run on, but everything I found looked a few years old and was not well documented or easy to implement. The multi-CPU idea may work on many operating systems, but the CPU-restricting options may not be cross-platform at all.
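If the kernel and C library expose a CPU-affinity call (recent Linux development kernels have sched_setaffinity(), though the interface still seems to be in flux and is Linux-specific), pinning the display process to one CPU might look roughly like this:

```c
/* Sketch: pin this process to one CPU, leaving the other for the kernel
 * and remaining processes. Assumes a kernel/glibc that provide
 * sched_setaffinity(); not portable beyond Linux. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int pin_to_cpu(int cpu)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}
```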

Another idea is to use something like the realtime clock under Linux (and other OSs?) to generate interrupts at regular intervals and draw frames when that signal is received. This seems like it would be fairly hard to implement and difficult if not impossible to port, so I haven't pursued it at all. I think some platforms (SGI?) can generate an interrupt at each vertical retrace, which would be a similar idea. A rough Linux-only sketch of the periodic-interrupt idea follows.
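Under Linux, the periodic-interrupt idea could be prototyped with the /dev/rtc device, which delivers interrupts at a chosen rate; note that the RTC only supports power-of-two rates, so it cannot match an arbitrary monitor refresh rate exactly, and this is only a sketch of the mechanism:

```c
/* Sketch: use the Linux real-time clock (/dev/rtc) to generate periodic
 * interrupts and block until each one arrives. The RTC only supports
 * power-of-two rates (2..8192 Hz) and needs appropriate permissions. */
#include <fcntl.h>
#include <linux/rtc.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/rtc", O_RDONLY);
    if (fd == -1) { perror("open /dev/rtc"); return 1; }

    if (ioctl(fd, RTC_IRQP_SET, 256) == -1 ||   /* 256 Hz periodic rate */
        ioctl(fd, RTC_PIE_ON, 0) == -1) {
        perror("ioctl");
        return 1;
    }

    for (int i = 0; i < 1000; i++) {
        unsigned long data;
        /* read() blocks until the next periodic interrupt fires. */
        if (read(fd, &data, sizeof(data)) == -1) { perror("read"); break; }
        /* ...draw a frame here... */
    }

    ioctl(fd, RTC_PIE_OFF, 0);
    close(fd);
    return 0;
}
```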

Triple buffering might fit in somewhere here, but I'd like to avoid software buffers if possible.

Thank you in advance for any information related to these questions!

Andrew Straw


You might find some useful info if you have a hunt around OpenML (www.khronos.org). If you haven't seen it before, OpenML is focused on multimedia and is designed to be tightly integrated with OpenGL, and there are a few OpenGL extensions specifically addressing the need for very precise control over timing. (IIRC many of these are also being incorporated in some form into the OpenGL 2.0 specification, though I don't have the reference handy at the moment.)

Bit vague I know, but maybe it’ll help…

I’ve had similar (unresolved) concerns with timing using OpenGL and Linux, where response times are an issue. Apparently RTLinux is not appropriate for OpenGL.

There have been a couple of papers written on the issue of using OpenGL and Linux in vision and auditory research which appear to support the use of SCHED_FIFO:

Finney, S.A. (2001). Real-time data collection in Linux: A case study. Behavior Research Methods, Instruments, & Computers (not sure of volume or page numbers as I have a copy of the manuscript); and

MacInnes, W.J. & Taylor, T.L. (2001). Millisecond timing on PCs and Macs. Behavior Research Methods, Instruments, & Computers, 33, 174-178.

I’d be very interested to know what solution you come up with, and whether the use of SCHED_FIFO was adequate to address this problem.
Simon

Just checked out the Vision Egg and noted that you’re already using SCHED_FIFO etc. for realtime control. I’ve been unable to get the install script running, though (it cannot find distutils.core). It’s great to see someone working on making OpenGL under Linux reliable enough for research purposes. I’ve wanted to spend more time learning about RT myself. Hope the papers come in handy for references.