Hardware Interrupt Priorities

I know this is generally a big security hole, but I need to program a specialized application that makes the video card's interrupt a higher priority than the disk write interrupt.

What I mean is that the graphics card must never drop a frame or be delayed in displaying one, and data headed for the hard drive must wait for a cycle that the graphics card is not using. If the graphics card needs the CPU, it outranks whatever process is running.

Currently we are doing this with older OSes like DOS and OS 9 that allow programs to control hardware interrupts. These computers are dedicated to this function, so security is not an issue. The only current OS I know of that can do this is Linux with a stripped-down real-time kernel.
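For example, on a real-time Linux kernel the closest analogue is promoting the rendering thread to a real-time scheduling class so that ordinary processes cannot preempt it. A minimal sketch, assuming root or CAP_SYS_NICE (the function name is just for illustration):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Promote the calling thread to the SCHED_FIFO real-time class.
     * Needs root or CAP_SYS_NICE; on a PREEMPT_RT kernel a high FIFO
     * priority can also outrank most threaded interrupt work. */
    static int make_realtime(int priority)          /* priority: 1..99 */
    {
        struct sched_param sp = { .sched_priority = priority };
        int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
        if (err != 0) {
            fprintf(stderr, "pthread_setschedparam: error %d\n", err);
            return -1;
        }
        return 0;
    }

    /* e.g. call make_realtime(80) at the top of the render thread */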

Does anyone know of any other OS that can do this? And, more specific to this forum, is there another way to set interrupt priorities from within OpenGL? (I doubt there is, but it doesn't hurt to ask the experts, right?)

thanks!

Remind me: when does the graphics card need the CPU?

There is no way to do this with OpenGL. OS 9 is not THAT old, is it? If it works there, be happy; I doubt that you will get it to run on any other modern OS (except for Linux, but then the question is whether you will be able to enable OpenGL support properly).

Jan.

I have the same problem… A simple minimize/maximize action on any window in WinXP causes a ~250 ms stall, no matter what priority the rendering thread has. This sucks…
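By "priority" I mean roughly the usual Win32 calls, something like the sketch below; it does not make the stall go away:

    #include <windows.h>

    /* Sketch of raising the rendering thread's priority in Win32 terms.
     * As noted above, this does not remove the ~250 ms stall. */
    static void raise_render_thread_priority(void)
    {
        SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
    }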

Increasing the priority of the interrupt will not do much for you. The interrupts are really only used for notifications of, say, a monitor being attached/detached, power events, and a couple of other things that have nothing to do with what you want. Even then, the interrupt handler pretty much just writes what happened and any relevant information into another thread's buffer and then returns. This way the interrupt handler does not block other threads for very long at all.

Anyway, controlling buffer flips is not in that loop. GPUs get command buffers, which are a series of instructions for them to execute. These can include commands such as "wait for vertical blank" followed by commands that change the RAMDAC's buffer pointer. Everything is queued up. GPUs also generally use a timestamp mechanism: the GPU writes a marker back to some location in memory so the driver can see what the GPU has completed and what is still outstanding.
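From the application side, the closest you can get to that completion tracking in OpenGL is a sync object. A rough sketch, assuming a loader that exposes GL 3.2+ / GL_ARB_sync (e.g. GLEW) and a context current on this thread; the function names are just for illustration:

    #include <GL/glew.h>

    static GLsync frame_fence;

    /* Insert a fence after submitting a frame's commands. */
    static void mark_end_of_frame(void)
    {
        frame_fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }

    /* Poll later: nonzero once the GPU has reached the fence. */
    static int gpu_reached_fence(void)
    {
        GLenum s = glClientWaitSync(frame_fence, 0, 0);  /* timeout 0 = poll only */
        return s == GL_ALREADY_SIGNALED || s == GL_CONDITION_SATISFIED;
    }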

If you are dropping frames, then you probably need to look elsewhere for the cause. Are you running out of VRAM, so that paging is becoming an issue? Is the driver going down a slow path because of some setting you are using? Is there a more optimal way of getting your data to the GPU?
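On that last point, one common source of per-frame stalls is rewriting a buffer the GPU is still reading. A rough sketch of the usual buffer-orphaning workaround when streaming vertex data every frame (function and parameter names are just for illustration):

    #include <GL/glew.h>

    /* Re-specifying the storage with NULL "orphans" the old copy: the
     * driver lets the GPU keep reading it while we fill fresh storage,
     * instead of blocking until the GPU is done. */
    static void stream_vertices(GLuint vbo, const void *data, GLsizeiptr size)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);   /* orphan */
        glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);             /* refill */
    }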