Choppy and tearing... I don't understand

I’ve been programming with OpenGL for quite some time now – nothing too fancy, but I’ve got some experience. I’ve recently started trying to program a simple side scroller game. I figured I just needed to put in a backdrop, scatter a few textured quads around, animate a quad for a character, then simply move the camera from side to side. So I went ahead and programmed all this up only to be plagued by horrible choppy movement complete with tearing and jerkiness.

After much frustration I started over with some base code from NeHe. I popped a non-textured quad up on the screen and panned the camera back and forth – I couldn’t even do this smoothly.

The problem seems to be related to basing movement on time.

I.e.

// Process Application Loop
tickCount = GetTickCount ();
Update (tickCount - window.lastTickCount);
window.lastTickCount = tickCount;
Draw ();

void Update (DWORD milliseconds)
{
    // pan right
    if (g_keys->keyDown [VK_RIGHT] == TRUE)
    {
        g_playerPos[0] += (float)(milliseconds) / 5.0f;
        g_lookAt[0]    += (float)(milliseconds) / 5.0f;
    }

    // pan left
    if (g_keys->keyDown [VK_LEFT] == TRUE)
    {
        g_playerPos[0] -= (float)(milliseconds) / 5.0f;
        g_lookAt[0]    -= (float)(milliseconds) / 5.0f;
    }
}

I thought this would be so simple! Huh! I’ve tried it on a couple of Radeon 9000 series cards, and I’ve got lots of books on programming which all do just as poor a job of rendering. I know I shouldn’t need more power for this! (The slower the card, the worse the problem, however.)

I’ve looked at some of the older “Donut” code from the DirectX SDKs, but it won’t compile, and the DX9 demo is way too complicated. Anyway, I’ve noticed they use “blits” a lot for speed. Does anyone know any OpenGL tricks for speed like this?

Please, if anyone knows this problem, or even knows where I can get some side-scroller code (that will compile with modern SDKs, OpenGL or DX9), I would really like to hear from you! There must be something dead simple that I just don’t get.

I’d like to attach the code but I’m not sure there’s a way to do that - I’ll mail it to anyone that wants it though!

Rob.
robi250@hotmail.com

Textured quads on decent hardware will be faster than blits anyway.

Do you have v-sync turned on? Have you tried panning a fixed value (non time adjusted) per frame, and see how that looks?

Send me a quick demo of the jerkiness if you like; it’s hard to diagnose these problems without seeing them first-hand.

Nutty

This is not an advanced question and is probably not even going to be an OpenGL question.

You don’t define what you mean by “choppy”. Is the motion not even, i.e. is the velocity of your quad irregular? Or are you getting flickering?

The first problem is probably due to your using GetTickCount(), whose resolution is typically only 10–15 ms. Use QueryPerformanceCounter() instead.
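A minimal sketch of what that looks like – the helper and variable names here are mine, just for illustration:

#include <windows.h>

LARGE_INTEGER g_freq;      // counter ticks per second, fixed at boot
LARGE_INTEGER g_lastCount; // counter value at the previous frame

void InitTimer ()
{
    QueryPerformanceFrequency (&g_freq);
    QueryPerformanceCounter (&g_lastCount);
}

// Returns milliseconds elapsed since the previous call.
double ElapsedMs ()
{
    LARGE_INTEGER now;
    QueryPerformanceCounter (&now);
    double ms = (double)(now.QuadPart - g_lastCount.QuadPart) * 1000.0
                / (double)g_freq.QuadPart;
    g_lastCount = now;
    return ms;
}

Feed that result into your Update() instead of the GetTickCount() delta.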

The second may be due to you not using double buffering.

I expect one of the moderators will close this thread soon…

Write your own event loop; don’t rely on interrupts. Also make sure you read your timer after a glFinish call at a consistent place in your code – probably after you call swap, but maybe immediately before; it depends.

Also make sure your swap is synch’d to vertical retrace.

Okay, I’ll try to be a little clearer about this. True this might not be purely an OpenGL question (hope you’re not too offended!) however I am rendering using OpenGL, and I am looking for advice from experienced programmers.

When I say “choppy” I mean that the quad is not moving along the screen in the nice smooth manner one would hope for. Instead it tends to make small jumps across the screen. I expect this is due to the way I am trying to update the movement by using elapsed time. When I remove the time-based update code it runs fine – but of course the speed is not uniform across varying computers.

I spent a fair amount of time reading previous posts before asking this question so I’ve already tried setting the v-sync, double buffering(!), and the QueryPerformanceCounter() call. I believe that the actual time it takes to complete the event loop is the problem – I need consistency.

Dorbie – There’s a lot to your answer and I think the solution may be in there somewhere.

1. When you say to write my own event loop, you’re saying to trim the fat – any further advice about that?

2. Regarding taking “your time after a glFinish call”, could you expand on that too?

3. Also, is ensuring that my “swap is synch’d to vertical retrace” the same as switching on vertical sync in the hardware, or are you talking code?

I’m thinking that the game really needs a way to purposely give the OS etc. regular time to do whatever housekeeping it requires, so that it doesn’t grab that time intermittently.

Surprised Dorbie didn’t move this to the Windows forum.

Anyway, the typical game loop on Windows looks something like:

MSG msg;
while( true ) {
    while( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) ) {
        TranslateMessage( &msg );
        DispatchMessage( &msg );
    }
    Update();
    Draw();
}

If you use WM_TIMER or some other way to decide to draw or update, then your code is going to jitter badly.
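If you also want the motion itself to stay even no matter how long individual frames take, the usual trick (not OpenGL-specific) is a fixed-timestep update: accumulate the elapsed time and run Update() in fixed slices, so it always sees the same dt. A rough sketch – the 10 ms step is an arbitrary choice, and the message pump is elided as above:

#include <windows.h>

const DWORD STEP_MS = 10;  // fixed update step (arbitrary choice)

void RunLoop ()
{
    DWORD lastTick = GetTickCount ();
    DWORD accumulator = 0;

    while (true)
    {
        // ... PeekMessage / TranslateMessage / DispatchMessage as above ...

        DWORD now = GetTickCount ();
        accumulator += now - lastTick;
        lastTick = now;

        while (accumulator >= STEP_MS)  // run however many fixed steps we owe
        {
            Update (STEP_MS);           // Update() always gets the same dt
            accumulator -= STEP_MS;
        }
        Draw ();
    }
}

(Use QueryPerformanceCounter() instead of GetTickCount() here for better resolution, as noted above.)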

Also, yes, making sure that you get VSync is a control panel setting, and can also be controlled using the WGL_EXT_swap_control extension (although the control panel can override that).
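Calling the extension looks roughly like this – a sketch that assumes the extension is available (you should really check the WGL extension string first), done after the GL context is current:

#include <windows.h>

void EnableVSync ()
{
    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC) (int interval);

    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC) wglGetProcAddress ("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)      // NULL if the driver doesn't expose it
        wglSwapIntervalEXT (1);  // 1 = sync each swap to vertical retrace
}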

I’m assuming you’re drawing textured quads.

If you want to measure OpenGL drawing time for a single frame, you should START by doing a glFinish(); then you should read the timer; then you should draw; then you should SwapBuffers(); then you should glFinish() again. I don’t recommend doing this for production measurements, though, as this will stall the pipe a bit.
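Put together, the measurement looks something like this – DrawScene() is a hypothetical stand-in for your drawing code, and the device context is assumed from the surrounding program:

#include <windows.h>
#include <GL/gl.h>

// Measures one frame's draw + swap time in milliseconds. Stalls the
// pipeline, so don't leave it in a shipping loop.
double TimeOneFrame (HDC hdc)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency (&freq);

    glFinish ();                    // drain any previously queued work
    QueryPerformanceCounter (&t0);  // read the timer

    DrawScene ();                   // hypothetical: your drawing code
    SwapBuffers (hdc);              // present the frame
    glFinish ();                    // wait until the GPU really finishes

    QueryPerformanceCounter (&t1);
    return (double)(t1.QuadPart - t0.QuadPart) * 1000.0
           / (double)freq.QuadPart;
}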

I didn’t realize that there was a Windows forum. I’ll start looking for information over there. Dorbie – can you move this discussion to that forum?

Jwatte – Thank you for the information.