Laggy 2D animation

Hello everybody,

I’m wondering how OpenGL manages images once they have been loaded into CPU memory.

When using glDrawPixels, does OpenGL reload the image data from CPU memory each frame, or does it keep it in video memory?

Actually, I’m loading my bitmap data into CPU memory as a simple byte array, and each frame I use glDrawPixels to display the background with a sprite over it that moves one pixel in a specific direction. About every second the sprite seems to skip a pixel (or maybe it stays on the same pixel for two frames?).

The loop displays more than 120 frames per second, so I don’t really think speed is the problem yet.

What could cause a simple animation to lag like this on a P4 with a GeForce4?

Do I have to time the buffer swap or anything?

Any help will be greatly appreciated !

Mr Pink

The simple answer is that implementations do it as fast as they can and care to. If almost nobody uses a feature, or it can’t be made fast, then it won’t be fast.

I’d suggest storing the image in a texture and drawing a textured quad. This is pretty much guaranteed to be as fast and correct as possible.

glDrawPixels, by necessity, needs to transfer the data from RAM to the card every time you use it.

Meanwhile, a texture can be uploaded to the card, and then just applied within the card each time you use it.
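Something like this is all it takes (a minimal sketch; width, height, pixels, x and y are placeholders for your own image data and sprite position, and on hardware of this era the texture dimensions must be powers of two):

/* One-time setup: upload the image into a texture object. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Every frame: no pixel data crosses the bus, the card just draws a quad. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(x, y);
glTexCoord2f(1.0f, 0.0f); glVertex2f(x + width, y);
glTexCoord2f(1.0f, 1.0f); glVertex2f(x + width, y + height);
glTexCoord2f(0.0f, 1.0f); glVertex2f(x, y + height);
glEnd();

Move the sprite by changing x and y each frame; the texture itself never has to be re-sent.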

Other things that may affect your animation smoothness include whether you have vsync on or off (usually settable in a display control panel), what OS you’re running, and whether there are other applications running (such as task bar icons).
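If you want to control vsync from code rather than the control panel, there are swap-control extensions; here’s a sketch for WGL_EXT_swap_control on Windows (on X, GLX_SGI_swap_control offers glXSwapIntervalSGI instead), assuming the extension actually appears in the extension string:

/* Assumes WGL_EXT_swap_control is advertised; check the extension string first. */
typedef BOOL (APIENTRY *PFNWGLSWAPINTERVALEXTPROC)(int interval);
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(1);  /* 1 = swap once per vertical retrace; 0 = never wait */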

If you’re running 16-bit Windows (98/ME) then the scheduler is pretty dorky, and it may decide to give up to 50 milliseconds of time in one chunk to some other application. That will cause a noticeable stutter.

If you’re running 32-bit Windows (NT based kernels such as 2k and XP) then the scheduler is much better, but there may be system services or other background tasks which still need scheduling every so often.

>>>If you’re running 16-bit Windows (98/ME) <<<

Those are 32-bit Windows. You are able to run 32-bit Windows EXEs, aren’t you?

V-man: jwatte is somewhat right and somewhat wrong. The 9x series of operating systems had a lot of legacy 16-bit code, but Windows has handled 32-bit programs since 3.11, if I recall correctly. Windows 3.11 was out before I started using PCs, so my memory on this is a little shaky.

Also, all you need to run a 32-bit program is a 32-bit processor. You can run a 32-bit program on a 16-bit OS; that’s how all the early 32-bit PC programs had to run, back when DOS was the standard OS.

I am facetiously using the name “16-bit Windows” for the DOS-based Windows versions. I find it helps get the point across that those versions are massive hacks with no solid footing (no “kernel” per se).

For a long time, I was religiously opposed to NT-based Windows because of the horribleness of DOS-based Windows. Once I actually started using Windows 2000 on a daily basis, I realized that it’s not the same thing at all, and a discerning system hacker can actually use the NT-based products without feeling all dirty inside.

Now, if MS would only get with the times on the OpenGL API front. They call Windows a “workstation OS”, but I’m not sure all those workstation apps would relish rewriting their core rendering on top of DirectX 9…

Hello guys, thanks for the first and second answers…

The others were really interesting, but really far from the point!

By the way, I’m not using Windows at all (for more than a year now!). I’m using OpenGL on Linux (and it isn’t a distro like RedHat or Mandrake; I compiled everything myself… ok… ok… except VMware and Netscape and Wolfenstein, because I can’t get access to the source code!).

So I’m coming back to my first question:

How does OpenGL manage video memory when using glDrawPixels, and what method could I use (besides textures for everything, even if that would work) to accelerate the rendering of my images to the screen?

Thanks again everybody.

Mr Pink

P.S. Please don’t try to convince me to use any Microsoft product.

I once spoke to a Microsoft programmer and he claimed there were still #ifdef 8_BITS directives in the Windows 98 code base, left over from the old DOS code!

By the way, I hope everybody knows why NT (NT 4.0, Win2K, WinXP) is much better and is a real OS code base. The NT code base was bought from Northern Telecom (that’s where “NT” comes from; Microsoft changed its meaning to “New Technology” a little later). Imagine it… the only good OSes Microsoft ever shipped were bought from another company. That tells you about the seriousness of the company. Of course I would never blame the programmers (being one myself, though not at Microsoft, obviously), because it’s never our fault!

By the way, I’m not using Windows at all (for more than a year now!). I’m using OpenGL on Linux

Ahh, the eternal war. Will there ever be peace? Personally I’ve used NT since 3.51 and can honestly say I really like it. The only problem is that you can only switch OS when the third SP or so comes out.

…to accelerate the rendering of my images to the screen?

Well, if you are just doing bit-block transfers, maybe you should look into some native X calls. Data passed through glDrawPixels has to go through the GL pipeline, so maybe it is slower (comments?).
The only problem with that is that I don’t know if you can mix X calls and GL calls. Under Windows, for example, you can do both GDI calls and GL calls on the same window/DC/GL context. Maybe not the smartest thing, but you can.
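A rough sketch of the plain Xlib route, assuming you already have a Display *dpy, a Window win, a GC gc, and your pixel buffer in a layout that matches the default visual (all the names here are placeholders):

#include <X11/Xlib.h>

/* Wrap the existing pixel buffer in an XImage header; no copy is made here. */
XImage *img = XCreateImage(dpy, DefaultVisual(dpy, screen),
                           DefaultDepth(dpy, screen), ZPixmap,
                           0,                /* offset into the data */
                           (char *)pixels,
                           width, height,
                           32,               /* bitmap_pad */
                           0);               /* bytes_per_line: 0 = computed */

/* Each frame: push the pixels to the window. */
XPutImage(dpy, win, gc, img, 0, 0, 0, 0, width, height);
XFlush(dpy);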

[This message has been edited by roffe (edited 01-12-2003).]

My initial reply is true for all OpenGL implementations and platforms, because that’s how the API is specified.
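That said, if you do stay with glDrawPixels, the per-frame transfer can at least be kept off the driver’s slow paths. A sketch of the usual precautions; note that GL_BGRA needs OpenGL 1.2 or the EXT_bgra extension, and how much any of this helps depends entirely on the driver:

/* Keep the driver off conversion and per-fragment slow paths. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* match the byte array’s row packing */
glDisable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glPixelZoom(1.0f, 1.0f);                /* any other zoom may fall back to software */
glRasterPos2i(0, 0);
/* Use a format/type pair the framebuffer stores natively; on most PC cards
   that is BGRA-ordered bytes rather than RGBA. */
glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);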