Framerate slow on Intel

I’ve developed a simple graphics engine that works pretty well on my dev machine, which uses an ATI graphics card. But when I tested on a laptop with an Intel card, the framerate was pretty low.

On the dev machine, when I’m drawing just the environment (just a grid on the “floor”) I get 214 fps. When I add an object with 100,000 polys, that drops to 156.

On the laptop, I actually get 278 fps when drawing just the environment. But when I add the same 100k poly object, the framerate drops to 28.

I’m using display lists, with no textures. I’m trying to figure out VBOs right now, but that’s going slowly.

So I’m wondering what to do about this. Are display lists not supported on Intel? Would VBOs work better?

The performance is hard to compare if we don’t know which Intel and ATI GPUs we are talking about. Intel GPUs are not known for their performance or good OpenGL drivers. Display lists can be slow or quick - they seem to run on your Intel, so they are supported. Moving away from the fixed-function pipeline and deprecated stuff is probably a good idea in general. There could be a lot of performance bottlenecks remaining…

The ATI renderer is “ATI Radeon HD 3200 Graphics.” The Intel renderer is “Mobile Intel® 4 Series Express Chipset Family.”

You shouldn’t expect too much performance from that Intel chip. It should support OpenGL 2 or 2.1, while the ATI should go up to 3.3.

Okay, thanks for your help. Many thanks.

[QUOTE=ampersander7;1242254]But when I tested on a laptop with an Intel card, the framerate was pretty low.
[/QUOTE]

Because Intel graphics cards are fucking shit. Count yourself lucky your program even runs on Intel. That said, the Intel card probably uses shared CPU/GPU memory, so uploading to a display list or VBO may not give you much, if any, gain, since it’s like reading out of main memory anyway.

Yes, Intel GPUs use shared memory, but display lists can be optimised, and one GL call to draw a VBO will always be faster than calling glVertex 100,000 times in immediate mode, regardless of where your data resides.
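For what it’s worth, the CPU side of a VBO is mostly just packing your vertex data into one contiguous buffer and uploading it once at load time; after that, drawing the whole mesh is a single call. A minimal sketch - the `Vertex` layout and `interleave` helper are illustrative (not from the poster’s code), and the GL calls are shown in comments since they need a live context:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One interleaved vertex: position + normal, as floats.
struct Vertex {
    float px, py, pz;
    float nx, ny, nz;
};

// Pack separate position/normal arrays (3 floats per vertex each)
// into one interleaved buffer ready for glBufferData.
std::vector<Vertex> interleave(const std::vector<float>& pos,
                               const std::vector<float>& nrm)
{
    assert(pos.size() == nrm.size() && pos.size() % 3 == 0);
    std::vector<Vertex> out(pos.size() / 3);
    for (std::size_t i = 0; i < out.size(); ++i) {
        out[i] = { pos[3*i], pos[3*i+1], pos[3*i+2],
                   nrm[3*i], nrm[3*i+1], nrm[3*i+2] };
    }
    return out;
}

// Upload once at load time (not every frame):
//   GLuint vbo;
//   glGenBuffers(1, &vbo);
//   glBindBuffer(GL_ARRAY_BUFFER, vbo);
//   glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
//                verts.data(), GL_STATIC_DRAW);
//
// Then draw each frame with one call instead of 100k glVertex calls:
//   glBindBuffer(GL_ARRAY_BUFFER, vbo);
//   glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, px));
//   glNormalPointer(GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, nx));
//   glEnableClientState(GL_VERTEX_ARRAY);
//   glEnableClientState(GL_NORMAL_ARRAY);
//   glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
```

The interleaved layout matters on shared-memory chips like that Intel: the driver reads one cache-friendly block instead of hopping between separate arrays.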

I don’t know exactly what you are rendering, but you should attempt to optimize before blaming the GPU, even if Intel GPUs are bottom of the barrel (only good for web surfing).

Organize your models by shaders, by textures, and by other state changes that you perform often. Calling glUseProgram for every single object will drag down performance, and the same applies to glBindTexture.
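To make that concrete, here is a sketch of state sorting: sort the frame’s draw list by (program, texture) so redundant binds can be skipped. The `DrawItem` struct and function names are made up for illustration; in a real engine the handles would be `GLuint`s and the draw would be your glCallList/glDrawArrays:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// A draw call's GL state, reduced to the two expensive bindings.
struct DrawItem {
    unsigned program;   // shader program bound with glUseProgram
    unsigned texture;   // texture bound with glBindTexture
    int      mesh;      // which model to draw
};

// Sort so items sharing a program (and, within that, a texture)
// are adjacent, letting the render loop skip redundant binds.
void sortByState(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return std::make_pair(a.program, a.texture)
                       < std::make_pair(b.program, b.texture);
              });
}

// Count how many glUseProgram/glBindTexture calls a draw list costs
// when you only bind state that actually changed.
int countStateChanges(const std::vector<DrawItem>& items) {
    int changes = 0;
    unsigned curProgram = 0, curTexture = 0;  // 0 = nothing bound yet
    for (const DrawItem& it : items) {
        if (it.program != curProgram) { ++changes; curProgram = it.program; }
        if (it.texture != curTexture) { ++changes; curTexture = it.texture; }
        // draw it.mesh here (glCallList / glDrawArrays / ...)
    }
    return changes;
}
```

With four objects alternating between two shader/texture pairs, the unsorted list costs eight binds per frame; sorted, it costs four - and the gap grows with the object count.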

And there are a few tips here:
http://www.opengl.org/wiki/Performance#Optimising_Performance

Well, I think I will still blame the GPU, but I also have no choice but to optimize. This is commercial software that people will try to run on their laptops.

I’ve experimented a little with VBOs, but at this point it would be a real drag to rewrite my code base, and I’m not seeing any gains from them (although I’m sure my implementation is primitive).