View Full Version : Intel HD Graphics, triangle count and GLSL

10-28-2011, 05:36 AM
I understand that generally OpenGL and Intel hardware is a lost cause but I'll still ask if anyone has had similar experiences.

In my engine, which supports both Direct3D9 and OpenGL 2.0+, I'm seeing catastrophically bad performance in OpenGL mode on an Intel HD Graphics-equipped laptop once the triangle count rises above roughly 10,000. When rendering only a small number of triangles/vertices, performance stays good, even though the fragment shaders are fairly complex (normal mapping, specular, PSSM shadows).

I am using GLSL and vertex buffer objects. On NVIDIA and AMD hardware, OpenGL mode is only slightly slower than Direct3D9, which is as expected.

So, has anyone else seen such a dramatic performance drop related to triangle count on Intel hardware, and did you manage to solve it?

10-31-2011, 08:44 AM
You can perform an experiment with glEnable(GL_RASTERIZER_DISCARD) to see whether performance is fragment-shader bound. If it is, you can try to optimize your fragment shaders and disable some fragment operations, as some of them are not implemented in hardware.
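A minimal sketch of that experiment: with rasterizer discard enabled, primitives still go through vertex processing but are dropped before rasterization, so no fragment shading happens. Note that GL_RASTERIZER_DISCARD is core only from OpenGL 3.0 (available earlier via the EXT/NV transform feedback extensions), and this fragment assumes a valid GL context; render_scene() and time-measurement details are hypothetical placeholders for the engine's own code.

```c
#include <GL/gl.h>

/* Hypothetical placeholder for the engine's normal draw path. */
extern void render_scene(void);

void measure_vertex_stage_only(void)
{
    /* Primitives are discarded before rasterization: vertex shaders
     * still run, fragment shaders do not. Compare frame times with
     * and without this block to locate the bottleneck. */
    glEnable(GL_RASTERIZER_DISCARD);
    render_scene();
    glDisable(GL_RASTERIZER_DISCARD);
}
```

If the frame time barely improves with discard enabled, the bottleneck is in vertex processing or CPU-side submission rather than in the fragment shaders.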

In some cases, Intel cards with pre-15.22 drivers will be vertex-shader bound, because the OpenGL driver falls back to running vertex shaders on the CPU. You should make sure you have the latest drivers to work around this.

11-01-2011, 09:29 AM
Thanks for the suggestion! Executing the vertex shader on the CPU does indeed look like a plausible cause, though I tried updating the driver to the newest available and the problem remained.