State of Windows Intel OpenGL drivers

Hi everyone,

As I don’t have hardware available for testing, I was wondering if anyone around here has experience with OpenGL applications on Windows with Intel graphics chips?

From what I understand they support OpenGL 2.1 but not 3.x.

Are the drivers stable enough for real world use (i.e. more than a spinning cube) ?

Cheers,

I’ll tell you about my professional experience with Intel graphics. Even though we kept our code limited to very basic stuff (no shaders, and stuck to the GL 1.4 that the driver supports), we still had serious stability issues, such as the famous PINK texture :slight_smile: This was enough to make us switch to Direct3D 9 instead, and it just worked! Very solid and stable. And yes, we want to support laptops with built-in graphics, because that’s what most of our customers have or prefer to use. :slight_smile:

My experience with Intel drivers goes back 2-3 years, and it was just awful. In the real world, I had to write a dedicated OpenGL 1.1 code path to make the software (a fairly complex piece of software) run “correctly”. Even VBOs were dysfunctional.

Intel drivers report all the extensions for OpenGL 3.0, but that doesn’t mean the implementation is reliable… (at all).

I would enjoy reading about more recent experiences in that regard.

Recent experience here: they’re not too bad, but they’re not good either; don’t expect performance that’s anywhere near competitive with a dedicated GPU. You absolutely have to use native texture formats (GL_BGRA and nothing else) or you’re screwed. They advertise OpenGL 2.1 support in more recent models, but it seems incomplete, so don’t rely on stuff like NPOT textures being fully available. Don’t try writing to the front buffer. If you can’t do it in D3D you shouldn’t even attempt it in OpenGL, even if the spec says that it’s supported for the GL_VERSION you’re coding to.
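By “native texture formats” I mean something like the following: a minimal sketch, assuming your pixel data is already laid out in BGRA byte order (the function name and parameters are just placeholders for illustration):

```c
/* Sketch only: upload texture data in the driver's preferred layout.
 * Assumes "pixels" is already in BGRA byte order; names are made up. */
#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_BGRA if the system headers are old */

void upload_bgra_texture(GLuint tex, GLsizei width, GLsizei height,
                         const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Internal format stays GL_RGBA8; only the *source* format is BGRA,
     * which lets the driver copy the data without swizzling. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```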

Many OEMs ship drivers without any OpenGL support at all, and lock the OS so that you can only install their driver on it. You can get round this but it’s mildly awkward for the user. Last time I checked both Dell and Toshiba were offenders here, so that’s a huge chunk of the business PC/home “multimedia” PC/laptop market covered. I don’t know if HP do this or not.
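If you want to detect that situation at runtime, one rough check (assuming a current GL context) is to look at the renderer string; the Microsoft software fallback you get when no OpenGL ICD is installed reports itself as “GDI Generic”:

```c
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Rough sketch: detect the Microsoft software fallback that you get
 * when no OpenGL ICD is installed.  Assumes a GL context is current. */
int has_hardware_opengl(void)
{
    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    if (!vendor || !renderer)
        return 0;
    printf("GL_VENDOR: %s, GL_RENDERER: %s\n", vendor, renderer);
    /* The software path reports "Microsoft Corporation" / "GDI Generic". */
    return strstr(renderer, "GDI Generic") == NULL;
}
```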

A recent Intel can run something like Doom 3 at low resolution, low detail and low to semi-playable framerates (20 or so). With less scene complexity, no shaders and less stress on the graphics subsystem, you can easily hit 60 FPS with moderately complex stuff. To be honest, though, you’re really going to get better D3D9 support from them, and unless you absolutely must use OpenGL (and don’t mind pissing off many of your potential customers) I’d recommend just using D3D9 instead.

Old experience with Intel/OpenGL (i865 and the like): blue screen.

Recent experience: mhagain nailed it. It generally works, until you stray from the beaten path. Don’t try to share contexts, either.
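To be concrete, this is the kind of WGL context-sharing pattern I’d steer clear of on these drivers (a bare sketch, error handling omitted, variable names made up):

```c
#include <windows.h>
#include <GL/gl.h>

/* Sketch of the context-sharing pattern to avoid on these drivers:
 * two rendering contexts on the same DC, sharing textures/display
 * lists via wglShareLists.  Error handling omitted for brevity. */
void create_shared_contexts(HDC dc)
{
    HGLRC main_rc   = wglCreateContext(dc);
    HGLRC worker_rc = wglCreateContext(dc);

    /* Share display lists, textures, VBOs, ... between the two. */
    wglShareLists(main_rc, worker_rc);

    wglMakeCurrent(dc, main_rc);
    /* ... render; worker_rc would be made current on another thread ... */
}
```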

Moreover, T&L performance is atrocious and vertex texture fetch is not supported, so expect roughly 2003-level capabilities from your 2011 hardware (great work, Intel!). Fragment shading performance is moderate, as long as you only use 512x512 compressed textures (anything more and frame times will start to soar).
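If you want to check at runtime whether vertex texture fetch is even there, querying GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS is one way to do it; a value of 0 means the vertex stage cannot sample textures. A quick sketch:

```c
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS */

/* Quick capability probe: hardware without vertex texture fetch
 * reports 0 vertex texture image units.  Assumes a current context. */
void report_vertex_texture_fetch(void)
{
    GLint units = 0;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &units);
    printf("Vertex texture image units: %d%s\n", units,
           units == 0 ? " (no vertex texture fetch)" : "");
}
```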

Finally, their shader compiler is very buggy. Forget about structures, arrays, loops and indexing - all will bomb in different, exciting ways. Use plain uniforms, unroll your loops and you should be mostly fine.
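To illustrate the kind of rewrite I mean (shader sources shown as C string literals; the uniform names are made up for illustration):

```c
/* Sketch of the workaround: replace an array of uniforms plus an
 * indexed loop with plain uniforms and a hand-unrolled loop. */

/* The kind of GLSL that tends to break the compiler: */
static const char *fragile_src =
    "uniform vec3 lightColor[3];\n"
    "void main() {\n"
    "    vec3 c = vec3(0.0);\n"
    "    for (int i = 0; i < 3; ++i)\n"
    "        c += lightColor[i];\n"
    "    gl_FragColor = vec4(c, 1.0);\n"
    "}\n";

/* The safer equivalent: plain uniforms, loop unrolled by hand. */
static const char *safe_src =
    "uniform vec3 lightColor0;\n"
    "uniform vec3 lightColor1;\n"
    "uniform vec3 lightColor2;\n"
    "void main() {\n"
    "    vec3 c = lightColor0 + lightColor1 + lightColor2;\n"
    "    gl_FragColor = vec4(c, 1.0);\n"
    "}\n";
```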

Falling back to D3D9 is a workaround but, for the love of anything that’s holy, make your discontent known to Intel if you do that. Otherwise, they are sucking you into paying the bill for their inability to write proper drivers.

@mhagain: Sony also locks Intel’s drivers…

That’s normal.
OpenGL is one complex beast.

Normal?
Intel has been around since the beginning. Intel is so big that it could buy AMD and nVidia if no law prevented it. Intel has 50% of the graphics chip market.

There is nothing normal about that.

When dealing with Intel GPUs it is critical to also know which GPU from Intel you have. For the older generation of Intel GPUs, the vertex stage is done on the CPU. New Intel GPUs will supposedly have GL 3.x support… but going by Intel’s horrible track record in the land of drivers, I would not hold my breath [and the D3D drivers leave much to be desired too, actually].
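If it helps, something like this is how one can sort out at runtime which Intel part the driver reports (a rough sketch; the substring checks are only illustrative, not an exhaustive list of Intel renderer strings):

```c
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Rough sketch: log which Intel part the driver reports, since the
 * capabilities differ wildly between generations.  The substrings
 * below are illustrative, not an exhaustive list.  Assumes a current
 * GL context. */
void log_intel_generation(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *version  = (const char *)glGetString(GL_VERSION);
    if (!renderer)
        return;
    printf("GL_RENDERER: %s (GL_VERSION: %s)\n", renderer, version);

    if (strstr(renderer, "GMA") || strstr(renderer, "965"))
        printf("Older integrated part: expect vertex processing on the CPU.\n");
    else if (strstr(renderer, "HD Graphics"))
        printf("Newer part: hardware T&L, GL 2.1+ advertised.\n");
}
```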

One thing that using OpenGL on an Intel GPU is good for: it gives a preview of how well your GL code (after you “port” it to GLES2 or GLES1) will run on embedded GPUs: PowerVR’s SGX family and ARM’s Mali family [nVidia Tegra is in a completely different (and higher) class].

Keep in mind that the Intel GPU only adds something like 5 USD (I think) so… one gets what one pays for. The world of PC graphics would be so much better without Intel GPU’s… but they are cheap.

I’d expand this out to be a more general point: if your code runs and runs well on Intel, you can be pretty confident that it will run and run well on anything. In fact I always keep a machine with an Intel around for this purpose; it’s great for testing stuff on and for identifying potential trouble spots and bottlenecks that don’t show up so well on real hardware.

PowerVR’s SGX family and ARM’s Mali family [nVidia Tegra is in a completely different (and higher) class].

Slightly off-topic: do you have performance numbers for comparing PowerVR GPUs to Tegra (preferably from a neutral party)? I haven’t been able to find any, and I’d love to see some.

Ermm… My problem with OpenGL on Intel is the driver implementation, which is extremely buggy. I don’t think that anyone could say that PowerVR drivers are buggy (or anywhere near as buggy), especially on mobile platforms.

Regarding performance, Intel has used the PowerVR SGX 535 for Atom-based platforms (the same design as in the iPhone 3GS and 4, but clocked at a higher speed), but it doesn’t compete in terms of performance, only in terms of power.

Finally, in terms of architecture, the PowerVR SGX has little in common with Intel chips. I find it really naive to use an Intel platform as a “preview of how well your GL code (after you “port” it to GLES2 or GLES1) will run on embedded GPU’s”.

I’ve written GLES2 (and 1) code for both PowerVR SGX (Nokia N900) and Arm Mali (prototype hardware).

Right off the bat: the GL drivers are still, at best, iffy and buggy… ARM much worse than PowerVR. The GLSL compilers on both are not very good compared to the desktop; indeed, no attempt is made by the compiler to reorder instructions, etc. As for the GL implementations themselves, well, “be careful” is the only thing I can say. The bugs can be “interesting”. Moreover, GLES2 has some particularly odd things about it which make it irritating to use (for example, NPOT is in GLES2 core but only without mipmapping and with limited texture wrap modes; the limitation is lifted by an extension, but glGenerateMipmap does not work on NPOT textures even with that extension).
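To make the NPOT caveat concrete, here is the texture parameter setup that stays within core GLES2 rules (a sketch only; it assumes the texture data has already been uploaded):

```c
#include <GLES2/gl2.h>

/* Sketch: core GLES2 only allows NPOT textures with no mipmapping and
 * CLAMP_TO_EDGE wrapping, so set exactly that and nothing more. */
void setup_npot_texture(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* no mipmaps */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Do not call glGenerateMipmap() here; as noted above, it does not
     * work reliably on NPOT textures even with the extension present. */
}
```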

Groovounet is completely correct when he says that the architecture of PowerVR and Mali is nothing like that of Intel’s GPUs… indeed, the architectures of Tegra, PowerVR and Mali are all very different:

- Tegra: a traditional rasterizer. I think of it as a “mini-GeForce 6”. This is not really true, but it is a good starting point for how to deal with it.
- Mali is a tiled renderer… basically, when one does glDrawStuff, it builds a list of polygons per tile. At eglSwapBuffers, the GPU walks the tiles and then rasterizes. Mali is OK with discard, but really hates long skinny triangles. Generally speaking, drawing front to back, like on traditional rasterizers, is a win under Mali.
- PowerVR is a “deferred tiled renderer”… it is also a tiled renderer that builds a polygon list per tile, but the way the chip works, it attempts to only run one fragment shader invocation per pixel. This works well as long as one never does “discard”… the situation with blending is more complicated, but on such GPUs blending is MUCH better for performance than discard. For PowerVR I suspect (though benchmarking it is not easy) that there is little or possibly nothing to gain by drawing front to back.

For both Mali and PowerVR, using a texture, then changing its contents, then using it again in the same frame is bad for performance. Framebuffer object switches are expensive, as they typically force a walk of the tiles. I have no idea how the PowerVR SGX 545 is going to have reasonable performance when it comes to occlusion queries (as that is GL 3.x core) within a frame. For reference, on the tiled renderers a tile is a small portion of the framebuffer; when the GPU renders, it renders to SRAM (i.e. RAM on the chip itself) and then copies the tile out to the framebuffer. This is why EGL has a flag to say, about an EGLSurface, that you don’t care whether its contents are preserved: if the contents do not need to be preserved, the implementation can skip the copy from RAM back into SRAM, saving bandwidth.
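For completeness, the EGL flag I mean is EGL_SWAP_BEHAVIOR; a minimal sketch (assuming an already-created display and window surface) would be:

```c
#include <EGL/egl.h>

/* Sketch: tell the implementation it may discard the old contents of
 * the surface at swap time, so a tiled renderer can skip restoring the
 * framebuffer into tile memory.  Assumes dpy/surface already exist. */
void allow_buffer_discard(EGLDisplay dpy, EGLSurface surface)
{
    /* EGL_BUFFER_DESTROYED: contents are undefined after eglSwapBuffers. */
    eglSurfaceAttrib(dpy, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_DESTROYED);
}
```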

Moreover, Groovounet is completely right when he says:

It is naive, and in all honesty I was being a little hyperbolic… I would in no way say that tuning for Intel GPUs will tell you how well your code will work on device; rather, Intel GPUs and drivers give the true lowest common denominator, i.e. the worst of the worst. However, there are some important common bits. Firstly, in the embedded world, typically (but not always) the GPU and CPU are sitting on the same silicon, the SoC (System on Chip), along with lots of other bits. Secondly, the memory architecture is almost always unified. The memory bandwidth issue is a big deal on these gizmos; that is why tiled renderers are quite common in the embedded world… I think NVIDIA’s Tegra is the only non-tiled renderer for embedded, and NVIDIA had to work some magic on the memory controllers to deal with the bandwidth issues.

It’s worth noting that the same also applies to Intel.
