Strange Intel Driver

Hi,

I’ve come across the following hardware/software:
renderer: Mobile Intel® 4 Series Express Chipset Family
vendor : Intel
version : 2.1.0 - Build 6.14.10.5268

texture image units: 16
texture coords: 8
legacy texture units : 8
max. texture dimensions: 4096x4096

GLSL support:
version: 1.20 - Intel Build 6.14.10.5268

I was pleasantly surprised by the extensive extension list, but I found it a little strange which extensions were listed, and some that I had taken for granted were missing.

These extensions are (among many others) in the list:

GL_EXT_framebuffer_object
GL_EXT_framebuffer_blit

GL_ARB_framebuffer_object
GL_ARB_framebuffer_sRGB
WGL_ARB_framebuffer_sRGB

Notice that EXT_framebuffer_multisample and EXT_framebuffer_sRGB are missing, although the needed functionality is included in ARB_framebuffer_object and GL_ARB_framebuffer_sRGB.

This behaviour is perfectly legal. But in practice I always chose the EXT version over ARB or core, since I wanted to be conservative in order to support older hardware/software. This is the first time I've seen it the other way around :slight_smile:

Also, since the core version is above 1.3, GL_ARB_multisample is in core, BUT WGL_ARB_multisample ought to still be listed as a WGL extension! This seems to be a bug then… but how do I handle it? I could try to use multisampling blindly, or just not support it :-/
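Checking for the WGL extension has to go through wglGetExtensionsStringARB rather than glGetString, roughly like this (just a sketch; it assumes &lt;wglext.h&gt; is available and a context is already current so wglGetProcAddress works):

```c
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>
#include <string.h>

/* WGL extensions don't show up in glGetString(GL_EXTENSIONS);
   they have to be queried through WGL_ARB_extensions_string. */
int wgl_has_multisample(HDC hdc)
{
    PFNWGLGETEXTENSIONSSTRINGARBPROC wglGetExtensionsStringARB =
        (PFNWGLGETEXTENSIONSSTRINGARBPROC)wglGetProcAddress("wglGetExtensionsStringARB");
    if (!wglGetExtensionsStringARB)
        return 0;

    const char *wgl_ext = wglGetExtensionsStringARB(hdc);
    return wgl_ext && strstr(wgl_ext, "WGL_ARB_multisample") != NULL;
}
```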

Is there similar “real world” hardware/software out there that behaves as strangely as this Intel card?

My fortunately limited exposure to some Intel cards revealed quirks, starting with:
max. texture dimensions: 4096x4096
(while actually it was 512x512)

When I checked everything manually, the caps were ALL lying.

ARB_multisample (or GL >= 1.3) means that the GL API for multisampling is present. It does not imply the capability to create a multisampled drawable. That is part of the window system.

(The ARB_multisample spec doesn’t require a minimum number of samples, unlike GL 3.0.)
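To actually get a multisampled drawable you have to go through the window system, i.e. WGL on Windows. Roughly something like this (only a sketch; it assumes &lt;wglext.h&gt; and that a temporary context is already current, since wglGetProcAddress needs one):

```c
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

/* Ask the window system for a pixel format with MSAA.
   Returns 0 if no multisampled format is available. */
int choose_msaa_pixel_format(HDC hdc, int samples)
{
    PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
    if (!wglChoosePixelFormatARB)
        return 0; /* WGL_ARB_pixel_format not exposed */

    const int attribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,     24,
        WGL_DEPTH_BITS_ARB,     24,
        WGL_SAMPLE_BUFFERS_ARB, 1,        /* from WGL_ARB_multisample */
        WGL_SAMPLES_ARB,        samples,
        0
    };

    int  format = 0;
    UINT count  = 0;
    if (!wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count) || count == 0)
        return 0; /* the API is there, but the driver can't actually give us MSAA */
    return format;
}
```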

ARB_framebuffer_object allows you to query the maximum supported number of samples for a renderbuffer (GL_MAX_SAMPLES). Prior to GL 3.0, this is also allowed to be zero.
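For example, something along these lines (a sketch; it assumes GLEW or a similar loader has already resolved the ARB_framebuffer_object entry points, and the 1024x768 size is just illustrative):

```c
GLint max_samples = 0;
glGetIntegerv(GL_MAX_SAMPLES, &max_samples);   /* may legitimately be 0 before GL 3.0 */

/* clamp what we ask for to what the driver reports */
GLsizei samples = (max_samples >= 4) ? 4 : (GLsizei)max_samples;

GLuint rbo = 0;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
if (samples > 0)
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, 1024, 768);
else
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 1024, 768); /* single-sampled fallback */
```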

If you are referring to GL_MAX_TEXTURE_SIZE, then it's not all that clear.
That value is specified in a pretty strange way.
From my understanding of the spec it is pretty much useless, as it only means that textures with a side larger than that value must generate an error.

If you got that from proxy textures (duh, these are a pain to use …) then it's a different story.
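By proxies I mean something along these lines (a sketch; the GL_RGBA8 format and the helper name are just for illustration):

```c
/* Ask the driver whether a specific 2D texture would actually be accepted,
   instead of trusting GL_MAX_TEXTURE_SIZE. The proxy target doesn't allocate
   anything; the driver just reports back whether it would work. */
int texture_2d_fits(GLsizei width, GLsizei height, GLenum internal_format)
{
    GLint probed_width = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, internal_format,
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &probed_width);
    return probed_width != 0;  /* width comes back as 0 if the driver rejects it */
}
```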

It's understandable when VRAM can't hold the texture, but not supporting 1024x1024 on a mobo built for a Core 2 Duo?
Anyway, the driver was reporting that it supported NPOT textures while it didn't, and it was likewise lying about everything else.

This behaviour is perfectly legal. But in practice I always chose the EXT version over ARB or core, since I wanted to be conservative in order to support older hardware/software. This is the first time I've seen it the other way around :slight_smile:

This may not be a good idea; while functionally there should in theory be little or no difference, there may be bugfixes or subtle implementation differences between them. The way extensions are approved and promoted will always mean that core and ARB versions are more widespread, better tested and more consistent in behaviour across different drivers.

My preferred route for extension checking is core, falling back to ARB, falling back to EXT, and don't bother with vendor-specific extensions.
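As a rough illustration of that fallback order (a sketch for Windows/GL 2.1-era code; the pglGenFramebuffers pointer names are just my own convention, and the typedefs come from &lt;GL/glext.h&gt;):

```c
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

/* Whole-token search of the GL 2.1-style extension string. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);
    while (ext) {
        const char *pos = strstr(ext, name);
        if (!pos)
            return 0;
        if ((pos == ext || pos[-1] == ' ') && (pos[len] == ' ' || pos[len] == '\0'))
            return 1;                 /* accept only whole-token matches */
        ext = pos + len;
    }
    return 0;
}

PFNGLGENFRAMEBUFFERSPROC    pglGenFramebuffers    = NULL;
PFNGLGENFRAMEBUFFERSEXTPROC pglGenFramebuffersEXT = NULL;

void load_gen_framebuffers(void)
{
    /* ARB_framebuffer_object entry points carry no suffix, so this branch
       also covers the GL 3.0 core functions on newer drivers. */
    if (has_extension("GL_ARB_framebuffer_object"))
        pglGenFramebuffers = (PFNGLGENFRAMEBUFFERSPROC)
            wglGetProcAddress("glGenFramebuffers");
    else if (has_extension("GL_EXT_framebuffer_object"))
        pglGenFramebuffersEXT = (PFNGLGENFRAMEBUFFERSEXTPROC)
            wglGetProcAddress("glGenFramebuffersEXT");
    /* else: no FBO support at all, take a non-FBO code path */
}
```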

The Intel 4 Series isn’t too shabby a part; it certainly does have some strange behaviour in some places, but overall it’s capable of quite respectable performance, and I’m currently doing some work on one that’s proving to be quite enjoyable. It is, however, a much better part with D3D than with OpenGL.

Overall I like doing a certain amount of development and testing on this kind of part as it tends to catch suboptimal code paths and assumptions I may make about the availability of functionality fairly well. I come out of it knowing that if my code runs well and stable on something like this, there’s a pretty good chance of it running well and stable on just about anything.