What to Recommend for Intel Graphics?

As I’m putting together a minimum system recommendation for a 3D demo, what version of OpenGL should I blindly trust on Intel integrated graphics (within a range of 10 years)? Regardless of the capabilities I end up using, I’ll have to restrict the demo to the lowest version that can run safely on Intel graphics without bugs or weird effects.

Thanks.

what version of OpenGL should I blindly trust on Intel integrated graphics (within a range of 10 years)?

In general, you shouldn’t blindly trust any GL implementation. If you do and release software without thoroughly testing your renderer, you can get into trouble depending on which features you use. Intel has a tradition of not being the pinnacle of GL conformance and reliability, so especially when it comes to Intel you should be suspicious. From what I’ve seen so far, the Sandy Bridge and probably Ivy Bridge chips seem to handle most of the supported GL subset well enough for basic usage. However, I cannot guarantee anything beyond simple demos.

I was thinking of restricting it to version 1.2 with no shaders. But since high performance is not an issue, I could use Mesa3D as an option on Intel hardware. I think it has some basic acceleration, which should be enough.

In general, you shouldn’t blindly trust any GL implementation.

Sorry. By “blindly trusting” an implementation I meant reliability to a degree that matches, let’s say, the reliability of Direct3D drivers on the same hardware. So I don’t expect a 100% bug-free implementation.

Intel has a tradition of not being the pinnacle of GL conformance and reliability, so especially when it comes to Intel you should be suspicious.

And that’s why some CAD software requirements exclude Intel explicitly. Should I do the same? The main problem with this is that if someone has a decent new laptop with integrated Intel graphics, there’s a high chance it would actually work with a minimal GL version…

Now if I’m limiting the demo to NVIDIA and ATI only, where can I get a list of each GPU with its latest supported GL version?

Thanks.

By “blindly trusting” an implementation I meant reliability to a degree that matches, let’s say, the reliability of Direct3D drivers on the same hardware.

I don’t think this holds for all major companies. With D3D, vendors need to pass the quality standards Microsoft has set if they want WHQL certification. For OpenGL there is no such thing.

Now if I’m limiting the demo to NVIDIA and ATI only, where can I get a list of each GPU with its latest supported GL version?

I think you would have to go back more than a decade to find hardware from either of the two that doesn’t support GL 1.2.

What features do you intend to use?

What features do you intend to use?

No eye-candy effects, just basic multi-texturing, and I prefer using shaders for multi-texturing and lighting.
I’m also good without vertex buffer objects.
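Concretely, something like this minimal sketch is what I have in mind - GLSL 1.10 (the GL 2.0 level) embedded as C strings; the sampler and varying names are just placeholders:

```c
/* Minimal sketch, assuming two texture units and one directional light.
   The names tex0/tex1 and the varying "lighting" are hypothetical. */
static const char *vert_src =
    "#version 110\n"
    "varying vec3 lighting;\n"
    "void main() {\n"
    "    gl_Position    = ftransform();\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_TexCoord[1] = gl_MultiTexCoord1;\n"
    "    vec3 n = normalize(gl_NormalMatrix * gl_Normal);\n"
    "    vec3 l = normalize(gl_LightSource[0].position.xyz);\n" /* directional: w == 0 */
    "    lighting = gl_LightSource[0].diffuse.rgb * max(dot(n, l), 0.0);\n"
    "}\n";

static const char *frag_src =
    "#version 110\n"
    "uniform sampler2D tex0;\n"
    "uniform sampler2D tex1;\n"
    "varying vec3 lighting;\n"
    "void main() {\n"
    "    vec4 base   = texture2D(tex0, gl_TexCoord[0].st);\n"
    "    vec4 detail = texture2D(tex1, gl_TexCoord[1].st);\n"
    "    gl_FragColor = vec4(base.rgb * detail.rgb * lighting, base.a);\n"
    "}\n";
```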

and I prefer using shaders for multi-texturing and lighting.

In that case a GLSL-supporting version, i.e. at least GL 2.0, is advisable. In any case, I suggest you try the code on as many platforms as you can get your hands on. You can’t trust a program to run fine on NVIDIA just because it does so on an AMD system.
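If it helps, here’s a minimal sketch of the kind of startup check I mean, assuming a GL context is already current (created via whatever toolkit you use):

```c
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

/* Sketch: refuse to start if the driver reports less than GL 2.0. */
static void require_gl20(void)
{
    const char *ver = (const char *)glGetString(GL_VERSION);
    int major = 0, minor = 0;

    if (!ver || sscanf(ver, "%d.%d", &major, &minor) != 2 || major < 2) {
        fprintf(stderr, "OpenGL 2.0 required, driver reports: %s\n",
                ver ? ver : "(no context?)");
        exit(EXIT_FAILURE);
    }
}
```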

Now if I’m limiting the demo to NVIDIA and ATI only, where can I get a list of each GPU with its latest supported GL version?

I would not trust anything but the most basic OpenGL code for GL 2.1 and below. NVIDIA still supports the GeForce 6xxx line (note the number of x’s). ATI doesn’t support anything before the HD series. So all of ATI’s supported hardware is 3.3-capable.

The problem is that ATI left their 2.x hardware in a less-than-good state. While their drivers have become much better of late, their drivers in those days were terrible. And since that hardware isn’t supported anymore, problems arise when you try to do something with it.

You can’t trust a program to run fine on NVIDIA just because it does so on an AMD system.

Actually it’s more likely to work than the other way around. NVIDIA’s drivers are far more permissive than AMD’s.

Actually it’s more likely to work than the other way around. NVIDIA’s drivers are far more permissive than AMD’s.

Yeah, definitely the wrong order. :wink: However, my point was to emphasize that testing on a single platform is never enough to reach a certain level of quality.

BTW: I wouldn’t call it permissive, as they sometimes reflect the spec incorrectly. So faulty might be a better term. :slight_smile:

For a 10-year range you’re likely talking about OpenGL 1.1 or 1.2 maximum, and the D3D option doesn’t look much better either. I personally think you’re being over-conservative with this, so if you’re willing to narrow it to 6 or so years things get a LOT better - by the 9xx series Intel’s capabilities were actually getting pretty decent, and many of them are even exposed via their GL driver.

So, taking the 9xx series as a baseline, you’ll have OpenGL 1.4 plus a bunch of interesting extensions - GL_ARB_vertex_buffer_object (emulated in software on the earlier models), GL_ARB_vertex_program (likewise) and GL_ARB_fragment_program - broadly equivalent to D3D9 with shader model 2 (which also works quite well on them). Note, though, that we’re talking assembly shaders here rather than GLSL. These are all available and will even work well and fast - despite the software emulation on the per-vertex side, you should easily hit or exceed 60 fps with Quake or Quake II level graphics.
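Since you’d be relying on those extensions rather than on the core version, check for them explicitly at startup. A minimal sketch against the classic pre-GL3 extension string (note that a bare substring test isn’t quite safe):

```c
#include <string.h>
#include <GL/gl.h>

/* Sketch: look for a full, space-delimited token in the extension
   string. A plain strstr() would also match extensions that merely
   share a prefix with the one we want. */
static int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    const char *p = exts;
    size_t len = strlen(name);

    while (p && (p = strstr(p, name)) != NULL) {
        if ((p == exts || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}

/* e.g. if (!has_extension("GL_ARB_fragment_program")) use a fallback path */
```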

In other capabilities you’ll have 8 texture units, a maximum texture size of at least 1024x1024, 128 MB of video memory (shared, of course), and full, general multitexture capability. I’m not certain whether point sprites are exposed via their GL driver (they are in their D3D driver, though).
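Rather than hard-coding those figures, it’s safer to query the actual limits at startup and scale your assets accordingly - a quick sketch:

```c
#include <stdio.h>
#include <GL/gl.h>

/* Sketch: read the real limits instead of assuming them.
   GL_MAX_TEXTURE_UNITS is the fixed-function unit count (GL 1.3+);
   old Windows headers may need glext.h for it. */
static void print_limits(void)
{
    GLint max_size = 0, units = 0;

    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &units);
    printf("max texture size: %d, texture units: %d\n",
           (int)max_size, (int)units);
}
```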

More recent models are better still, with the ever-present caveat that it’s Intel graphics.

So you still have to tread a little carefully with them - a good general rule of thumb is that if functionality isn’t exposed in the D3D driver for a given model, then don’t even attempt to use it in OpenGL - even if the GL spec says that you should be able to.

Great! So as a compromise it’s better to narrow it down to, let’s say, hardware from within the last five years, and limit the functionality to the greatest common subset of features between GL 2.1 and D3D9.

I think targeting GL 2.1 is likely the most reasonable approach when you want to release a demo that would sort-of-“just work”. I say sort of simply because Intel drivers have been bad, are bad, and will likely continue to be bad. On NVIDIA, GL 2.1 corresponds roughly to the GeForce 6xxx (let’s not talk about the GeForce FX’s sort-of-support for GL 2.1, OK?), and that card dates from… 2004.

If you are desperate you could try the ANGLE project (Almost Native Graphics Layer Engine), which is essentially GLES2 implemented on top of D3D… my opinion is that it’s not really a great option, but desperation rules when dealing with Intel graphics.

I’m somewhat aware of Intel’s OpenGL problem of not supporting the most recent versions, and as I understand it, this is merely due to hardware limitations. Now it seems the problem is more about poor driver quality, regardless of the version supported. I wonder why such a giant company that creates the best silicon brains :smiley: along with various tools and drivers for their products cannot get the OpenGL part done right, or at least make it match the quality of its competitors?

[…]cannot get the OpenGL part done right, or at least make it match the quality of its competitors.

I don’t think that making good CPUs implies making good GPUs and OpenGL implementations. Intel has never produced a dedicated GPU and I’m sure that the integrated chips have never been used in any hardcore graphics application at any time. And I don’t count CAD apps as hardcore graphics applications. :wink: They have been in the ARB for quite some time though.

A lot of real applications need more performance than you get with integrated GPUs. This in turn limits the potential number of customers and developers sending in bug reports. Even Intel can only test so much, and I don’t know if they have an internal conformance test suite for OpenGL (although it would be very wise, because there is no standardized one). And even if they receive a number of valuable reports, fixing the bugs and actually releasing a more stable driver is another story.

It’s not about “cannot get the OpenGL part done right”; it’s about “don’t care about OpenGL.”

OpenGL support is just a blurb for the box. As long as it runs Minecraft, some id Tech engine, and a couple of other things, that’s all the “OpenGL support” that you really need.

Intel’s drivers generally suck, whether OpenGL or D3D. They’ve gotten better of late, but they’re still pretty weak. Their D3D implementation is workable, but not nearly as solid as their competitors. And since GL drivers require more work, they will naturally be less stable. And since so few popular applications actually use OpenGL (especially some of the newer stuff), it simply never gets tested.

That’s why I wish the ARB would get that test suite done.

That’s why I wish the ARB would get that test suite done.

Do we have any information on its status?

Is it theoretically possible to have a test suite that can detect driver bugs? OpenGL command ordering can produce different results due to a bug, so how can a test case uncover such command-ordering problems? I imagine a test suite is a set of GL programs that are expected to produce certain images.

OpenGL command ordering can produce different results due to a bug, so how can a test case uncover such command-ordering problems?

What does command ordering have to do with anything?

OpenGL isn’t, and doesn’t try to be, pixel-accurate. A test suite only needs to test conformance with the specification. And that’s fairly easy in a lot of cases. At the very least, each command should be tested to see if it produces errors when it should. Shader functions should be tested to show that they produce appropriate values, and so forth.
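As a sketch of what one such case could look like (assuming 0xFFFF is not a valid cap value):

```c
#include <GL/gl.h>

/* Sketch of one conformance-style case: glEnable with an unknown cap
   must raise GL_INVALID_ENUM per the spec. Returns 1 on pass. */
static int test_enable_invalid_enum(void)
{
    while (glGetError() != GL_NO_ERROR)
        ;                              /* drain stale errors first */
    glEnable((GLenum)0xFFFF);          /* deliberately invalid cap */
    return glGetError() == GL_INVALID_ENUM;
}
```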

State-change functions that are supposed to work independently of call order can, because of a driver bug, sometimes produce different results. It’s not about every possible combination of calls; sometimes just swapping two calls triggers the problem. I had this issue with ATI’s fixed-function pipeline until a driver update fixed the bug.
So the conformance test would just verify compliance with the GL version, not the quality of the driver and its bugs.
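For example, something as trivial as this - a minimal sketch, with blending chosen arbitrarily:

```c
#include <GL/gl.h>

/* Sketch: both functions leave identical final state, so the spec
   requires identical rendering; a driver bug of the kind described
   above shows up as a visible difference between the two. */
static void setup_blend_a(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}

static void setup_blend_b(void)
{
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
}
```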

IMHO, these absolutely aren’t mutually exclusive. In fact it’s quite the opposite: GL compliance is one of the most important aspects of a quality driver. If your implementation is basically bug-free but doesn’t behave as developers expect because it doesn’t reflect the spec, then the whole thing is still worse than a driver which is 100% compliant but has some bugs. I’d rather have a driver which I can test and file bug reports against when I see something, than an implementation which doesn’t behave correctly and tricks you into believing that everything is fine, only to see it not working correctly on another platform.