
What to Recommend for Intel Graphics???



Janika
05-22-2012, 09:55 AM
As I'm putting together minimum system recommendations for a 3D demo, what version of OpenGL can I blindly trust on Intel integrated graphics (within a range of about 10 years old)? Regardless of the capabilities I actually use, I will have to restrict myself to the lowest version that runs safely on Intel graphics without bugs or weird effects.

Thanks.

thokra
05-22-2012, 10:13 AM
what version of OpenGL can I blindly trust on Intel integrated graphics (within a range of about 10 years old)?

In general, you shouldn't blindly trust any GL implementation. If you do, and release software without thoroughly testing your renderer, you can get into trouble depending on what features you use. Intel has a tradition of not being the pinnacle of GL conformance and reliability, so especially when it comes to Intel you should be suspicious. From what I've seen so far, the Sandy Bridge and probably Ivy Bridge chips seem to handle most of the supported GL subset well enough for basic usage. However, I cannot guarantee anything beyond simple demos.

Janika
05-22-2012, 10:21 AM
I was thinking of restricting it to version 1.2 with no shaders. But since high performance is not an issue, I could use Mesa3D as an option on Intel hardware. I think it has some basic acceleration, which should be enough.

Janika
05-23-2012, 12:08 PM
In general, you shouldn't blindly trust any GL implementation.

Sorry. By "blindly trusting" an implementation I meant its being reliable to a degree that matches, let's say, the reliability of Direct3D drivers on the same hardware. So I don't expect a 100% bug-free implementation.


Intel has a tradition of not being the pinnacle of GL conformance and reliability, so especially when it comes to Intel you should be suspicious.

And that's why some CAD software requirements exclude Intel explicitly. Should I do the same? The main problem with this is that if someone has a decent new laptop with integrated Intel graphics, there's a high chance it may work with a minimal GL version...

Now, if I'm limiting the demo to NVIDIA and ATI only, where can I find a list of each GPU with the last GL version it supports?

Thanks.

thokra
05-23-2012, 12:44 PM
By "blindly trusting" an implementation I meant its being reliable to a degree that matches, let's say, the reliability of Direct3D drivers on the same hardware.

I don't think this holds for all major vendors. With D3D drivers, vendors need to meet the quality standard Microsoft sets if they want WHQL certification. For OpenGL there is no such thing.


Now, if I'm limiting the demo to NVIDIA and ATI only, where can I find a list of each GPU with the last GL version it supports?

I think you would have to go back more than a decade to find hardware from either of the two that doesn't support GL 1.2.

What features do you intend to use?

Janika
05-23-2012, 12:53 PM
What features do you intend to use?

No eye-candy effects, just basic multi-texturing, and I prefer using shaders for multi-texturing and lighting.
I'm also good without vertex buffer objects.
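
For reference, a minimal sketch of what such a shader-based multi-texturing pass could look like in GLSL 1.10; the uniform and varying names here are purely illustrative, not from any particular codebase:

// vertex shader (GLSL 1.10, using the built-in GL 2.x attributes)
varying vec2 v_uv0;
varying vec2 v_uv1;
void main()
{
    v_uv0 = gl_MultiTexCoord0.xy;
    v_uv1 = gl_MultiTexCoord1.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: modulate a base texture with a second map (e.g. a lightmap)
uniform sampler2D u_base;      // bound to texture unit 0
uniform sampler2D u_second;    // bound to texture unit 1
varying vec2 v_uv0;
varying vec2 v_uv1;
void main()
{
    gl_FragColor = texture2D(u_base, v_uv0) * texture2D(u_second, v_uv1);
}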

thokra
05-23-2012, 01:00 PM
and I prefer using shaders for multi-texturing and lighting.

In that case a GLSL-supporting version, i.e. minimally GL 2.0, is advisable. In any case I suggest you try the code on as many platforms as you can get your hands on. You can't trust a program to run fine on NVIDIA, just because it does so on an AMD system.
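
As a rough sketch (assuming a context is already current and that the fixed-function fallback exists elsewhere), the decision could be as simple as parsing the version string at startup:

#include <stdio.h>
#include <GL/gl.h>

/* Returns non-zero if the GLSL path is available; GLSL arrived with GL 2.0. */
int glsl_available(void)
{
    int major = 0, minor = 0;
    const char *ver = (const char *) glGetString(GL_VERSION);  /* needs a current context */
    if (ver)
        sscanf(ver, "%d.%d", &major, &minor);
    return major >= 2;
}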

Alfonse Reinheart
05-23-2012, 01:25 PM
Now if I'm limiting the demo to NVIDIA and ATI only, where can I get a list of each hardware with its last supported GL version?

I would not trust anything but the most basic OpenGL code for GL 2.1 and below. NVIDIA still supports the GeForce 6xxx line (note the number of x's). ATI doesn't support anything before the HD series. So all of ATI's supported hardware is 3.3-capable.

The problem is that ATI left their 2.x hardware in a less-than-good state. While their drivers have become much better of late, their drivers in those days were terrible. And since that hardware isn't supported anymore, problems arise when you try to do anything with it.


You can't trust a program to run fine on NVIDIA, just because it does so on an AMD system.

Actually it's more likely to work than the other way around. NVIDIA's drivers are far more permissive than AMD's.

thokra
05-23-2012, 02:02 PM
Actually it's more likely to work than the other way around. NVIDIA's drivers are far more permissive than AMD's.

Yeah, definitely the wrong order. ;) However, my point was to emphasize that testing on a single platform is never enough to reach a certain level of quality.

BTW: I wouldn't call it permissive, as they sometimes reflect the spec incorrectly. So faulty might be a better term. :)

mhagain
05-23-2012, 04:37 PM
For a range of 10 years you're likely talking about OpenGL 1.1 or 1.2 at most, and the D3D option doesn't look much better either. I personally think you're being over-conservative with this, so if you're willing to narrow it to 6 or so years things get a LOT better - by the 9xx series Intel's capabilities were actually getting pretty decent, and much of that is even exposed via their GL driver.

So, looking at the 9xx series as a baseline, you'll have OpenGL 1.4 plus a bunch of interesting extensions - GL_ARB_vertex_buffer_object (emulated in software on the earlier models), GL_ARB_vertex_program (likewise) and GL_ARB_fragment_program - broadly equivalent to D3D9 with shader model 2 (which also works quite well on them) - but note - we're talking assembly shaders rather than GLSL here. These are all available and will even work well and fast - despite the software emulation on the per-vertex side you should easily hit or exceed 60 fps with Quake or Quake II level graphics.
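
A small sketch of how one might probe for the extensions mentioned above before relying on them (GL 1.4-era style, using the classic extension string; the crude substring check is good enough for a demo, and the helper name is mine):

#include <string.h>
#include <GL/gl.h>

static int has_extension(const char *name)
{
    const char *all = (const char *) glGetString(GL_EXTENSIONS);
    return all && strstr(all, name) != NULL;   /* crude: ignores prefix collisions */
}

void detect_features(void)
{
    int have_vbo = has_extension("GL_ARB_vertex_buffer_object");
    int have_vp  = has_extension("GL_ARB_vertex_program");
    int have_fp  = has_extension("GL_ARB_fragment_program");
    /* ...stash these in whatever caps struct the renderer uses... */
    (void) have_vbo; (void) have_vp; (void) have_fp;
}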

In other capabilities you'll have 8 texture units, support for texture sizes of at least 1024x1024, 128 MB of video memory (shared, of course), and full, general multitexture capability. I'm not certain if point sprites are exposed via their GL driver (they are in their D3D driver though).
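
Rather than assuming those limits, the demo can query them at startup; a minimal sketch:

#include <stdio.h>
#include <GL/gl.h>

void print_limits(void)
{
    GLint max_units = 0, max_size = 0;
    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &max_units);  /* fixed-function texture units, GL 1.3+ */
    glGetIntegerv(GL_MAX_TEXTURE_SIZE,  &max_size);   /* e.g. 1024 or 2048 on the 9xx series */
    printf("texture units: %d, max texture size: %d\n", max_units, max_size);
}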

More recent models are better still, with the ever-present caveat that it's Intel graphics.

So you still have to tread a little carefully with them - a good general rule of thumb is that if functionality isn't exposed in the D3D driver for a given model, then don't even attempt to use it in OpenGL - even if the GL spec says that you should be able to.

Janika
05-23-2012, 10:28 PM
Great! So as a compromise it's better to narrow it down to, let's say, hardware from the last five years, and limit the functionality to the greatest common subset of features between GL 2.1 and D3D9.

kRogue
05-24-2012, 04:16 AM
I think saying GL 2.1 is likely the most reasonable approach when you want to release a demo that would sort of "just work". I say sort of simply because Intel drivers have been bad, are bad, and will likely continue to be bad. GL 2.1 corresponds on NVIDIA roughly to the GeForce 6xxx (let's not talk about the GeForce FX's sort-of support for GL 2.1, OK?), and that card was... from 2004.

If you are desperate you could try http://code.google.com/p/angleproject/, the ANGLE project, which is essentially GLES2 implemented via D3D... My opinion is that it's not really a great option, but desperation rules when dealing with Intel graphics.

Janika
05-24-2012, 07:24 AM
I'm kind of aware of Intel's OpenGL problem, namely that they don't support the most recent versions, and as I understand it this is merely due to hardware limitations. Now it seems the problem is more about poor driver quality, regardless of the version supported. I'm wondering why such a giant company that creates the best silicon brains :D, along with various tools and drivers for their products, cannot get the OpenGL part done right, or at least make it match the quality of its competitors?

thokra
05-24-2012, 07:55 AM
[..]cannot get the OpenGL part done right, or at least make it match the quality of its competitors?

I don't think that making good CPUs implies making good GPUs and OpenGL implementations. Intel has never produced a dedicated GPU, and I'm sure the integrated chips have never been used in any hardcore graphics application. And I don't count CAD apps as hardcore graphics applications. ;) They have been in the ARB for quite some time, though.

A lot of real applications need more performance than you get with integrated GPUs. This in turn limits the potential number of customers and developers sending in bug reports. Even Intel can only test so much, and I don't know if they have an internal conformance test suite for OpenGL (although it would be very wise, because there is no standardized one). And even if they receive a number of valuable reports, fixing the bugs and actually releasing a more stable driver is another story.

Alfonse Reinheart
05-24-2012, 08:16 AM
It's not about "cannot get the OpenGL"; it's about "don't care about OpenGL."

OpenGL support is just a blurb for the box. As long as it runs Minecraft, some IdTech-engine, and a couple of other things, that's all the "OpenGL support" that you really need.

Intel's drivers generally suck, whether OpenGL or D3D. They've gotten better of late, but they're still pretty weak. Their D3D implementation is workable, but not nearly as solid as their competitors. And since GL drivers require more work, they will naturally be less stable. And since so few popular applications actually use OpenGL (especially some of the newer stuff), it simply never gets tested.

That's why I wish the ARB would get that test suite done.

thokra
05-24-2012, 08:51 AM
That's why I wish the ARB would get that test suite done.

Do we have any information on its status?

Janika
05-24-2012, 11:00 AM
Is it theoretically possible to have a test suite that can detect driver bugs? OpenGL command ordering can produce different results due to a bug, so how can a test case uncover such command-ordering problems? I imagine a test suite is a set of GL programs that are expected to produce certain images.

Alfonse Reinheart
05-24-2012, 11:22 AM
OpenGL command ordering can produce different results due to a bug, so how can a test case uncover such command-ordering problems?

What does command ordering have to do with anything?

OpenGL isn't, and doesn't try to be, pixel accurate. A test suite only needs to test conformance with the specification. And that's fairly easy in a lot of cases. At the very least, each function command should be tested to see if it produces errors when it should. Shader functions should be tested to show if they produce appropriate values and so forth.
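
To make that concrete, here's a tiny sketch of the kind of black-box check meant here: pass an invalid enum and verify the implementation raises the error the spec mandates (the test function name is mine):

#include <assert.h>
#include <GL/gl.h>

void test_invalid_enum(void)
{
    while (glGetError() != GL_NO_ERROR)
        ;                                    /* flush any stale errors first */
    glEnable(0xBADBAD);                      /* not a valid capability */
    assert(glGetError() == GL_INVALID_ENUM); /* required by the spec */
}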

Janika
05-24-2012, 12:21 PM
State-change functions which are supposed to work independently of call order can sometimes, because of a driver bug, produce different results. It's not every possible combination of calls; sometimes just swapping two calls causes the problem. I had this issue before with ATI's fixed functionality, until a driver update fixed the bug.
So the conformance test just verifies compliance with the GL version, not the quality of the driver or its bugs.

thokra
05-24-2012, 03:52 PM
So the conformance test just verifies compliance with the GL version, not the quality of the driver or its bugs.

IMHO, these absolutely aren't mutually exclusive. In fact it's quite the opposite: GL compliance is one of the most important aspects of a quality driver. If your implementation is basically bug-free but doesn't behave as developers expect because it doesn't reflect the spec, then the whole thing is still worse than a driver which is 100% compliant but has some bugs. I'd rather have a driver which I can test and submit bug reports for when I see something, than an implementation which doesn't behave correctly and tricks you into believing that everything is fine, just for you to see it not working correctly on another platform.

Janika
05-24-2012, 05:28 PM
Let me give an example I experienced on some hardware. Running standard code that uses very basic functionality to render textured quads resulted in wrong colors and shades; even textures were not showing. Sometimes the background turned into a black screen when cube mapping was used. How would a conformance test be able to detect such bugs? What you're describing is a conformance test that checks the interface, not the implementation details, which can be as low-level as feeding the wrong values to hardware registers and which are not detectable just by being compliant with the specification.
Maybe a two-stage test will do it, where the first stage is the conformance or API verification. The second stage checks the result of a test-case program against a pre-rendered image. This is not a per-fragment check; with approximation algorithms a good test package can verify that both images are close, so that the driver is not producing weird colors from Mars :D
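
A rough sketch of that second stage (the tolerance value, the RGBA readback format and the function name are just assumptions here):

#include <stdlib.h>
#include <GL/gl.h>

/* Compare the current framebuffer against a reference image within a per-channel tolerance. */
int images_roughly_equal(const unsigned char *ref, int w, int h, int tolerance)
{
    unsigned char *pix = (unsigned char *) malloc((size_t) w * h * 4);
    int i, ok = 1;

    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pix);
    for (i = 0; i < w * h * 4; ++i)
        if (abs((int) pix[i] - (int) ref[i]) > tolerance) { ok = 0; break; }

    free(pix);
    return ok;
}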

thokra
05-25-2012, 12:59 AM
on some hardware

In what year and on which GPU did you test this?


[..]very basic functionality to render textured quads resulted in wrong colors and shades; even textures were not showing.

Depending on your answer to the above I think I'm gonna have a hard time believing that this wasn't your fault. ;) No, but seriously, per-vertex colors and interpolation weren't working? I believe that's very unlikely even with Intel hardware.


What you're describing is a conformance test that checks the interface, not the implementation details, which can be as low-level as feeding the wrong values to hardware registers and which are not detectable just by being compliant with the specification.

You don't need to check the internals of functions. White-box tests are usually done by the people who develop the functions and know their internals, i.e. GL implementors. As Alfonse already established, a conformance test suite needs to,


At the very least, (test) each function command [..] to see if it produces errors when it should.

This is classic black-box testing. Put something in and check the output - in this case errors, or the absence of errors, which are either correct or incorrect according to the spec. Alfonse goes on to state that


Shader functions should be tested to show if they produce appropriate values[..]

Also black-box testing. A texture lookup falls right into this category. You look up some value in a texture whose contents you know exactly and see if the texture lookup produces the correct result.
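
For instance, a sketch of such a lookup test (assuming a GL 1.1+ context and a drawable already set up; the function name is mine): upload a 1x1 texture with a known color, draw with it, and read back what came out.

#include <GL/gl.h>

void test_texture_lookup(void)
{
    GLuint tex;
    GLubyte texel[4] = { 255, 0, 0, 255 };              /* known input: pure red */
    GLubyte out[4]   = { 0, 0, 0, 0 };

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, texel);

    /* ...draw a textured quad covering pixel (0,0) here... */

    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, out);
    /* out should come back as (255, 0, 0, 255), give or take filtering/precision tolerance */

    glDeleteTextures(1, &tex);
}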

Of course, you can't check GPU registers, but ultimately what we as developers care about is getting correct output values when correct input values are supplied. Regarding correctness, it doesn't matter to us how the implementation handles stuff internally. Performance is written on another sheet, but there we are as powerless as with API conformance. ;)

Alfonse Reinheart
05-25-2012, 02:06 AM
Running standard code that uses very basic functionality to render textured quads resulted in wrong colors and shades; even textures were not showing. Sometimes the background turned into a black screen when cube mapping was used. How would a conformance test be able to detect such bugs?

It would depend on what the bug was that caused the "black screen background".

A conformance test cannot be absolutely comprehensive. But it would certainly be better than what we have now (ie: nothing).

Janika
05-25-2012, 07:32 AM
Depending on your answer to the above I think I'm gonna have a hard time believing that this wasn't your fault. No, but seriously, per-vertex colors and interpolation weren't working? I believe that's very unlikely even with Intel hardware.

It happened on some ancient Intel laptop which claims to support GL 2.0. The problem was with using cube maps: the textures turned black. Another bug was the hardware somehow deciding to modulate a texture with a pink color :D However, the same code worked fine on NVIDIA and ATI hardware that supported GL 2.1. This is why I'm not sure whether I was doing something wrong, which is possible. But then how come it worked on other hardware?

Another problem is performance. For instance, on an ATI Radeon X1600 with OpenGL 2.0 drivers I got significantly better performance when executing the fixed-functionality rendering path, even using glBegin/glEnd. With shaders it became noticeably slower, and with VBOs it was almost dead, as if it were running in software mode, until I applied a tweak suggested on these forums. This suggestion was not mentioned in any reference or in the hardware's OpenGL optimization tips; it came from experience. I had to use glBufferSubData instead of glMapBuffer/glUnmapBuffer.
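
In code, the tweak was essentially this (a sketch assuming GL 1.5-style entry points; the function name and parameters are placeholders):

#include <GL/gl.h>

/* 'vbo' comes from glGenBuffers; 'vertices'/'size' are the client-side data to upload. */
void update_vbo(GLuint vbo, const void *vertices, GLsizeiptr size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* optionally orphan the old storage first: glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_DRAW); */
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, vertices);  /* instead of glMapBuffer/glUnmapBuffer */
}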

Unfortunately this hardware is no longer supported by AMD, so I cannot verify whether the problem would have been resolved by a newer driver update.

My conclusion is to use "removed" functionality and the fixed pipeline for anything that supports up to 2.1.
But with new hardware that has GL >= 3.x I expect 2.1 support to be something legacy and undeveloped, so I'd better rely on GL >= 3.x core. This implies writing two rendering paths for the same application. With a good abstraction design the application can check for the highest GL version supported and dynamically link to the appropriate GL engine module.
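
A bare-bones sketch of that dispatch (the two render functions are hypothetical stand-ins for the engine modules):

#include <stdio.h>
#include <GL/gl.h>

/* Hypothetical per-path entry points; each implements the same drawing interface. */
void render_frame_gl21(void);
void render_frame_gl3core(void);

void (*render_frame)(void);

void pick_render_path(void)
{
    int major = 0, minor = 0;
    const char *ver = (const char *) glGetString(GL_VERSION);
    if (ver)
        sscanf(ver, "%d.%d", &major, &minor);

    render_frame = (major >= 3) ? render_frame_gl3core   /* core profile path */
                                : render_frame_gl21;     /* legacy / fixed-function path */
}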

thokra
05-25-2012, 07:53 AM
But with new hardware that has GL >= 3.x I expect 2.1 support to be something legacy and undeveloped, so I'd better rely on GL >= 3.x core.

It depends. Both AMD and NVIDIA made a pledge to fully support the compatibility profile. This means that GL 2.1 (and earlier) features are supposedly fully supported even with current drivers. You should be able to push some heavy legacy code through even a Kepler GPU. However, if you can, I guess most people here will suggest you go for GL 3.3 (or GL 3.x for that matter).