Apple capability matrix updated for Mac OS X 10.7

I hope nobody had the weird idea of investing in a MacBook with an HD 3000, because you are stuck at OpenGL 2.1.

http://developer.apple.com/graphicsimaging/opengl/capabilities/GLInfo_1070_Core.html

It seems that Apple’s choice was to build new drivers running only OpenGL 3.2. There are extensions, but none that are part of the OpenGL 3.2 core profile. A nice, clean cut.

That’s highly annoying. No explicit attribute locations. No texture_sRGB (needed for S3TC with sRGB). It’s better than it was, but not as good as it ought to be.

Both EXT_texture_sRGB and ARB_framebuffer_sRGB are supported under GL 3.2 (through the core API path) as well as GL 2.x (via extensions on supported renderers) on this platform.

EXT_texture_sRGB contains enums for DXT-compressed sRGB textures, which are most assuredly not supported via core 3.2. Go ahead, check the spec; you won’t find GL_COMPRESSED_SRGB_S3TC_DXT1 anywhere in it.

So effectively, they took out support for sRGB S3TC compressed textures.

sRGB DXT formats are still supported. They’re in gl3ext.h and they work.

sRGB DXT formats are still supported. They’re in gl3ext.h and they work.

And since no extension is advertised for it, and it’s not part of core, it is therefore a driver bug that it works. Calling glCompressedTexImage2D with GL_COMPRESSED_SRGB_S3TC_DXT1_EXT as the internal format should give a GL_INVALID_ENUM error.
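For anyone who wants to check their own driver, here is a minimal C sketch, assuming a current core-profile context on 10.7 (the enum is defined manually in case the local gl3ext.h doesn’t have it), that uploads a dummy DXT1 block with the sRGB internal format and reports whether the driver raises GL_INVALID_ENUM:

/* Minimal probe: does the driver accept the sRGB DXT1 enum even though
   neither core 3.2 nor an advertised extension promises it?  Assumes a
   current OpenGL 3.2 context. */
#include <stdio.h>
#include <string.h>
#include <OpenGL/gl3.h>

#ifndef GL_COMPRESSED_SRGB_S3TC_DXT1_EXT
#define GL_COMPRESSED_SRGB_S3TC_DXT1_EXT 0x8C4C
#endif

static void probe_srgb_dxt1(void)
{
    GLubyte block[8];                 /* one 4x4 DXT1 block; contents don't matter */
    memset(block, 0, sizeof(block));

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    while (glGetError() != GL_NO_ERROR) { }   /* clear any stale errors */

    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_SRGB_S3TC_DXT1_EXT,
                           4, 4, 0, (GLsizei)sizeof(block), block);

    GLenum err = glGetError();
    if (err == GL_INVALID_ENUM)
        printf("sRGB DXT1 rejected, as the spec would require\n");
    else if (err == GL_NO_ERROR)
        printf("sRGB DXT1 accepted despite not being advertised\n");
    else
        printf("unexpected GL error 0x%04X\n", err);

    glDeleteTextures(1, &tex);
}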

According to this article, GL 4.1 on OS X might not be too far away (see “OpenGL 3.2”, about halfway down) – at least for recent AMD- and Nvidia-equipped systems, that is. The fact that the new MacBook Airs only have Intel GPUs is a little disappointing on this front. Two steps forward, one back :slight_smile:

Define “not too far away” :stuck_out_tongue:

More seriously, it takes time to write drivers, and I can’t picture OpenGL 4.1 on the Mac in even a year… but I wish!

I trust the guys at ArsTech, so if they say it isn’t too far away, I’m willing to buy it.

However, that says nothing about EXT_texture_sRGB and compressed sRGB textures.

The fact that the new MacBook Airs only have Intel GPUs is a little disappointing on this front.

The thing is, Intel’s HD3000 GPU can handle OpenGL 3.2. Or 3.3. There’s no reason it can’t. The only reason they seem stuck at 3.1 is because Intel sucks. I would have thought that if anyone could force Intel to write functioning drivers, it would be Apple.

I guess some things are beyond even Steve Jobs. Or maybe Intel is upset about that whole “we’re making fat loads of cash off ARM chips with iOS!” thing…

The weirdest puzzle is why 3.2 and not 3.3, since there isn’t any hardware requirement between them. 3.3 allows the shaders to use in/inout/out instead of attribute/varying/uniform keywords, which is exactly what we need for OpenGL ES 2.0. So we’re still stuck either using two different shaders or using #defines to map the GLSL keywords (sketched a little further below).

3.3 allows the shaders to use in/inout/out instead of attribute/varying/uniform keywords

That was done in 3.0. And GLSL 1.50 didn’t get rid of “uniform”. And “inout” is not usable on variables, only on argument lists.

which is exactly what we need for OpenGL ES 2.0.

ES 2.0 uses attribute and varying. It doesn’t allow for in/out.
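For reference, here is a minimal sketch of the #define prologue approach mentioned above: one fragment-shader body shared between GLSL 1.50 (GL 3.2 core) and GLSL ES 1.00 (OpenGL ES 2.0). The macro names and strings are purely illustrative.

/* One fragment-shader body shared between GLSL 1.50 and GLSL ES 1.00
   by prepending a small #define prologue before compiling. */
static const char *prologue_gl150 =
    "#version 150\n"
    "#define VARYING in\n"              /* interpolants arrive as 'in' in 1.50 */
    "out vec4 fragColor;\n"
    "#define FRAG_OUT fragColor\n"
    "#define TEXTURE2D texture\n";

static const char *prologue_es100 =
    "#version 100\n"
    "precision mediump float;\n"
    "#define VARYING varying\n"         /* ES 2.0 still uses 'varying' */
    "#define FRAG_OUT gl_FragColor\n"
    "#define TEXTURE2D texture2D\n";

static const char *frag_body =
    "VARYING vec2 uv;\n"
    "uniform sampler2D tex;\n"          /* 'uniform' is unchanged everywhere */
    "void main() {\n"
    "    FRAG_OUT = TEXTURE2D(tex, uv);\n"
    "}\n";

/* Compile by passing both strings to glShaderSource:
     const char *src[2] = { prologue_gl150, frag_body };
     glShaderSource(shader, 2, src, NULL);                               */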

The thing is, Intel’s HD3000 GPU can handle OpenGL 3.2. Or 3.3. There’s no reason it can’t. The only reason they seem stuck at 3.1 is because Intel sucks. I would have thought that if anyone could force Intel to write functioning drivers, it would be Apple.

It’s definitely frustrating, especially given the number of Intel IGPs showing up in laptops. This article from Anandtech shows a roadmap for Intel’s professional graphics aspirations. While Ivy Bridge’s IGP will support DX11, and its successor DX11.1 (no spec yet), the OpenGL versions are stuck at 3.1 and 3.2, respectively. Both carry a “+”, which may indicate that much more is supported but some fundamental core features are lacking.

Given the imbalance between GL and DX support, the issue definitely appears to be driver development priority. Intel approached us a few months ago about testing their graphics profiler, but when I mentioned that we were using OpenGL, I received a response that they would add OpenGL to the product’s roadmap. As the article points out, the majority of the pro apps on the roadmap are OpenGL-based, which makes the prioritization of APIs puzzling.

Not sure what Apple can do in its own GL library/driver, but I would imagine it’d be very difficult to provide greater GL support than the hardware manufacturer’s own drivers.

As the article points out, the majority of the pro apps on the roadmap are OpenGL-based, which makes the prioritization of APIs puzzling.

Actually, that explains the prioritization. Most shops that use pro-apps are perfectly capable of purchasing an NVIDIA or AMD card. After all, if you can buy a $4000 application (and that’s low-end), what’s the difficulty in affording a real GPU?

Not sure what Apple can do in its own GL library/driver, but I would imagine it’d be very difficult to provide greater GL support than the hardware manufacturer’s own drivers.

They’ve got to be able to do something. It’s not like Apple writes a wrapper over the driver’s OpenGL implementation. Apple treats the OpenGL implementation like Microsoft does D3D: the driver is written relative to an internal API, while Apple/Microsoft provide a layer on top of that which implements the actual API.

The only way this makes sense is if Intel simply refuses to expose the stuff Apple needs for their 3.2 implementation. Which, given how much Intel obviously cares, wouldn’t surprise me in the slightest. It’s clear that, as long as graphics function in some capacity, they consider that A-OK.

Actually, that explains the prioritization. Most shops that use pro-apps are perfectly capable of purchasing an NVIDIA or AMD card. After all, if you can buy a $4000 application (and that’s low-end), what’s the difficulty in affording a real GPU?

That’s certainly the current state - no self-respecting 3D pro would consider an Intel solution for their system (or an Nvidia/AMD IGP either). However, I do know of technical directors who would like basic support for Intel graphics so they can do scripting and UI work for a 3D app on their mobile platform, without doing heavy modeling.

Considering that Intel’s new Xeons have P3000 graphics (same hardware as the HD 3000, but ‘pro’ drivers), it would seem that they are at least somewhat interested in pursuing that market segment in the future. With process shrinks on the horizon and good CPU/GPU integration already here, it’s conceivable that Intel hardware could service the lower-end pro space, especially in the mobile space - which is why the slow uptake of OpenGL seems a bit puzzling to me. I don’t have anything against DirectX, but it does limit the platform options to one.

I’d certainly be interested to see if Apple could pull more features out of the IGP than Intel. It would be very telling if this happened; it would seem to confirm the marketing-sheet-checkmark mentality that Intel seems to have regarding graphics.

It’s odd that the ATI cards only support one depth sample. You can’t do deferred MSAA with only color multisampling. My ATI 3870 supports at least 8x texture samples for color and depth.

It’s odd that the ATI cards only support one depth sample.

I wasn’t aware that there was a different setting for the number of color and depth samples. Where are you seeing this?

http://www.opengl.org/registry/specs/ARB/texture_multisample.txt

"(9) Are depth multisample textures supported?

RESOLVED: Some implementations may not support texturing from a
multisample depth/stencil texture. In that case, they would export
MAX_DEPTH_TEXTURE_SAMPLES = 1."

RHD2x00 and RHD3xx0 are GPUs with that limitation.
The limitation is that you can’t use texelFetch to retrieve depth at a specific sample, something that is vital to deferred rendering, so it gets worked around by adding an additional multisampled render target to the g-buffer, where you write depth (usually linear).
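A quick way to see which side of that limitation a given driver falls on is to query the two limits from that extension; a minimal sketch, assuming a current GL 3.2 context:

/* Query how many samples the implementation allows for multisample
   color vs. depth textures (GL 3.2 / ARB_texture_multisample).
   MAX_DEPTH_TEXTURE_SAMPLES == 1 means no texturing from multisample
   depth/stencil, which is exactly the limitation described above. */
#include <stdio.h>
#include <OpenGL/gl3.h>

static void report_msaa_texture_limits(void)
{
    GLint color_samples = 0, depth_samples = 0;
    glGetIntegerv(GL_MAX_COLOR_TEXTURE_SAMPLES, &color_samples);
    glGetIntegerv(GL_MAX_DEPTH_TEXTURE_SAMPLES, &depth_samples);

    printf("MAX_COLOR_TEXTURE_SAMPLES = %d\n", color_samples);
    printf("MAX_DEPTH_TEXTURE_SAMPLES = %d\n", depth_samples);

    if (depth_samples <= 1)
        /* Fall back: add an extra multisampled color attachment to the
           g-buffer and write (usually linear) depth into it, so the
           deferred pass can still fetch per-sample depth. */
        printf("Per-sample depth reads unavailable; use an extra render target.\n");
}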

I have a Radeon HD 3870 in my Windows machines right now, and it supports 8x color and depth texture samples. So either the cards they used in iMacs are slightly different, or it’s a limitation of the driver, not the hardware.

Writing the depth to a color texture isn’t a good idea because you are still using a lower resolution depth buffer to test against.

It’s not that big of a deal because we can always use edge detect AA, though it doesn’t look as good. I was just wondering why this limitation is there, and whether it might be lifted in the future, if the hardware is capable.

Or maybe the 3xx0 line actually supported that, but the initial driver implementation didn’t expose that feature.
Or they simulate that feature by transparently adding an extra render-target.

I have a Radeon HD 3870 in my Windows machines right now, and it supports 8x color and depth texture samples. So either the cards they used in iMacs are slightly different, or it’s a limitation of the driver, not the hardware.

It must be a driver issue. It wouldn’t make much sense for ATI or Nvidia to cripple their cards for OSX. I believe the only difference between a PC and an OSX capable card is that the OSX cards support EFI when posting, and probably have some sort of special ID so that OSX will accept it into its ecosystem.

The other oddity I noticed was that the texture array layer limit is 256 on AMD and 512 on Nvidia cards - but in AMD’s and Nvidia’s own drivers, it is equal to the max 3D texture size (8192 for AMD and 2048 for Nvidia).

Also, there’s almost certainly a copy/paste typo in MAX_GEOMETRY_UNIFORM_BLOCKS for the AMD cards - it’s reported as 64, and it should be the same as the corresponding vertex and fragment shader values, 14.
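If anyone wants to cross-check those entries against a live driver, the limits can be queried directly; a small sketch, assuming a current GL 3.2 context:

/* Cross-check the suspect capability-matrix entries against a live driver. */
#include <stdio.h>
#include <OpenGL/gl3.h>

static void report_suspect_limits(void)
{
    GLint layers = 0, vs_blocks = 0, gs_blocks = 0, fs_blocks = 0;
    glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS,    &layers);
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_BLOCKS,   &vs_blocks);
    glGetIntegerv(GL_MAX_GEOMETRY_UNIFORM_BLOCKS, &gs_blocks);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_BLOCKS, &fs_blocks);

    printf("MAX_ARRAY_TEXTURE_LAYERS    = %d\n", layers);
    printf("MAX_VERTEX_UNIFORM_BLOCKS   = %d\n", vs_blocks);
    printf("MAX_GEOMETRY_UNIFORM_BLOCKS = %d\n", gs_blocks);
    printf("MAX_FRAGMENT_UNIFORM_BLOCKS = %d\n", fs_blocks);
}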

Writing the depth to a color texture isn’t a good idea because you are still using a lower resolution depth buffer to test against.

It certainly won’t help resolve depth-fighting issues, but it does give you a more accurate world position at longer distances. Granted, there are only some instances where this might actually matter. In our case, the added precision helped the GL lighting more closely match our software renderer’s results when larger-scale scenes were used, at the cost of promoting an RGB32F target to RGBA32F.
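Roughly what that looks like, as a sketch rather than our actual renderer code: the position target is allocated as RGBA32F, and the geometry-pass fragment shader stashes linear view-space depth in the alpha channel (the names below are illustrative).

/* Allocate the g-buffer position target as RGBA32F instead of RGB32F. */
#include <stddef.h>
#include <OpenGL/gl3.h>

static GLuint make_position_target(GLsizei w, GLsizei h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;   /* attach as a color attachment of the g-buffer FBO */
}

/* Geometry-pass fragment shader (GLSL 1.50): .a carries linear depth. */
static const char *gbuffer_frag =
    "#version 150\n"
    "in vec3 viewPos;\n"
    "out vec4 positionDepth;\n"
    "void main() {\n"
    "    positionDepth = vec4(viewPos, -viewPos.z);\n"
    "}\n";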