OpenGL 3.2 vs. 3.3?

What are the main differences between OpenGL 3.2 and 3.3?

The OpenGL specification has a whole section on what changed between each (recent) version.

Also, how do you define “main differences?”

In terms of actual functionality (doing something that wasn’t possible on 3.2), the most substantive is dual-source blending. Maybe instance arrays. In terms of how you use OpenGL however, I’d say the biggest change is explicit attribute locations. I would have also said sampler objects, but ATI hasn’t bothered to implement it correctly yet, so I haven’t been able to use it.
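For illustration, here is a minimal sketch of what explicit attribute locations change in practice: with GLSL 3.30 the location is written directly in the shader, whereas on a plain 3.2 context the application has to assign it with glBindAttribLocation before linking. The shader strings and the helper name are purely illustrative, and the snippet assumes an extension loader such as GLEW is already set up.

```c
#include <GL/glew.h>  /* assumed loader; any GL 3.x function loader works */

/* GLSL 3.30: the attribute location is fixed in the shader itself. */
static const char *vs_330 =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";

/* GLSL 1.50 (OpenGL 3.2): no layout qualifier, so the application
 * must bind the location before the program is linked. */
static const char *vs_150 =
    "#version 150\n"
    "in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";

static void link_with_bound_locations(GLuint prog)
{
    glBindAttribLocation(prog, 0, "position");  /* must precede glLinkProgram */
    glLinkProgram(prog);
}
```

With the layout qualifier, the vertex format can be decided once and shared across shaders without querying or re-binding anything per program.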

So all the attribute location stuff in GLSL 3.3 does not work the same way in OpenGL 3.2?

I’m asking because it looks like Apple is only going to support OpenGL 3.2 in OSX Lion, and I am using 3.3 as my main target.

So all the attribute location stuff in GLSL 3.3 does not work the same way in OpenGL 3.2?

Yes.

I’m asking because it looks like Apple is only going to support OpenGL 3.2 in OSX Lion, and I am using 3.3 as my main target.

Nobody ever said that Apple was blessed with a great amount of intellect. I suppose there’s a chance they’ll support ARB_explicit_attrib_location. But honestly, there’s no reason to support 3.2 instead of 3.3, since all 3.2-capable hardware can do 3.3.
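If Lion does expose it as an extension on a 3.2 context rather than via a 3.3 context, the runtime check would look roughly like this (core contexts require glGetStringi for extension enumeration; the helper name is my own, and the same loader setup as the sketch above is assumed):

```c
#include <string.h>

/* Returns 1 if the current context advertises ARB_explicit_attrib_location. */
static int has_explicit_attrib_location(void)
{
    GLint count = 0, i;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, "GL_ARB_explicit_attrib_location") == 0)
            return 1;
    }
    return 0;
}
```

If the extension is absent, the glBindAttribLocation path is the only option.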

Isn’t the main issue that Apple has to implement software paths for hardware that is incapable of full OpenGL 3 support (like older Intel chipsets)? Mesa has yet to include OpenGL 3.0 support, so either it isn’t a trivial task or their developers are lazy.

What would be the point of providing software emulation of GL 3 for non-capable Intel parts? That doesn’t make any sense.

No, they shouldn’t.

Hasn’t this always been the case with OpenGL implementations? Whether it is texture formats that aren’t natively supported by the hardware, or GLSL features (such as if-statements on Shader Model 2.0 hardware), the only indication that a feature wasn’t natively supported was when it ran slower.

I’m referring to the Intel parts that have D3D10 drivers, e.g. Intel HD Graphics, but are limited to OpenGL 2.1 support on Windows. Theoretically, it should be possible for these parts to support OpenGL 3, but Intel has opted to limit them to OpenGL 2.1.

Hasn’t this always been the case with OpenGL implementations?

In this case, there’s no need to. Just say that non-GL 3.0 hardware can’t run GL 3.0. It’s really that simple. You don’t see GeForce 6000s providing GL 3.0 support, because they can’t.

Apple is under no requirement to provide GL 3.0 on hardware that simply can’t do it.

As I said, I was referring to cards that have D3D10 capability, but not GL3 drivers from their manufacturer.

Take a look at http://developer.apple.com/graphicsimaging/opengl/capabilities/; you will notice that some of the hardware in that matrix only does GL 1.4, so I don’t think that software emulation of GL3 features for non-GL3-capable parts is an issue.

What is frightening is that Apple’s new MacBooks have a GL 4-capable part, but Apple is still (as of this very moment) only supporting up to GL 2.1. Makes me wonder in a dark way :whistle:

Though, you should be able to install Ubuntu (or another Linux distribution) in parallel with Mac OS X (you will likely need Boot Camp or the like to get the dual boot working well); once there, you can get the ATI (or NVIDIA) binary drivers and unlock the hardware for GL programming (though then no Xcode).

Do you notice that Apple includes a software renderer in that matrix?

Seems like the “software renderer” is something that you select during context creation.
Furthermore, the “software renderer” is probably just for testing and comparing the output with a hardware renderer.
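If it helps to see what “select during context creation” means concretely, here is a rough, untested sketch using CGL; the attribute constants (kCGLPFARendererID, kCGLRendererGenericFloatID) come from Apple’s CGL headers and should be checked against the current SDK before relying on them:

```c
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLRenderers.h>  /* for kCGLRendererGenericFloatID */

/* Ask CGL specifically for the Apple software (float) renderer. */
static CGLContextObj create_software_context(void)
{
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFARendererID, (CGLPixelFormatAttribute)kCGLRendererGenericFloatID,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    CGLContextObj ctx = NULL;
    GLint npix = 0;

    if (CGLChoosePixelFormat(attribs, &pix, &npix) == kCGLNoError && pix) {
        CGLCreateContext(pix, NULL, &ctx);
        CGLDestroyPixelFormat(pix);
    }
    return ctx;  /* NULL if no matching renderer was found */
}
```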

If you were to insert some old card such as a GeForce 2 (if that were possible), the Mac would expose the GeForce 2 as a GL 1.2 device and the software renderer as a GL 3.2 device.

Similar to Direct3D’s Reference device. However, it brings home the point that Apple can’t simply support the latest version of OpenGL without an equivalent software renderer.

Similar to Direct3D’s Reference device. However, it brings home the point that Apple can’t simply support the latest version of OpenGL without an equivalent software renderer.

No; the fact that there are hardware devices that only support GL 1.4 means it is OK. But in all brutal honesty, only a madman would ship an application that uses the software renderer.

Additionally, I highly doubt the idea of using the software implementation to test against a hardware implementation, for the simple fact that a software implementation would just have different bugs/issues than a hardware renderer.

Though, if I remember correctly, under MS-Windows there is a set of registry keys to give NVIDIA pre-GL3 cards GL3 features via software emulation… I stress, though, that this is “I think”, not “I know”.

http://developer.nvidia.com/object/nvemulate.html

Works as advertised on GF6100 & GF7600, and even 8x00+

Whatever reasons Apple had for including a software renderer for OpenGL 2.1 in Leopard will be the same reasons for them to include a software renderer for OpenGL 3.2 in Lion.

However, it brings home the point that Apple can’t simply support the latest version of OpenGL without an equivalent software renderer.

Yes. But that doesn’t change the facts that:

1: OpenGL 3.3 has been out for a year, and Apple sits on the ARB. They’ve known the scope of GL 3.3 for longer than we have.

2: The differences between 3.2 and 3.3 aren’t that big. Certainly nothing like the differences between, say, 2.1 and 3.2, which is what they’re having to do now.

3: There’s no rule that says that their software renderer absolutely must support the maximum GL version they allow.

So the “needs a software renderer” thing is not really an excuse.

  1. Intel sits on the ARB, and their GL version for Sandy Bridge is 3.0. In another thread, there is a complaint that AMD’s OpenGL 3.3 support is buggy/incomplete, even though their driver advertises 4.1 support.

  2. It is easy to type “the differences between 3.2 and 3.3 aren’t that big”, but we aren’t writing the implementations. I’m sure you would agree that it’s better to have a version 3.2 that is fully functional than a version 3.3 that is buggy.

  3. I suspect that is the reason why Apple’s current OpenGL implementation is still version 2.1, regardless of the newer hardware. Given that OpenGL is a system component on Macs (like DX on Windows), they have to ensure that the implementation works as expected (i.e. write the software implementation themselves), rather than outsource the work to IHVs, who will tailor the implementation to their hardware.

We can only speculate what factors Apple uses to determine which version of OpenGL goes into their Operating System. It would have been great if they included 3.3, and it may appear in a future update of Lion, but I suspect that even if they did include version 3.3 at launch, the current argument could be used to state that they should have included version 4.0 …

Intel sits on the ARB, and their GL version for Sandy Bridge is 3.0. In another thread, there is a complaint that AMD’s OpenGL 3.3 support is buggy/incomplete, even though their driver advertises 4.1 support.

The failure of others to properly do their job does not absolve Apple from failing to properly do theirs.

It is easy to type “the differences between 3.2 and 3.3 aren’t that big”, but we aren’t writing the implementations. I’m sure you would agree that it’s better to have a version 3.2 that is fully functional than a version 3.3 that is buggy.

1: See above.

2: We are talking about a software implementation, which I personally would never use. So I can’t exactly say that I would much care.

Given that OpenGL is a system component on Macs (like DX on Windows), they have to ensure that the implementation works as expected (i.e. write the software implementation themselves), rather than outsource the work to IHVs, who will tailor the implementation to their hardware.

And yet, Microsoft of all people has somehow managed to have not only a D3D 10 implementation (approximately equal to GL 3.3), but also a D3D 11 implementation (~GL 4.1).

but I suspect that even if they did include version 3.3 at launch, the current argument could be used to state that they should have included version 4.0 …

Except that one of the arguments used was specifically that 3.2 isn’t that different from 3.3 (which is true). 4.0 has pretty substantial differences from 3.3: it introduces two brand-new shader stages (tessellation control and tessellation evaluation), among various other things.

I would also point out that the 2.1 + extensions that Apple supports is already functionally equivalent to 3.2. Sure, UBOs are much better than bindable-uniform, and the core Geometry shader spec is much nicer to use than the extension version, but all of the actual functionality is there one way or another. So implementing 3.2 really is offering no actual functionality to users; just a nicer API.

So it’s not exactly stressful on their development department to simply change the APIs for accessing functionality, is it? Sure, the bindable uniform to UBO change will take some coding, but far less than if they were implementing UBOs from scratch. And so on.
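To be concrete about the UBO side of that change, a minimal core-profile sketch looks like the following; the block name “MatrixBlock” and binding point 0 are made up for the example, and “prog” is assumed to be an already-linked program:

```c
/* Upload a 4x4 matrix through a uniform buffer object (core since GL 3.1). */
static GLuint create_matrix_ubo(GLuint prog, const float mvp[16])
{
    GLuint ubo;
    GLuint block = glGetUniformBlockIndex(prog, "MatrixBlock");

    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, 16 * sizeof(float), mvp, GL_DYNAMIC_DRAW);

    /* Attach the shader's block and the buffer to the same binding point. */
    glUniformBlockBinding(prog, block, 0);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    return ubo;
}
```

The EXT_bindable_uniform path reaches the same end state through a different, per-uniform API, which is why moving existing code over is mostly mechanical work.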

I don’t know why you’re trying to make excuses for Apple’s behavior here. We all know why Apple hasn’t updated OpenGL recently: iOS. MacOSX isn’t a priority for them anymore.

Microsoft sets the standard for 3D graphics on the desktop/laptop. Once you set the standard (D3D10/11), copycats (OGL 3.X/4.X) will have to wait months, if not years, to catch up in terms of implementation quality. Look at the iPad vs its competition.

We all know why Apple hasn’t updated OpenGL recently: iOS. MacOSX isn’t a priority for them anymore.

Money is a motivating factor for any venture, and most commercial OpenGL developers are probably targeting the iOS platform.

I was hoping for an announcement of an OpenGL ES 3.X spec at this year’s GDC, but it doesn’t appear to be on anyone’s radar. Maybe we have to wait for Microsoft to bring such D3D10 features to the next version of Windows Phone before Khronos makes a move.