OpenGL 3.2 in OSX 10.7

According to MacRumors, OSX 10.7, due out sometime this summer, will have OpenGL 3.2 support. Fingers crossed for a stable, well-performing implementation.

Wouhouuuuu! :slight_smile:

It’s odd that they would pick 3.2 instead of 3.3, when all 3.2 capable hardware can also support 3.3.

What’s going to be interesting is which extensions they support in addition to 3.2. The differences between 3.2 and 3.3 are many; ARB_explicit_attrib_location alone is a pretty big deal for how you write shaders.
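
For anyone who hasn’t used it: under 3.2 the application assigns attribute slots before linking, while the extension lets the shader declare them itself. A minimal sketch (prog and “position” are illustrative names):

    /* GL 3.2 and earlier: the C side binds attribute slots before linking
       (or queries them afterward with glGetAttribLocation): */
    glBindAttribLocation(prog, 0, "position");
    glLinkProgram(prog);

    /* GL 3.3 / ARB_explicit_attrib_location: the shader pins its own slot,
       and the C-side bookkeeping disappears:

       layout(location = 0) in vec3 position;
    */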

Apple writes their own drivers, and I guess 3.2 is all they will have time to finalize before the Lion release. I don’t think it’s a choice.

If they had a choice, it would be OpenGL 4.1.

Not sure where macrumors is getting its information from but actual user reports don’t corroborate their story.

Lion OpenGL Support using NVIDIA 9400M 256MB:
https://skitch.com/fredericl/rt7mh/opengl-extensions-viewer

Lion OpenGL Support using NVIDIA 320M:
http://dl.dropbox.com/u/7755/glview-NVIDIA%20GeForce%20320M%20OpenGL%20Engine.xml

Lion OpenGL Support using GeForce 8800 GT:
http://twitter.com/opengl/status/40904330320027648

Lion OpenGL Support using Intel X3100:
http://netkas.org/?p=609#comment-152379

All of the above Nvidia GPUs support OpenGL 3.3 on Win/Lin.
And yet clearly they are still stuck with GLSL 1.2 on Lion.

So there doesn’t seem to be complete OpenGL 3.2 or 3.1 or 3.0 support in Lion.
Based on the screenshot in the first URL, OpenGL support looks identical to Snow Leopard’s; it matches exactly what I see on my own 9400M Mac running Snow Leopard.

I think it’s really pathetic. With the stability of Mac drivers they could take over high-end rendering. Instead, even though the hardware is capable of much more, our engine’s OSX renderer is going to be roughly on the level of Half-Life 2. I don’t trust OpenGL 2.1 + extensions for writing a deferred renderer, especially when the Apple engineers are almost certainly not designing their branch of OpenGL with anything advanced in mind.

If they want Mac to be the creative / artist computer, they should support modern rendering functionality!

Not sure where macrumors is getting its information from but actual user reports don’t corroborate their story.

It’s possible that Apple added new functions to the Apple-specific MacOSX OpenGL interface to create GL 3.x contexts, something like WGL_ARB_create_context.

What makes this unlikely is that 3.2 introduced WGL/GLX_ARB_create_context_profile, which changed the wording of create_context. Basically, it allows you to get a compatibility context without using create_context at all.

Now, it is “possible” that Apple is only implementing 3.2 core, in which case you would need something like create_context to get such a context. But that seems… unlikely.
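
For reference, here is roughly what that looks like through WGL on Windows; a sketch assuming a valid hdc and a current legacy context (needed before wglGetProcAddress will return the entry point):

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>  /* WGL_ARB_create_context(_profile) tokens */

    /* Fetch the extension entry point: */
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0                               /* attribute list terminator */
    };
    HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);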

Not sure where macrumors is getting its information from but actual user reports don’t corroborate their story.

My guess is that Apple’s GL3.2 support isn’t mature enough for inclusion in the developer seeds yet. It’s due out in the summer, so they still have a bit of time yet.

Personally, I’d be happy with GLSL > 1.20, texture buffer objects, uniform buffer objects, primitive restart and multisample textures, in addition to the GL3 extensions Apple already has.
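
To give one example of why UBOs matter: they let several programs share one buffer of common uniforms instead of re-uploading them per program. A sketch, where the block name “Scene”, the SceneData struct, scene, and prog are all made-up stand-ins:

    /* GLSL 1.50 side:  uniform Scene { mat4 viewProj; vec4 lightPos; };  */
    GLuint ubo;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(SceneData), &scene, GL_DYNAMIC_DRAW);

    /* Attach both the buffer and the program's block to binding point 0: */
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    GLuint blockIndex = glGetUniformBlockIndex(prog, "Scene");
    glUniformBlockBinding(prog, blockIndex, 0);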

With the stability of Mac drivers they could take over high-end rendering.

Are you being ironic? Because I’ve had to work around over half a dozen OSX driver bugs in the past year (vs 2 apiece from Nvidia and AMD/ATI). Yes, part of that was due to a deferred renderer codepath :slight_smile:

Because I’ve had to work around over half a dozen OSX driver bugs in the past year (vs 2 apiece from Nvidia and AMD/ATI).

Really? I’ve never done MacOSX development, but I always assumed that their OpenGL drivers were pretty stable. I’m curious to know what you had to work around.

Are you being ironic? Because I’ve had to work around over half a dozen OSX driver bugs in the past year (vs 2 apiece from Nvidia and AMD/ATI). Yes, part of that was due to a deferred renderer codepath :slight_smile:

Did their drivers alter behavior at all during that entire time? I didn’t say they were compliant. I said they were stable.

I’m curious to know what you had to work around.

I work on a cross-platform CAD app (Windows, Linux, OSX) and have a Mac Pro triple-booted between the three (latest OSX version, Ubuntu 9.10 64b, Win7 64b). Starting with 10.6.3 and continuing until 10.6.5, we encountered a lot of regressions. Since OpenGL performance increased substantially between those releases, I’m guessing a lot of the issues were related to optimization work, and things have calmed down a lot since then. Often a regression would affect either Nvidia hardware or AMD hardware, but rarely both.

Most of the issues appeared to be GLSL compiler or FBO related, ranging from crashes that required application features to be disabled, to minor issues that could be worked around by altering the shaders. Anyway, here are the ones I remember:

  • Writing to more than one buffer in an FBO on Nvidia hardware produced a few random garbage pixels where polygons should have been, and nothing else; a crash would follow a frame or two later. In another case, the entire OS desktop would go into a “flicker fit” where it would refresh properly down to a certain random scanline that changed with each frame, then display black below it. A reboot was required. At first I thought it was a bad card, but it was reproducible on other systems with Nvidia cards. AMD hardware on OSX was fine, as was Windows/Linux.

  • gl_FrontFacing stopped working on AMD cards, and we had to roll our own front-facing test in the shader.

  • Accessing gl_LightSource in a constant loop didn’t work for a while on Nvidia, requiring us to unroll it. An OSX version or two later, we had to consolidate the lighting loop into the same fragment shader as main(), otherwise some uniforms that were definitely used were missing from the active uniform list (the ones near the end of the list alphabetically, but still within the max uniform limit).

  • In the August graphics update, on AMD hardware I had to do a glGetTexImage on textures drawn to by an FBO render that weren’t attached to GL_COLOR_ATTACHMENT0 (the depth attachment, color attachments 1+), otherwise they contained garbage when accessed by a shader later in the redraw. glFinish() didn’t do the trick either; only reading pixels back from the texture fixed the issue (see the first sketch after this list).

  • Early last year, uniform arrays (uniform int foo[8]) were suddenly reported in the active uniform list as foo[0] rather than foo.

  • In one fragment shader with an ‘if(expr) discard;’ statement, I had to also assign a dummy value to gl_FragColor before the discard, otherwise the fragment shader displayed black for all pixels (second sketch below).
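
First sketch, the glGetTexImage workaround from the GL_COLOR_ATTACHMENT0 item above (colorTex1, width, and height are illustrative; the read-back data is simply discarded):

    /* After the FBO pass, force each texture that wasn't on
       GL_COLOR_ATTACHMENT0 to resolve by reading it back: */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);      /* done rendering to the FBO */
    glBindTexture(GL_TEXTURE_2D, colorTex1);   /* was on GL_COLOR_ATTACHMENT1 */
    void *scratch = malloc(width * height * 4);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, scratch);
    free(scratch);                             /* the read alone fixed it */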
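
Second sketch, the discard workaround, with the shader source embedded as a C string (the alpha-test condition is made up; the dummy gl_FragColor write before the discard is the point):

    /* GLSL 1.20 fragment shader with the dummy write the bug required: */
    const char *fragSrc =
        "#version 120\n"
        "uniform sampler2D tex;\n"
        "varying vec2 uv;\n"
        "void main() {\n"
        "    vec4 c = texture2D(tex, uv);\n"
        "    if (c.a < 0.5) {\n"
        "        gl_FragColor = vec4(0.0); /* dummy write: the workaround */\n"
        "        discard;\n"
        "    }\n"
        "    gl_FragColor = c;\n"
        "}\n";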

We reported these issues to Apple as we tracked them down, and most of these issues were resolved within an update or two, but it was a pretty rocky road for a bit. I began to dread OSX updates :slight_smile: None of these problems affected the same system booted into Linux or Windows using AMD or Nvidia’s drivers.

Did their drivers alter behavior at all during that entire time? I didn’t say they were compliant. I said they were stable.

Because of my experience, I couldn’t tell if you were being straight up with your comment or not. So I asked :slight_smile:

I don’t believe Apple intentionally altered the GL behaviour, but it definitely altered the behaviour of our application. On the bright side, our app is no longer sluggish on OSX, as it was prior to 10.6.4. So, progress :slight_smile:

Some good news on the Mac OpenGL 3.2 front, at least for ATI:

    NSOpenGLPixelFormatAttribute attributes[] = {
        NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, // request a 3.2 core profile
        NSOpenGLPFADoubleBuffer,                                 // double buffered
        NSOpenGLPFADepthSize, (NSOpenGLPixelFormatAttribute)16,  // 16-bit depth buffer
        (NSOpenGLPixelFormatAttribute)0                          // terminator
    };

GL_VERSION = “3.2 ATI-7.0.52”
GL_SHADING_LANGUAGE_VERSION = “1.50”
GL_RENDERER = “ATI Radeon HD 4850 PRO OpenGL Engine”
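
Presumably those strings come from something like the following (standard Cocoa calls using the attribute array above; error handling omitted):

    #import <Cocoa/Cocoa.h>
    #import <OpenGL/gl3.h>

    NSOpenGLPixelFormat *pf =
        [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
    NSOpenGLContext *ctx =
        [[NSOpenGLContext alloc] initWithFormat:pf shareContext:nil];
    [ctx makeCurrentContext];
    NSLog(@"GL_VERSION = %s", glGetString(GL_VERSION));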

from x2000gldriver:

ATI Technologies Inc.
2.1 ATI-7.0.52
3.2 ATI-7.0.52

http://netkas.org/?p=642

I’m hoping there is an NSOpenGLProfileVersion3_2Compatibility as well, or a similar attribute pair to request a compatibility profile.

Apple is silent about compatibility mode.

But you can still use 2.1.

I’m pretty doubtful that we’ll ever see a compatibility context from Apple. They prefer to keep developers current, it seems (64b apps were unavailable using the 32b-only Carbon API; if you needed 64b, you had to switch to Cocoa). I was hoping I could avoid a large sweep to update all the trivial rendering codepaths that used removed GL features, since all the other platforms we support have the compatibility extension.
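
For anyone facing the same sweep, the typical trivial codepath looks like this; a sketch (the vertex data is made up, and the core-profile version assumes a shader that reads attribute 0 as position):

    /* Old compatibility-profile code, removed from the 3.2 core profile: */
    glBegin(GL_TRIANGLES);
    glVertex3f(0.0f, 0.0f, 0.0f);
    glVertex3f(1.0f, 0.0f, 0.0f);
    glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();

    /* Core-profile replacement: upload once, draw from a VAO/VBO: */
    const GLfloat verts[] = { 0,0,0,  1,0,0,  0,1,0 };
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);   /* attribute 0 = position in the shader */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);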

FYI I got word back from the driver dev team, and apparently I am not sharing textures between contexts correctly. So AFAIK this will work, although I haven’t implemented the fix yet.
