What is happening?

David Blythe was the principal engineer of the advanced graphics software group at SGI. He has created courses about OpenGL, served as a representative on the ARB, and was recently one of the fathers of OpenGL ES (and the OpenGL ES Specification Editor). I read last year that David Blythe is now part of the Direct3D development team. In fact, he has given some presentations about Direct3D ("David Blythe" Microsoft - Google Search).

But now I have read that Kurt Akeley, co-founder of SGI, one of the fathers of OpenGL, the person who signed the OpenGL 2.0 specification (with Mark Segal), a contributor to extensions like framebuffer_object and vertex_buffer_object, and the person whose Lotus had license plates that read "OpenGL" (About The Khronos Group - The Khronos Group Inc), is now also at Microsoft.

What is happening? Is everybody turning to the dark side?

Who will be next? Mark Kilgard? Jon Leech?

How will this affect the future of OpenGL?

Money. The root of all evil.

How will this affect the future of OpenGL?
One word: PS3.
Personally, I'd be more worried about the future of D3D.

How will this affect the future of OpenGL?
It will probably scare some people like you, and that's about it.

Akeley designed OpenGL 1.0 with Segal; the origins of OpenGL 2.0 are more complex, but he contributed along with many others. AFAIK he is working for Microsoft in China.

The last thing Blythe played a central role in before Microsoft was OpenGL ES. His move to Microsoft may be related to embedded efforts there, but he could end up doing a lot of things.

This is old news; the sky is not falling. Asking who's next is kinda silly. Blinn and Kajiya have worked at Microsoft as researchers for years, and scores of other people don't (yet :eek: ). :rolleyes:

Originally posted by Zak McKrakem:
Mark Kilgard?
OpenGL has a specular, I mean bright, future.

NVIDIA has made a rock-solid commitment to OpenGL and I’ve been amazingly fortunate to participate in that commitment. NVIDIA’s OpenGL driver is the most functional, best performing, and most stable implementation of OpenGL available. Given the kind of sustained commitment NVIDIA has given OpenGL, I’m quite happy and proud to call NVIDIA my employer. What’s been accomplished is really a testament to the passion of hundreds of top-notch software engineers, hardware designers, and 3D architects here at NVIDIA. That passion permeates all aspects of NVIDIA’s product development.

Wow, I think about what RIVA 128 was seven years ago. No 32-bit color, no 32-bit depth/stencil, no 32-bit RGBA8 textures, everything was 16-bit, no sub-pixel positioning, only a small subset of OpenGL’s blend modes supported, the most basic texturing was fancy back then. But RIVA 128 was a great chip for its time with a full OpenGL Installable Client Driver (ICD) for Windows.

Now think about GeForce 6800 today. Vertex programs can access non-power-of-two floating-point textures! Fragment programs can branch on data-dependent values computed within your shader in full 32-bit floating-point! If one 6800 isn’t fast enough for you, put two in your SLI system. I’ve watched it all but still can’t help but be impressed.

In those seven years, OpenGL transformed itself from a hardware-amenable graphics state machine (with a fair amount of quirks–think color material, feedback, evaluators) into a first-class platform for programmable graphics. Yet, there’s still 100% complete API compatibility going all the way back to OpenGL 1.0, nearly fifteen years. And cross-platform support too! Think about it: While native window system APIs are frustratingly different across different systems, OpenGL rendering code can recompile and run natively and fully hardware accelerated across Windows, Mac, and Linux systems. Fully porting a sophisticated graphics user interface between Mac, Windows, and Linux systems can take several man-years, but OpenGL rendering code just recompiles (possible lesson: render your GUI in OpenGL!). State-of-the-art 3D programmable floating-point shading is more portable than trying to create a scroll bar! Think about this: when it comes to API calls, glBegin and glVertex3f are as ubiquitous today as malloc and strcpy.
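To make the recompile-and-run point concrete, here is a minimal sketch of the kind of rendering code in question; only the window-system glue differs per platform, and drawFrame is just an illustrative name, not part of any API:

```c
/* Portable immediate-mode OpenGL: this translation unit compiles unchanged on
   Windows and Linux; on Mac OS X the header is <OpenGL/gl.h> instead. */
#include <GL/gl.h>

/* Hypothetical helper called once per frame by whatever window-system code
   (Win32, Cocoa, X11) owns the GL context. */
void drawFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}
```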

And NVIDIA provides access to the full GeForce 6 Series 3D feature set through OpenGL (even functionality not exposed in the other 3D API, such as hardware-accelerated accumulation buffers, border texels, depth clamp, depth bounds test, multisample coverage control, and stencil clear tags).

Also, rather than force developers into an OpenGL-centric high-level language, NVIDIA gives you the option to pick among the OpenGL-centric OpenGL Shading Language standard, various assembly representations that expose the FULL underlying programmable hardware functionality, or Cg, a high-level shading language that is not tied to OpenGL and that lets OpenGL-based content creation applications produce shader-based 3D content.

If there is one thing about OpenGL that I've been frustrated by, it is the short-sighted decision to hide programmable shading behind a single high-level hardware shading language that is so tied into OpenGL that an optimizing compiler is wedged into the driver. Yes, there are ARB-standardized assembly extensions, but NVIDIA is the only vendor exposing the latest GPU functionality in both high-level and assembly forms.

Face it: Shader programs are part-and-parcel of modern 3D content today. To render contemporary 3D content, you need geometry, textures, and… shaders. You wouldn't base your 3D application around an image file format or 3D model format that could render ONLY with OpenGL. Anyone for the OpenGL Image Format or the OpenGL Model Format? Instead, you pick API-neutral formats (TGA, JPG, whatever) for content. But shaders written in the OpenGL Shading Language aren't neutral; they shackle themselves to OpenGL.

Hey, what's wrong with shackling content to OpenGL if people (in this forum at least) love OpenGL? It's not just being shackled to OpenGL. It's being shackled to a particular weight-class of OpenGL found in PCs today, when that weight-class is very likely to be unsuited to exciting future 3D consumer devices.

I'm all for programmer-productive authoring of shaders in high-level shading languages (hey, I even co-authored a book about just that), but do we need to jam a compiler into the driver? I think it was a bad move (even if I did wind up reluctantly implementing it in NVIDIA's OpenGL driver). Adding OpenGL Shading Language support bloated NVIDIA's OpenGL driver by something shy of a megabyte. (Don't be too surprised; that's about the size of any good optimizing compiler implementation these days, plus you gotta throw in the standard library.)
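To see why the compiler ends up inside the driver, look at how GLSL reaches OpenGL at run time. This is a minimal sketch using the OpenGL 2.0 entry points (the shader string and names are made up for illustration, and it assumes a context/loader that provides those entry points):

```c
#include <stdio.h>
#include <GL/gl.h>

/* The application hands raw GLSL source to GL; whatever optimizing compiler
   the installed driver happens to ship with does the rest. */
static const char *fragSrc =
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }";

GLuint buildFragmentShader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    GLint ok = 0;

    glShaderSource(shader, 1, &fragSrc, NULL);
    glCompileShader(shader);              /* compilation happens in the driver */

    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "driver compiler said: %s\n", log);
    }
    return shader;
}
```

Every quirk or improvement in that compiler is therefore tied to the driver version the end user has installed.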

Think about what happens if we add some new language feature to the OpenGL Shading Language. Pick your favorite C++ or Java feature. Say a feature to make shader writing more object-oriented. For example, Cg has a wonderful “interface” construct similar to what Java provides to make shader design more modular and abstract.

So you happily and productively embrace this new language feature. But wait, there’s a catch. Anyone wanting to use your GLSL shaders written with the new language feature must download the right new driver and reboot their machine (or maybe even rebuild their kernel for Linux users).

That's a pretty big end-user burden just so you, the programmer, could use a fancy new shading language feature. And if vendor XYZ is late to release a driver with your favorite new language feature supported, your shader just doesn't work in the meantime.

Direct3D out-software engineered the ARB when it came to engineering a programmable shading language. Direct3D builds its shading language implementation into a redistributable library that you can package with your application. The library targets a (tokenized) assembly interface. So a new language feature (or compiler bug fix) can be utilized without necessitating end-user driver upgrades (and reboots) by just using the latest compiler library.

Cg makes this same wise engineering choice. There have been four Cg releases so far. New language features get added in without much fuss. Plus the language itself is API-neutral so your Cg shader can be used to render with the other API with few or no problems.
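As a rough sketch of that model with the Cg runtime (the exact calls vary a little between Cg releases, and "shade.cg" and "main" are made-up names here): the compiler ships alongside your application, targets an assembly profile, and only the generated assembly ever reaches the driver.

```c
#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* Compile a Cg fragment shader with the redistributable Cg library, then hand
   the generated assembly to the driver through the GL assembly interfaces. */
void loadCgShader(void)
{
    CGcontext ctx     = cgCreateContext();
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT); /* e.g. arbfp1 or fp40 */

    CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "shade.cg",
                                             profile, "main", NULL);

    cgGLLoadProgram(prog);        /* loads the compiled assembly, not the source */
    cgGLEnableProfile(profile);
    cgGLBindProgram(prog);
}
```

Upgrading the compiler is then just a matter of shipping a newer Cg library with the application.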

Still if you don’t like either GLSL or Cg, feel free to target our assembly interfaces that expose NVIDIA’s FULL programmable functionality.
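For anyone who hasn't touched the assembly interfaces, the flavor is roughly this minimal sketch: a trivial pass-through ARB_fragment_program embedded as a string (on Windows these entry points have to be fetched with wglGetProcAddress; NV-specific program targets work the same way).

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_fragment_program enums; the functions are extension entry points */

/* A trivial fragment program that just copies the interpolated color. */
static const char *fpSrc =
    "!!ARBfp1.0\n"
    "MOV result.color, fragment.color;\n"
    "END\n";

void loadAssemblyProgram(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fpSrc), fpSrc);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
}
```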

I love what Michael McCool and his students at the University of Waterloo have done with their Sh library. Their meta-shading paradigm for shader construction is the kind of novel approach I want to encourage. Having a fully-functional assembly interface facilitates this.

Have I irked anyone? I hope not. If you love the OpenGL Shading Language, hey, NVIDIA supports it quite well, including vertex textures and data-dependent branching. That's stuff no one else hardware accelerates today.

Still if you want other options for programmable shading, at the assembly level or with an API-neutral shading language that allows you to easily move your shader assets between OpenGL and the other API, NVIDIA has you covered too. You pick what suits your needs.

Honestly, it’s a great time to be in the midst of 3D graphics hardware technology.

I'm confident OpenGL will stay current with the state of the art for 3D graphics performance and functionality.

I've got my complaints, however. For a few big OpenGL design decisions, I've been unhappy with the outcome. Bluntly, it's been disadvantageous for OpenGL. But you take the good with the bad and win the battles you can. Would I offer advice for an OpenGL programmer wanting to know what syntax they should use for writing shaders? At one level, I'd say pick what best meets your needs. You can be confident NVIDIA is going to support whatever choice you make, even if you decide to use the other API. But if you pressed me, I'd say author shaders for OpenGL with something API-neutral so you can reuse your shaders no matter what rendering interface you use. So I'd recommend Cg. Nobody should be surprised by that.

And yes, OpenGL has a specular future.

I hope this helps.

  • Mark

Mark, it's nice to see that some of you keep in touch with the "lower" community, but it's hard to read that such good thoughts are blended with such political ones. Is the aim of this post really to say that OpenGL is going to shine?

NVIDIA's dogma 1: "don't compete with your customers"
e.g. don't build boards, better to sell the chips

NVIDIA's dogma 2: "don't compete with Microsoft or else they will destroy you"
e.g. the priority is to implement the latest DirectX, because 1/ the market asks for this and 2/ it will please MS
e.g. never do more than what MS specified, it would be a waste of time & money

Obviously the software engineering team is clearly separated from the marketing one. But which of these two dictates the company's strategy?

Even a mail that took two hours to write won't make me think that the "good" people at NVIDIA can really be strong enough to compete with the "bad" ones. The two dogmas came from one of those "bad" guys' mouths.

SeskaPeel.

Mark,

I agree with your "feeling" about OpenGL.

I used Cg but I dropped it in favor of GLSL, mainly because the resulting code didn't work properly on ATI cards, and because of the lack of support: my "assigned" developer relations contact at the time ended two months of "please wait" emails with "we are too busy at the moment, I will try to answer you in the near future" (a future that never came). It was a problem with invariance not working on GeForce FX cards.

Seeing all the problems with GLSL on non-NVIDIA boards, I agree with you that maybe an intermediate pseudo-assembly could be good. But some problems arise: who would implement the intermediate compiler? The ARB? How long would it then take to get a first version (one that will not make any of the ARB members happy, including NVIDIA)?
In the case of D3D it is clear that it will be MS. But in OpenGL…

How long will it take for each new version of the assembly language extensions (ARB_fragment_program2 / ARB_vertex_program2)? And will they really be useful, or will they become a least common denominator between ATI, NVIDIA and 3Dlabs?
For example, will they include instructions like NRM? Or should we imagine that the "unified compiler" will identify the corresponding group of instructions and convert them to a NRM instruction?
For instance, there are as many problems (or even more) using the assembly extensions on ATI cards as there are using GLSL. You have to take care with instruction order, with texture indirections, …

It is good to hear from you. I would like to see a conference/tutorial from you at GDC, like the ones from some years ago. I think you are a good communicator.

Originally posted by Zak McKrakem:
Who will be next? Mark Kilgard? Jon Leech?
or me? please?

Originally posted by Mark Kilgard:

NVIDIA has made a rock-solid commitment to OpenGL and I’ve been amazingly fortunate to participate in that commitment. NVIDIA’s OpenGL driver is the most functional, best performing, and most stable implementation of OpenGL available. Given the kind of sustained commitment NVIDIA has given OpenGL, I’m quite happy and proud to call NVIDIA my employer. What’s been accomplished is really a testament to the passion of hundreds of top-notch software engineers, hardware designers, and 3D architects here at NVIDIA. That passion permeates all aspects of NVIDIA’s product development.


Well, it is "true" for driver development (nothing has ever taken NVIDIA so long as the GLSL implementation; even ATI had it first. And what about 2.0? NVIDIA has not yet officially released an OpenGL 2.0 driver, neither in the consumer space nor in the developer space). Can you imagine this situation if DX 10.0 were released today? Tomorrow NVIDIA would announce that it has a DX10 driver.

And what about FX Composer being D3D-only? Even ATI has RenderMonkey with GLSL support, and it works pretty well.
And what about NVPerfHUD? It was "announced" in version 1.0 that it would support OpenGL. Now it is at version 3.0 and there is no sign of OpenGL.

I agree that NVIDIA has the best OpenGL support in its drivers, and it is indeed an advantage for your company. What do you think about every developer in these forums (except Humus, who is currently working for ATI :wink: ) recommending the GF6800 family? What do you think about every released OpenGL game recommending an NVIDIA card over your competitors'?
Every good work has its reward, but I think that has been your personal work as the NVIDIA OpenGL driver "boss".
As said here, I think not only your consumer marketing people but also your developer marketing people are committed to D3D. This is my opinion.

NVIDIA's dogma 2: "don't compete with Microsoft or else they will destroy you"
e.g. the priority is to implement the latest DirectX, because 1/ the market asks for this and 2/ it will please MS
e.g. never do more than what MS specified, it would be a waste of time & money
I hate to point this out, but the latter is patently untrue. nVidia exposed register combiners on the GeForce 1. Not through D3D, which at the time didn't even consider such a thing to be possible, but through OpenGL. Even after D3D 8, register combiners were more capable than what was exposed through PS 1.0 and 1.1. The GeForce FX, for all its shortcomings, blew past the limits of PS 2.0 and VS 2.0 in terms of the number of available constants and uniforms. Microsoft had to commission a new release of D3D 9 just to expose features that, to this day, only nVidia supports in the GeForce 6 line.

So nVidia is hardly Microsoft’s pawn. Indeed, if anything, it’s ATi who implements D3D specifications directly into their hardware, not nVidia. After all, with no new version of D3D, did ATi even consider adding new features to the R420 line?

@Zak:

"What do you think about every developer in these forums (except Humus, who is currently working for ATI) recommending the GF6800 family?"
Their dev rel team is doing excellent work: technically efficient, free, nice people to chat with, and fast to answer.
It is no surprise that a lot of the community appreciates this.

What do you think about every released OpenGL game recommending an NVIDIA card over your competitors'?
There are two points: which OpenGL games are you talking about? Doom 3, or Q3 and its derivatives… marketing?
Second point: "the way it's meant to be played"… marketing?

@Korval:
The rationale behind those "new features" is simple. To build a fully compliant DirectX 9 chip, they need a pool of solid features. Once they have it, other additions can come for free because they were already part of the pool.

Another example: at this point, there is no difficulty in opening up a programmable blending stage. Why has it not been done yet?
Again, D3D is designed for PC architectures, and nVidia is clearly trying to take over the whole chipset market, graphical or not, PC or not. Having such constraints (the D3D specs) is a pain for them; they would be better off with a proprietary API, which would also be easier to port to the console market. But that conflicts with dogma #2, and Microsoft would consider them enemies. I totally trust the "scientist" who told me they would never do it in the near future, meaning it will certainly never happen.

Everything that is not open-sourced is Microsoft's pawn… Can you picture a rock-solid, commercial-proof 3D engine being open-sourced nowadays?

Anyway, http://games.slashdot.org/article.pl?sid=05/03/10/214212

SeskaPeel.

And what about FX Composer being D3D-only? Even ATI has RenderMonkey with GLSL support, and it works pretty well.

FX Composer 3.0 is currently in the works. It is a complete rewrite from C++/MFC to C# and will feature both Direct3D and OpenGL, among many other new cool features.

Originally posted by SirKnight:
FX Composer 3.0 is currently in the works. It is a complete rewrite from C++/MFC to C# and will feature both Direct3D and OpenGL, among many other new cool features.
I think you mean FX Composer 2.0 …

Oops, yes I do. Hit the wrong key and didn’t realize it. :eek:

-SirKnight

Yes, Linux support is good, but what about the SDK, all the tools, etc.? That is really lacking.

Their dev rel team is doing excellent work: technically efficient, free, nice people to chat with, and fast to answer.
It is no surprise that a lot of the community appreciates this.
I'm sure that's one reason. But what of the others? Like actually caring about the quality of their drivers (ATi releases a beta driver every month, while nVidia takes their time to actually test and fix bugs). Or maybe it's because they tend to expose more features through GL than ATi does.

The rationale behind those “new features” is simple. To build a fully compliant DirectX 9 chip, they need a pool of solid features. Once they have it, other added can come for free as they were part of the pool.
Nonsense. Even the FX was far more than DX9 compliant. It blew past the DX9 requirements, instruction and uniform-count-wise, while ATi implemented the bare minimum.

There is not one feature in either the R300 or the R420 that is not the bare minimum that DX requires. You can walk the list of features for the card and for DX9, and see that it does exactly and only what DX9 requires. Meanwhile, every nVidia card has offered more through OpenGL than D3D.

No nVidia or ATi card has gone against DX. This is a fact, and it is expected, since DirectX is the predominant 3D gaming API. However, no nVidia card has ever just done the bare minimum either. They have always provided more hardware than DX requires, from the TNT2 through the 6800. ATi has not.

Your Dogma #2 is followed far more by ATi than nVidia. And I defy you to provide one, one counter-example where an nVidia card provided only the bare minimum DX functionality. Past the TNT2 model.

Oh, let’s not forget which card it is that forced 2 API’s to define that texture-indirection nonsense into their fragment program specifications.

at this point, there is no difficulty in opening up a programmable blending stage. Why has it not been done yet?
Acceptance of this statement depends on the answer to this question: by "programmable blending stage", do you mean a third kind of program that runs after the fragment program, or being able to read the framebuffer in the fragment program? If it is the latter, I refuse to accept that there is "no difficulty" when scores of engineers are telling me otherwise.

If you mean the former, what are you suggesting? That Microsoft decides when a programmable region opens up? Last time I checked, my first fragment programs were NV_RC, a good year or two before we first got D3D shaders. Microsoft didn’t ask nVidia to make RC’s; nVidia did it on their own.

More importantly, God knows I don't see ATi stepping up to provide programmable blend stages either. So, while you have an argument about Microsoft's decisions about DirectX's API controlling what gets built into graphics cards to a degree, it is most definitely not limited to nVidia.

Actually, this supposition is the only thing that I can find that provides any logic to your prior arguments. It seems that perhaps you are suggesting that nVidia, as a dominant force in the graphics industry, should challenge Microsoft more often and is abrogating its responsibilities to the consumer by not ignoring the Direct3D API? That they should have created a proprietary API that others would now implement, making D3D and OpenGL worthless (or that they should just focus on GL without respect to D3D)? That such a thing would be beneficial to the consumer at large, and that nVidia has chosen not to do so because it would anger Microsoft?

Well, you're right to some degree. nVidia chose not to commit seppuku by ignoring the advances of D3D, just as ATi made the same choice. 3dfx did, and look where they are now. With ATi on their heels, nVidia is in no position to annoy legions of D3D developers and create a third competing API.

More importantly, who cares? If programmable blending is an important feature, they’ll implement it sooner or later. Either D3D will expose it or they will through the DX10 extension mechanism. So you don’t get programmable blending at the first moment you could have had it. Boo hoo. We’ll get it soon enough.

On a personal note, programmable blending isn’t that interesting to me. Useful? Certainly. But I’d prefer having a programmable primitive processor that can walk memory and feed attributes to the vertex shader. And I think that’s where we’ll go before getting programmable blending.

Everything that is not opensourced, is Microsoft’s pawn … Can you picture a rock solid commercialproof 3D engine opensourced nowadays ?
Huh? You mean like Torque?

Again, D3D is based for PC architectures, and nVidia is clearly trying to take over all the chipset market, being graphical or not, PC or not.
What’s your point?
ATI has created chipsets for all sorts of platforms just like NV.

Having such constraints (the D3D specs) is a pain for them; they would be better off with a proprietary API, which would also be easier to port to the console market. But that conflicts with dogma #2, and Microsoft would consider them enemies. I totally trust the "scientist" who told me they would never do it in the near future, meaning it will certainly never happen.
I see Nvidia as an innovator in the graphics industry since its beginnings. However, someone said that the concept of RCs originates from SGI and Nvidia licensed it. Even if that is the case, they recognized it, implemented it, and released nice demos for GL.

I don’t agree with your dogma #2 at all. Some NV features are not exposed in D3D.

What was this conversation you had with the scientist?

Hi Mark,

I’d like to thank you for all of your generous contributions to the graphics community. And thanks to everyone at NVIDIA for their steadfast support of OpenGL and continuing contributions to this forum.

Fully porting a sophisticated graphics user interface between Mac, Windows, and Linux systems can take several man-years, but OpenGL rendering code just recompiles (possible lesson: render your GUI in OpenGL!). State-of-the-art 3D programmable floating-point shading is more portable than trying to create a scroll bar!
I couldn't agree more with this sentiment. Inspired by Blender and Eric Lengyel's C4 engine, I've begun work on an in-game editor and have not looked back. What a profound relief it is to be done with Windows, insofar as the editor goes, anyway. And to know that not only will my game be portable, but the editor as well? Well, it's really quite a thrill. I too believe this is one of OpenGL's greatest strengths, and it could well play a role in its eventual rise and domination in the PC games market :slight_smile:

How will this affect the future of OpenGL?
Addendum:

Nice to finally read that someone from Sony has confirmed the PS3 will use a derivative of OpenGL.

“Cell graphics will rely on a variation of the standard OpenGL library already widely used for PC games. Sony and software consortium the Khronos Group are developing Open GL/ES, a dialect of OpenGL optimized for interactive content”

http://news.com.com/PlayStation+3+to+be+…html?tag=cd.top

Also, it looks like Cg has gotten the nod.
The king is dead, long live the king.

Korval: On a personal note, programmable blending isn’t that interesting to me. Useful? Certainly. But I’d prefer having a programmable primitive processor that can walk memory and feed attributes to the vertex shader. And I think that’s where we’ll go before getting programmable blending.
That's funny, I actually posted a wish for this in one thread, then thought I would look like an ass, so I edited it out. Anyhow, this is the next major hardware development I would like to see.

As for blending, I would like to see built-in per-pixel blend sorting. Any chance of that? I could describe how I imagine implementing it in hardware, but it would just muck up the thread.

Finally, this thread raised some questions about Cg versus GLSL for me. The last time I was writing my own shaders I was using Cg, but I figure the API has changed by now… so I was planning to switch to GLSL the next time I write a shader. For what it's worth, GLSL was not available when I was using Cg. I've read the GLSL specs and am comfortable enough with them… and I figure Cg also supports GLSL grammar. But politically and technically, can anyone make any recommendations?

Lately I'm leaning towards sticking with Cg.