Dx11 catchup



devsh
01-05-2010, 07:46 AM
Well, all I have to say is that the next release of openGL (and GLSL 1.6) must catch up with current dx11 state, or we will experience (maybe) something worse than when dx7 came out :mad:

-Tessellation shaders (the two programmable stages + the fixed-function tessellator)
OR
-Parallel triangle output from geometry shaders (and of course a 16k GL_MAX_GEOMETRY_OUTPUT_COMPONENTS)
// can you imagine I can't output 32 triangles with two texcoords?
// If I were rendering a water surface with a texcoord, tangent matrix, wave direction, screen-space vertex position, reflection-space vertex position and eye vector (I messed up the name; anyway, it's the E part of the specular term), then ON SOME HARDWARE I WOULDN'T BE ABLE TO OUTPUT ANY ADDITIONAL TRIANGLE AT ALL (see the sketch after this list).

-Multithreaded rendering
-Object-oriented GLSL (superior to the UberShader approach)
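
Here is roughly how you can see how tight the budget is on your card (rough sketch, needs a GL 3.2 context and a gl3.h/GLEW-style header; the 29 components per vertex is just my water example counted by hand, so treat the numbers as an illustration):

/* Query the geometry shader output budget on a GL 3.2 context. */
#include <stdio.h>
#include <GL3/gl3.h>   /* or <GL/glew.h>; a context must already be current */

void print_gs_output_budget(void)
{
    GLint max_components_per_vertex = 0, max_total_components = 0, max_vertices = 0;

    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_COMPONENTS,       &max_components_per_vertex);
    glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, &max_total_components);
    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES,         &max_vertices);

    /* Rough per-vertex cost of the water example above: position (4) + texcoord (2)
     * + tangent matrix (9) + wave direction (3) + screen-space position (4)
     * + reflection-space position (4) + eye vector (3) = 29 components. */
    const GLint per_vertex = 29;

    /* The total-components limit caps how many vertices one GS invocation
     * can emit with that interface, on top of the plain vertex limit. */
    GLint emit_limit = max_total_components / per_vertex;
    if (emit_limit > max_vertices)
        emit_limit = max_vertices;

    printf("per-vertex limit %d, total limit %d, max vertices %d\n",
           max_components_per_vertex, max_total_components, max_vertices);
    printf("=> at %d components/vertex you can emit at most %d vertices\n",
           per_vertex, emit_limit);
}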

It shouldn't be so hard to implement the features the other API already has, because the underlying hardware (Fermi and the Radeon HD 5xxx) is already there; the vendors just need to get their pointers/assembly right.

Seriously, next time we need to be ahead of DX12 and have features they don't, which will visually attract gamers.
But this is not likely to happen, because the vendors need to put it in their hardware, and who listens to OpenGL when DX is EVERYWHERE.

P.S. I don't know how hard the tessellation shaders are going to be, but if we could have at least a tesselate_displacement and a phong_tessellation extension (like the AMD tessellation extension) it would be great.

devsh
01-05-2010, 08:44 AM
Also, I would like to say that the layout() qualifiers and the extension about separate shaders do not provide the functionality SM 5.0 does.

Dark Photon
01-05-2010, 12:42 PM
Well, all I have to say is that the next release of openGL (and GLSL 1.6) must catch up with current dx11 state, or we will experience (maybe) something worse than when dx7 came out
Let's see. When DX10 came out (even before!), OpenGL had the DX10 goodies first. Why? NVidia had a card on the market, and NVidia focuses on OpenGL support.

What's different this time? Hmmm....

I'm expecting GL support by GDC: if not EXT/core, then definitely NV extensions for their new hardware (which hopefully (http://techblips.dailyradar.com/story/amd-gpu-shortage-causing-pc-vendors-to-delay-products/) will be out by then).

zeoverlord
01-05-2010, 02:30 PM
Seriously, next time we need to be ahead of DX12 and have features they don't, which will visually attract gamers.

What first needs to be done is to sort out the ins and outs of what OpenGL should be like in a few years, things like texturing, framebuffers and so on. In my opinion it needs to be unified and generalized a bit more, along the lines of the way things like VBOs are done now.

DX12 is years away, and if you consider that DX10 is still not really a requirement even today, it's not a big problem.
So relax, tessellation will come, probably next GDC as an EXT, and then go core somewhere along in GL 3.5 or so, which is not that far into the future.

And I don't think OpenGL should get creative until at least OpenGL 4.

pjmlp
01-05-2010, 02:42 PM
Let's see. When DX10 came out (even before!), OpenGL had the DX10 goodies first. Why? NVidia had a card on the market, and NVidia focuses on OpenGL support.

Allow me to disagree. It is true that they support OpenGL at the driver level, but it is a different story when you look at their tools.

FXComposer does not support GLSL. They only plan to add basic OpenCL support to Nexus.

On their tools they only give proper support to their technologies (Cg, CUDA) and DirectX.

They are within their rights to support whatever technologies they feel make business sense, but for me as an OpenGL fan, it doesn't feel quite right.

Alfonse Reinheart
01-05-2010, 04:01 PM
Well, all I have to say is that the next release of openGL (and GLSL 1.6) must catch up with current dx11 state, or we will experience (maybe) something worse than when dx7 came out

Please. I don't know what you're talking about with DX7, but DX11's features are pretty thin. There are basically 3 features of actual note: tessellation, multithreading, and compute.

Compute is not something OpenGL is going to ever handle. It's being taken care of by OpenCL.

Tessellation of some form will be available eventually. Multithreading will be harder, simply due to the complexity of specifying what the feature means.

In short, there will be another revision of OpenGL in the near-ish future. It will have appropriate extensions and core features for this stuff. Stop worrying about it.

Personally, I don't care about DX11 features. I'm more concerned about things that will be useful on DX10 and 9 hardware: shader separation, binary shaders, sampler state separation from texture objects, and so on. API cleanup work that OpenGL has long been needing.


-Object-oriented GLSL (superior to the UberShader approach)

Um, no. There is absolutely no point in adding classes and polymorphism to GLSL. Until you have pointers and recursion (the main things missing from GLSL that are present in C), it is totally useless to add classes.


When DX10 came out (even before!), OpenGL had the DX10 goodies first. Why? NVidia had a card on the market, and NVidia focuses on OpenGL support.

Again, NVIDIA != OpenGL. NVIDIA extension support != OpenGL support.

I'm glad that you operate in an environment where you can dictate what hardware your users use. That's not everyone. That's not most people. That's not even the majority. And therefore, what NVIDIA does is not the same as what OpenGL does.

Oh, and it's funny: DX11 hardware is available now. So where is that OpenGL "support" you're talking about? Oh that's right: NVIDIA is 6 months behind ATI. So any OpenGL "support" is behind the hardware.


FXComposer does not support GLSL. They only plan to add basic OpenCL support to Nexus.

Of course not. GLSL is supported by ATI too. You can't go around making tools that work on competitor's hardware. Same with OpenCL. No, you use your toolchain to provide vendor lock-in. That's how businesses work.

crankygoo
01-05-2010, 04:11 PM
Most games in 2009 were released under DX9.
DX9 seems to have the longest life among all DX releases :)

And I see DX has much bigger support from Microsoft & NVIDIA.
At least the SDKs & docs are up to 10 times bigger.
Most of OpenGL's examples & tutorials are outdated.

Now Nexus comes to VS 2008 with HLSL support first.

Dark Photon
01-05-2010, 05:10 PM
When DX10 came out (even before!), OpenGL had the DX10 goodies first. Why? NVidia had a card on the market, and NVidia focuses on OpenGL support.

What's different this time? Hmmm....

Again, NVIDIA != OpenGL. NVIDIA extension support != OpenGL support.


Alfonse, get off the caffeine already. You take things way too seriously.

OpenGL support via any means, core, extension or otherwise, is OpenGL support (not OpenGL "core" support, OpenGL "support").

Any new functionality may not support all cards back to Radeon 9700 or GeForce FX (or even more than one vendor's GPUs for that matter), but it's still OpenGL support if you can get to it via OpenGL. That's one of GL's strengths.

Even if we had core/ARB/EXT support for tessellation now, it'd be basically the same as having only ATI extension support now, because nobody can hack HW tessellation at present but ATI. ATI was first to market and I applaud them. So where's the OpenGL support, ATI?

Which was my "whole point!", and apparently it flew completely over your head. There is no GL support, via vendor extension or otherwise, right now, because NVidia doesn't have a card on the table, and ATI isn't yet devoting the same resources to GL. ATI, please take this as a gentle nudge. We'd love to use your products (and we use loads of boards per customer), but between this and driver quality issues, we're having a hard time making the business case for it.

And Alfonse, if you do like GL, I'd encourage you to tone it back a bit and treat others the way you'd like to be treated (I'm assuming you're not a masochist). Sometimes you spread so much judgement and "one-right-way" ego on these forums that new folks reading you might just walk off and use D3D. If that's not really your goal, take a chill pill, man, and stop ripping everybody around you a new one for not thinking exactly like you. I'm thankful most GL devs don't treat others like you do, or I sure wouldn't hang out here. So relax, lean off the one-right-way ego, and let's grow the user base here, not shrink it.

Alfonse Reinheart
01-05-2010, 06:52 PM
Even if we had core/ARB/EXT support for tessellation now, it'd be basically the same as having only ATI extension support now, because nobody can hack HW tessellation at present but ATI.

No, it wouldn't. Because if we had core or ARB support for tessellation now, and you wrote code for it now, it would work on ATI hardware and any future NVIDIA hardware. That's the whole point of putting something in the core or giving it an ARB extension designation. Whereas if I wrote code for NV_conditional_render, it doesn't automatically work on ATI hardware that supports conditional render. You have to use the core feature to get at it for ATI cards.
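
To make the difference concrete, here's a rough sketch (GLEW assumed for loading; the query object and the draw callback are placeholders I made up). The core entry points work on any GL 3.0 implementation; the NV ones only exist where the vendor extension is exported:

#include <GL/glew.h>

/* occlusion_query is an existing query object; draw_scene is the caller's
 * drawing code. Both are placeholders for illustration only. */
void draw_if_visible_core(GLuint occlusion_query, void (*draw_scene)(void))
{
    /* Core GL 3.0: works on any conforming 3.0 driver, ATI or NVIDIA. */
    glBeginConditionalRender(occlusion_query, GL_QUERY_WAIT);
    draw_scene();
    glEndConditionalRender();
}

void draw_if_visible_nv(GLuint occlusion_query, void (*draw_scene)(void))
{
    /* Same hardware feature through the vendor extension: only reachable
     * where GL_NV_conditional_render is exported. */
    if (GLEW_NV_conditional_render) {
        glBeginConditionalRenderNV(occlusion_query, GL_QUERY_WAIT_NV);
        draw_scene();
        glEndConditionalRenderNV();
    }
}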

The only objective reason to use OpenGL over D3D is the fact that it is cross platform. It works on Windows, MacOSX, and Linux, as well as ATI and NVIDIA. If you want to do graphics on all of these platforms, OpenGL will do the job. Indeed, OpenGL is the only choice.

Now, if you want to use NVIDIA_GL (the entire NVIDIA ecosystem, from Cg to NV assembly to bindless to whatever), you are able to do this through the OpenGL extension mechanism. But do not convince yourself that this is anything even remotely cross-platform. It isn't OpenGL; it's just NVIDIA's API that they expose through OpenGL.

Making any extension does not constitute "OpenGL support." EXT_separate_shader_objects, despite the good intention of the extension, is not "OpenGL support." It only works with the built-in values. It clashes with the design ideals of GLSL 1.30+.

You can only call an extension "OpenGL support" if it exposes something in a way that works with OpenGL (rather than against it, like Cg), and can reasonably be implemented by someone else. If NVIDIA were to come out with a tessellation extension that only works with Cg and the NV assembly, this does not constitute "OpenGL support."

devsh
01-06-2010, 02:21 AM
Hmm... nice discussion here.

But why is 3.5 planned and not 4.0???

C'mon, there was only 2.1 before, I think?

Heiko
01-06-2010, 03:25 AM
I cannot imagine we will see core tessellation in the OpenGL 3.x series, because this would mean OpenGL 3.0, 3.1 and 3.2 could run on DX10 hardware, but OpenGL 3.3 and above could not. As nVidia does not even have tessellation-capable hardware yet, I'm sure they will do everything to prevent tessellation becoming a core feature at this moment (and as opposed to DirectX, they are likely to be able to prevent such things, because they have a strong voice in OpenGL specification creation and OpenGL definitely needs nVidia on board). However, I think an ARB extension could be a serious option for the 3.x series. As Alfonse said, an ARB extension would give the benefit of writing working code now for ATI hardware that will also work on future hardware from all vendors.

On the other hand: nVidia is expected to have tessellation-capable hardware in March or so, if all goes well from now on (which remains to be seen; Fermi has had a lot of problems so far, and it is not guaranteed no more problems will arise, I guess). In March we might also see a new OpenGL version, if the same 6-month schedule is followed that we've seen since OpenGL 3.0 was released. With nVidia also having tessellation-capable hardware (or more generally: DX11-capable hardware), the doors are open for a new OpenGL version that gets on par with DX11.

Personally I'm looking forward to having ARB or core support for tessellation. I would also like OpenGL to gain multicore rendering support; I consider that a very important feature. I think it is too early for OpenGL 4. They might also decide to create OpenGL 3.5, with the version jump indicating that newer hardware is needed, but basically it would be a minor update from OpenGL 3.2, with just some stuff added to get on par with DX11.

OpenGL 4 could still be the API rewrite they promised a long time ago. I believe OpenGL 3.x as we see it now was just necessary to prepare us for an API change. I think an API rewrite is still a viable option, but since the uproar about OpenGL 3.0 they are just very quiet and don't talk about such a thing until it is done and ready to be released (August 2010??). In the meantime they make sure, with the 6 month release schedule, that OpenGL is and stays ready for an API switch (ready means: on par with hardware capabilities).

Jan
01-06-2010, 04:04 AM
I don't believe there will be an API rewrite, ever.

OpenGL 3.0 introduced the deprecation system to be able to change the API gradually. Instead of one monolithic rewrite, they figured changing the API piecewise would be much easier to accomplish and to get vendors to support.

Also I don't think nVidia would deliberately prevent an extension such as tessellation from being created, even if they can't support it right now. It would be extremely stupid, because with such a feature they know that they will have to support it eventually. I think it simply takes time to create a proper spec. They would actually benefit from creating the spec NOW, because that gives them time to implement it in their drivers and have it ready when their hardware ships. Whether ATI supports it now, earlier than nVidia, or ever doesn't matter, because the extension is really not that important. Even in DirectX land, apart from some tech demos, nobody uses it so far.

Every spec that deals with shaders is complicated in nature, and adding one (or two) entirely new shader stages has to be done carefully. And as far as I can see, the ARB wants to do things right from the start; everything else just means more work and more headaches in the long run.

Jan.

zeoverlord
01-06-2010, 05:08 AM
I don't believe there will be an API rewrite, ever.
No, but at some point you reach a threshold where you have to take a larger step in order to say "here's our new baseline".

Just looking at the 3.x versions released so far, it feels like a steady march towards a definite goal, and once that is done you only have to state that "this is no longer 3.x but 4.0".

Heiko
01-06-2010, 06:25 AM
Also I don't think nVidia would deliberately prevent an extension such as tessellation from being created, even if they can't support it right now. It would be extremely stupid, because with such a feature they know that they will have to support it eventually. I think it simply takes time to create a proper spec. They would actually benefit from creating the spec NOW, because that gives them time to implement it in their drivers and have it ready when their hardware ships.

I'm not saying nVidia will prevent making a spec for tessellation. What I'm saying is that I think they will prevent it becoming core until their hardware supports it.

Dark Photon
01-06-2010, 07:43 AM
The only objective reason to use OpenGL over D3D is the fact that it is cross platform.
This is one good reason, but not the only objective reason.

Another is that you don't want the continuing costs/burdens/security problems/update nightmares/stability issues/etc. that go along with maintaining an embedded graphics system on Microsoft Windows. It's a completely pointless headache.

Microsoft's "Where Do You Want To Go Today" has gotten lost, and they ain't going where we want to go. Except with D3D, but their aims are crystal clear. They alone control it, and they use it as a stick to make their user base jump through hoops (buy new OSs) when "they" want them to. Can you say, cattle prod? I feel sorry for ATI, NVidia, and Intel getting stuck in the middle of that mess.

So rejoice. There are multiple reasons to choose OpenGL, and we've only just touched on two of them here.

Alfonse Reinheart
01-06-2010, 09:14 PM
But why is 3.5 planned and not 4.0???

Who said it isn't? And we're only up to 3.2, so I don't know where you're getting 3.5 from.

If I were in charge of the ARB, I'd be looking at making 2 specs: 3.3 and 4.0. 3.3 would essentially be the important API improvements in core (there would be appropriate core extensions available too, for pre-DX10 hardware, though that's becoming increasingly scarce). 4.0 would strictly be for DX11 hardware.

They could probably get away without the 3.3 from my plan, just releasing the API improvements as core extensions. But putting things in core has often been a way of kicking certain implementers (*cough*ATI*cough*) in the pants to get implementing. And I want proper shader separation in every driver on the planet ASAP.


Another is that you don't want the continuing costs/burdens/security problems/update nightmares/stability issues/etc. that go along with maintaining an embedded graphics system on Microsoft Windows. It's a completely pointless headache.

And yet, almost every game developer is perfectly willing to put up with this "pointless headache". Indeed, even cross-platform games that get released on MacOSX or Linux often have a D3D path that they use on Windows. That's a maintenance nightmare. Yet they do it.

In fact, there's a company with significant ARB influence that specializes in doing just this type of port (they're one of the main motivators for extensions like ARB_provoking_vertex and ARB_vertex_array_bgra. These make it easier to use the same vertex data between GL and D3D). You couldn't build an entire company around doing that kind of thing if D3D were a "completely pointless headache," or anything else you describe it to be.

pjmlp
01-07-2010, 04:56 AM
...
And yet, almost every game developer is perfectly willing to put up with this "pointless headache". Indeed, even cross-platform games that get released on MacOSX or Linux often have a D3D path that they use on Windows. That's a maintenance nightmare. Yet they do it.

In fact, there's a company with significant ARB influence that specializes in doing just this type of port (they're one of the main motivators for extensions like ARB_provoking_vertex and ARB_vertex_array_bgra. These make it easier to use the same vertex data between GL and D3D). You couldn't build an entire company around doing that kind of thing if D3D were a "completely pointless headache," or anything else you describe it to be.


Many people criticize DX just because it is a Microsoft product tied to the Windows platform. Many of them never even cared to learn about it.

It is true that its COM-based API and Hungarian notation are really bad compared with how nice OpenGL looks, but if you have ever looked into OpenVMS code, it is actually not that bad. :)

And D3D is much more than what OpenGL is. It is part of a complete gaming framework.

Not to forget that many gaming studios want to deliver their games to as many people as they can, and not everyone has an ATI or NVidia card.

Some of the people who complain about Microsoft assume that everyone else but Microsoft is using OpenGL, and that is also not true. Even Apple used to have its own; remember QuickDraw 3D?

The game consoles have mostly OpenGL-like APIs, but they are not OpenGL. The only one I know of that really has proper OpenGL support is the PS3, and there you have to use Cg for your shaders. And many PS3 AAA titles just use the native graphics API instead of OpenGL.

The gaming industry is what sells most of the graphic cards nowadays, and most developers tend to use whatever APIs the target system has. If they need to have it on different platforms, they get subcontractors to do the porting. This is how the industry works and it won't change.

We should all be thankful to the mobile industry, and especially to Apple, because now OpenGL is becoming relevant again for game development.

If ATI or NVidia lower their support for OpenGL, then the API will die on Windows, regardless of how important it is on other platforms.

Dark Photon
01-07-2010, 07:14 AM
Another [reason] is that you don't want the continuing costs/burdens/security problems/update nightmares/stability issues/etc. that go along with maintaining an embedded graphics system on Microsoft Windows. It's a completely pointless headache.
And yet, almost every game developer is perfectly willing to put up with this "pointless headache".
Yes, of course. PC game devs have to, since their users choose the hardware, and the devs just have to deal with it or lose a sale.

As developers of embedded systems, we are definitely in an unusual (and ideal) situation, where we pick the hardware/software for our customer because it's a full-system solution. They just don't care which chips, boards, hard drives, network cards, power supplies, cases, GPUs, driver versions, OSs, APIs, dev languages, debugging tools, etcetc. we use. Their requirements on us are that the system operates per requirements with a specified interface to the outside world with a specific uptime and certain initial and continuing cost limits. So whatever of the above list we want to use to get there is fair game.

pjmlp
01-08-2010, 04:24 AM
....
As developers of embedded systems, we are definitely in an unusual (and ideal) situation, where we pick the hardware/software for our customer because it's a full-system solution. They just don't care which chips, boards, hard drives, network cards, power supplies, cases, GPUs, driver versions, OSs, APIs, dev languages, debugging tools, etcetc. we use. Their requirements on us are that the system operates per requirements with a specified interface to the outside world with a specific uptime and certain initial and continuing cost limits. So whatever of the above list we want to use to get there is fair game.


But then, most embedded systems that I am aware of make use of 3D chips that follow the OpenGL ES standard, not the usual OpenGL one.

Tom Flynn
01-08-2010, 11:22 AM
The gaming industry is what sells most of the graphic cards nowadays, ...


I'd like to challenge this assertion. I believe it was true maybe five years ago, but no longer true today. I've heard this as an excuse (games sell cards, games use D3D, so no reason to put a lot of effort into OpenGL) from employees at a major chipmaker whose OpenGL support is so crappy that most developers will revert to software OpenGL rather than use that vendor's crappy drivers.

With the exception of MMOs and flight sims, a large number of PC games nowadays begin life as console games. The game is designed and developed on the 360 and then later ported to the PC and PS3. PC sales alone aren't enough to support the cost of modern AAA games. (http://www.shacknews.com/onearticle.x/53047).

There's no longer any reason to buy the latest and greatest graphics card to play the latest PC game. You can just buy that same game and play it on your console on your 52" plasma and 5.1 surround sound.

I think marketing departments of the major chip vendors need to re-evaluate who is buying all their cards/chips. Perhaps now it'll end up in OpenGL's favor and maybe lead to a renewed focus on the API. Or maybe that's just too optimistic.

Alfonse Reinheart
01-08-2010, 02:32 PM
I've heard this as an excuse (games sell cards, games use D3D, so no reason to put a lot of effort into OpenGL) from employees at a major chipmaker whose OpenGL support is so crappy that most developers will revert to software OpenGL rather than use that vendor's crappy drivers.

You could have just said "Intel." It's not like we don't know who you're talking about ;)


There's no longer any reason to buy the latest and greatest graphics card to play the latest PC game. You can just buy that same game and play it on your console on your 52" plasma and 5.1 surround sound.

Two things.

One: I am typing this on my computer, which is connected to a 100" projection screen with 5.1 surround sound.

Two: Buying a game for a console requires buying the console. And it can also lead to diminished experience. As a Team Fortress 2 player, I wouldn't even be playing the game anymore if I had the 360 or PS3 version. No updates and no mouse-look == fail.

And consoles aren't getting StarCraft II or Diablo 3 or WoW. Or Civilization (a real Civ game, not watered-down crap). Or quite a few other games.

Having a game-quality PC is still important. It may have diminished importance, but many people do it, and many games still require it.

Furthermore, let's say you're right. If you are, then that means that GPUs don't matter. It's not that a lessened focus on gaming will suddenly make OpenGL better. It means that people simply won't care about 3D graphics, because the primary use of that is games. If you aren't playing games on your PC, you aren't using your card's 3D capabilities (outside of things like Aero Glass and such).

So doing no 3D on desktops isn't going to help OpenGL in the slightest.

Dark Photon
01-08-2010, 07:47 PM
Furthermore, let's say you're right. If you are, then that means that GPUs don't matter. ... If you aren't playing games on your PC, you aren't using your card's 3D capabilities (outside of things like Aero Glass and such).
This is a totally ridiculous statement. The market is apparently a bigger place than you're aware of.

Alfonse Reinheart
01-08-2010, 07:54 PM
This is a totally ridiculous statement. The market is apparently a bigger place than you're aware of.

OK, then. What do you need a Radeon HD 5850 for besides games? What will that do that a cheap, $50 G70 part from NVIDIA won't do?

And I'm not talking about niche markets either, like flight simulators, high-performance computing, and such. These people use machines built specifically for that purpose. I'm talking about desktop PCs, which are used by ordinary people.

Dark Photon
01-08-2010, 08:09 PM
And I'm not talking about niche markets either, like flight simulators, high-performance computing, and such. These people use machines built specifically for that purpose. I'm talking about desktop PCs, which are used by ordinary people.
Then you should have stated that. You asserted that if games aren't the driving force behind GPU sales on PCs (which probably isn't true, but whatever), then GPUs don't matter, and implicitly that nothing but gaming applications of GPUs matter. To you maybe, but your world is small.

And once again, you're wrong. I've worked in both of your example markets (HPC/scientific visualization and commercial flight simulators), and I can tell you -- it used to be "machines specifically built for that purpose". Many that clung to them are dead or dying now. Today I believe, by and large, these apps are powered by plain old PCs with off-the-shelf mass-market GPUs.

Alfonse Reinheart
01-08-2010, 08:32 PM
Today I believe, by and large, these apps are powered by plain old PCs with off-the-shelf mass-market GPUs.

The bread and butter of the GPU industry is people who want to play games. It is gamers who drove down the cost of GPUs to the point where these apps can run from off-the-shelf GPUs. But there simply aren't enough of them to keep the GPU industry going.

Bottom line is this: if you take the gamers out of the GPU market entirely, you'll see GPU prices go up dramatically, simply from the massive lack of demand. Most people only care about GPUs with regard to their ability to play games.

Tom Flynn
01-09-2010, 02:17 AM
Apparently, my earlier attempt at a reply to this didn't go through. Let's try again...



Two things.
One: I am typing this on my computer, which is connected to a 100" projection screen with 5.1 surround sound.


Yeah, I'm sure most PC gamers have that setup too ;-P



Two: Buying a game for a console requires buying the console.


Are you really going to try to argue that buying a gaming PC and keeping it up to date (especially the graphics card) is somehow cheaper than buying a single console once every 5-6 years?? A console, at most, will cost you $600. A really good graphics card alone can cost that much.



And consoles aren't getting StarCraft II or Diablo 3 or WoW. Or Civilization (a real Civ game, not watered-down crap). Or quite a few other games.


Right, and we're back to MMOs and sims. If those are your type of game, then you're going to be playing them on a PC. Otherwise, you're probably better off on a console. But also take note that those particular games aren't a driving factor for buying new GPUs either. You can get by pretty easily with a low-end card to play those games. And it won't be watered down either since WoW and StarCraft and Civ were designed to run on cards from 5 years ago.



Furthermore, let's say you're right. If you are, then that means that GPUs don't matter.


No, it means that new GPUs for PC games don't matter. If you take a look, many if not most new PC games still only require DX9. Why bother with a DX11 GPU?



It's not that a lessened focus on gaming will suddenly make OpenGL better. It means that people simply won't care about 3D graphics, because the primary use of that is games.


And that's the point I'm arguing. Games are no longer the sole user of 3D graphics on the PC. Perhaps my point of view is skewed because I work in graphics, but it seems more and more applications have or require some sort of 3D visualization. In fact, it seemed it was difficult to capture VC interest without having some sort of snazzy 3D visualization. And Dark Photon is right, applications that used to require and run on high-end SGIs or specialized IGs are mostly running on plain old PCs nowadays (flight simulators, medical imaging, oil & gas, CAD, GIS, and many more). And this is still not counting the emergence of 3D usage in consumer electronics. A lot of things from TVs to set-top boxes to cell phones require and use OpenGL for their UI and/or video playback applications.

My point is that the non-games markets have grown significantly in the past few years. Perhaps to the point of eclipsing what remains of the PC games market. And if the majority of non-games markets are using OpenGL, then it would benefit OpenGL if the hw vendors realized what is going on.

Alfonse Reinheart
01-09-2010, 03:11 AM
Are you really going to try to argue that buying a gaming PC and keeping it up to date (especially the graphics card) is somehow cheaper than buying a single console once every 5-6 years?

It's strange that you bring that "5-6 years" up. Particularly since your later argument is that PC games aren't as demanding of new hardware as they used to be. It's entirely possible that an $800 gaming PC from 3 years ago would still be serviceable today. And it's quite possible that an $800 gaming rig from today would last a good 5-6 years.


Right, and we're back to MMOs and sims.

And FPS's, unless you honestly believe that the best FPS experience can be had on a console without mouse&keyboard support. And RTS games (which are not sims).

PC gaming is certainly not master of the gaming industry. But it is far from dead. In-store PC game sales are down, but that is partially due to the increasing prominence of online sellers like Steam.


If you take a look, many if not most new PC games still only require DX9. Why bother with a DX11 GPU?

Because it's faster at DX9 stuff than any DX9 GPU was. That was the reason why XP users bought DX10 parts too; because they were faster.

And most new PC games require DX9 as a minimum for three reasons:

1: Microsoft hurt DX10's growth by making it Vista-only. The launch failures and public perception of that OS stunted the willingness of game developers to make games that significantly use DX10.

2: Because it is prevalent. DX9 hardware has been available for a long time. Just look at the Steam hardware survey; virtually nobody gaming has DX8-only hardware, so there's no point in supporting it. DX10 adoption is relatively slow, though there are quite a lot of DX10 parts out there. DX9 is simply the current minimum level of functionality. Making a game that only 30% of people who might be interested can play is silly.

3: Outside of incremental hardware improvements (longer shaders, faster shaders, more uniforms, etc), there really isn't much difference from DX9 hardware to DX10 hardware. Geometry shaders aren't that big of a deal; they're too slow in practice to be useful. And there weren't very many other big ticket "must have" DX10 features that would fundamentally change how you make effects. DX10 hardware is used pretty much like DX9, except with more complex shaders. So making DX9 a baseline minimum is quite easy.


A lot of things from TVs to set-top boxes to cell phones require and use OpenGL for their UI and/or video playback applications.

I don't know of a single TV that uses OpenGL; I don't even know how it could, since you're not allowed to run code on them. And I don't know of a single cell phone that uses OpenGL either. I know of a few that use OpenGL ES, but that's not the same thing as OpenGL.

I wish that it were. I'm sure ATI could do a much better job writing GL ES drivers for Windows than their current GL drivers. But unfortunately, OpenGL ES is only available for embedded systems.


Perhaps my point of view is skewed because I work in graphics, but it seems more and more applications have or require some sort of 3D visualization.

Such as? The "standard" applications, Office suites, e-mail programs and the like, have no need for 3D. And even if they do, it's nothing that a GPU from half a decade ago couldn't render with ease.


And Dark Photon is right, applications that used to require and run on high-end SGIs or specialized IGs are mostly running on plain old PCs nowadays (flight simulators, medical imaging, oil & gas, CAD, GIS, and many more).

And you really think that, if PC gamers dropped off the face of the planet, the GPU industry could survive just fine off of these kinds of specialized applications?


My point is that the non-games markets have grown significantly in the past few years. Perhaps to the point of eclipsing what remains of the PC games market.

No, it has not. What is OpenGL ES used for on iPhones/etc? Games. Oh, you might get an application here or there that draws something in polygons. But the majority of non-gaming applications use 2D effects.

Tom Flynn
01-09-2010, 04:05 AM
It's strange that you bring that "5-6 years" up. Particularly since your later argument is that PC games aren't as demanding of new hardware as they used to be. It's entirely possible that an $800 gaming PC from 3 years ago would still be serviceable today. And it's quite possible that an $800 gaming rig from today would last a good 5-6 years.

Your original point was that PC games drive buying the latest GPUs. New GPUs come out every 6 months to a year. If you're not buying those GPUs on a regular basis to put into your gaming rig to play games, then games are not driving the sales of GPUs. And that proves my point.



And FPS's, unless you honestly believe that the best FPS experience can be had on a console without mouse&keyboard support.

I play Unreal Tournament on my PS3 with keyboard and mouse. There's really no reason for a developer to not support keyboard and mouse on consoles (especially Unreal Engine based games).



And RTS games (which are not sims).

Close enough. In my opinion, they're equally boring and require just as much time, which I simply don't have. There's a market for people who like those games and they can have it ;)



PC gaming is certainly not master of the gaming industry. But it is far from dead.

I never claimed it was dead. But it is smaller than it was several years ago.



Because it's faster at DX9 stuff than any DX9 GPU was. That was the reason why XP users bought DX10 parts too; because they were faster.

And most new PC games require DX9 as a minimum for three reasons:

1: Microsoft hurt DX10's growth by making it Vista-only. The launch failures and public perception of that OS stunted the willingness of game developers to make games that significantly use DX10.

2: Because it is prevalent. DX9 hardware has been available for a long time. Just look at the Steam hardware survey; virtually nobody gaming has DX8-only hardware, so there's no point in supporting it. DX10 adoption is relatively slow, though there are quite a lot of DX10 parts out there. DX9 is simply the current minimum level of functionality. Making a game that only 30% of people who might be interested can play is silly.

3: Outside of incremental hardware improvements (longer shaders, faster shaders, more uniforms, etc), there really isn't much difference from DX9 hardware to DX10 hardware. Geometry shaders aren't that big of a deal; they're too slow in practice to be useful. And there weren't very many other big ticket "must have" DX10 features that would fundamentally change how you make effects. DX10 hardware is used pretty much like DX9, except with more complex shaders. So making DX9 a baseline minimum is quite easy.

Pretty much proving my point that games alone aren't driving the sales of DX11 hw. Games aren't taking advantage of what's available to them in DX10/DX11 because of the reasons you state.



I don't know of a single TV that uses OpenGL;

I do. I've worked for two consumer electronics companies that have shipped TVs that use OpenGL for their UI. Ditto for the set-top boxes.


And I don't know of a single cell phone that uses OpenGL either. I know of a few that use OpenGL ES, but that's not the same thing as OpenGL.

ES2.0 is close enough. It has most of the same restrictions as OGL3.2. It's not missing that many core features compared to desktop OpenGL (though it is missing some). At my current job, I've got a fairly significant codebase that uses the exact same OpenGL code for both ES2 and desktop OpenGL.
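
I obviously can't paste that codebase here, but the usual trick looks roughly like this (a sketch, not our actual code, and USE_GLES2 is just a made-up build flag): keep the shader bodies identical and prepend a per-platform preamble using glShaderSource's multiple-string form.

#include <GL/glew.h>   /* the ES 2.0 build would include <GLES2/gl2.h> instead */

GLuint compile_shared_shader(GLenum type, const char *body)
{
#ifdef USE_GLES2
    /* ES 2.0 requires precision qualifiers in the fragment stage. */
    const GLchar *preamble = "precision mediump float;\n";
#else
    /* Desktop GL: pick a version that matches the shared subset. */
    const GLchar *preamble = "#version 120\n";
#endif
    const GLchar *sources[2] = { preamble, body };

    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 2, sources, NULL);  /* NULL lengths: NUL-terminated strings */
    glCompileShader(shader);
    return shader;
}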



Such as? The "standard" applications, Office suites, e-mail programs and the like, have no need for 3D. And even if they do, it's nothing that a GPU from half a decade ago couldn't render with ease.

Last year I was at a startup doing GIS. Until very recently, 2D was pretty much the standard in GIS visualizations. That's changing now. Oh, and just because an application doesn't _need_ 3d doesn't mean it's not going to require it. Why the heck does Javascript _need_ OpenGL bindings?? But it has them via WebGL.



And you really think that, if PC gamers dropped off the face of the planet, the GPU industry could survive just fine off of these kinds of specialized applications?

I never said anything about PC gamers dropping off the face of the planet. Only you did. I did suggest that the entirety of the non-games markets, including consumer electronics, has perhaps surpassed the PC games market and is worthy of appropriate attention from the hw vendors.



No, it has not. What is OpenGL ES used for on iPhones/etc? Games. Oh, you might get an application here or there that draws something in polygons. But the majority of non-gaming applications use 2D effects.
More games using OpenGL only helps my point that hw vendors need to stand up and pay better attention to the API rather than ignoring it in favor of D3D.

Lord crc
01-09-2010, 04:32 AM
My gut feeling is that games are still the primary driver of GPU sales. However most gamers can play their games at max quality using "only" a low to midrange card.

The lack of need for a high-end GPU to play games must be the primary driver for NVIDIA and ATI to push GPGPU. It will be a huge new market for beefy GPUs, which I suspect will easily surpass the hardcore gaming niche.

Currently OpenCL is looking very attractive I think, so OpenGL could benefit indirectly when OpenCL-OpenGL interaction is added to the specs.

Brolingstanz
01-09-2010, 04:46 AM
I think it's safe to say that as long as there are PCs people will play games on them. :-)

Seems to me the arguments so far are different branches of the same tree. We've got 3 major technologies at work here: computation, display and human interface. The business models we've seen so far tend to capitalize on an unavoidable economy of scale; from the cell phones to the notebooks to the laptops to the consoles to Alfonse Reinheart's main frame and home theater - they each appeal to a particular age group, interest and budget.

Now word on the street is that we'll eventually see these technologies funnel into a single ubiquitous virtual experience in the form of display implants, direct neural-link human interfaces and such - you know, cybernetics. In the meantime quantum, optical, molecular or something completely different could steal the show in the next decade or 2. Though the issue of scale will likely persist in its various forms and will doubtless serve as bases for the unseen markets of tomorrow.

. . .

Jan
01-09-2010, 05:44 AM
Bla bla bla, bla bla, bla bla.

Just my two cents.

Oh, did anyone so far mention this:

http://tech.slashdot.org/story/10/01/08/1830222/Why-You-Should-Use-OpenGL-and-Not-DirectX?art_pos=15

It's hilarious because the guy "debunks MS's false information" and spreads a whole lot of false information himself. But well, it's good advertisement for his awesome little company that will rule the world, yeah!


Now please continue.
Jan.

Alfonse Reinheart
01-09-2010, 01:37 PM
Your original argument was that hardware makers catering to the non-gaming crowd will miraculously cause their OpenGL drivers to get better. I don't see this happening, for two reasons.

There are two groups of non-gamers interested in GPUs: people doing visualization, and people doing GPGPU. GPGPU needs OpenCL, not OpenGL, so catering to that crowd will divert resources away from OpenGL drivers.

Visualization users generally are like Dark Photon: they tend to have complete control over what hardware that their users use. So really, driver quality is important only to the degree that their code works for the given platform.


Your original point was that PC games drive buying the latest GPUs. New GPUs come out every 6 months to a year. If you're not buying those GPUs on a regular basis to put into your gaming rig to play games, then games are not driving the sales of GPUs. And that proves my point.

My point was that PC games drive the buying of GPUs, period. Whether gamers are buying the latest or not, they're still the primary deliberate consumer of GPUs. While a few gamers have been willing to buy $400+ cards, you will find that the sales curve has always skewed to the $100-$200 range.


Games aren't taking advantage of what's available to them in DX10/DX11 because of the reasons you state.

As I pointed out, there's not much functionality difference between DX9 and DX11, so there's not much to take advantage of. Also, as I pointed out, there is a substantial performance difference between DX9 cards and DX11 cards, which game developers are taking advantage of.


Close enough. In my opinion, they're equally boring and require just as much time, which I simply don't have.

The entire nation of South Korea would like to disagree with you.


ES2.0 is close enough. It has most of the same restrictions as OGL3.2. It's not missing that many core features compared to desktop OpenGL (though it is missing some). At my current job, I've got a fairly significant codebase that uses the exact same OpenGL code for both ES2 and desktop OpenGL.

And yet, they are not the same. They are compatible to a degree, but they're not the same thing. ES 2.0 has none of the legacy cruft that GL 3.2 does. That's why you'll find ES 2.0 implemented on various bits of hardware that you'd never see a legitimate GL 3.2 on.


Why the heck does Javascript _need_ OpenGL bindings?? But it has them via WebGL.

Games. Web applications are becoming an increasingly big thing. Once you can do client-side OpenGL rendering, you can run JavaScript-based games in a web browser.

Granted, lack of Internet Explorer support is pretty much going to make WebGL stillborn. But it's a good idea anyway.

And again, it is OpenGL ES, not regular OpenGL.


The lack of need for a high-end GPU to play games must be the primary driver for NVIDIA and ATI to push GPGPU. It will be a huge new market for beefy GPUs, which I suspect will easily surpass the hardcore gaming niche.

Outside of entities doing serious number crunching, what good is GPGPU to the average user? The most you can get out of it is accelerated movie compression. That's useful to a degree. But I don't think there are very many actual human beings who are going to buy an HD 5850 just to make their movies compress faster.

Of course, the HD 5850 does include double-precision computations, which is something the HPC people have been pushing for. However, catering to the GPGPU crowd doesn't mean improving OpenGL; these people want to use OpenCL.


so OpenGL could benefit indirectly when OpenCL-OpenGL interaction is added to the specs.

OpenGL (and D3D) interaction is already part of the OpenCL spec.


Oh, did anyone so far mention this:

No. It's sufficiently stupid (as you rightfully point out) that it doesn't deserve mention. I still don't know why that was linked on the OpenGL.org main page.

CatDog
01-10-2010, 04:49 AM
Visualization users generally are like Dark Photon: they tend to have complete control over what hardware that their users use. So really, driver quality is important only to the degree that their code works for the given platform.
If only this were true...

CatDog

MrKaktus
01-11-2010, 02:48 PM
There won't be a 3.5. Only 3.3 and 4.0 at the same time :).

EDIT: Oops, sorry, didn't see that there are another pages.

Igor Levicki
01-11-2010, 05:36 PM
OpenGL has become bloated. Each vendor is pushing their own set of features and proprietary extensions instead of adding to the ARB set. The new major version had better cut down on this and get things sorted out, or OpenGL is going to die.

Dark Photon
01-11-2010, 06:07 PM
OpenGL has become bloated. Each vendor is pushing their own set of features and proprietary extensions instead of adding to the ARB set. The new major version had better cut down on this and get things sorted out, or OpenGL is going to die.

You're entitled to your own opinion, but without some substantiation (specific examples), it's unlikely to be taken seriously.

Make this criticism constructive by suggesting a specific change.

Brolingstanz
01-11-2010, 10:38 PM
If bloat doesn't float your boat, look at 3.2 core. Lots of older functionality deprecated and removed. Very clean (downright squeaky).

Groovounet
01-12-2010, 03:28 AM
I'm going to say something that would have sounded weird if it had been said a year ago:

OpenGL is just doing fine.

Really

Dark Photon
01-12-2010, 05:12 AM
OpenGL is just doing fine.
Really
Agreed.

pudman
01-12-2010, 07:51 AM
OpenGL has become bloated.

Nice and subjective. I like it!


Each vendor...
Vendor of what, exactly?


...is pushing their own set of features and proprietary extensions instead of adding to the ARB set.
So... vendors of GL extensions? In the past decade there have been practically three companies that have registered extensions: NVIDIA, AMD, and Apple.

I'm not so clear on how these companies have "pushed" their extensions (and therefore features). Marketing? This is OpenGL, marketing doesn't exist. Use-my-feature-or-die strong arming? Developers seem quite content to use DX so strong arm tactics would be silly. "Feature X only available on Y hardware": If it's not core most developers won't use it unless they have control over customers' hardware.

Basically, you are "pushing" your argument into crazy town. (Sorry, details in crazy town are few and far between so I can't expound on this.)


The new major version had better cut down on this and get things sorted out, or OpenGL is going to die.

True. The next version should do away with extensions to do away with the advantage they give certain hardware/companies. Extensions are like cancer, slowly eating away at good APIs. I say we practice modern techniques, such as (if I may extend my metaphor) performing gene therapy on GL by injecting it with DX. Gene therapy always works.

Then GL will LIVE. (Or become a zombie and kill us all, it depends on Fermi, I think.)

Alfonse Reinheart
01-12-2010, 11:34 AM
Each vendor is pushing their own set of features and proprietary extensions instead of adding to the ARB set.

You've obviously not seen things in the days before GLSL and ARB_vertex_buffer_object.

On the NVIDIA side, you had NV_register_combiners, the NV_texture_shader suite, NV_vertex_program, and NV_vertex_array_range.

On the ATI side, you had ATI_fragment_shader, ATI_vertex_array_object, and EXT_vertex_shader.

All of these extensions worked in completely different ways. Coding for these different paths was a nightmare.

Even the specter of bindless graphics is nothing compared to this hodge-podge of stuff. There are very few proprietary extensions that expose useful hardware features that are not exposed by ARB extensions and/or core functionality.

V-man
01-13-2010, 08:02 AM
Bla bla bla, bla bla, bla bla.

Just my two cents.

Oh, did anyone so far mention this:

http://tech.slashdot.org/story/10/01/08/1830222/Why-You-Should-Use-OpenGL-and-Not-DirectX?art_pos=15

It's hilarious because the guy "debunks MS's false information" and spreads a whole lot of false information himself. But well, it's good advertisement for his awesome little company that will rule the world, yeah!


Now please continue.
Jan.

Nice to see a game developer who prefers GL, but why do so-called professionals pretend to be stupid, or are they just misinformed? Perhaps he is new?

The Wii doesn't have GL or GL ES.
The PS3 has GL, and it is layered, but he conveniently forgets to mention that it is not used. Also, it is GL ES, not GL.
OK, let's say GL and GL ES are nearly similar.

OSX is not popular and there are very few games for it.
Ditto for Linux.

Perhaps Sony should invest in their GL ES. Perhaps Nintendo should provide GL ES as well. Add to that all the cellphones and you have a pretty big momentum behind GL.

Yeah yeah, GL 3.2 is great.
If only the wgl and glX mess could be cleaned up. That crap makes GL seem non crossplatform.

Alfonse Reinheart
01-13-2010, 12:46 PM
If only the wgl and glX mess could be cleaned up. That crap makes GL seem non crossplatform.

Initialization will always, at some level, have to be platform-specific. Different platforms don't use the same kind of objects to talk about their drawing surfaces. So on some level, you will have to have platform-specific API calls to initialize platform-neutral rendering APIs.

Unless you're going to have this platform initialization actually create the window itself. In which case, you've lost a lot of flexibility.

pjmlp
01-13-2010, 02:31 PM
Nice to see a game developer who prefers GL, but why do so-called professionals pretend to be stupid, or are they just misinformed? Perhaps he is new?

The Wii doesn't have GL or GL ES.
The PS3 has GL, and it is layered, but he conveniently forgets to mention that it is not used. Also, it is GL ES, not GL.
OK, let's say GL and GL ES are nearly similar.

OSX is not popular and there are very few games for it.
Ditto for Linux.

...


I really don't know why so many people think that OpenGL is the official graphics API on every computer on this planet besides Windows.

It would be a very good scenario, but unfortunately it is not like that, especially where game consoles are concerned.

Dark Photon
01-13-2010, 02:38 PM
I really don't know why so many people think that OpenGL is the official graphics API on every computer on this planet besides Windows.
Eeeeh, stop your whining and just accept it. You know it's true. :D

Wabbit season!

Stephen A
01-13-2010, 02:40 PM
If only the wgl and glX mess could be cleaned up. That crap makes GL seem non crossplatform.

Initialization will always, at some level, have to be platform-specific. Different platforms don't use the same kind of objects to talk about their drawing surfaces. So on some level, you will have to have platform-specific API calls to initialize platform-neutral rendering APIs.

Unless you're going to have this platform initialization actually create the window itself. In which case, you've lost a lot of flexibility.

No need for such drastic measures.

EGL has already managed to solve the problem of providing a sane, cross-platform interface: the platform-specific code is constrained to type definitions for window handles and display connections - everything else is common on all platforms (Win32, X, any other EGL-capable device).
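
A bare-bones version of what that looks like (error checking omitted, and an ES 2.0 config is assumed just for the example; the only per-platform part is the two native handles you pass in):

#include <EGL/egl.h>

EGLContext create_context(EGLNativeDisplayType native_dpy,
                          EGLNativeWindowType  native_win,
                          EGLSurface          *out_surface)
{
    EGLDisplay dpy = eglGetDisplay(native_dpy);
    eglInitialize(dpy, NULL, NULL);

    /* Ask for a basic RGB window config capable of ES 2.0 rendering. */
    const EGLint attribs[] = {
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_DEPTH_SIZE, 24,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint num_configs = 0;
    eglChooseConfig(dpy, attribs, &config, 1, &num_configs);

    *out_surface = eglCreateWindowSurface(dpy, config, native_win, NULL);

    const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, ctx_attribs);

    eglMakeCurrent(dpy, *out_surface, *out_surface, ctx);
    return ctx;
}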

glfreak
01-15-2010, 01:28 PM
Well, I see it differently. On the hardware feature side, OpenGL wins through extensions. What does matter is how reliable the API is compared to the other: HW support, consistency in rendering results, and performance on lower-end HW. In those respects, OpenGL loses.

CAD/CAM is a different story here. The target is mainly high-end hardware, which is not an embedded graphics board or a gaming laptop.
It's a workstation for an artist.

IMHO OpenGL should not worry too much about catching up with DX10 or 11... it should focus on how to simplify and improve the core spec of both the API and the shading language, to make it easier to create stable, solid, reliable drivers.

I believe we could stick to GL 1.1 plus the GLSL functionality.
Then advance on the shader side. Just keep it simple.

I would rather have working GL 1.1 drivers on all computers with no shaders... than have GL 3.2 working on only high-end cards and still buggy.

Igor Levicki
01-15-2010, 01:30 PM
Nice and subjective. I like it!

What is so subjective about saying that OpenGL is bloated?

Have you checked the size of glext.h recently? It's freaking 500 KB!

That beats winnt.h by 10%, and Winnt.h contains structure definitions, some inline code and comments.

I am sorry, but having ~1,600 (rough estimate) functions in an API is bloated in my book regardless of how good the API (or its creator's intentions) might be.


Vendor of what, exactly?

Vendors of GPUs, you know, those little things in square ceramic packages with pins, which use a lot of power, give off a ton of heat, and are used to run OpenGL programs?


I'm not so clear on how these companies have "pushed" their extensions (and therefore features).

I don't know... how about by being influential members of OpenGL ARB?


This is OpenGL, marketing doesn't exist.

Then what's Khronos Board of Promoters for?


Use-my-feature-or-die strong arming?

You don't have to strong-arm someone into using your feature if your feature is the only one available.


Developers seem quite content to use DX so strong arm tactics would be silly.

They do because using OpenGL brings no advantage anymore.

1. Cross-platform argument is moot because:

- 95% of computers with a GPU capable of running OpenGL run Windows.
- You still need to use OS-specific features (wgl on Windows, etc.), so cross-platform still doesn't mean a single code path.
- There is no incentive for software companies to port games and commercial applications to an operating system such as Linux, where the majority of users wouldn't pay for a game or an application because they are either cheapskates (free as in beer) or they hate closed source.

2. Having support for latest hardware features is moot because:

- Nobody wants to use those features unless all GPU vendors support them (look at tessellation as an example).
- Having hardware support and an API doesn't mean it will work (driver fudge factor).


The next version should do away with extensions to do away with the advantage they give certain hardware/companies.

No need to be sarcastic, I get your point.

Unfortunately you are right, extensions are good only for hardware companies pushing them instead of being good for developers. Figuring out how to use some of the latest extensions must be the #1 reason why so many developers prefer DirectX even though it has its own set of flaws.

Disclaimer: I do not prefer DirectX.

Stephen A
01-15-2010, 05:13 PM
I am sorry, but having ~1,600 (rough estimate) functions in an API is bloated in my book regardless of how good the API (or its creator's intentions) might be.

Make it 1900 (as counted on the release of the 3.2 specs, the total is even higher now). The deprecation model reduces that number a little, but the number is still huge.

Alfonse Reinheart
01-15-2010, 07:15 PM
I am sorry, but having ~1,600 (rough estimate) functions in an API is bloated in my book regardless of how good the API (or its creator's intentions) might be.

Make it 1900 (as counted on the release of the 3.2 specs, the total is even higher now). The deprecation model reduces that number a little, but the number is still huge.

The deprecation model covers core OpenGL. glext.h has every possible API call from every extension. You're not going to use most of them.

Furthermore, a header file is not what matters to an API's "bloatedness". What makes an API bloated is what is necessary to use the API. Having to use lots of function calls to do something relatively simple is bloated. Redundant APIs are bloated.

The mere existence of hundreds of extensions is not bloat; it's just a file's size. Indeed, if you want, you can generate a lean&mean GL header that provides only the APIs and extensions that you want.

kRogue
01-15-2010, 09:33 PM
just a comment:

gl3.h has only 323 function entry points... though that omits all extensions and all deprecated functions. However, counting every function in gl.h and glext.h is going to produce a lot of repeats from extensions that were promoted vendor-specific -> EXT -> ARB -> core.

Also, a previous comment that DX10 features are not a big deal is complete and utter compost:

Native integer support in shaders is an incredibly big deal.
Mixed formats in render to texture is also an incredibly big deal.
Geometry shaders are a big deal (toon shading, funky per-primitive dependent lighting, stencil shadows, etc).
The DX10 analogues of texture buffer and uniform buffer objects are also a really, really big deal.

Notice something here about the features though: they are in GL3 and were all exposed as extensions a long time ago too.

For those who complain that writing for extensions makes life hard, what about the cruel fact that in DX land different code paths are necessary to work around different driver issues on ATI and nVidia (and, for giggles, Intel)? GL is great in that hardware makers can expose functionality of their hardware *now*. Moreover, if you are in a situation where you dictate what hardware is to be used, GL extensions are absolutely wonderful.

That Alfonse fears nVidia's bindless graphics is just bizarre. I think he just did not get it. The bits about using it for vertex attributes are uber-easy to hide behind an inlined wrapper layer, for Pete's sake. (The more interesting bit, pointers in shaders, is so kick-ass though.)

On the DX9 vs DX10 deal: that lots of games only require DX9 does not imply DX10 is not in wide use: a great number of games have both a DX9 and a DX10 renderer.

My thoughts are that tessellation is kind of overrated; also keep in mind that GPU tessellation in a variety of forms is something ATI hardware has had for a very, very long time (anyone remember ATI TruForm?). But who knows, maybe it will be a real big deal and make LOD a heck of a lot easier.

As for the whole agl, wgl, glX, egl mess: it is really small fries. There are a variety of libraries out there that handle that for you (SDL, Qt, etc) so it really is not a big deal.
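
For instance, here is a rough sketch (assuming SDL 1.2, which was current at the time; the attribute names come from SDL's own headers) of what hiding the wgl/glX/agl differences behind such a library looks like:


#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

int main(int argc, char **argv)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    /* Ask SDL for a double-buffered, multisampled context; it talks to
       wgl, glX or agl on our behalf. */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);

    if (SDL_SetVideoMode(800, 600, 32, SDL_OPENGL) == NULL) {
        SDL_Quit();
        return 1;
    }

    /* The context is current; normal GL calls work from here on. */
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapBuffers();

    SDL_Quit();
    return 0;
}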

ZbuffeR
01-16-2010, 08:54 AM
I have been reading this thread for quite some time, watching the SNR start low and stay low.

It just went up significantly thanks to kRogue's post.

Alfonse Reinheart
01-16-2010, 03:55 PM
Native integer support in shaders is an incredibly big deal.
Mixed formats in render to texture is also an incredibly big deal.
Geometry shaders are a big deal (toon shading, funky per-primitive dependent lighting, stencil shadows, etc).
The DX10 analogues of texture buffer and uniform buffer objects are also a really, really big deal.

Compare this to the difference between DX8 and DX9. Fragment shaders in particular went from being essentially highly configurable fixed functionality to full-blown programs in a real language. The utility of this is absolutely monstrous.

The difference between DX9 and DX10 is certainly there. But it is nothing compared to the DX8-9 transition.

Take native integer support. How big of a deal is that, exactly? You get some bit-manipulation stuff. But how much can you really do that you absolutely couldn't do before? Cartoon rendering has been done since the DX8 days, so clearly it isn't dependent on geometry shaders. The same goes for stencil shadows. You might have better or faster versions of these with geometry shaders (though considering the performance of GS's, I highly doubt the latter). But you could still do it before. You can work around the DX9 deficiencies one way or another.

Again, I'm not saying that DX10 features are negligible or unimportant. But you could live without it if you had to. And if supporting DX10 means spending a lot of money just to reach the 5% of people (at the time) who had a DX10 card and Vista, there was simply little reason to do so.

It's a clear sign of diminishing returns: 8-9 was much more significant than 9-10. Which is more significant than 10-11. By the time DX12 rolls around, about the only significant advances left would be honest-to-God recursion and pointers in shaders, and framebuffer readback in the fragment shader/blend shaders.

Also, a small factual point. I'm pretty sure that multiple render targets of different formats have nothing to do with DX10. Well, not the hardware itself anyway; I don't know enough about the Direct3D API to know whether D3D9 limited MRTs in this way. ARB_framebuffer_object has been implemented on some DX9 hardware just fine, and it doesn't require all color images to use the same internal format. That restriction was clearly not present on at least some DX9 hardware.


For those who complain that writing for extensions makes life hard, what about the cruel fact that in DX land different code paths are necessary to work around different driver issues on ATI and nVidia (and, for giggles, Intel)?

I'm not sure I understand what you're getting at here. On both platforms, driver bugs require workarounds. This is true.

However, on OpenGL, simple use of supposedly basic features can require workarounds. Not because of driver bugs, but because implementations expose the functionality through different extensions. You have to use EXT_geometry_shader4 on MacOSX, because they haven't implemented ARB_geometry_shader4 or 3.2 core geometry shading. So enter another codepath.
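
A sketch of what that codepath selection tends to look like in practice (the has_* flags are assumed to come from version/extension queries, and prog is a not-yet-linked program object):


static void set_gs_layout(GLuint prog, int has_gl32, int has_arb_gs4, int has_ext_gs4)
{
    if (has_gl32) {
        /* 3.2 core: the input/output primitive types live in the GLSL
           layout() qualifiers, so there is nothing to set here. */
    } else if (has_arb_gs4) {
        glProgramParameteriARB(prog, GL_GEOMETRY_INPUT_TYPE_ARB, GL_TRIANGLES);
        glProgramParameteriARB(prog, GL_GEOMETRY_OUTPUT_TYPE_ARB, GL_TRIANGLE_STRIP);
        glProgramParameteriARB(prog, GL_GEOMETRY_VERTICES_OUT_ARB, 3);
    } else if (has_ext_gs4) {
        glProgramParameteriEXT(prog, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(prog, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glProgramParameteriEXT(prog, GL_GEOMETRY_VERTICES_OUT_EXT, 3);
    }
}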

Compound this with the fact that ATI drivers on OpenGL are objectively buggier than ATI's D3D drivers.

Yes, you will have to do workarounds and special code paths on both D3D and OpenGL, for any serious rendering code. However, it is just as true that your OpenGL program will require more of these. Differences between NVIDIA and ATI, between Windows and Linux, between Windows, Linux and MacOSX. Lots and lots of codepaths and workarounds.

And remember: the whole purpose of a cross-platform API is to be able to write code for one platform, and have it run on another with minimal effort. If you have to rewrite half of your code to work on another platform, it becomes very difficult to argue that the API is really cross-platform.


I have been reading this thread for quite some time, watching the SNR start low and stay low.

I was unaware that Signal meant "comments I agree with" and Noise meant "comments I disagree with."

Igor Levicki
01-16-2010, 06:28 PM
glext.h has every possible API call from every extension. You're not going to use most of them.

Me not using those extensions does not make them non-existent or unimplemented. They are there, and this phenomenon of API bloat has been called feature creep (http://en.wikipedia.org/wiki/Feature_creep).


What makes an API bloated is what is necessary to use the API. Having to use lots of function calls to do something relatively simple is bloated. Redundant APIs are bloated.

That is exactly the OpenGL you are speaking of, and "relatively simple" is a relative term anyway.

Let us see how many function calls it takes to enable multisample support on Windows:



RegisterClassEx()        // register a throwaway window class
CreateWindowEx()         // create a dummy window that is never shown
GetDC()
ChoosePixelFormat()      // pick a plain pixel format (no multisampling yet)
SetPixelFormat()
wglCreateContext()       // create a dummy context...
wglMakeCurrent()         // ...and make it current, just so wglGetProcAddress works
wglGetProcAddress()      // fetch wglChoosePixelFormatARB
wglMakeCurrent()         // unbind the dummy context
wglDeleteContext()       // and throw it away
ReleaseDC()
DestroyWindow()
UnregisterClass()

All the above just to get one pointer!

GetDC()
wglChoosePixelFormatARB()  <- use the pointer to pick a multisampled pixel format
SetPixelFormat()

Now rinse and repeat:

wglCreateContext()
wglMakeCurrent()
...
glEnable(GL_MULTISAMPLE_ARB) <- finally


That is 19 function calls (assuming that everything goes without a single error) just to be able to use multisampling. Hello?!?

I am sure someone will now say that this is platform specific initialization, and that most of the functions are not from the OpenGL API itself, but in my opinion that doesn't make the OpenGL design less hideous.

That was just an anti-aliased rendering context; how many function calls does it take to set up a single texture?

In my opinion, way too many. I understand that OpenGL is an essentially low-level API offering fine-grained control over the rendering process, but it is starting to dangerously resemble Linux distros by including everything and the kitchen sink while remaining hard to learn and use even for more experienced developers.

Don't even get me started on FBOs, 3D textures, and OpenGL interface to GLSL.


The mere existence of hundreds of extensions is not bloat; it's just a file's size.

When was the last time you looked at the OpenGL ICD file size?

How about atioglxx.dll and nvoglnt.dll which are 13 MB in size -- does that ring any alarm bells?


Indeed, if you want, you can generate a lean&mean GL header that provides only the APIs and extensions that you want.

Yeah, the only problem is that with that many extensions (http://articles.latimes.com/2009/mar/16/health/he-choices16) I don't know which ones I want.

Alfonse Reinheart
01-16-2010, 09:59 PM
That was just an anti-aliased rendering context; how many function calls does it take to set up a single texture?

In my opinion, way too many.

Really, if you're looking for bloat in the OpenGL API, you picked the worst example for it. Creating a texture is about the most efficient (in terms of API calls) thing OpenGL does. And all of the functions exist for entirely legitimate reasons.

You create a texture object. You then bind it for use. Then you give it a format + data (or just a format). Then you tell it how you want to use it (texture parameters).
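
As a rough sketch of that sequence (width, height and pixels are placeholders):


GLuint tex;
glGenTextures(1, &tex);                          /* create the object */
glBindTexture(GL_TEXTURE_2D, tex);               /* bind it for use */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,         /* give it a format + data */
             width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   /* usage */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);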

Is it possible that there could be one (gigantic) function call to do all of these? Yes. But does it make sense that you might not want all of this to happen from one function? Yes. I might want to change texture parameters later; I'd hate to have to rebuild a texture just for that. Same goes for uploading parts of the image.

The most you could legitimately argue about is the creation of the texture object being separate from specifying the format (i.e., being able to respecify the data for a texture, as well as having the concept of an "uninitialized" texture), and having to bind the texture in order to modify it (rather than an object-based API).

Both of those are holdovers from GL 1.0-style OpenGL Objects. I don't like them, but the ARB has tried to change it twice; both efforts failed.


Don't even get me started on FBOs, 3D textures, and OpenGL interface to GLSL.

We can talk about GLSL (mainly the shader/program dichotomy, though attaching textures to programs is rather esoteric). But 3D textures are no more complicated than 2D ones; they require the same number of functions. And FBOs... exactly what kind of API are you looking for here? I don't know of a way to make that any simpler without losing functionality.

OpenGL isn't perfect. It's not a particularly streamlined API. But the core 3.2 API is, in many places, quite reasonable.


How about atioglxx.dll and nvoglnt.dll which are 13 MB in size -- does that ring any alarm bells?

Should it? Considering that it contains a full preprocessor+compiler+linker, I would have expected it to be bigger.

I would also point out that implementations shoulder 100% of the backwards compatibility burden. In Direct3D, Microsoft shoulders much of that burden.

I'd say a good half of that file size is backwards compatibility. Handling old, antiquated functionality that nevertheless must be kept running. Code that deals with register combiners, texture shaders, ATI_fragment_program, the ARB assembly programs, and such. There are programs out there in common use that still call those things, so ATI and NVIDIA must continue to provide support for them.

I imagine if you looked at the equivalent ATI/NVIDIA offerings for MacOSX, they'd be much smaller.


Yeah, the only problem is that with that many extensions I don't know which ones I want.

The ones you want are the ones you need. If you need geometry shaders, you use ARB_geometry_shader4. If you need uniform buffers, you use ARB_uniform_buffer_object.

This stuff isn't hard to work out. You don't pick extensions based on whim; you pick extensions because you need the functionality they provide to do something.
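
A sketch of how that check typically looks with a GL 3.x context (older contexts would parse the glGetString(GL_EXTENSIONS) string instead):


#include <string.h>

static int has_extension(const char *name)
{
    GLint i, count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (i = 0; i < count; ++i)
        if (strcmp((const char *)glGetStringi(GL_EXTENSIONS, i), name) == 0)
            return 1;
    return 0;
}

/* Pick exactly what you need, nothing more: */
int use_geometry_shaders = has_extension("GL_ARB_geometry_shader4");
int use_uniform_buffers  = has_extension("GL_ARB_uniform_buffer_object");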

Igor Levicki
01-17-2010, 10:21 AM
Is it possible that there could be one (gigantic) function call to do all of these? Yes. But does it make sense that you might not want all of this to happen from one function? Yes. I might want to change texture parameters later; I'd hate to have to rebuild a texture just for that. Same goes for uploading parts of the image.

Has it occurred to you that there could be one function that accepts a structure pointer with all of the above parameters that are currently being set using separate functions?

You could have a flags member in the structure to tell the function which members are valid (in other words, what you want the function to do -- e.g. just change parameters, rebuild, rebuild & change, etc.). PIXELFORMATDESCRIPTOR is an example of how it should be done with textures as well.

There is one thing that pisses me off in any API -- functions with gazillions of parameters that are being passed by value.

They not only look ugly and make the code harder to read and maintain, but they are also less efficient than functions that receive a single structure pointer -- passing a pointer to data is much more efficient than copying the data around.


We can talk about GLSL (mainly the shader/program dichotomy, though attaching textures to programs is rather esoteric). But 3D textures are no more complicated than 2D ones; they require the same number of functions. And FBOs... exactly what kind of API are you looking for here? I don't know of a way to make that any simpler without losing functionality.

I was talking about those three as a whole, as when writing GPGPU applications. I haven't noticed any effort in making OpenGL easier to use for GPGPU in recent API revisions. I hope that it will be different with OpenCL interoperability.


Should it? Considering that it contains a full preprocessor+compiler+linker, I would have expected it to be bigger.

Actually, the compiler is (at least for NVIDIA) in a separate file -- nvcompiler.dll, which is itself 10.8 MB in size; the 13 MB is just the current OpenGL implementation.


I'd say a good half of that file size is backwards compatibility. Handling old, antiquated functionality that nevertheless must be kept running. Code that deals with register combiners, texture shaders, ATI_fragment_program, the ARB assembly programs, and such. There are programs out there in common use that still call those things, so ATI and NVIDIA must continue to provide support for them.

I guess that it never occurred to them that they could write modular display drivers where some parts are installed and used on demand?


I imagine if you looked at the equivalent ATI/NVIDIA offerings for MacOSX, they'd be much smaller.

You leave too much to the imagination; I checked, and the results are as follows:



GeForce7xxxGLDriver - 12.3 MB
GeForce8xxxGLDriver - 13.4 MB



You don't pick extensions based on whim; you pick extensions because you need the functionality they provide to do something.

Yes, unless there are several extensions that do similar if not identical things, and unless the extension you need is known not to work properly with a certain vendor.

Ilian Dinev
01-17-2010, 11:22 AM
Pardon me while I download this 109MB DX9 redistributable, the 50MB DX10 one, the 200MB DX8 one, the 30MB DX7 one, etc etc. Oh wait, this new game crashes on start, asking for some "d3dx9_2437892.dll"; didn't I already download something like that last week? (looking from a user's perspective)

About GPGPU: how about transform-feedback? And the CL interop.

Which are those extensions?

Modular approach, download on demand? When HDDs are enormous?

Multiple func-args? You know you can put line-breaks in your code here and there, right?

Grasping at straws already, it seems :(.

Alfonse Reinheart
01-17-2010, 03:09 PM
Has it occurred to you that there could be one function that accepts a structure pointer with all of the above parameters that are currently being set using separate functions?

Very well. Let's say you do that.

Today is now the time of OpenGL version 1.1. The available texture parameters correspond to a struct that looks like this:



struct TextureParameters
{
GLenum rWrapMode;
GLenum sWrapMode;
GLenum tWrapMode;
GLint minMipMap;
GLint maxMipMap;
GLenum magFilter;
GLenum minFilter;
};


That's all well and good, yes? Now the GeForce 256 comes out. That means anisotropic filtering is now available. There is no place in this struct for a max anisotropy field.

In this world, you must now wait until a full API revision comes out that provides for this. It must revise the struct to add new fields for this. This changes the API. It makes it that much more difficult to support the old API and the new one (since they use different structs).

In the OpenGL world, an extension comes out with a new texture parameter enumerator. Your code now looks like this:



glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, ...);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, ...);
if(supportsAnisotropic)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, ...);


One if-statement. No new function pointers, no new structs. This code will work just fine on the old hardware and on the new. You don't have to wait for a central authority to say "OK, you can use this hardware now."

Now, I know you're going to say that this will just increase the "bloat" of the API, because the EXT extension will eventually be superseded by the ARB and/or core version. Yes, it will eventually be superseded.

However, when promoting extensions that undergo minimal API change (like when they promote anisotropic filtering to core), they don't change the enum values. GL_TEXTURE_MAX_ANISOTROPY_ARB will be the same value as GL_TEXTURE_MAX_ANISOTROPY_EXT, which will be the same value as GL_TEXTURE_MAX_ANISOTROPY (for core). So the only change your code would require is the if-condition.

The implementation's code size will not increase at all. All that happens is that you get another #define in your OpenGL headers.


I haven't noticed any effort in making OpenGL easier to use for GPGPU in recent API revisions.

OpenGL is a graphics API. GPGPU is for OpenCL, which is for general-purpose computations.

OpenGL should not have any efforts made to make it easier to do GPGPU stuff. GPGPU interop? Sure. But OpenGL should be for rendering things. Not GPGPU.


I guess that it never occurred to them that they could write modular display drivers where some parts are installed and used on demand?

Why should they? I mean really, is that 13MB actually hurting your application in any significant way? The time and effort it would take to modularize their .dll could be better spent dealing with driver bugs.

Your concept of "bloat" is simply not useful in the modern world where 1GB of RAM and virtual memory is the norm. 13MB is not "bloat". And an API that does not look the way you want it to look is not "bloat" either.

BTW, as Ilian pointed out, the D3D dlls aren't exactly small either.

Brolingstanz
01-17-2010, 10:09 PM
>> I hope that it will be different with OpenCL interoperability.

From a bird's eye view of things, CL interop really isn't all that different from DX compute. Apart from the convenience of a single shading language and compiler in DX, the basic operation seems essentially the same (inasmuch as it is orthogonal to the task of generating fragments).

Back to building a card house using only my nose...

Xmas
01-18-2010, 10:53 AM
In the OpenGL world, an extension comes out with a new texture parameter enumerator. Your code now looks like this:



glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, ...);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, ...);
if(supportsAnisotropic)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, ...);


One if-statement. No new function pointers, no new structs. This code will work just fine on the old hardware and on the new. You don't have to wait for a central authority to say "OK, you can use this hardware now."
Extensibility is important, but you could achieve the same functionality and extensibility with attribute lists (like EGL does).
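
For comparison, this is roughly what the EGL-style attribute list looks like (display setup omitted):


static const EGLint attribs[] = {
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_RED_SIZE,     8,
    EGL_GREEN_SIZE,   8,
    EGL_BLUE_SIZE,    8,
    EGL_DEPTH_SIZE,   24,
    EGL_SAMPLES,      4,
    EGL_NONE            /* terminator: new attributes can be added later
                           without changing any function signature */
};

EGLConfig config;
EGLint num_configs;
eglChooseConfig(display, attribs, &config, 1, &num_configs);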

And texture creation is pretty inefficient. Maybe not so much regarding the number of calls (although I'd say it's bad there as well), but certainly when you look at what the implementation has to do internally. What's the point in being able to create incomplete textures again?

Igor Levicki
01-18-2010, 12:07 PM
I specifically said "like PIXELFORMATDESCRIPTOR".

That means (pseudo code):



TextureParameters [] = {
R_WRAP_MODE, CLAMP,
S_WRAP_MODE, CLAMP,
T_WRAP_MODE, CLAMP,
MIN_MIPMAP, 0,
MAX_MIPMAP, 3,
MAG_FILTER, LINEAR,
MIN_FILTER, NEAREST,
};



There is no place in this struct for a max anisotropy field.

There is, if you do as I suggested:



...
MIN_FILTER, NEAREST,
MAX_ANISO, 16
};


Now you don't even need an if, or a query of the extension string to see whether anisotropy is supported -- you just put the parameter in there and the driver ignores it if it doesn't understand it.
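
As a sketch of the proposed behaviour (hypothetical driver-side code, not an existing GL API; the enumerator names mirror the list above):


enum { END_OF_LIST = 0, MIN_FILTER, MAG_FILTER, MAX_ANISO };

static void set_texture_params(const int *list)   /* { name, value, ..., END_OF_LIST } */
{
    for (; list[0] != END_OF_LIST; list += 2) {
        switch (list[0]) {
        case MIN_FILTER:
        case MAG_FILTER:
            /* known parameter on this driver: apply list[1] */
            break;
        default:
            /* unknown (e.g. MAX_ANISO on pre-anisotropy hardware):
               report it and keep processing the rest of the list */
            break;
        }
    }
}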


OpenGL is a graphics API. GPGPU is for OpenCL, which is for general-purpose computations.

OpenGL stopped being a graphics API once shaders became programmable. The same goes for DirectX. OpenCL didn't exist when I started writing GPGPU applications. It's shortsighted to stick to that deprecated point of view.


Why should they? I mean really, is that 13MB actually hurting your application in any significant way?

Yes it is.

It is bug-ridden, and its size and complexity are obviously well past the point of maintainability. It should be broken into small, more manageable pieces and delegated to dedicated teams of programmers so that each team can focus on a particular piece of functionality.

I like Creative's idea of modes in their new drivers -- Audio Creation mode, Gaming mode, Entertainment mode. Each of those modes requires different setup of the audio engine and the DSP.

I believe that the same approach could work for video drivers.

Alfonse Reinheart
01-18-2010, 12:29 PM
you just put the parameter in there and the driver ignores it if it doesn't understand it.

Nope. An error should be generated if there is an invalid value. Just like with wglGetPixelFormatAttribivARB.

Furthermore, nobody is describing the OpenGL API as absolutely perfect. But glTexParameter is not causing driver bugs. The internal implementation costs of glTexParameter vs. the attribute list method are negligible at best.


OpenGL stopped being a graphics API once shaders became programmable. The same goes for DirectX. OpenCL didn't exist when I started writing GPGPU applications. It's shortsighted to stick to that deprecated point of view.

No, it isn't.

You used OpenGL to do GPGPU simply because it was the only tool available. It wasn't because it was a good tool for it, and it wasn't because it was the right tool for it. It was simply what you had, and you did the best you could with what was around.

OpenGL is for graphics. That's what it is designed for. Its programmable pipeline exists to better render things. Yes, programmability can be co-opted to do general-purpose computation. But the API and pipeline exist to do graphics.

A graphics API should not be modified/changed just for GPGPU tasks. We have OpenCL for that now; an API that is far better at GPGPU than OpenGL.


It is bug-ridden, and its size and complexity are obviously well past the point of maintainability.

And why is that? It has nothing to do with the API; D3D has a similarly large runtime (if not larger, due to the backwards compatibility needs). So clearly, the cause must be that handling 3D rendering tasks with proper optimizations requires a lot of effort.

I'm not saying that OpenGL couldn't be better about it. Certainly the Longs Peak revision would have helped driver stability a lot. So would the original GL 2.0 revision. But guess what? Not gonna happen.

OpenGL will not get a complete rewrite. OpenGL will not get modifications to its basic, core API (replacing glTexParameter with attribute creation, allowing for immutable objects, etc). These things simply will not be done. It is better to focus on the things that can be done, rather than the things that can't. It sucks, but that's the way it is.


I like Creative's idea of modes in their new drivers

You just used "like" and "Creative's drivers" in the same sentence. Creative's drivers are almost the entire reason Microsoft banished hardware audio from DirectX in Vista/Win7. A lot of Windows instability could be traced to Creative's incredibly terrible drivers. Indeed, I can't recall a BSOD I got on XP that didn't involve audio drivers being the culprit.

I'll take ATI's near-OpenGL over anything Creative puts out.


I believe that the same approach could work for video drivers.

You mean like how OpenGL does graphics and OpenCL does GPGPU? Different uses for the same hardware, with different setups for the pipeline and execution models?

Igor Levicki
01-20-2010, 06:34 AM
Nope. An error should be generated if there is an invalid value. Just like with wglGetPixelFormatAttribivARB.

That would break new applications if you run them on an old driver. A warning perhaps, but not an error that would lead to process termination.


The internal implementation costs of glTexParameter vs. the attribute list method are negligible at best.

Your view is API-centric, my view is developer-centric. To me it is easier to maintain parameter lists than dozens of glTexParameter() calls scattered all over the code. My point from the beginning is that the OpenGL API isn't developer friendly.

By trimming down and reorganizing the API, and by improving the documentation beyond the dry "write to the spec or die" level, OpenGL might see more use in the future. If that "ain't gonna happen," as you say, then there is simply no future for OpenGL.


No, it isn't.

If you are right, then OpenGL should have steered clear of any GPGPU features -- that is not what happened. Better interoperability is a logical consequence and can be expected to improve further.


A graphics API should not be modified/changed just for GPGPU tasks.

I never suggested that, I just said that better interoperability would be a plus.


You just used "like" and "Creative's drivers" in the same sentence.

Yes I did, and it seems that you have some prejudice when it comes to Creative.


Creative's drivers are almost the entire reason Microsoft banished hardware audio from DirectX in Vista/Win7.

That is a myth.

Microsoft has their own reasons. One of them was probably reducing the complexity and maintenance costs of the audio stack, and another is most likely the Xbox, which doesn't have EAX -- Microsoft didn't want games on the Xbox to sound worse than on a PC, because then people wouldn't buy consoles and console games.

Anyway, there is ALchemy which restores this functionality, and other sound card vendors are providing similar solutions as well.


A lot of Windows instability could be traced to Creative's incredibly terrible drivers. Indeed, I can't recall a BSOD I got on XP that didn't involve audio drivers being the culprit.

Most instability problems that got blamed on Creative's audio drivers were caused by non-compliant PCI bus implementations in non-Intel chipsets.

I have owned several Creative cards (Live 5.1, Audigy 2 ZS, and now X-Fi Titanium Pro PCI-E) and never had a single BSOD, but then I only used Intel chipsets and boards from tier-one manufacturers such as Asus and Gigabyte.

In fact, most of the Vista crashes were due to buggy video drivers from both vendors (with NVIDIA leading the pack).


I'll take ATI's near-OpenGL over anything Creative puts out.

I have both Creative X-FI and ATI HD 5850 in Windows XP and Windows 7 X64. One of them has a lot of issues, and it is not Creative.


Different uses for the same hardware, with different setups for the pipeline and execution models?

Something like that, yes. For example, work with CAD/CAM requires different capabilities than gaming or 2D.

Ilian Dinev
01-20-2010, 09:04 AM
To me it is easier to maintain parameter lists than dozens of glTexParameter() calls scattered all over the code. My point from the beginning is that the OpenGL API isn't developer friendly.
That's just a preference. Calls are easier to maintain for me; no need to watch out for array sizes, etc. Plus, streaming increased texture LOD across frames while gradually changing the bias is possible and easy.



I never suggested that, I just said that better interoperability would be a plus.
Didn't look like that.




Yes I did, and it seems that you have some prejudice when it comes to Creative.
I love Creative cards, write audio software, and use their cards heavily, and my experience is that only Intel's GMA drivers steal the crown from Creative. The turning point is when several apps use DSound -- on Asus mobos with Intel chipsets, too. Anyway, their inability to handle multiple identical sound cards spoke volumes, too.


Back to the subject:
DX11 cards won't be adopted quickly, so why the hurry? And besides, it's just one extra extension that isn't as game-changing as SM4 and the abandonment of fixed-function. SM4 cards have been selling at a steady 22 million/quarter at an average unit price of $180 for two years already, and the vast majority of users are quite happy with their 8800s and 4xx0s, while newer cards don't justify the price/performance of an upgrade.
I'm starting to doubt you've used GL 3.2 in a pure (core) context. It's slim, as strict as a Microsoft spec sheet, ATI's implementation surpasses expectations, it can use DX-format data, and it has integrated objects that can be nicely optimized soon. You don't have access to the deprecated API and extensions.
Now also look at the glBindXXX that you're hating, and measure its RDTSC performance. You'll see you can safely wrap it to look and work the way you want at no performance penalty.
The missing FX/#include/math libs/etc. you can copy/paste/refactor from the net in a few minutes to work the way you want, or just use some plug-and-play lib. Debugging tools are the remaining problem, but IMHO it's almost as fast and easy to debug manually as it is to find the correct frame and data in PIX (no, my cases are not simplistic, I'm simply not spoiled).

Alfonse Reinheart
01-20-2010, 12:01 PM
That would break new applications if you run them on an old driver.

That's why the "if" is there. You need to be able to reject invalid input, so that the user of the program knows that they did something wrong.


Your view is API-centric, my view is developer-centric. To me it is easier to maintain parameter lists than dozens of glTexParameter() calls scattered all over the code.

Your view is more API-centric than mine. I have long since abstracted OpenGL out of my application. I have a nice API that uses objects and such, while under the hood it makes all of those OpenGL API calls.

Indeed, I would call it very poor coding style to have "dozens of glTexParameter() calls scattered all over the code." Your graphics calls should be localized within a single module, not scattered here and there. Your texture calls should have their own sub-module within that graphics module, not scattered here and there.


If you are right, then OpenGL should have steered clear of any GPGPU features -- that is not what happened.

Name a single OpenGL feature that has no valid rendering use, and would therefore qualify as a "GPGPU feature." Shaders have a valid rendering use. Uniform buffers and texture buffers have a valid rendering use. Transform feedback has a valid rendering use. And so on.

Nothing in OpenGL exists specifically to serve the needs of GPGPU developers.


Yes I did, and it seems that you have some prejudice when it comes to Creative.

If by "prejudice" you mean "have had experience with, thus can make reasonable arguments against," then yes, I have "prejudice" when it comes to Creative. When your drivers are responsible for consistently crashing my machine, I become "prejudice" against you.

Brolingstanz
01-23-2010, 04:55 AM
>> Your view is API-centric, my view is developer-centric.

What's the saying? A camel is a horse designed by a committee?

Don't mean to imply anything by that other than to humorously restate the plainly obvious.

glfreak
02-04-2010, 10:19 AM
Better work on the shading language and catch up with HLSL instead.

The API is fine.

Brolingstanz
02-15-2010, 02:58 PM
Got pretty excited recently when I saw the token AMD_program_binary_Z400 output from an enum.spec tool... until I realized it was for ES only, that is. :-(

Why can't those ES weirdos have their own spec file? :-)

Igor Levicki
02-23-2010, 07:38 AM
That's why the "if" is there. You need to be able to reject invalid input, so that the user of the program knows that they did something wrong.

Except that the input is not invalid, the old driver simply doesn't know it is valid.


I have long since abstracted OpenGL out of my application. I have a nice API that uses objects and such, while under the hood it makes all of those OpenGL API calls.

You see, that's the problem. You, me, and who knows how many other developers are abstracting the OpenGL API to make it easier to use. Wouldn't it be better if that level of abstraction already existed so we don't have to reinvent the wheel?

If your garden hose is leaking in 1,000 places, you turn off the water instead of plugging the holes.


Indeed, I would call it very poor coding style to have "dozens of glTexParameter() calls scattered all over the code."

You are reading into my post too literally.

When I said "scattered all over the code" I was thinking about quantity, not locality.


Nothing in OpenGL exists specifically to serve the needs of GPGPU developers.

Then OpenGL shouldn't have accepted anything beyond fixed-function hardware. You say GPGPU has no place in OpenGL, yet at the same time you seem to accept the concept of programmability.

Alfonse Reinheart
02-23-2010, 12:05 PM
Except that the input is not invalid, the old driver simply doesn't know it is valid.

The driver decides what is valid. That's half the point of having validation. If the driver doesn't know that it is valid, then by definition, it is not valid.

Otherwise, everything is valid and there is no validity checking at all. That's really bad for getting good error reporting.


If your garden hose is leaking in 1,000 places, you turn off the water instead of plugging the holes.

Not if you want to actually use it as a hose. Turning off the water means "I give up."

Also, the purpose of an abstraction is not just to make an API look better. It is to improve locality of code and make maintaining that code easier. It also allows you to change the backend of some code without changes to the users of that code. For example, if I wanted to take my abstraction and have a Direct3D implementation, I could do so with no changes to any outside code.

Abstractions are a good idea regardless of the API. If I were using D3D, I'd have written an abstraction around that too.
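
A bare-bones sketch of the kind of interface being described (the names are illustrative, not Alfonse's actual code):


enum Filter { FILTER_NEAREST, FILTER_LINEAR };

class Texture2D {
public:
    virtual ~Texture2D() {}
    virtual void upload(int width, int height, const void *rgba8) = 0;
    virtual void setFilter(Filter minFilter, Filter magFilter) = 0;
};

class Renderer {
public:
    virtual ~Renderer() {}
    virtual Texture2D *createTexture2D() = 0;
    // ... draw calls, framebuffer management, shader objects, etc.
};

// GLRenderer and D3DRenderer each implement Renderer in their own module;
// the rest of the application only ever sees the interface above.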


When I said "scattered all over the code" I was thinking about quantity, not locality.

Same difference. There are about 15 enumerators that can be used with glTexParameter. That means the largest number of these calls you should have in your code is around 15. Maybe a few more for different variations of things. But that's it.


Then OpenGL shouldn't have accepted anything beyond fixed function hardwre. You say GPGPU has no place in OpenGL and at the same time you seem to be accepting programmability concept.

A graphics API needs to be able to do graphics things. That includes shaders.

Being able to program graphics tasks in shaders is not the same as being able to compute arbitrary things. Just look at OpenCL: it doesn't look anything like OpenGL.

OpenGL defines a pipeline with specific programmable stages, and very specific things that can and cannot be done within each stage. This is not what a GPGPU pipeline would look like. It is what a rasterization-based shader pipeline would look like. They are not the same thing.

Igor Levicki
02-23-2010, 01:31 PM
The driver decides what is valid. That's half the point of having validation. If the driver doesn't know that it is valid, then by definition, it is not valid.

It is not by definition, but by assumption: "if I don't see the Sun shining, then it must be night time" -- as if it couldn't be that the person is in a room without windows, is blind, or that there are heavy clouds outside.


Otherwise, everything is valid and there is no validity checking at all.

You are misunderstanding me -- there is a big difference between "everything is valid" and "I will ignore and report anything I cannot process, but will keep processing what I know".


Not if you want to actually use it as a hose. Turning off the water means "I give up."

You are taking my words too literally again. What I meant is -- you turn the water off for a short period of time until you replace the hose with a new one that doesn't leak.

Using a hose as a hose makes sense if that hose does what it was built for -- delivering water to a point of your choosing.

This hose with 1,000 holes leaves 1,000 people dwelling in the mud to accomplish the same goal.


A graphics API needs to be able to do graphics things. That includes shaders.

"Shaders" are just a fancy name for a bunch of high-level language constructs -- they are a programming language.

Programming languages were invented for general purpose computing, not for graphics.


OpenGL defines a pipeline with specific programmable stages, and very specific things that can and cannot be done within each stage.

I only wished for better interaction between those stages and GPGPU, nothing more.

ZbuffeR
02-23-2010, 02:05 PM
Just use OpenCL for GPGPU together with OpenGL for visualisation and get over it already.

GLSL alone is not Turing-complete. It works on very limited data.
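
A minimal sketch of that division of labour (assuming a cl_context created with GL sharing enabled and a valid command queue; error handling omitted):


cl_int err;
cl_mem clbuf = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);

glFinish();                                   /* GL is done touching vbo */
clEnqueueAcquireGLObjects(queue, 1, &clbuf, 0, NULL, NULL);
/* ... clSetKernelArg()/clEnqueueNDRangeKernel() operating on clbuf ... */
clEnqueueReleaseGLObjects(queue, 1, &clbuf, 0, NULL, NULL);
clFinish(queue);                              /* CL is done, GL may draw */

glBindBuffer(GL_ARRAY_BUFFER, vbo);
/* glVertexAttribPointer()/glDrawArrays() as usual */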

Alfonse Reinheart
02-23-2010, 02:28 PM
It is not by definition, but by assumption: "if I don't see the Sun shining, then it must be night time" -- as if it couldn't be that the person is in a room without windows, is blind, or that there are heavy clouds outside.

Does it matter? If I pass a parameter that the current implementation of OpenGL can't understand, does it matter that a future one does or might?

No, it does not. What matters is that, as far as this OpenGL implementation is concerned, I have asked it to do something nonsensical.


You are misunderstanding me -- there is a big difference between "everything is valid" and "I will ignore and report anything I cannot process, but will keep processing what I know".

But that's bad behavior.

If I give a process, any process, an atomic operation, I expect it to either all work, or all fail. If it fails, it should do none of what I ask for, because some of the other parts of that operation may depend on the existence of the non-existent part.

Just like calling a function with bad parameters, if any of the parameters are bad, the function throws an error and no state is changed.


What I meant is -- you turn the water off for a short period of time until you replace the hose with a new one that doesn't leak.

Using a hose as a hose makes sense if that hose does what it was built for -- delivering water to a point of your choosing.

And during that time, the time when you're trying to replace the hose, water isn't going where you need it to. That's part of the reason why Longs Peak failed; they weren't delivering water and people needed water.

A leaky hose may be leaky, but it is functional. Turning off the water ensures that it is non-functional. And if you actually want things to work, functional-but-leaky is still better than non-functional. You may leave "1,000 people dwelling in the mud" but you're still getting water where it is supposed to go.


"Shaders" are just a fancy name for a bunch of high-level language constructs -- they are a programming language.

Programming languages were invented for general purpose computing, not for graphics.

Nonsense. Programming languages were invented so that you could make machines that are more flexible than hardware could allow. That's how shaders came to be: they were the natural extension of more flexibility in various stages of the rendering pipeline.


I only wished for better interaction between those stages and GPGPU, nothing more.

And why should an API designed for graphics care about "better interaction" with an API designed for general purpose computation? All you need is for the two of them to access each other's data. Anything more is just changing the graphics API for the sake of something that has nothing to do with graphics.

Chris Lux
02-24-2010, 02:26 AM
Also, the purpose of an abstraction is not just to make an API look better. It is to improve locality of code and make maintaining that code easier. It also allows you to change the backend of some code without changes to the users of that code. For example, if I wanted to take my abstraction and have a Direct3D implementation, I could do so with no changes to any outside code.

Abstractions are a good idea regardless of the API. If I were using D3D, I'd have written an abstraction around that too.
Would it be possible to take a look at the abstraction interface? I am very interested in different efficient OpenGL/D3D abstractions (one or both at the same time...).

regards
-chris