
View Full Version : OpenGL 3 Updates




Nicolai de Haan Brøgger
06-02-2008, 09:32 AM
Currently, we only have the resources to support one API for PerfHUD, but we'll definitely keep your feedback on OpenGL support in mind for the future.


Yeah, well, I assume they mean man-power resources, because that's the only thing a company like NVidia could be missing?! If they wanted to provide better tools for GL, I guess they would start by buying out gremedy and integrating gDEBugger into their PerfSDK. The "...we only have the resources..." must really mean, "...we do not want to spend resources on a niche API"?!


On AMD's tools, I can assume you are talking about GPU PerfStudio? I tried this ...

Yes, I was thinking of that, what a shame...

bobvodka
06-02-2008, 10:00 AM
Well, if they wanted to provide the tools it would be a matter of extending their PerfHUD for OpenGL; however, this costs time, and thus money, and right now they want to improve and extend the D3D end of things, as that's what companies are asking for.

Korval
06-02-2008, 10:30 AM
Maybe, maybe not, but prove it!

You're the one making the bold, outlandish claim. The burden of proof is on you.

Mars_999
06-02-2008, 10:38 AM
Stop saying id Tech is the only engine that runs on GL. WoW for Mac is OpenGL, and so is the Unreal Engine on Mac; in fact, all Mac games are GL. So for every PC game that is DX-based, there is a GL version floating around somewhere, if it is on the Mac.

If you guys want tools for GL, buy a Mac. Apple has some decent FREE tools to use with a Mac that help track down bugs, bottlenecks, etc., and you could use those to help your PC code.

EvilOne
06-02-2008, 10:48 AM
Lol... the three funniest quotes from this and the "no newsletter" thread:

Third place goes to knackered: "Did I miss an update on GL3? I don't see how that's possible, I've been constantly hitting F5 since 2006."

Second place goes to korval: "I bet you that C++0x will have a full draft specification before GL 3.0 ships."

And the clear winner is pudman: "I'm sure Duke Nukem Forever will be completed in finite time as well. Just what the length of that finite time is ahead of time is the problem."

bobvodka
06-02-2008, 11:10 AM
If you guys want tools for GL, buy a Mac. Apple has some decent FREE tools to use with a Mac that help track down bugs, bottlenecks, etc., and you could use those to help your PC code.

The day I swap to an overpriced machine with an overrated OS is the day you can find me on the roof of a local tall building singing "I believe I can fly!".

The Mac offers me NOTHING; hell, afaik they don't even have an IDE which is comparable to Visual Studio (a major reason I won't use Linux either; Code::Blocks is 'adequate' at best).

Which still doesn't solve the 'no decent free tools' problem on the PC, which is still the major platform, gets hardware updates faster, and would require tuning as well.

Zengar
06-02-2008, 11:45 AM
I switched to Mac recently and it is by no means overpriced or overrated :) A very nice, robust and convenient piece of machinery. Still, of course, it depends on what you do: if you spend most of your time with LaTeX and R like me, the Mac is very difficult to beat.

But anyway, the Mac is not a gaming platform. Hopefully it will one day be as popular as the PC; then things may change.

And a side note: such a reaction is very typical if one is not used to a Mac. I know, because I was thinking very similarly less than half a year ago. But then, once you have had some experience with Mac OS, you usually fall in love. Except if you have to play games or work with some special Windows applications.

P.S. Xcode is not so bad... but anyway, I am not a programmer, so I can't compare the IDEs.

Jan
06-02-2008, 11:58 AM
"On AMD's tools, I can assume you are talking about GPU PerfStudio?"

So I'm not the only one who didn't get it to work...

"WoW for Mac is OpenGL, and so is the Unreal Engine on Mac"

There is a Mac port of the Unreal Engine?! Damn, those guys are EVEN better than I thought!

"If you guys want tools for GL, buy a Mac"

Lol, really. You tell me I am supposed to BUY a Mac to get FREE tools???!

"The day I swap to an overpriced machine with an overated OS is the day you can find me on the roof of a local tall building singing "I belive I can fly!"."

Me too.

"they don't even have an IDE which is comparible to Visual Studio"

I can't live without VS. Have to do some Linux stuff right now, and I have no influence on what distro / tools are available on the PCs I have to use. It's a nightmare. Basically I program most of the stuff at home using VS and then periodically check whether it also runs on Linux.

Jan.

LogicalError
06-02-2008, 12:31 PM
It's NOT Microsoft. It's annoying to keep hearing that one come up.
Maybe, maybe not, but prove it!
I believe OpenGL 3 is being held up by an Apatosaurus named Brontosaurus. Prove that it's not the case!

Are you talking about Brontosaurus Jr. or Sr.?

bobvodka
06-02-2008, 01:20 PM
"If you guys want tools for GL, buy a Mac"

Lol, really. You tell me I am supposed to BUY a Mac to get FREE tools???!


Yes, that thought also occurred to me after I'd posted, but I was getting dinner at the time, so I couldn't come back and edit quickly... It did give me lulz, however :D

Also, the cheapest Mac is a Mini, coming in at £400. Of course, it has an Intel GMA950 chip in it, so ya know, 'lulz' at graphical performance.
Now, if I were willing to drop £1,299 I could get a 15" MacBook Pro with a GF8600MGT chip. Better, but still not great, and it certainly doesn't compare to the GF8800GT512 my desktop has.
£1,386 gets a 24" iMac with a GF8800GS; close, but still no cigar.
So, no... I shan't be spending any of that money to get 'free' tools, thanks.

knackered
06-02-2008, 03:05 PM
still no news from the khronos group I see.
what are we talking about now? Apple Macs, is it?
Well, I'm in the market for a new dishwasher. What's the best price I can get, bob?

pudman
06-02-2008, 03:27 PM
Well, I'm in the market for a new dishwasher. What's the best price I can get, bob?

Don't bother. In case you haven't noticed, the dishwasher market has gone silent pending OpenDW3.0. The rumor is that Microsoft is blocking the DWRB.

Dark Photon
06-02-2008, 03:57 PM
Maybe, maybe not, but prove it!

You're the one making the bold, outlandish claim. The burden of proof is on you.
Heh. I'm not claimin' anything. Not rulin' anything out either. Just passin' the time speculating along with everyone else that's out-of-the-loop.

CatDog
06-02-2008, 04:25 PM
still no news from the khronos group I see.
Surprised? The small sign of life sent by Michael Gold (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=238857#Post238857) a few days ago was followed by a huge wave of ranting and criticism.

I dare say that no Khronos member will dare to do this again.

CatDog

Korval
06-02-2008, 06:22 PM
The small sign of life sent by Michael Gold a few days ago was followed by a huge wave of ranting and criticism.

I dare say that no Khronos member will dare to do this again.

If true, then that shows their ineptitude yet again. If your strongest supporters rant and criticize you, that means you're doing something wrong.

If you hit someone every time you walk in the door, they're going to hate you even if you bring them a box of chocolates. You have to repair the lack of trust first.

They need to address the real problem, not just say, "wait until SIGGRAPH." Remember who screwed up here. The ARB needs to make with the big-time apology and explanations. And cowering in fear whenever their screwups provoke reasonable criticism isn't the answer.

In any case, I never expected the ARB to jump in and start up a dialog. Michael's message was very clear: "You will learn nothing more of value until SIGGRAPH." He did not come here to actually talk about anything; he just posted a sign.

Mars_999
06-02-2008, 06:22 PM
Typical of people who have no clue to mock what they don't understand. To comment on the price: someone cried about the cost of gDEBugger.

Never heard of Ryan Gordon? He has ported the Unreal Engine to Mac and Linux. I guess you need to get more informed before you comment on it.

"The latest release is the Unreal Engine 3, which is designed around Microsoft's DirectX 9 technology for 32/64-bit Windows and Xbox 360 platforms, DirectX 10 for 32/64-bit Windows Vista, and OpenGL for 32/64-bit Linux, Mac OS X and PlayStation 3"

pudman
06-02-2008, 06:32 PM
Maybe, maybe not, but prove it!

You're the one making the bold, outlandish claim. The burden of proof is on you.
Heh. I'm not claimin' anything. Not rulin' anything out either. Just passin' the time speculating along with everyone else that's out-of-the-loop.


Proving a negative (it's NOT Microsoft) would be quite difficult:
Negative Proof (http://en.wikipedia.org/wiki/Negative_proof)
Inductive Fallibility (http://departments.bloomu.edu/philosophy/pages/content/hales/articlepdf/proveanegative.pdf)

I would prefer to wait until someone shows proof that they ARE behind this conspiracy. That should be more readily provable.

pudman
06-02-2008, 06:37 PM
Typical of people who have no clue to mock what they don't understand. To comment on the price: someone cried about the cost of gDEBugger.

Never heard of Ryan Gordon? He has ported the Unreal Engine to Mac and Linux. I guess you need to get more informed before you comment on it.

I am not sure who you're talking to, or about what exactly. Please quote references when posting. I might have a comment about the price of gDEBugger (I would use it but can't afford it), but I'm not sure what you were going to say about it.

sqrt[-1]
06-02-2008, 07:16 PM
- crap or non-existent OpenGL debugging tools

Now, OpenGL, what is there?
- glIntercept is handy but I disliked having to dig through the created XML files


*cries*

Actually, the features I use most are not in PIX: real-time shader edit-and-continue support and pre/post-diff frame buffer support.

It is just a shame that I stopped development of GLIntercept back in 2005 (as the company I worked for stopped using it). It would probably be a lot better if I had kept working on it.

(Also don't forget GLSL devil http://www.vis.uni-stuttgart.de/glsldevil/ )

Brolingstanz
06-02-2008, 09:20 PM
In local news... I see they've reserved a room for the Siggraph OpenGL BoF. The Wilshire Grand Hotel seems posh enough for the occasion. Say, I wonder if there will be refreshments.

The full story (http://www.siggraph.org/s2008/attendees/birds/).

Korval
06-02-2008, 10:19 PM
It is just a shame that I stopped development of GLIntercept back in 2005 (as the company I worked for stopped using it). It would probably be a lot better if I had kept working on it.

Did you open-source it so that others could work on it?


I see they've reserved a room for the Siggraph Open GL BoF.

August 13. Almost a year to the day from the last time we got any substantive info on GL 3.0.

sqrt[-1]
06-02-2008, 11:50 PM
It is just a shame that I stopped development of GLIntercept back in 2005 (as the company I worked for stopped using it). It would probably be a lot better if I had kept working on it.
Did you open-source it so that others could work on it?


GLIntercept has always been open-source, since the first version (back in 2002?). There have been several third-party add-ons made for it (e.g. OGLE).

http://glintercept.nutty.org/download.html

Brolingstanz
06-03-2008, 12:31 AM
Almost a year to the day from the last time we got any substantive info on GL 3.0.

Yes, and a grueling year it's been.

magwe
06-03-2008, 01:18 AM
Proving a negative (it's NOT Microsoft) would be quite difficult:


Actually, you just need to prove that it is something else... Personally, I believe that there may be a merger with OpenGL ES. The simplified OpenGL architecture probably proved suitable for both use cases, and now they have to consult all the members working with embedded graphics to make sure it meets their requirements. More people working on a single spec...

bobvodka
06-03-2008, 02:31 AM
And if that's the case, why wouldn't they say something? Keeping it secret would help no one and just pisses off and alienates OpenGL's existing user base.

bobvodka
06-03-2008, 02:56 AM
Typical of people who have no clue to mock what they don't understand. To comment on the price: someone cried about the cost of gDEBugger.

I didn't "mock" anything, well beyond your assertion that to get free tools I'd have to pay money; honestly if you can't see the mocking value there you must be blind or something.

But, to the point: I stated that a Mac gives me nothing and that I think it is overpriced and overrated. Not mocking, just stating my opinion based on my usage patterns.

Then, the mocking: yes, I did complain that gDEBugger costs money to buy, and your solution is to spend (the same or more) money to get free stuff.
That is baffling on its own, frankly, and looking at the cost of gDEBugger (which I hadn't done in a while), it would set me back around £400, or about the cost of a Mac Mini, whose spec, as already pointed out, is "lulz". So your solution is to pay more than the cost of gDEBugger to get 'free' tools. I dunno, maybe that is some zen genius or something.

On the flip side, D3D already has free tools all over the place, from MS to NV, which are better than their OpenGL counterparts (even the ones you have to pay for), and this is something OpenGL needs if it wants to attract the big-name developers.

Now, your point about UE3: yes, I'm well aware it has been ported to other platforms; however, as you point out, it was done by a third party. Also, how many games which use the UE3 engine have made it to Linux or the Mac thus far? UT3 hasn't even made it out for them yet, and it was released on Windows in November of last year (7 months ago).

CatDog
06-03-2008, 03:00 AM
If true, then that shows their ineptitude yet again. If your strongest supporters rant and criticize you, that means you're doing something wrong.
It means that something is going wrong. That's quite a difference. You don't know exactly *who* is to blame for that, do you?

I'm on your side with nearly every single word of your breakdown of the situation. But I'm puzzled at the timing. Right *after* a sign of life was given and a fixed date for clarification was announced, people leave in a huff, crying "but now it's too late".


They need to address the real problem, not just say, "wait until SIGGRAPH." Remember who screwed up here.
I do. But I also remember the most criticised thing during the last months: it was the silence. Now Siggraph has been confirmed as the date for "something" to be published. That's good news, isn't it?

CatDog


*edit* Reading the following post, I must admit that people don't see Michael's post as breaking the silence. Obviously the point has been reached where the ARB can do anything and it will be wrong anyway. Because it won't be enough, no matter what.

dor00
06-03-2008, 03:03 AM
People... PEOPLE... 78 pages so far, and NO clues about OpenGL3 yet.

WHY the hell this silence from the people who work on it, especially from Khronos????

What the hell is going on?

tanzanite
06-03-2008, 03:04 AM
Proving a negative (it's NOT Microsoft) would be quite difficult:
Actually, you just need to prove that it is something else...
No! (Problems with logic?) That would prove only that "something else" is true, not that "NOT Microsoft" is false or true. For that, you must also prove that if "something else" is true, then "NOT Microsoft" must also be true...

Back at square one.

bobvodka
06-03-2008, 03:40 AM
*edit* Reading the following post, I must admit that people don't see Michael's post as breaking the silence. Obviously the point has been reached where the ARB can do anything and it will be wrong anyway. Because it won't be enough, no matter what.

Well, no, there are things which could have been said or done, such as telling us what is going to be released at SIGGRAPH. As it is, all we know now is that 'something' is being released, which is hardly useful, and at this point I dare say the majority had already assumed, post-Easter, that we wouldn't hear anything before SIGGRAPH anyway.

Right now it's still all vague and useless; that is what bothers us.
Things that would help:
- what's going to be released?
- why the delay?
- why the silence?
- why should we trust them again?

If the answer to 'what' is just the spec, well, I dare say more complaints will follow, because a year on from 'nearly done' to 'done' some form of implementation would have been expected. As it is, we are still looking at a lag time of at least 3 months, and more likely half a year, before anything useful comes out (and then I expect we'll be limited to NV only for another couple of months at least before AMD gets a driver out, and then God knows when until Intel decides to join in).

magwe
06-03-2008, 04:54 AM
No!

My apologies. I live in a world where appending Microsoft to a long list of reasons seems unreasonable. ;)

ptrptr
06-03-2008, 05:00 AM
So you're going to add the price of the computer to the programs you're running on it? Hey, I NEED a computer with 32 GB of RAM and the absolute latest graphics card. Wow, that sure makes the free tools cost a lot, doesn't it? Give me a break.

Fine, you don't like Macs, but what if you already had a Mac? You wouldn't have to pay anything to get the free tools Apple provides! (they are in fact pretty good)
Even if you didn't have a Mac, you would have noticed that Apple in fact supports OpenGL a great deal, which might influence choices in the future. Or are you too set in your ways?

I don't want to spawn a new off-topic discussion, but I wouldn't have expected this from people around here, seeing as Apple is one of the biggest companies actively supporting and depending on ogl.

Ivan Savu
06-03-2008, 05:13 AM
Why isn't this thread sticky, and locked? I went through 79 pages of angry programmers yelling at Khronos for nothing...

Zengar
06-03-2008, 05:17 AM
@CatDog, this is nonsense! But I don't accept "we will tell you something in two months (maybe)" as an answer that makes me happy! I want results and I want explanations for the delay; we were promised that! So no, I don't see Michael's words as breaking the silence.

CatDog
06-03-2008, 05:39 AM
I really didn't tell you to be happy about whatever. Neither did Michael. "We will tell you something in two months (maybe)" is not the answer and was obviously never meant to be one; it's simply the announcement of answers to be given at Siggraph. There was no *promise* that this would happen until Michael made one in his latest post, when he actually broke the silence. I wonder what else could be expected at this point.

CatDog

bobvodka
06-03-2008, 05:55 AM
So you're going to add the price of the computer to the programs you're running on it? Hey, I NEED a computer with 32 GB of RAM and the absolute latest graphics card. Wow, that sure makes the free tools cost a lot, doesn't it? Give me a break.


Well, yes, of course I'm going to add the cost of acquiring the hardware to the cost of the tools, for one simple reason: I don't have the hardware.
In order to get the tools I need to buy the hardware, yes?
Therefore it is only logical and correct to include it in the total price of using them.



Fine, you don't like Macs, but what if you already had a Mac? You wouldn't have to pay anything to get the free tools Apple provides! (they are in fact pretty good)


If I already had a Mac this conversation would be moot; I wouldn't have a PC, I wouldn't be complaining, and I'd be happy in OSX land.

But the reality is I DON'T have a Mac; therefore anything on a Mac, free or otherwise, is useless to me, and free tools are not 'free' if you take into account that I have to buy a Mac.



Even if you didn't have a Mac, you would have noticed that Apple in fact supports OpenGL a great deal, which might influence choices in the future. Or are you too set in your ways?


Oh, and it's not that I don't like Macs as such; I don't like Apple either. What really got to me was the 'Mac VS PC' adverts, which are so full of half-truths and dishonest advertising that they are a joke. Hell, the only reason I have an iPod is that I found it; I would never give them money for one.

No, it's not a case of being 'set in my ways'; it's a case of, for me, the Mac offering NOTHING.
It doesn't have the games I want (for example, it lacks Dawn of War, and I very much doubt it will get Dawn of War 2), it doesn't have what is probably the best IDE in the world to develop with, and it lacks the freedom of the hardware as well (I LIKE building my own machines).

Currently I'm waiting on my next upgrade, and no, it won't be a Mac, for the reasons listed above and also because I have a free copy of Windows Vista Ultimate from MS which I will be installing on it.



I don't want to spawn a new off-topic discussion, but I wouldn't have expected this from people around here, seeing as Apple is one of the biggest companies actively supporting and depending on ogl.

Just because someone uses, supports and requires OpenGL doesn't make them a saint or everything they do blessed or something. Apple is a company; they are out to make money, nothing more. The choice of OpenGL was probably a pragmatic one, based on what would give them the best income and 'hurt' MS the most.

But this whole thing started because I said OpenGL lacks tools of a quality comparable to D3D's on Windows, and cry about it all you like, the Mac and Linux are not first target platforms; most dev shops are set up with PCs running Windows, and most target platforms are either consoles or the PC, with a port to OSX and maybe Linux following later.

So a Mac can have all the tools in the world and it won't matter; Windows is the standard, and OpenGL needs comparable tools if it even wants to consider 'competing' with D3D in that space.

These are facts.

Lindley
06-03-2008, 06:13 AM
Oh, and it's not that I don't like Macs as such; I don't like Apple either. What really got to me was the 'Mac VS PC' adverts, which are so full of half-truths and dishonest advertising that they are a joke.

That's exactly right. They are a joke. You aren't meant to take them seriously; they just play off stereotypes.

I didn't get it at first either, and was annoyed. Once I figured out what was up, I came to find the things rather funny. John Hodgman rules.

V-man
06-03-2008, 08:06 AM
Wow, this thread just keeps on going.

Brolingstanz
06-03-2008, 08:33 AM
and going and going ... <with the steady din of a pounding drum>

Hey, it's everyone's chance for their 15 minutes of fame ;)

You never regret the things you do in life, only the things you didn't do, like speaking your mind at a time when it mattered...

The squeaky wheel gets the grease.

pudman
06-03-2008, 09:08 AM
I really didn't tell you to be happy about whatever. Neither did Michael. "We will tell you something in two months (maybe)" is not the answer and was obviously never meant to be one; it's simply the announcement of answers to be given at Siggraph. There was no *promise* that this would happen until Michael made one in his latest post, when he actually broke the silence.

Broke the silence about what, exactly? That news is forthcoming? We all *knew* that we'd eventually get news. And yet we have to wait *another* two months. And we don't even know what "answers" they intend to provide.


I wonder what else could be expected at this point.

Actual news/information?

Korval
06-03-2008, 10:45 AM
Things that would help:
- what's going to be released?
- why the delay?
- why the silence?
- why should we trust them again?

This is a good breakdown of things that would help repair the trust relationship that the ARB has lost over the last 7 years.

knackered
06-03-2008, 11:02 AM
What annoys me is: what's so special about the siggraph date? If it's IP problems, why does the date of the siggraph show have any relevance? If you have information, give it to us now rather than putting on some publicity stunt. It's too late for publicity stunts; we want some information *now*. You're not launching a new console; you're releasing a specification well beyond its promised date. There's nothing to celebrate or be smug about.

Jan
06-03-2008, 11:32 AM
1) Maybe the spec is so damned awesome that everybody who sees it for the first time falls to his knees, thanks the gods and kisses the feet of the ones who brought it.

And that's just what they want to see in live action with a huge crowd!


2) Maybe they are not quite done yet, but believe that it will be mostly done by Siggraph, so they can present it there.

3) They KNOW they will not be anywhere near done by Siggraph, and will present just a bit of information.


Choose what you like.
Jan.

knackered
06-03-2008, 12:14 PM
If it *is* IP problems, they couldn't possibly know they would be resolved by siggraph. So, it's either not IP problems, or they're playing games with us at the expense of OpenGL's future.

CatDog
06-03-2008, 12:28 PM
Things that would help:
- what's going to be released?
- why the delay?
- why the silence?
- why should we trust them again?

This is a good breakdown of things that would help repair the trust relationship that the ARB has lost over the last 7 years.
That's interesting. None of the possible answers to these questions matters to me, as long as they get something usable kicked out of the door. Sure, out of curiosity I'd like to know too. But it really doesn't matter. It won't help me with my programming issues. A well designed and supported API would do.

Trust? Trust is a concept needed in personal relationships. I don't have *any* relationship with Khronos. I wouldn't have a relationship with Microsoft if I used DX. Would you "trust" Microsoft?

Jan, you forgot one:
4) They officially announce GL3 to be dead or replaced by something completely different.

(That would be a good reason for announcing it at Siggraph, since you would see some funny faces then. ;-)

CatDog

Korval
06-03-2008, 01:05 PM
Trust? Trust is a concept needed in personal relationships. I don't have *any* relationship with Khronos. I wouldn't have a relationship with Microsoft if I used DX. Would you "trust" Microsoft?

The choice of what graphics API to use is a matter of trust. You trust that all of the people involved, from the writers of a specification to the implementers to the OS environment, are working to make sure that the API is stable and does what it says.

Using Direct3D would mean trusting that Microsoft and IHVs were doing their best to make a stable, performant graphics environment. Using OpenGL means trusting that the ARB and IHVs were doing their best to make a stable, performant graphics environment.

Thus far, one of these is living up to its obligation. The other is not. The ARB can toss around specifications till the cows come home, but that doesn't mean it is something that should be used. I wouldn't trust production code to GL 2.1 on Windows, and I wouldn't advise anyone else to do so either.

Anytime you rely on someone else's code for your product, there is a trust relationship.

bobvodka
06-03-2008, 03:11 PM
A well designed and supported API would do.

Trust? Trust is a concept needed in personal relationships. I don't have *any* relationship with Khronos. I wouldn't have a relationship with Microsoft if I used DX. Would you "trust" Microsoft?


And how can you trust the ARB to support a decent API? To push it forward? To bring in new functionality ASAP?

I trust MS to do the best by DX that they can, because it is important to them to have it working on Windows to continue pushing it as a gaming platform. The ARB has no such commitment; even the companies which make up the ARB, such as NV or AMD, don't. Regardless of what happens, they'll sell graphics cards whatever the weather.

You need to take all of this in historical context; for YEARS the ARB was a silent wall, giving little to no information and every year tossing out a spec which the IHVs would support in varying ways at varying times.

Then Pipeline happens, we are told that we'll be told more, and for a while it seemed believable: newsletters, discussions, asking for our opinions. Then, finally, SIGGRAPH 07 rolls around, the day everyone has been expecting, and the spec... well, it isn't quite ready, but it's 'nearly' there, so we say 'OK, get it done, don't rush, but if it's nearly done it shouldn't be long'. Then this thread appears: 'erm, we have trouble, we've made some changes, but it shouldn't be long...', and then, 6 to 7 months after it was 'nearly' done, we get told 'SIGGRAPH 08!', a YEAR after it was 'nearly' done.

By contrast, MS shouts about its graphics API, gives people status and free stuff for promoting it, and doesn't say something is 'nearly' done and then take a year to get it out. Look at DX10: maybe some slippage, but it was here with Vista. Look at DX9: multiple upgrades and releases, and an SDK which gets updated every 2 months.

Look at the difference and tell me: which group do you think you should trust to push things forward in the future? For a little while it looked like the ARB might have been stepping up to be that group, but it seems it's still MS, because they are willing to do that much more.

pudman
06-03-2008, 03:49 PM
That's interesting. None of the possible answers to these questions matters to me.

Not even "What's going to be released?"? Well, if you just want a working GL3, come back in a year. It *should* be nice and stable by then, and many people on this forum can help you through it.

If that's not what you want, why are you here? This topic is all about complaining and speculation. Nothing here about "a well designed and supported API", that's for sure.

The first post in this topic is quite amusing, in hindsight. Amusing in a pathetic sort of way.

CatDog
06-03-2008, 04:13 PM
Microsoft is doing the things needed to promote its own products. So do nVidia, ATI and Intel. And Khronos does this as well, I suppose (well, I hope!).

Making the product better is just one of these things. It's more than obvious that there are other things going on behind the scenes. Things that I don't know and probably will never know. It happens all the time: superior products get doomed by forces that remain under the hood.

How can I build a trust relationship on top of that? It's business! There's really nothing trustworthy about it, in my opinion.

If the ARB releases OpenGL3 at Siggraph and vendors agree to support it, I will be happy. And I don't care what forces were involved in its delayed birth (except for curiosity). But hell, I won't trust them. None of them.

bobvodka, it is pretty easy to trust Microsoft to push things forward. They seem to be the most powerful force in the game. So from my point of view you basically trust in their power. Which is possibly a good decision.

pudman, did I offend you in any way? If so, please forgive me. I'm here to express my opinion, just like anybody else. It's a discussion board, isn't it?

CatDog

pudman
06-03-2008, 05:33 PM
pudman, did I offend you in any way?

No, not at all. I realize "why are you here?" sounds quite harsh. I'm just frustrated (or super-curious?) and hearing someone not sharing in that frustration-at-not-hearing-anything is, well, frustrating (exciting!). It gives me reasons to post! Yay!

So join in the criticism please! I just know it's super-productive!


And I don't care what forces were involved in its delayed birth (except for curiosity)

No no no! This is the wrong direction! Don't go towards the light of disinterest...!

Korval
06-03-2008, 06:11 PM
How can I build a trust relationship on top of that?

Exactly. Which is the point.

You can trust that Direct3D will receive timely and significant updates. You can trust that your Direct3D-based application will probably not fail because of Direct3D or the drivers operating underneath it.

You can trust these things because they have been true in the past and they continue to be true in the present. There is a long history of success.

The ARB and OpenGL have nothing more than a long string of abject failures. Nothing about OpenGL has been timely, ever. And stability on Windows is a joke.

In short, you can't trust OpenGL.


But hell, I won't trust them. None of them.

If you use someone else's code, you are trusting your application's functionality to that someone else. Especially if it is closed-source. Whether you want to call it a trust relationship or not, it is.

This trust relationship is very important for getting people interested in using your stuff. If you promise X and deliver, over the course of 5 years, then you're more likely to get repeat business. Likewise, if you promise X and consistently fail to deliver, then you're less likely to get repeat business.

rashi
06-03-2008, 11:37 PM
I want the source code of OpenGL. How can I get it?

NeARAZ
06-04-2008, 12:12 AM
I want the source code of OpenGL. How can I get it?
On some platforms (Linux), you can get Mesa source code. Even there, some drivers are closed-source. On other platforms (Windows), OpenGL is implemented entirely in the drivers, and there is very little chance of getting source code for NVIDIA/AMD/Intel drivers :) On yet other platforms (OS X), OpenGL is partly implemented by Apple, partly by hardware vendors, and again, very little chance of getting the source code.

In summary: it's either Mesa or no source code.

...but... how is this related to the OpenGL 3 topic?

CatDog
06-04-2008, 06:48 AM
The ARB and OpenGL have nothing more than a long string of abject failures.
It's a little bit disturbing to hear that from the 2nd topmost OpenGL guru of all time. (Judging by the stats on the right.)

You must be a very masochistic person. ;-) (please note the smile)

If it's not the timing of the ARB, then something else about OpenGL seems to have been of quality. For the last 15 (?) years.


This trust relationship is very important for getting people interested in using your stuff. If you promise X and deliver, over the course of 5 years, then you're more likely to get repeat business. Likewise, if you promise X and consistently not deliver, then you're less likely to get repeat business.
Sure. But the "promise" you are talking about would be the API itself. What I mean is: using someone else's code is not a matter of trust, it's a matter of having a contract, described by an interface. In this case OpenGL. So *if* we already had GL3, *then* there would be a promise. It's the interface itself that assures you of certain functionality.

When it comes to complaining about disastrous OpenGL implementations, I'm right at your side.

I think you are mixing this up with another promise, one that was given by the ARB. They told us they would deliver an API and haven't yet. Well, simply don't trust them and you won't get disappointed as much. You should have known better. Actually, you *knew* better; see above.

CatDog

MZ
06-04-2008, 08:04 AM
Exactly. Which is the point.

You can trust that Direct3D (...) You can trust that your Direct3D-based application (...) You can trust these things (...)

I think your devout trust in Direct3D, reverend Korval, won't be disturbed at all if I remind you of two little facts:

1. the whole DX-10-Vista-only fraud.

2. the joy of having to implement two separate rendering backends for DX9 and DX10.

But of course, stuff like this will never happen again, right? I'm sure you can trust that one too...

Mars_999
06-04-2008, 08:23 AM
You guys are crazy if you think MS is going to allow DX on platforms other than Windows/Xbox. That would give Apple, Sony, Linux and others one more piece of what they need to not run Windows, and then those companies would be at MS's mercy whenever MS decides to change something...

tranders
06-04-2008, 08:32 AM
You can trust that your Direct3D-based application will probably not fail because of Direct3D or the drivers operating underneath it.
I have to disagree with this because we recently reviewed a driver from one of the mainstream vendors and our testing showed significant regressions only in our D3D series of tests while the OpenGL tests were unaffected and even showed signs of improvement. Regardless of your choice of API, you are always at the mercy of the graphics card vendor and their ability to write robust and stable drivers. It is naive to believe otherwise.

However, I am losing faith in OpenGL as a standard for professional graphics applications due to the length of time since the last release of information. This only reinforces our need to invest in other alternatives.

bobvodka
06-04-2008, 08:49 AM
A "vista only fraud"? Please; there are technical and economic reasons why MS decided NOT to invest a significant amount of man power (and thus money) into porting a whole new driver model back to XP. It's not like you could just drop a new dll in and go 'tada!'; Vista's driver arch is significantly different to XPs, where XPs is an outgrowth of 2K which itself was a merging of 9x and NT.

As for the separate rendering backends, here is a news flash: hardware changes.
While there is a degree of annoyance with having to recode things, DX10 hardware ISN'T the same as DX9 hardware. The API we had for DX9 had the FFP and worked how DX9 hardware wanted to work; DX10 drops the FFP and allows you to code closer to how D3D10 hardware would like to work.

This is in fact a problem with current "DX10" games: they are often a DX9 renderer ported to DX10 without thought about the different ways the APIs work, which can lead to poor performance.

bobvodka
06-04-2008, 08:51 AM
You guys are crazy if you think MS is going to allow DX on platforms other than Windows/Xbox. That would give Apple, Sony, Linux and others one more piece of what they need to not run Windows, and then those companies would be at MS's mercy whenever MS decides to change something...

I'm sorry, what?
Point to and quote the person who said that, because if you honestly think anyone thinks such a situation will occur... well, you've been drinking too much crazy juice...

bobvodka
06-04-2008, 08:54 AM
You can trust that your Direct3D-based application will probably not fail because of Direct3D or the drivers operating underneath it.
I have to disagree with this because we recently reviewed a driver from one of the mainstream vendors and our testing showed significant regressions only in our D3D series of tests while the OpenGL tests were unaffected and even showed signs of improvement. Regardless of your choice of API, you are always at the mercy of the graphics card vendor and their ability to write robust and stable drivers. It is naive to believe otherwise.


To be fair, Korval did say 'probably'; no one is claiming that D3D drivers are perfect, mistakes happen. The difference is that the IHVs seem to invest more time and money into them, which means, in general, you'll get new features and they will be more stable. ATI/AMD are the poster child for this: their D3D drivers are significantly better in many areas, and it takes them ages to get newer extensions out of the door (afaik there are still no D3D10 feature extensions present in their drivers, which does somewhat dent the 'you can use OpenGL to get D3D10 features on XP!' argument people put forward, because... you can't).

Mars_999
06-04-2008, 09:03 AM
A "vista only fraud"? Please; there are technical and economic reasons why MS decided NOT to invest a significant amount of man power (and thus money) into porting a whole new driver model back to XP. It's not like you could just drop a new dll in and go 'tada!'; Vista's driver arch is significantly different to XPs, where XPs is an outgrowth of 2K which itself was a merging of 9x and NT.

As for the seperate rendering backends, here is a news flash; hardware chances.
While there is a degree of annoyance with having to recode things DX10 hardware ISNT the same as DX9 hardware. The API we had for DX9 had the FFP and worked how DX9 hardware wanted to work; DX10 drops the FFP and allows you to code closer to how D3D10 hardware would like to work.

This is in fact a problem with current "DX10" games, they are often a DX9 renderer ported to DX10 without though about the different ways the APIs work which can lead to poor performance.



What are you talking about? I guess you missed the whole 8-10 BILLION dollars MS said it took to make Vista. If it took 8-10 billion, they could easily have made DX10 for XP with that kind of money. Use your head: MS doesn't want DX10 on XP, plain and simple; if they wanted to, they could. And I don't remember if anyone here said that the other vendors would use DX or not, due to the 80+ pages. :) But if anyone here thinks life will be grand for all your OTHER-THAN-WINDOWS needs, guess again; without OpenGL you will be stuck with nothing.

bobvodka
06-04-2008, 09:31 AM
Yes, it took them 8 to 10 billion to make Vista, but this doesn't mean they want to spend another X,000,000 (maybe more; if the quick Google result I found saying it costs an employer on average $200,000/year per employee is correct, then that only allows for 10 people taking a year, and it would take more than that) on porting it back to a 5-year-old OS with a significantly different driver model.

So, yes, they don't want DX10 in XP because it wasn't seen as a sound investment.

As for "And I don't remember if anyone here said that the other vendors would use DX or not due to the 80+ pages."; why on earth did you even bring it up then?
No one claimed it, personally I don't care about OSX or Linux right now. Infact, my current plans basically give the finger to anyone not using x64 Vista as it will be x64 and D3D10 only to start with. Not that I plan on releasing any time soon, at which point I don't see it being a problem as Windows 7 will probably be about so two OSes with D3D10+ support and x64 should be even more common.

knackered
06-04-2008, 09:52 AM
releasing what? whatever it is, you're obviously not interested in profit with that strategy.
it will be at least 3 years before you could feasibly afford to release a product as a vista exclusive; and if whatever you're writing is going to take 3 years to develop, I'd seriously recommend investing in developing an XP-compatible version, or it's one hell of a punt.

Leadwerks
06-04-2008, 10:06 AM
Yes, we are all pissed, but what is the alternative? DX9? DX10? Both major APIs have made a huge fumble that would normally spell the death of one if the other wasn't screwing up at the same time. So if there are any serious third-party contenders, now would be their time to shine.

What if NVidia came out with a new API? :D I bet they could make it happen in 6 months.

Zengar
06-04-2008, 10:14 AM
Then ATI won't support it :)

bobvodka
06-04-2008, 10:20 AM
releasing what? whatever it is, you're obviously not interested in profit with that strategy.
it will be at least 3 years before you could feasibly afford to release a product as a vista exclusive; and if whatever you're writing is going to take 3 years to develop, I'd seriously recommend investing in developing an XP-compatible version, or it's one hell of a punt.

You're pretty much correct, I don't.
I work for a living (see the aforementioned EyeToy game); that's where I make my profit. Anything else I do, I do for my own enjoyment, and yes, if I get my arse in gear it will probably take about 3 years or more before anything beyond a 'tech demo' appears. At which point Vista will be 3 years old, Windows 7 will be out, and DX10 will be well placed imo.

Maybe by then I will make an XP version, who knows? While I can't predict the future, the fact is Vista is picking up market share (unfortunately the only source of stats I can find on this is the w3schools.com site, which shows a basic trend of Vista increasing and already outstripping Linux and OSX combined) and will continue to do so as time progresses, and Windows 7 will only help in that regard.

Korval
06-04-2008, 10:36 AM
It's the interface itself that assures you of certain functionality.

When it comes to complaining about disastrous OpenGL implementations, I'm right at your side.

But we don't use APIs; we use implementations. A specification is just ink on a page. It is the implementations that really matter. And every major OpenGL implementation vendor is on the ARB. And they have failed to provide reasonable, conformant, stable GL implementations. nVidia's is the closest, but even their drivers have quirks.

Mars_999
06-04-2008, 01:17 PM
You're pretty much correct, I don't.
I work for a living (see the aforementioned EyeToy game); that's where I make my profit. Anything else I do, I do for my own enjoyment, and yes, if I get my arse in gear it will probably take about 3 years or more before anything beyond a 'tech demo' appears.

You should probably get going then, and stop wasting your time here ranting about OpenGL and use DX10.

bobvodka
06-04-2008, 01:54 PM
When I lose interest I will; however, my time is mine to spend as I will and, unless you've become a moderator suddenly, you will not tell me what to do with my time.

But if you listen to nothing more I have to say, then listen to this: I've used OpenGL for around 8 years now. I've defended the API. I've promoted the API. I've helped many others, yourself included Mars_999, on the OpenGL forum at Gamedev.net, via PM and via email, for some time, and I've moderated that same forum as well. All in my own time. All because I liked using the API.

Consider all that. Consider the last 8 years.

Now consider how much has gone wrong for someone who was that dedicated to the API to be turning away from it.

True, an API is just a tool, but you don't sink that much time and effort into something unless you get some enjoyment from doing so and have some belief that it is the right way to go.

I guess, in the end, I'm a pragmatist. I want something which is clean, functional and here. Right now that's D3D10.

pudman
06-04-2008, 03:05 PM
unless you've become a moderator suddenly, you will not tell me what to do with my time.

Moderators have the power to tell people what to do with their time? That is some power!

I think the summary of the last bunch of posts could be:
1) I want the features of D3D10 in OpenGL with a working implementation
2) I want to run at least on XP and Vista with the same implementation (of those D3D10 features)
3) Without GL3, now is a great opportunity for another API to come in and prosper.
4) I still love you, OpenGL, but I'm not in love with you any more.


What if NVidia came out with a new API? :D I bet they could make it happen in 6 months.

If it were supported on all platforms, I'm sure people (at least hobbyists) would readily use it. Look at CUDA. And it would be a neat way of showing how fast the hardware could be. But it doesn't make any financial sense to do so. At least with CUDA, High Performance Computing people will readily buy your hardware, because that's what they do already: buy hardware to suit their problem domain. That is, however, not the case with consumer-level graphics.

bobvodka
06-04-2008, 03:15 PM
It's not about D3D10 features; it's about a new and clean API.
And as previously mentioned, based on the teasers we've had, the GL3.0 API is functionally very close to D3D10.

pudman
06-04-2008, 04:50 PM
It's not about D3D10 features; it's about a new and clean API.

I would argue that it's both. While GL3 doesn't require D3D10 hardware, it will most likely move some extensions related to that hardware into the core feature set. At least, that is what has occurred in the past with each new release of GL. That would mean that ATI couldn't continue to hold out on D3D10 features in their GL driver... or at least they would have an easier time implementing them.

A clean API would be nice/preferable. An API whose implementation included D3D10 features consistently across hardware (I'm looking at you, ATI!) would be even more preferable.


And as previously mentioned, based on the teasers we've had, the GL3.0 API is functionally very close to D3D10.

I'm losing faith in those teasers. To say GL3 is functionally anything is laughable until it does, indeed, function.

bobvodka
06-04-2008, 05:17 PM
D3D10-level features were slated for inclusion in a later update, Mt. Evans, and I hope it still works that way; although I would also hope, given the delay, that Longs Peak Reloaded is simply folded into the GL3.0 spec, as a delay for the minor upgrades it would bring would be daft and cause further divides.

The reason for the clear D3D9/D3D10 feature split is simple: while I might be happy to forgo D3D9-class hardware support, for many people this isn't an option, yet a clean API would benefit them as well.

So, having GL3.0 target D3D9-class hardware and GL3.1 target D3D10 features makes sense: a clean split, with a common enough API that adding extra features would be a smooth transition.

Also, to clarify what I meant: by 'functionally very close' I meant that the API teasers show a method of interacting with the hardware which is very much the same as D3D10's.

Indeed, it would appear that some D3D10-hardware-like features are being layered over the top of D3D9 hardware to allow for easier expansion in the future (things like constant buffers which are shared between shaders, for example, enabling you to bind a uniform block once and have it reused across shaders).
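
(To make that "bind once, reuse everywhere" idea concrete, here is a rough C++ sketch using the ARB_uniform_buffer_object-style calls that expose this binding model; the PerFrame struct and block name are made up for illustration, and this is only a guess at the general pattern, not the unreleased GL3 API itself. It assumes a current GL context and an initialized extension loader.)

#include <GL/glew.h>   // any loader that exposes the uniform buffer entry points
#include <cstddef>
#include <vector>

struct PerFrame { float viewProj[16]; float lightDir[4]; };  // hypothetical layout

// Upload one block of per-frame constants and share it across several programs.
void bindSharedConstants(const PerFrame& data, const std::vector<GLuint>& programs)
{
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(PerFrame), &data, GL_DYNAMIC_DRAW);

    // Attach the buffer to binding point 0 once...
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

    // ...then point each program's "PerFrame" uniform block at that binding point,
    // so every shader reads the same constants without re-uploading them per program.
    for (std::size_t i = 0; i < programs.size(); ++i)
    {
        GLuint idx = glGetUniformBlockIndex(programs[i], "PerFrame");
        if (idx != GL_INVALID_INDEX)
            glUniformBlockBinding(programs[i], idx, 0);
    }
}

(The point is simply that the per-frame data is uploaded once and every program references binding point 0, instead of the same uniforms being set again for each program.)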

Korval
06-04-2008, 06:27 PM
D3D10-level features were slated for inclusion in a later update, Mt. Evans, and I hope it still works that way; although I would also hope, given the delay, that Longs Peak Reloaded is simply folded into the GL3.0 spec, as a delay for the minor upgrades it would bring would be daft and cause further divides.

At this point, there's no reason for Mt. Evans as a separate thing. What we should get are levels of GL.

GL 3.0 would be Longs Peak and LP Reloaded, which only exposes GL 2.1/DX9 features (with some parts of DX10). GL 3.1 (detectable by a query, much like regular GL version numbers) would contain geometry shaders and the bulk of DX10. I would also suggest that there would be a GL 3.2 that would have everything in DX10.1 (that isn't just DX10 with bigger limits, of course).

That is, they would all be released as one specification. None of this, "You get GL 3.0 now and in 6 months [read: 2 years], you get 3.1," stuff.
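
(As a rough illustration of the kind of query meant here: this is how an application already detects which GL "level" a driver reports, by parsing the version string with a current context; the separate 3.0/3.1/3.2 levels above are of course speculation about a spec nobody has seen yet.)

#include <GL/gl.h>
#include <cstdio>

// Ask the driver what it exposes; with "levels of GL" the same query would tell
// you how far up the feature ladder a given implementation goes.
void reportGLLevel()
{
    const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    int major = 0, minor = 0;
    if (version && std::sscanf(version, "%d.%d", &major, &minor) == 2)
        std::printf("Driver reports OpenGL %d.%d\n", major, minor);
}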

Dark Photon
06-04-2008, 07:42 PM
I went through 79 pages of angry programmers yelling at Khronos for nothing...
...and all I got for it was this stinkin' T-shirt :p

Dark Photon
06-04-2008, 07:47 PM
The squeaky wheel gets the grease.
Yep. If nothing else, this serves to convince whoever/whatever is causing this that many OpenGL folk aren't easily bucked, and to affirm to Khronos and the ARB that plenty of folks are "still" waiting and very interested in seeing real results... for now.

If they don't plan to, well, then I heartily encourage them to say so ASAP, so NVidia/ATI/Intel can pick up the cross and own the next-gen cross-platform graphics APIs. DX, by design, sure ain't it. At work we develop large (20-100 pipe) rendering systems on Linux/OpenGL, so how this issue is resolved is particularly important to us. Hopefully, that'll remain OpenGL. But if GL stalls and the vendors give us a better option, well... that's business.

Mars_999
06-04-2008, 08:28 PM
I am not trying to attack you personally, Bobvodka; I am just sick of all the people being negative towards the GL3.0 API and wanting to leave. Because if everyone is leaving, then let's get it over with so I can learn DX10 and be stuck with Windows. The problem is I don't want to be limited to just Windows. So no hard feelings on my end. ;)

pudman
06-04-2008, 10:46 PM
I am just sick of all the people being negative towards the GL3.0 API

What API? Oh yeah, right, it doesn't exist yet...
I think people are being negative about the birth process of GL3, Khronos/ARB in particular. Everyone was excited about what they told us GL3 would consist of. (excitement = positive)

People moan about leaving GL for DX not because GL3 will suck but because IT'S NOT HERE YET.


Also, to clarify what I meant: by 'functionally very close' I meant that the API teasers show a method of interacting with the hardware which is very much the same as D3D10's.

I know, but then my joke wouldn't have worked.


At this point, there's no reason for Mt. Evans as a separate thing. What we should get are levels of GL.

I agree with this. Putting out simply a pre-D3D10-hardware-compatible API makes GL once again seem dated. If they are going to do it, do it now and get it over with. Maybe this is what is causing the delay?

dor00
06-04-2008, 11:43 PM
I wonder if the people who are "preparing" OpenGL3 read these threads.

I still don't understand the silence.

knackered
06-05-2008, 12:01 AM
If I hear "The squeaky wheel gets the grease" one more time...
It's been used in every moan about every bit of delayed GL functionality since the birth of christ.....

LogicalError
06-05-2008, 01:15 AM
It's been used in every moan about every bit of delayed GL functionality since the birth of christ.....


Wow, didn't know opengl was *that* old...

ebray99
06-05-2008, 03:06 AM
It's been used in every moan about every bit of delayed GL functionality since the birth of christ.....


Wow, didn't know opengl was *that* old...

Oh, for sure it is. Matter of fact, it was OpenGL that killed said Christ! See what you've done, Khronos?!?! YOU'VE KILLED JESUS!!!one!!eleventy!!!

knackered
06-05-2008, 03:46 AM
...and we're all waiting for the second coming.

Jan
06-05-2008, 04:04 AM
But all Star Trek tapes will be destroyed during the second coming of Jesus!

Jan
06-05-2008, 04:07 AM
http://www.usingenglish.com/reference/idioms/squeaky+wheel+gets+the+grease.html

For all those who are not native English speakers.

bobvodka
06-05-2008, 04:25 AM
But all Star Trek tapes will be destroyed during the second coming of Jesus!

All Power To The Engines.

bertgp
06-05-2008, 07:17 AM
If they don't plan to, well, then I heartily encourage them to say so ASAP, so NVidia/ATI/Intel can pick up the cross and own the next-gen cross-platform graphics APIs.

NVidia/ATI/Intel are all part of the group working on the OpenGL 3.0 spec (see http://www.khronos.org/members/promoters ). They already know what is going on.

MZ
06-05-2008, 08:30 AM
A "vista only fraud"? Please; there are technical and economic reasons why MS decided NOT to invest a significant amount of man power (and thus money) into porting a whole new driver model back to XP. It's not like you could just drop a new dll in and go 'tada!'; Vista's driver arch is significantly different to XPs, where XPs is an outgrowth of 2K which itself was a merging of 9x and NT. Do you really believe that the ability to draw SM 4.0 shaded triangles has anything to do with Vista architecture?


As for the separate rendering backends, here is a news flash: hardware changes. (...)
You missed the point totally.

In OpenGL 2.x (and by this I mean the nVidia implementation) you can write a renderer for both DX9- and DX10-class hardware.

In OpenGL 3.x, if it existed, you could do the same. The only difference would be a newer, cleaner, faster interface.

In Direct3D, for some mysterious reason, this is not feasible.

Do you think you can rationalize this screw-up by arguing how revolutionary DX10 is? Guess what, you could drop T&L from DX9 too.

MZ
06-05-2008, 08:34 AM
Yes, it took them 8 to 10 billion to make Vista, but this doesn't mean they want to spend another X,000,000 (maybe more; if the quick Google result I found saying it costs an employer on average $200,000/year per employee is correct, then that only allows for 10 people taking a year, and it would take more than that) on porting it back to a 5-year-old OS with a significantly different driver model.

So, yes, they don't want DX10 in XP because it wasn't seen as a sound investment.
By The Holy Power of John Carmack, I dismiss your hypothesis:


"They're artificially doing that by tying DX10 so close it, which is really nothing about the OS. It's a hardware-interface spec. It's an artificial thing that they're doing there. They're really grasping at straws for reasons to upgrade the operating system"

John Carmack.
source (http://www.gamesindustry.biz/articles/carmack-questions-games-for-vista-initiative)

V-man
06-05-2008, 08:47 AM
Originally posted by MZ:
Do you really believe that the ability to draw SM 4.0 shaded triangles has anything to do with Vista architecture?

Bobvodka is confusing the API and the driver model. An API is just an interface, and typically an API, whether you are talking about Win32, libpng, OpenAL..., is portable to any OS.

The DX10 API is also portable to any OS, and it would be possible to make it available for WinXP. You have to understand that if they made it available for WinXP, there would be less motivation to buy Vista and MS could face huge losses.

bobvodka
06-05-2008, 08:50 AM
Do you really believe that the ability to draw SM 4.0 shaded triangles has anything to do with Vista architecture?


SM4.0 probably could be back-ported, but SM4.0 isn't all that D3D10 has; if you think so, you are quite frankly ignorant.


In OpenGL 2.x (and by this I mean the nVidia implementation) you can write a renderer for both DX9 and DX10 class hardware.

In OpenGL 3.x, if it existed, you could do the same. The only difference would be a newer, cleaner, faster interface.

In Direct3D, for some mysterious reason, this is not feasible.

Do you think you can rationalize this screw-up by arguing about how revolutionary DX10 is? Guess what: you could drop T&L from DX9 too.

So, you think D3D10 is just 'no FFP' and 'SM4.0'?

You know what, I can't be bothered any more. I could sit here and detail the differences, but you've already made up your mind; hell, most people have, so it's a waste of time.

I'm out, have fun waiting, I'm off to do things with an API which already exists.

CatDog
06-05-2008, 08:59 AM
I'm out, have fun waiting
Bummer! That would have been of interest to me. What exactly makes DX10 dedicated to Vista only?

CatDog

knackered
06-05-2008, 09:25 AM
The fact that he couldn't give one single example speaks volumes. The truth is, it's the other way round - Vista needs a 3D hardware abstraction layer, D3D10 doesn't need Vista.


I'm out, have fun waiting, I'm off to do things with an API which already exists.
To be honest, I'd be right with you, but unfortunately there are gaping holes in D3D functionality that mean I couldn't ever migrate to it. It doesn't support quad-buffered stereo or genlocking. It's comforting to know that, considering NVIDIA has a huge amount invested in professional cards that support that functionality, they can't just abandon GL either. So here I wait....

Korval
06-05-2008, 10:44 AM
You have to understand that if they made it available for WinXP, there would be less motivation to buy Vista and MS could face huge losses.

Really? You honestly think that DX10 is a significant motivation for most people?


the fact that he couldn't give one single example speaks volumes.

Well, there is (at least) one thing. The new graphics driver model (which is not portable to XP) allows the D3D10 API to make certain guarantees that the D3D9 API could not make. Specifically, the sanctity of video memory is now guaranteed. That is, you don't have to check for Alt-Tab-ing and then reload all your stuff. Video memory is properly virtualized by the OS and graphics driver.

You could not port that back to XP because its graphics driver model does not and cannot allow for this.

Now that being said, the problem is really that D3D was too low-level in not providing a software guarantee (the way GL does). That left them open to this kind of divide. But the problem was already there, and they did what they could with it.
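
For readers who have never had to do it, here is a minimal sketch of the Alt-Tab dance being referred to, as it looks under D3D9 on XP (assuming the usual d3d9.h setup; the two resource helpers are hypothetical app functions and error handling is omitted):

#include <windows.h>
#include <d3d9.h>

// Hypothetical app helpers: release and rebuild everything in D3DPOOL_DEFAULT.
void releaseDefaultPoolResources();
void recreateDefaultPoolResources();

void handleLostDevice(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS* presentParams)
{
    HRESULT hr = device->TestCooperativeLevel();
    if (hr == D3DERR_DEVICELOST)
    {
        Sleep(50);                          // device is gone (e.g. Alt-Tab); just wait
    }
    else if (hr == D3DERR_DEVICENOTRESET)
    {
        releaseDefaultPoolResources();      // everything in the default pool is lost
        device->Reset(presentParams);
        recreateDefaultPoolResources();     // reload/rebuild it all
    }
}

Under the Vista driver model, none of this bookkeeping is the application's problem, which is the guarantee described above.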

Jan
06-05-2008, 11:11 AM
Well, there are certainly things that D3D10 uses in Vista that are not directly portable to XP, like the virtualization of video memory. However, AFAIK "managed" resources (which are available under D3D9 already) are actually what we are used to (the driver holds a copy in RAM and swaps it back and forth if necessary). This functionality was extended a bit in D3D10. Although the way D3D10 does it under Vista might not be portable to XP, the functionality itself should be implementable, even on XP. It just needs to be implemented differently, maybe not as elegantly and optimally as under Vista, but possible nonetheless.

And even if there are issues that prevent it from being fully implemented on XP (the virtualization, I mean), you could still expose the complete D3D10 API under XP, just with a few limitations (like invalidating render targets on a task switch).

And, YES, I am sure MS restricted D3D10 to Vista as an argument to switch to it. The only problem is that this great new API doesn't seem to be as big an argument for gamers as MS hoped it would be. In general, D3D10 is also just an API clean-up. It is not like D3D8 compared to D3D9.

Even if D3D10 is not a great argument for switching to Vista, it is at least ONE argument. Together with a few other features, people DO HAVE a few reasons to switch to it. Now, if D3D10 were available on XP, MS would reduce the available arguments for switching, which cannot be their intention. I do understand this decision from a political and PR point of view. Technically there is no reason.

Jan.

knackered
06-05-2008, 11:30 AM
Video memory is properly virtualized by the OS and graphics driver.
I thought that basically meant all resources are created in managed mode, which is GL's default/only mode even on XP. Not a showstopper.
Incidentally, I love this quote from one of Microsoft's D3D10 marketing blurbs.

There is no limit on the number of objects which can be rendered, provided enough resources are available.
There is no limit to the amount of money I can spend, provided I have enough money.

MZ
06-06-2008, 06:27 AM
Well, there is (at least) one thing. The new graphics driver model (which is not portable to XP) allows the D3D10 API to make certain guarantees that the D3D9 API could not make. Specifically, the sanctity of video memory is now guaranteed. That is, you don't have to check for Alt-Tab-ing and then reload all your stuff. Video memory is properly virtualized by the OS and graphics driver.

Let's see. Users of XP are effectively denied functionality like:

geometry shaders, more powerful vertex and pixel shaders, texture arrays, constant buffers, conditional rendering, new way of instancing, new texture formats, new framebuffer formats, etc.

And you're telling me it's all because of... Alt-Tab-ing?

Yeah, makes perfect sense. Be damned, you evil key combination!


You could not port that back to XP because its graphics driver model does not and cannot allow for this.

Take a DX-to-GL wrapper and your "impossible" problem is solved. All the DEVICE_LOST handling is magically gone. Obviously, Microsoft has all the resources needed to do it better, without the middleman.

Nicolai de Haan Brøgger
06-06-2008, 09:59 AM
Let's see. Users of XP are effectively denied functionality like:

geometry shaders, more powerful vertex and pixel shaders, texture arrays, constant buffers, conditional rendering, new way of instancing, new texture formats, new framebuffer formats, etc.

Where did you get this untrue information from?



Video memory is properly virtualized by the OS and graphics driver.

Well, it is not properly virtualized in the sense that memory limitations in the driver and on the GPU are transparent to the developer. I would not call it properly virtualized until we can render with textures that exceed the GPU memory.

Demirug
06-06-2008, 10:10 AM
You overrate the resources that Microsoft has available. ;)

I believe the main problem here is that most people see Direct3D 10 as an API while for Microsoft it is something more. There is the API, the user mode runtime, the kernel mode graphics system, the two driver interfaces (user and kernel mode). The API itself is not bound to any OS but the other parts are tightly integrated in the Vista infrastructure.

Therefore bringing Direct3D 10 to Windows XP requires developing new DDIs (device driver interfaces), or extending the current ones, and a runtime that can handle them. While this is quite possible, it doesn't make sense from a commercial point of view to do this for a product that is reaching the end of its selling period. Additionally, the team was quite busy anyway, as they had to rework all the DirectDraw and Direct3D runtimes to work with the new Vista driver model.

If someone really wanted Direct3D 10 on XP it is quite possible (it is possible on Linux, MacOS X, … too). It is only a question of the amount of time someone is willing to invest.

Btw: Direct3D 11 seems just around the corner. The agenda for the next Gamefest promises a lowdown.

Demirug
06-06-2008, 10:15 AM
Video memory is properly virtualized by the OS and graphics driver.

Well, it is not properly virtualized in the sense that memory limitations in the driver and on the GPU, are transparent to the developer. I would not call it properly virtualized until we can render with textures that exceed the GPU memory.

Direct3D 10 allows this. The Vista video memory manager will swap your render targets even to disk if necessary. You are only limited by the address space your application can use.

knackered
06-06-2008, 10:51 AM
In the same way that, on XP, the GL driver will swap your resources to system memory and the OS (any OS) will swap them to disk just like any other block of system memory.

Overmind
06-07-2008, 04:14 AM
You could not port that back to XP because its graphics driver model does not and cannot allow for this.

So you're saying GL does not and can not possibly work on XP. Yup, that makes perfect sense.

Jan
06-07-2008, 07:32 AM
Let's see. Users of XP are effectively denied functionality like:

geometry shaders, more powerful vertex and pixel shaders, texture arrays, constant buffers, conditional rendering, new way of instancing, new texture formats, new framebuffer formats, etc.

Where did you get this untrue information from?


If you say this is untrue, you should elaborate a bit more on the way in which this information is untrue! It is a fact that D3D10 adds all these features, so what exactly do you mean by your statement?

ector
06-07-2008, 11:32 AM
in the same way that on xp the GL driver will swap your resources to system memory and the OS (any OS) will swap them to disk just like any other block of system memory.

Yeah. The difference is that to support this, OpenGL keeps copies of ALL your textures in system RAM, so it can automatically restore them when their video RAM is lost. Vista, on the other hand, has some sort of protection for video memory contents so that this extra copy is no longer needed, and your texture is still guaranteed to survive.

The reason why you lose textures in D3D9 and earlier (on XP; on Vista you can use D3D9Ex to get D3D10-like behaviour) is that, for efficiency reasons, keeping a system memory copy of every texture like OpenGL always does is wasteful, so the responsibility for keeping a copy of the textures is passed to the app developer, who may decide that reloading from disk is better than wasting RAM.

Nicolai de Haan Brøgger
06-07-2008, 11:53 AM
If you say this is untrue, you should elaborate a bit more on the way in which this information is untrue! It is a fact that D3D10 adds all these features, so what exactly do you mean by your statement?

What is the point? :)

I think you and I both know that most of the DX10 features are exposed in GL via extensions.

Zengar
06-07-2008, 12:40 PM
Well, I think what most people who say DX10 is not portable to XP mean is that DX10 is more than an API; it is also an implementation (the two things were never separated), therefore there is more to DX10 than new features. The way DX10 is organized, it presupposes a special driver/OS architecture. Anyway, the API alone is implementable on XP (as we could - well, almost - just write a DX10 wrapper that implements it via GL). MS decided not to do so, and I am sure that was done to boost Vista sales (it didn't seem to help anyway).

Jan
06-08-2008, 03:11 AM
"I think you and I, both know that most of the DX10 features are exposed in GL via extensions."

Yes, but the original poster wasn't talking about OpenGL! His statement was that D3D developers are kept from using D3D10 features on XP. Don't say others are stating untrue information when you haven't followed their argument completely. We all know OpenGL can do all those things, but that was not the point being discussed.

Jan.

Mars_999
06-08-2008, 04:43 PM
http://www.opengl.org/events/details/siggraph_2008_los_angeles_california/

LogicalError
06-08-2008, 10:40 PM
OpenGL 3 "Updates"? Does that mean it won't be done yet by that time?
Heh.. if that happens they should expect an exodus of windows opengl developers to directx :(

dor00
06-09-2008, 12:00 AM
"Don’t miss the great updates on OpenGL3 at the SIGGRAPH BOF! "

Already UPDATES?

dor00
06-09-2008, 12:02 AM
OpenGL 3 "Updates"? Does that mean it won't be done yet by that time?
Heh.. if that happens they should expect an exodus of windows opengl developers to directx :(

Have fun doing that exodus on Mac and Linux :)

niko
06-09-2008, 12:03 AM
I'm afraid there will be an exodus to some extent no matter what :sorrow:

Roderic (Ingenu)
06-09-2008, 12:51 AM
I'm afraid there will be an exodus to some extent no matter what :sorrow:
Talking about id Software there, maybe?
Without id there's little future for OpenGL as a gaming API on Windows, IMO.

niko
06-09-2008, 02:06 AM
Well, I was talking about OpenGL developers in general, but yes, I certainly agree that id signing off would be Bad.

Brolingstanz
06-09-2008, 03:50 AM
It would only be bad to the extent that there would no longer be that particular beacon of reassurance for the less informed among us. Some folks can make up their own minds, without the need for any precedent at all. Heck that's what Id did with GL to begin with, right? An API doesn't make software great, but great software can make an API famous.

Eddy Luten
06-09-2008, 06:01 AM
Anyone from here actually going to SIGGRAPH this year?

Edit:

Have fun doing that exodus on Mac and Linux :)

With the newer technologies available such as NVIDIA's CUDA and AMD's CTM (Close to Metal), I don't see how the next generation (think next 10 years) of graphics programmers couldn't create their own cross-platform open source APIs and simply use OpenGL or some other library for outputting the final composite buffer, if that's even necessary at that point. Who cares about "shader models" or "C-like shading languages" at that point when the graphics pipeline is at your every wish?

ACM Queue had a great article about this last month (maybe two months ago) which went into detail about every stage of the programmable graphics pipeline and the efforts in making it programmable. Not only vertex/fragment/geometry, but the whole nine yards.

Anyway, I've pretty much stopped caring about OpenGL's development during this last month but I'm still using it because it's my only choice at the moment, and I don't see D3D as a viable alternative. Yet, when a better API comes along (I don't doubt this), I'll jump onto it since I think developers' lives are being made explicitly difficult at this point in time.

Brolingstanz
06-09-2008, 06:12 AM
I'll certainly be there in spirit, if not in body.

Eddy Luten
06-09-2008, 06:53 AM
Link to the issue of Queue I was referring to: http://www.acmqueue.org/modules.php?name=Content&pa=list_pages_issues&issue_id=48

Rick Yorgason
06-09-2008, 06:55 AM
It would only be bad to the extent that there would no longer be that particular beacon of reassurance for the less informed among us.
If vendors had decreased incentive to support the API, there would be worse consequences than that.

Don't underestimate the value of using a famous API.

Brolingstanz
06-09-2008, 07:20 AM
Right, it comes full circle... a Catch-22 of sorts. The famous APIs tend to squeak, and, as we all know, the squeaky APIs get the grease. But which comes first, the squeak or the grease?

By the calculus of famous APIs, we should be dripping with grease, unless it's my calculations that are beginning to squeak ;-)...

CatDog
06-09-2008, 07:37 AM
unless it's my calculations that are beginning to squeak
Err, yes. :-)

Make sure you attach a webcam to your body, 'cause in case your spirit does not forget it when shifting to SIGGRAPH, we'd all like to see those funny faces when the future of OpenGL is disclosed.

CatDog

Brolingstanz
06-09-2008, 08:08 AM
No one will be more pleased than I when news of OpenGL 3.0 reaches the masses.

PaladinOfKaos
06-09-2008, 09:43 AM
The famous APIs tend to squeak...

OpenGL is beyond squeaking. It's making horrible grinding noises, and there are metal shavings flying everywhere.

Seth Hoffert
06-09-2008, 02:41 PM
With the newer technologies available such as NVIDIA's CUDA and AMD's CTM (Close to Metal), I don't see how the next generation (think next 10 years) of graphics programmers couldn't create their own cross platform open source APIs and simply use OpenGL or some other library for outputting the final composite buffer; if that's even necessary at that point. Who cares about "shader models" or "C-like shading languages" at that point when the graphics pipeline is at your every wish?

"Shader models" are merely a way of expressing hardware capability; this necessity would remain unchanged even if we were to see CUDA-only pipelines (i.e., shared memory size, stream processor count, texture unit count, etc.) We would see "CUDA model x" or something similar in its place.

Also, correct me if I'm wrong, but wouldn't CUDA only work for deferred rendering schemes?

Zengar
06-09-2008, 02:48 PM
Why should CUDA only work for deferred rendering?

Seth Hoffert
06-09-2008, 02:51 PM
Well I was thinking since it currently cannot output to the visible buffers directly, it'd be a pain to use it for forward rendering... but I suppose the only additional step needed is displaying the color buffer at the end.

-NiCo-
06-09-2008, 02:58 PM
Oh, I thought you were referring to the fact that CUDA lacks a rasterizer...

Zengar
06-09-2008, 03:50 PM
AFAIK, G8x has no rasterizer at all, everything is done via shader units (but I may be mistaken here...)

Seth Hoffert
06-09-2008, 03:56 PM
This is pretty unrelated, but interesting nonetheless. I was reading a forum where some developers were trying to implement stream compaction with CUDA (similar to what the geometry shader can do: remove items and compact its input), but they were unable to match the speed of GS + transform feedback... either some very smart algorithms are in place, or there is special hardware in there to assist with stream compaction (since it was KNOWN that the hardware needed to do this)...

-NiCo-
06-09-2008, 04:03 PM
I wouldn't be surprised if it didn't match the speed of GS + transform feedback. Fast CUDA algorithms heavily rely on the efficient use of the available shared memory, global memory coalescing and avoiding bank conflicts; e.g. not making use of memory coalescing can easily cause a tenfold drop in performance. Furthermore, there's probably something more going on at the hardware level. For a matrix-matrix multiplication, I did not manage to get the same speed using CUDA compared to the speed of their CUBLAS library, although they provided the CUDA algorithm themselves in the SDK and manual.

Seth Hoffert
06-09-2008, 04:05 PM
That's true, I suppose a lot of work is required in hand-optimizing CUDA assembly.

Zengar
06-09-2008, 04:28 PM
Yeah, you are right... The architecture is pretty sophisticated and hard to use optimally. This is a hindering point, of course.

Korval
06-09-2008, 05:58 PM
AFAIK, G8x has no rasterizer at all, everything is done via shader units (but I may be mistaken here...)

No, it has a rasterizer. The rasterizer feeds the shader units.


they were unable to match the speed of GS + transform feedback


I did not manage to get the same speed using CUDA compared to the speed of their CUBLAS library

Further evidence as to why IHVs with intimate hardware knowledge need to be responsible for implementing graphics APIs.

knackered
06-10-2008, 02:24 PM
Wow, back on topic. Nicely done korval.

cass
06-11-2008, 06:09 AM
It's a tricky balance, I think. The other approach is to provide intimate hardware knowledge to software developers.

Traditionally CPU vendors have had to take this route while GPU vendors have not, but I think GPGPU is changing that some.

Abstractions are great when they hide you from hardware details you don't need to know, but they stink when the abstraction is not a good match for your problem.

knackered
06-11-2008, 01:27 PM
so you're following the thread, cass?
you don't feel the urge to allay our concerns in any way?

Timothy Farrar
06-11-2008, 05:16 PM
I'm afraid there will be an exodus to some extent no matter what :sorrow:
Talking about id Software there, maybe?
Without id there's little future for OpenGL as a gaming API on Windows, IMO.


What makes you think id is still using GL on Windows? You know they have a 360 port, which means they have a DX path in the source. So it makes sense to use the DX path instead of the GL path on Windows because of better driver support from AMD...

cass
06-11-2008, 06:32 PM
I work at id, and we use OpenGL on Windows, but the API-specific part of the renderer is pretty easily encapsulated to support consoles in their native APIs.

cass
06-11-2008, 06:42 PM
so you're following the thread, cass?
you don't feel the urge to allay our concerns in any way?

Which concerns did you want allayed? This is a long thread, and I only peeked at the bottom. ;)

Mars_999
06-11-2008, 07:50 PM
Nice to know that cass works at id now. :) But yes, John already stated that id Tech 5 will be GL for Windows, PS3 and OS X, and DX for the 360. Come on, people, pay attention. Long live GL!!

John said he would think about using GL 3.0 for Windows if they had it finished by the time they released the next engine... As far as I am concerned, GL 2.1 + NVIDIA extensions for DX10 features are the cat's a**! :)

cass
06-11-2008, 09:52 PM
My preference at this point would be to see the major glaring problems in OpenGL efficiency addressed through limited scope EXT or vendor extensions via collaboration among a small number of IHVs and ISVs. Let the ARB standardize what's been proven.

If we insist on developing a gold-plated, ARB standardized, grand unified silver bullet solution for OpenGL, it'll take forever and it won't be as good as if we had just iterated with steady improvement. I guess I'm just a bottom-up design kind of guy. ;)

The top two things on my list are dramatically reducing egregious driver overhead and enabling fast, fully out-of-band, scattered writes to texture memory for virtual texturing.

Further out, I think we need the graphics abstraction to be able to handle more of its own scene traversal, hierarchical culling, etc. The right abstraction may even include exposing many independent GPU cores. These are the API problems that I think will matter most in say the next 5 years.

Roderic (Ingenu)
06-12-2008, 12:35 AM
OpenGL ES is a nice clean-up of OpenGL, and D3D10 is also a nice clean-up of D3D, I don't really care where OpenGL goes, as long as it moves to get us something better than what we already have.

Simple is beautiful. I don't want ten ways to do the same thing because I end up wondering which is the best one, and with some bad luck the "best one" will be IHV dependent...

I'll spare you the rant about the lack of communication from Khronos regarding OpenGL 3.0, but the least they can do is keep their clients informed about what's going on. (No matter what is happening, clients are much more forgiving when they know what's going on.)

knackered
06-12-2008, 03:20 AM
Which concerns did you want allayed? This is a long thread, and I only peeked at the bottom. ;)
Is the GL3 spec going to be ready by SIGGRAPH '08, and will there be drivers supporting it by then?

EvilOne
06-12-2008, 06:04 AM
Damn... this is so frustrating. Getting no information in an information world is so unusual.

My question is, what do we get at the end? Is the upcoming GL3 version Longs Peak or Mt. Evans or something completely new? As I understand from the few Pipeline newsletters, Longs Peak targets DX9 class hardware... Is this correct? I am currently programming against DX9 hardware anyway (DX10 class hardware is just not common enough currently). So if we get Longs Peak, it is essentially a cleanup of the existing API, right? Which means hitting the fast rendering path without that much guesswork (so I hope). Maybe some improvements on buffer objects (D3D locking semantics come to mind) and render-to-texture capabilities going into the core? So what we could expect is something that is at least on par with DX9? Hopefully there is some query mechanism about what is supported and what is not (render target formats, filtering caps, post-shader blending, etc.) - the current trial-and-error resource creation method just plainly sucks. And hopefully no software fallbacks... From my point of view, anything that would fall back to software should just report an error; keep that [censored] out of the driver...

What currently makes me shiver a bit is this statement from a Pipeline newsletter: "While there will still be backwards API compatibility, the new "Lean and Mean" profile, and a substantial refactoring in terms of the new object model, make it in many ways an entirely new API design."

This somehow sounds like we get the same old ATi drivers with some fancy new object API... implemented on top of their old crap.

I'm also interested in what the implications for GLSL are. The current state of GLSL is: it is just a [censored] pain in the ass. I'm happily using ARB vp/fp now after lots of frustration with GLSL. Hopefully the shading language will be cleaned up too.

Maybe someone with a bit deeper insight could enlighten me. Thanks in advance.

cass
06-12-2008, 06:44 AM
Unfortunately, I don't think anybody that knows what's going on is free to discuss it outside Khronos.

I realize this is frustrating and that information was flowing until it abruptly ceased without explanation.

There's no reason to be optimistic that this bodes well.

As I stated above, my intended plan is to work with individual IHVs (and other ISVs) on EXT or vendor extensions, because implementors can make reliable promises about whether or when they can get me an implementation, and they don't need committee approval. Let the ARB standardize that stuff later, but that won't keep me from making forward progress.

The path I advocate will produce some "multiple ways to do stuff". That problem (if it is really a problem) can be solved by removing old extensions once the functionality has made core or ARB status.

Timothy Farrar
06-12-2008, 07:51 AM
My preference at this point would be to see the major glaring problems in OpenGL efficiency addressed through limited scope EXT or vendor extensions via collaboration among a small number of IHVs and ISVs. Let the ARB standardize what's been proven.

...

The top two things on my list are dramatically reducing egregious driver overhead and enabling fast, fully out-of-band, scattered writes to texture memory for virtual texturing.


Great news, thanks for correcting my wrong assumption about id using dx on pc!

Seems as if for the most part NVidia is keeping up the vendor extensions (NV_conditional_render, etc). Now if only AMD would do its part!

Also, it sure would be great if we got vendor extensions or GL3 functionality that enabled DMA-mapping GPU texture memory directly (along with swizzle/linear layout and format information) so we could write from the CPU asynchronously with rendering (with no GL overhead). I'm assuming this is what you are looking for as well.
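
For what it's worth, the closest thing available today is streaming uploads through ARB_pixel_buffer_object; this is only a rough approximation of the direct DMA mapping being asked for, not the real thing. A minimal sketch (assuming GLEW, RGBA8 tiles, and an already-created PBO and texture; error handling omitted):

#include <GL/glew.h>
#include <cstring>

void uploadTileAsync(GLuint pbo, GLuint tex, const void* pixels, int x, int y, int w, int h)
{
    const size_t bytes = size_t(w) * h * 4;                              // RGBA8 assumed

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, bytes, 0, GL_STREAM_DRAW);  // orphan old storage
    void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY);
    std::memcpy(dst, pixels, bytes);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

    glBindTexture(GL_TEXTURE_2D, tex);
    // With a PBO bound, the pointer argument is an offset into the buffer (0 here),
    // so the driver can schedule the copy without stalling the CPU.
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}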

knackered
06-12-2008, 10:31 AM
I realize this is frustrating and that information was flowing until it abruptly ceased without explanation.
There's no reason to be optimistic that this bodes well.

I don't think the fact that you've left NVIDIA bodes well for OpenGL either. Have all the GL dev-relations people been streamlined away?

cass
06-12-2008, 11:01 AM
I dunno, working at id, I can now be a non-partisan OpenGL supporter.

There are still a *lot* of people at NVIDIA that care about OpenGL, and it's hard to argue with how up-to-date their OpenGL extensions are for current hardware.

pudman
06-12-2008, 03:02 PM
There's no reason to be optimistic that this bodes well.

I hope Khronos is prepared to address this at the BOF.

Korval
06-12-2008, 03:32 PM
The path I advocate will produce some "multiple ways to do stuff". That problem (if it is really a problem) can be solved by removing old extensions once the functionality has made core or ARB status.

The problem ultimately with having multiple ways to do the same thing is cross-compatibility. The use of glslang in OpenGL has some stupid legacy things built into it. For example, you cannot just let the implementation pick the vertex attribute indices for all of your generic vertex attributes. If you're not using glVertex, you must identify a generic attribute as "attribute 0." Why? Because the immediate mode API for rendering requires it, since only attribute 0 will provoke the rendering of the vertex.

So long as the immediate mode API is a real part of OpenGL, it's very hard to argue against this requirement. You'd basically be saying that you can have shaders or immediate mode, but not both.
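
To make that concrete, here is the boilerplate everyone ends up writing; a minimal sketch (assuming GLEW and a GL 2.0 context; the attribute name "position" is just an example):

#include <GL/glew.h>

GLuint linkWithFixedAttribZero(GLuint vs, GLuint fs)
{
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    // Must be done before linking: attribute 0 is the one that provokes the
    // vertex, so a real attribute has to be pinned to that slot by hand.
    glBindAttribLocation(prog, 0, "position");
    glLinkProgram(prog);
    return prog;
}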

This is, of course, more of an annoyance than a real problem, but this kind of stuff is everywhere in OpenGL. Some of it is just API annoyances, and some of it is a real performance or functionality problem.

Take FBO as an example. The only way to know if a particular setup will actually work (for IHV-defined reasons, not poor use of the API) is to try it. With live texture objects and everything. And even if it doesn't work, you don't know why or how to fix it.
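
The whole "check" amounts to something like this; a minimal sketch (assuming GLEW and EXT_framebuffer_object, with a live texture already created):

#include <GL/glew.h>

bool fboSetupWorks(GLuint colorTex)
{
    GLuint fbo = 0;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, colorTex, 0);

    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glDeleteFramebuffersEXT(1, &fbo);

    // GL_FRAMEBUFFER_UNSUPPORTED_EXT is the catch-all "no, for reasons we
    // won't tell you" answer being complained about here.
    return status == GL_FRAMEBUFFER_COMPLETE_EXT;
}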

Right now, implementations have been known to recompile shaders (as a performance "optimization") if you set certain uniforms to certain values.

At some point, the quick fix just can't cut it anymore. Believe me, I've worked in codebases that have lived for far too long without an overhaul. Sometimes, you have to take the old code out into the backyard and put it out of its misery entirely. Microsoft did this several times before settling on the basic structure of D3D8's API.

Yes, it is painful to do, and yes, the ARB dropped the ball big-time on getting it done. But the decision to redo OpenGL from scratch was necessary and correct.

Also, we shouldn't discount the "ease-of-use" factor. While Id Software may have enough people and enough of an emphasis on graphics to learn the fast way to do their kind of rendering (and, let's be honest here: if it isn't the fast path now, IHVs will make it the fast path, so you can't be terribly worried about it), not every developer has the will or capability to do so. When a developer has to pick through dozens of extensions for doing something, they're liable to get it wrong. And that doesn't help them use OpenGL, since they're seeing the bad end of the API.

You are correct in this: the complete (and sudden) lack of communication indeed does not bode well.

We ought to expect an implementation (not just a spec) to come out of SIGGRAPH, but there is literally no rational way we can expect that. We ought to expect that Mt Evans has been folded into Longs Peak, but again, it would be silly to expect that. Going into SIGGRAPH, the thing we should most be looking for (an explanation) is the thing we're least likely to get.

pudman
06-12-2008, 08:42 PM
There are still a *lot* of people at NVIDIA that care about OpenGL

Mind if I ask why? From an IHV perspective, what excites one about OpenGL? Sure, it broadens support across platforms, but really, why OpenGL? For those that *care* about it, wouldn't it be advantageous to *do* something about it?

NVIDIA is one of those places that could toss out another cross-platform API and drive a stake into OpenGL's heart. What rationale would they have for *not* doing this, assuming that GL3 is also not code-compatible with pre-3.0 versions?

It could be like a large "GL extension": a whole new API that is GL3 in spirit, which Khronos could later adopt as GL3.

Korval
06-12-2008, 10:04 PM
What rationale would they have for *not* doing this, assuming that GL3 is also not code-compatible with pre-3.0 versions?

What good would it do? Apple's not behind it, so nVidia will still have to support their AppleGL implementation. Linux doesn't matter very much to nVidia. And it would be just as ignored on Windows as OpenGL, if not more so, because of the lack of ATi and Intel support.

The simple fact of the matter is that no API is useful without support. And unless nVidia, ATi, and Intel all decide to support an API, then it is of no value.

As far as GL 3's "code-compatibility" goes, it isn't code-compatible. At least, it wasn't as of last year, when we last heard anything. GL 3.0 is a new API; there was said to be a backwards compatibility API that would allow you to use certain select GL 3.0 objects in GL 2.x rendering contexts. But that's all.

pudman
06-13-2008, 12:12 AM
And unless nVidia, ATi, and Intel all decide to support an API, then it is of no value.

I was drawing on the idea started by CUDA, which is why I suggested that NVIDIA might be in a position to create a new API. Sure, CUDA is for a different market, but what about Cg? NVIDIA seems to like creating these things that can exploit their hardware, more so than AMD.

If NVIDIA left GL support as-is (with possible bug patches) and came out with a new API that *NVIDIA* supports, cross-platform, with all the nifty features of the theoretical 3.0, why *wouldn't* you use it? Because it's not supported by AMD? That so far hasn't stopped people from developing with NVIDIA's "SM4.0" extensions.

There's a need for a well-crafted, cross-platform graphics API. That's one of the reasons for this long topic. What if someone (or something) other than OpenGL/Khronos provided it?


As far as GL 3's "code-compatibility", it isn't.

That's what I said I was assuming, just in case the argument arose that this mythical new API wouldn't have a nice legacy installed base like GL does.

Basically I'm just blabbing out loud, and fully expect to be disappointed with whatever happens. It's much more healthy to have really low expectations so you can always be surprised in the end.

Korval
06-13-2008, 01:44 AM
Sure, CUDA is for a different market, but what about Cg? NVIDIA seems to like creating these things that can exploit their hardware, more so than AMD.

nVidia can create whatever it wants, but Cg (despite their attempts to force Cg on GL users) is still not widely used. That's because neither ATi nor Intel support it, and using Cg with either of those two pieces of hardware produces sub-optimal drivers and doesn't expose the hardware features (since they don't extend ARB_vp/fp with new features).

Timothy Farrar
06-13-2008, 05:19 AM
Sure, CUDA is for a different market, but what about Cg? NVIDIA seems to like creating these things that can exploit their hardware, more so than AMD.

nVidia can create whatever it wants, but Cg (despite their attempts to force Cg on GL users) is still not widely used. That's because neither ATi nor Intel support it, and using Cg with either of those two pieces of hardware produces sub-optimal drivers and doesn't expose the hardware features (since they don't extend ARB_vp/fp with new features).

Somehow I doubt that Cg isn't widely used. Just look at the PS3, and take into account developers writing cross-platform shaders. It nearly works with just a recompile for DX (with a few ifdefs), not to mention the ability to simply retarget.

sqrt[-1]
06-13-2008, 05:48 AM
I think Quake Wars (the last high profile OpenGL game) used Cg for some of its shaders....

bertgp
06-13-2008, 06:15 AM
Right now, implementations have been known to recompile shaders (as a performance "optimization") if you set certain uniforms to certain values.


Really? Which ones particularly? This could explain some performance glitches I encounter sometimes. Is there any documentation/release notes/anything explaining this in more detail?

Thanks in advance!

cass
06-13-2008, 06:18 AM
Having and retaining a legacy installed base may be the primary reason that OpenGL is still around today. Certainly it is one of the very important reasons. Blaming OpenGL's woes on the burden of legacy support is lazy though. Legacy support isn't why it took unacceptably long to get render-to-texture support. It isn't why the API fast path isn't always obvious. It isn't why implementation quality varies dramatically among vendors.

OpenGL has problems that need solving today, but not problems that absolutely require a from-the-ground-up API rewrite. The nail that is sticking out the highest right now is API overhead relative to consoles. Even within that space, the lowest hanging fruit is concentrated on a small set of API calls (shader binding and parameter setting, texture binding, vertex buffer input configuration, and a handful of misc state like blend mode and depth func). I am going to spend my energy hammering that nail until another nail is sticking out higher. I will be forward looking, but I cannot accept the cost of being paralyzed by trying to make everything perfect in a single incompatible iteration.

If you can't get there in steps, you may not be able to get there at all. And maybe you shouldn't.

Regarding Cg, "force" is a pretty strong word. What we wanted to do was provide a single source language that would work on all platforms and APIs. At id, the same shader source compiles for xbox360, ps3, and PC, - even though the 3 APIs are all different - and the reason it does is because you can write source compatible Cg and HLSL. For ATI and Intel (and maybe even NVIDIA) PC platforms, we will use the Cg GLSL profiles. A single source language eliminates a lot of development headaches. We chose to use Cg rather than invent our own. It's interesting that isolating the API is a lot easier than isolating the shader. It's because the shader is content, which is something a lot of OpenGL implementors still don't get. Just like vertex array data and image data, shaders need to be API agnostic.

Mars_999
06-13-2008, 06:58 AM
I am starting to get the idea that, if they say anything about GL 3.0 at SIGGRAPH, it will be something along the lines of "We aren't going to redo 3.0 into a new API; we'll fix some of the broken items and clean up the API, but not use this new Object Model." I guess I don't care about the whole new coding interface if they do this, but I want to see the NVIDIA extensions they have for the GF8 put into the core GL 3.0 spec...

NeARAZ
06-13-2008, 07:03 AM
nVidia can create whatever it wants, but Cg (despite their attempts to force Cg on GL users) is still not widely used.
Is GLSL widely used? No, because OpenGL is not widely used in the first place; and then if someone does use OpenGL, they stick to fixed function or ARB vertex/fragment programs (which Cg can target just fine).

So whether Cg or GLSL is more widely used - hard to say. Anyone has the data?

We at work use Cg just because we can cross-compile to ARB vertex/fragment programs and Direct3D assembly shaders. We don't compile it to GLSL profiles just because GLSL is too unstable to be used in real life (in our target market) and has a host of other problems (shader loading speed, anyone?)
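
For anyone who hasn't seen it, targeting the ARB assembly profiles from the Cg runtime looks roughly like this; a sketch only (assuming the Cg toolkit headers and a current GL context; error checking omitted):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

CGprogram loadArbVertexProgram(CGcontext ctx, const char* source)
{
    // Compiles the Cg source down to ARB_vertex_program under the hood.
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, source,
                                     CG_PROFILE_ARBVP1, "main", 0);
    cgGLLoadProgram(prog);
    return prog;
}

// At draw time:
//   cgGLEnableProfile(CG_PROFILE_ARBVP1);
//   cgGLBindProgram(prog);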

knackered
06-13-2008, 09:54 AM
Just like vertex array data and image data, shaders need to be API agnostic.
Uhuh. I hear you, brother. Analogous to every type of car engine having its own special mix of petrol. Get rid of the built-in uniforms and there's no reason for API-specific shader languages.

pudman
06-13-2008, 10:07 AM
What we wanted to do was provide a single source language that would work on all platforms and APIs.

And I believe that was one part of the rationale for nvidia to create Cg. The other was (most likely) how unsuited GLSL was to the task as well as its deficiencies in expressiveness.


If you can't get there in steps, you may not be able to get there at all. And maybe you shouldn't.

This is true when you talk of GL2.0. Can you retrofit it with more extensions etc until it gets to a level of capability like a theoretical GL3? No. And it would be a mess.

There are of course risks in designing a new API but the question would also be: is it worth it? I think we're in agreement on this forum that GL3 *would* be worth it. That's not to say it would be adopted by everyone tomorrow or would even be close to "perfect" in the near future but it's got to start somewhere.

And so, as with Cg, a vendor could release a custom API that meets the needs of developers *now* and lobby for it to be standardized in the future. At least it would be out there and have the (potential) advantage of not being designed by committee.

knackered
06-13-2008, 11:28 AM
Well, I suppose it would be possible to write GL3 on top of the existing GL API + extensions. Make it all inline. Korval has apparently written an abstraction matching the proposed new object model.

MZ
06-13-2008, 02:32 PM
Sometimes, you have to take the old code out into the backyard and put it out of its misery entirely.
So, you are calling for... euthanasia for GL 2.x

Let's pray GL 3 won't get an abortion.

Korval
06-13-2008, 07:38 PM
Blaming OpenGL's woes on the burden of legacy support is lazy though. Legacy support isn't why it took unacceptably long to get render-to-texture support. It isn't why the API fast path isn't always obvious. It isn't why implementation quality varies dramatically among vendors.

For some of these, it undeniably is. Any time you have multiple ways to do the same thing, there will be a faster way and a slower way. Sometimes this will change based on circumstances, but with a given set of circumstances, there is a right way and a wrong way. The more ways you have, the less likely it is that the way you choose will be the right way.

Some people have the time, money, and will to sort through all the possible circumstances, permutations, and implementations to find the right way. Some are Id Software, who has enough clout to demand that their way is the right way, and the implementers will bend over backwards to make it so. I well remember the days when nVidia had two OpenGLs. The "I render exactly like Quake 3" version which was fast, and the "I don't render exactly like Quake 3" version that was not-so-fast (admittedly better than the ATi version, which looked at the executable's name). If there had been only one way to render, everyone would have been fast.

Most people don't have the luxury to do that. They need an API that does what it says, does it as fast as reasonably possible, and is reasonably stable.

On implementation quality, you're right, partially. It's undeniably true that there is a matter of choice and will involved. That is, if you don't care about OpenGL, then your OpenGL implementation will be crappy (and vice versa). But that doesn't mean that lowering the bar will not help implementation quality. It potentially makes it a lot easier to care about OpenGL when you don't have to support complexities like immediate mode rendering and all of its myriad interactions with other systems.

If GL 3.0 can halve the IHV's cost to support an implementation compared to GL 2.x, then it's a win. If an IHV only has to care slightly about a GL implementation, there's a better chance that this slight caring will lead to a more functional implementation.

Now yes, all of OpenGL's problems are not due to supporting 10 years of legacy cruft. The ARB is responsible for a lot of things that have little to do with legacy (GL 3.0's lateness being the most recent). But you can't deny that it is a problem that needs dealing with.


The nail that is sticking out the highest right now is API overhead relative to consoles.

Admittedly, that's your nail. People have their own priorities. I imagine for a developer who isn't looking at having a console version of their product, this particular nail is somewhat farther down the list. Not to say that it isn't important, but when I think of the problems with OpenGL, that's not the first thing I think of.


I will be forward looking, but I cannot accept the cost of being paralyzed by trying to make everything perfect in a single incompatible iteration.

It's kinda hard to argue against this point given the current situation. However, the counterpoint is this:

The decision to make a new graphics API and abandon the legacy of GL 2.x did not inexorably lead to the current situation (GL 3.0 being a year late). That is, things did not have to go this way. They did, and it's unfortunate that we'll likely never know why or who was responsible.

A good decision with poor implementation was still a good decision. And had the decision been made back when 3DLabs was pushing for a brand new GL 2.0 API, it might have gone a lot smoother. After all, what was a year's delay back then so long as GL 1.x was still being updated with extensions and such?

The problem now is that GL 2.x can't (technically won't) move forward concurrently with the development of GL 3.0. So any delay in GL 3.0 causes further OpenGL stagnation and irrelevance.


So, you are calling for... euthanasia for GL 2.x

I wasn't calling for anything; that was what we were told was happening with GL 3.0.

Brolingstanz
06-14-2008, 05:53 AM
Sometimes, you have to take the old code out into the backyard and put it out of its misery entirely.

I couldn't agree more. Seems to me Microsoft's ability to essentially reinvent DX every so often keeps the API fresh and close to the current generation of hardware. I don't know enough to suggest that it could or couldn't work the same way with GL in perpetuity, but the idea is strangely appealing. The extension mechanism embodies some of that generational concept, but its value is lessened if we don't have everyone on board at the same time for a speedy departure.

Having to wait for "fully" teased out golden abstractions is a proposition that surely resonates discordantly with all concerned.

knackered
06-15-2008, 12:48 PM
Seems to me Microsoft's ability to essentially reinvent DX every so often keeps the API fresh and close to the current generation of hardware.
I seem to remember this being the main criticism of D3D over the years. To get access to new features you had to completely rewrite your renderer code, whereas with OpenGL you just extended your existing code by using a new extension. With D3D it was very difficult to write an optimal abstraction because the API changed so radically each time (FVF to vertex declarations, for example - it was always possible to have the equivalent functionality of vertex declarations with OpenGL).

Jan
06-16-2008, 03:08 AM
Yes, it was a main criticism and it still is, because MS reinvents D3D too often. The main problem is that it is not extensible at all. A good API should have a lifetime of 5 to 10 years, being kept up to date through extensions, but also being completely overhauled once in a while.

D3D is reinvented every few years (though D3D9 DID have a long lifetime, but with D3D11 already on the horizon it seems D3D10 won't). OpenGL is reinvented, well, NEVER. Those are two extremes, and both are bad for developers.

So far I liked OpenGL's way better, because when I started doing 3D graphics I was happy to be able to concentrate on learning to write 3D graphics stuff, not having to switch to a completely new API every once in a while. Right now I'm pretty pissed with OpenGL, because even for someone with several years of experience it is simply impossible to predict whether your chosen way of doing things will run well on every hardware / driver / OS. It's way too much trial and error.

Jan.

obirsoy
06-16-2008, 11:26 AM
I believe all this mess, somehow is related to OpenCL (http://en.wikipedia.org/wiki/OpenCL).

NeARAZ
06-16-2008, 11:07 PM
I believe all this mess, somehow is related to OpenCL (http://en.wikipedia.org/wiki/OpenCL).
Right... Let's blame Apple for killing OpenGL now! :)

Rob Barris
06-16-2008, 11:57 PM
Just a course correction here -

The introduction of the OpenCL / Khronos effort has no bearing or impact on the ongoing work for GL 3.0. These are independent efforts - the OpenCL effort is just getting started with the introduction of the Apple proposal into the Khronos working group, the OpenGL 3.0 effort is much farther along.

Note that each can be used alone - you can write an app that just uses GL or CL - they do different things. Though you can often think of clever ways to do the same task under both (do some computation with a GL shader, or with OpenCL code).

The GL 3.0 group is plenty busy, but details beyond just this basic confirmation that the group is still actively working on it, are not something I can/should offer here, we have rules.

knackered
06-17-2008, 12:26 AM
That's good news........I think.

Rick Yorgason
06-17-2008, 01:12 AM
details ... are not something I can/should offer here, we have rules.

Why did the veil of secrecy ever become a rule? Or is that bit of information also within the veil of secrecy?

Zengar
06-17-2008, 02:16 AM
Thanks for the update, Rob!

Mars_999
06-17-2008, 03:32 AM
Ditto, thanks Rob. I didn't know Blizzard was in the Khronos group?

Xmas
06-17-2008, 04:56 AM
Check the Khronos member lists:
http://www.khronos.org/members/promoters
http://www.khronos.org/members/contributors

pudman
06-17-2008, 02:07 PM
OpenGL should get Back to the Future: "Where we're going there are no roads... er, rules."

Instead they are Stuck in the Past: "...we have rules." (Rob Barris)

knackered
06-17-2008, 03:29 PM
Nothing much more to say on this topic now, until siggraph'08.
Bye all.

zed
06-17-2008, 09:02 PM
only 56 days away

I'm of the same opinion as I was last year.
WRT the gaming field, is OpenGL 3.0 really necessary?
None of the game consoles are at the D3D10/GL3 level, and it's not really bringing that much new to the party that OpenGL (ES) hasn't already.

I feel that, just as with D3D10, OpenGL 3.0 will just be a waste of time. Maybe in 2012 (or whenever the next batch of consoles comes out) it will be relevant, but until then, WRT gaming, it won't be.

CrazyButcher
06-18-2008, 01:06 AM
For gaming, GLSL kinda sucks; that's what makes GL 2.0 inferior to D3D9 atm imo (unless you use Cg and NVIDIA low-level stuff). Having no pre-compilation, no fast loading of shaders, no limits/standards and no "env. parameters" is really ugly.

pudman
06-18-2008, 07:25 AM
None of the game consoles are at the D3D10/GL3 level.

The theory is that GL3 "Long's Peak" won't have D3D10 features as part of the core API. So the advantage of GL3 is a "clean" API. Any console that supported GL2 would be capable of GL3 support, assuming of course a driver could be provided.


WRT the gaming field, is OpenGL 3.0 really necessary?

In an absolute sense no, it's not required. But it sure would be preferred.

V-man
06-18-2008, 09:29 AM
For gaming, GLSL kinda sucks; that's what makes GL 2.0 inferior to D3D9 atm imo (unless you use Cg and NVIDIA low-level stuff). Having no pre-compilation, no fast loading of shaders, no limits/standards and no "env. parameters" is really ugly.

That's not a new revelation. These issues became known something like 6 years ago when GLSL was released.

They should have just released ARB_vertex_program2, ARB_vertex_program3, ARB_vertex_program4, ARB_fragment_program2, ARB_fragment_program3, ARB_fragment_program4

Mars_999
06-18-2008, 10:58 AM
What is the problem with everyone hating GLSL? I love it. I prefer it to HLSL. I haven't used Cg, and I haven't had any issues with GLSL so far.

Jan
06-18-2008, 11:30 AM
Me neither. ASM shaders were a nightmare and I hope they will be dropped for good in GL3.

knackered
06-18-2008, 12:43 PM
I don't have any issues with the absence of env. parameters. My shaders can pull any uniform values they want from my CPU-side pool. If the pool variable hasn't changed since the last time that shader instance was used, then it doesn't get uploaded - which would be the case even if they were env. parameters. That's not a valid criticism of GLSL, just a valid criticism of your renderer framework, CrazyButcher.
I too think GLSL is lovely. If it had binary blobs I wouldn't change another single thing about it.
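
A minimal sketch of the kind of CPU-side pool with change tracking being described (all names hypothetical; assumes GLEW and that the owning program is bound before sync() is called):

#include <GL/glew.h>
#include <cstring>
#include <map>
#include <string>

struct PoolEntry { float value[4]; unsigned version; };

struct UniformPool
{
    std::map<std::string, PoolEntry> entries;

    void set(const std::string& name, const float v[4])
    {
        PoolEntry& e = entries[name];
        std::memcpy(e.value, v, sizeof(e.value));
        ++e.version;                        // marks the value stale for every shader
    }
};

struct ShaderBinding
{
    GLint location;                         // from glGetUniformLocation, per program
    unsigned lastSeenVersion;

    void sync(const PoolEntry& e)
    {
        if (lastSeenVersion != e.version)   // upload only if the pool changed since last use
        {
            glUniform4fv(location, 1, e.value);
            lastSeenVersion = e.version;
        }
    }
};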

zed
06-18-2008, 01:31 PM
None of the game consoles are at the D3D10/GL3 level.

The theory is that GL3 "Long's Peak" won't have D3D10 features as part of the core API. So the advantage of GL3 is a "clean" API. Any console that supported GL2 would be capable of GL3 support, assuming of course a driver could be provided.


WRT the gaming field, is OpenGL 3.0 really necessary?

In an absolute sense no, it's not required. But it sure would be preferred.
But we already have this 'clean' API, in the form of OpenGL ES 2.0; perhaps NVIDIA/AMD should be looking at releasing opengles32.dll drivers?

True, GL3 is more than just a cleanup, but IMO it's not really a huge difference.
Look at D3D10: how many games are pure D3D10? I don't think there are any. How many are planned (and it's been out for over a year now) - are there any?
True, part of the reason is that D3D10 is Vista-only, but I also believe the difference from D3D8 to 9 was larger than the difference from 9 to 10 (OK, perhaps under the hood the difference is the other way around, but the majority of developers couldn't give a monkey's about what's under the hood).

I believe there's a major change in graphics hardware happening (becoming like CPUs/CUDA/OpenCL etc.). Do we really want something set in stone now, when an earthquake is about to happen?

CrazyButcher
06-18-2008, 02:17 PM
Yes knackered, one can do the .env stuff "yourself"; it's just more work ;) Though you are right: when the "minimal" approach is taken (i.e. more stable drivers), this kind of stuff is indeed better off not being part of the driver. I probably hoped too much that the .env stuff would work more efficiently for multiple shaders, instead of updating each one. But if the driver just does the same thing internally, then it might indeed be better and more transparent to be forced to do it yourself.

And V-man, yes, they are known, and I am sure they have been stated multiple times within this thread before.

dletozeun
06-18-2008, 02:26 PM
That's not a new revelation. These issues became known something like 6 years ago when GLSL was released.

They should have just released ARB_vertex_program2, ARB_vertex_program3, ARB_vertex_program4, ARB_fragment_program2, ARB_fragment_program3, ARB_fragment_program4

I agree. One major problem I had with GLSL is indexing arrays with variables. Not being allowed to index arrays in a loop with a variable, without unrolling it, makes it unusable. This could work on NVIDIA cards using special profiles, but on some ATI cards this is not allowed; too bad...

knackered
06-18-2008, 02:41 PM
Yes knackered, one can do the .env stuff "yourself"; it's just more work ;) Though you are right: when the "minimal" approach is taken (i.e. more stable drivers), this kind of stuff is indeed better off not being part of the driver.
After I posted I put some counters in my renderer to find out how much uniform traffic there really is in some of my average scenes, and it would seem it would be much more efficient to have some kind of cross-shader uniform pool on the server side (ie. env.parameters). So I guess I'm saying I take back what I said. ;)

knackered
06-18-2008, 02:47 PM
I agree. One major problem I had with GLSL is indexing arrays with variables. Not being allowed to index arrays in a loop with a variable, without unrolling it, makes it unusable. This could work on NVIDIA cards using special profiles, but on some ATI cards this is not allowed; too bad...
If they don't allow it then you should be glad, because their implementation is obviously not going to be very fast anyway. In this case you should have an automatic way of building different instances of the shader with constants instead of variables. There are always workarounds for these things, and they are usually much faster to execute than the naive approach.
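
A minimal sketch of that kind of variant building (hypothetical names; assumes GLEW and that the shader body indexes with the baked-in LIGHT_COUNT constant):

#include <GL/glew.h>
#include <cstdio>
#include <string>

GLuint compileFragmentVariant(const char* body, int lightCount)
{
    char header[64];
    std::sprintf(header, "#define LIGHT_COUNT %d\n", lightCount);

    std::string source = std::string(header) + body;   // loops/indices use LIGHT_COUNT
    const char* src = source.c_str();

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &src, 0);
    glCompileShader(shader);
    return shader;      // cache per lightCount so each variant is only built once
}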

dletozeun
06-18-2008, 02:57 PM
In this case you should have an automatic way of building different instances of the shader with constants instead of variables. There's always work-arounds for these things, and they're usually much faster executing than the naive approach.

Yes, it is a good solution, but I guess it would not work in shaders where the array indexes depend on uniform or attribute variables or even on a texture lookup. I am not sure it would be possible to use multiple shaders when the array indexes are completely indeterminate outside the shader.

knackered
06-18-2008, 03:41 PM
you could always pass your 'arrays' as textures, of course.
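
A minimal sketch of that approach (assuming GLEW and ARB_texture_float; in the shader you'd fetch with texture1D(sampler, (i + 0.5) / count).r instead of indexing an array):

#include <GL/glew.h>

GLuint makeArrayTexture(const float* data, int count)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // exact fetches only
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE32F_ARB, count, 0,
                 GL_LUMINANCE, GL_FLOAT, data);
    return tex;
}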

cass
06-18-2008, 04:28 PM
I don't have any problem with GLSL's shading language other than that it is specific to GL.

The GLSL runtime API is way more indirect than it needs to be for high performance use. I think the language would have been easier to adopt if it had been a text drop-in replacement for ASM shaders. Instead, you need to learn a whole new nomenclature for everything.

That was an example of shooting the useful (ASM) API path in the head for an unproven and frankly somewhat icky (GLSL) API path that was considered cleaner and better.

pudman
06-18-2008, 05:07 PM
but we already have this 'clean' api, as in opengl2.0 es, perhaps nvidia/amd should be looking at releasing opengles32.dll drivers?

GL2es does seem like a cleaner version of GL2. I personally don't know what might limit it from being used on a PC. Possibly they're taking what they learned with ES and doing that to GL3?


true gl3 is more than just a cleanup but IMO its not really a huge difference.

I'll withhold such declarations until we learn what the actual differences will be.


look at d3d10 how many games are pure d3d10?

Once again, GL3 is not about D3D10 features.


i believe theres a major change in graphics hardware happening (becoming like CPUs/CUDA/openCL etc), do we really want something set in stone now, when an earthquake is about to happen?

CUDA/OpenCL should have no effect on OpenGL. Regardless of their computation-capability similarities, GL will always be a rendering/graphics API, not a computational API.

zed
06-18-2008, 09:24 PM
CUDA/OpenCL should have no effect on OpenGL. Regardless of their computation-capability similarities, GL will always be a rendering/graphics API, not a computational API.
ogl has become more general purpose over the years + no doubt the trend will continue.
look at OpenGL ES 2.0: it's already gotten rid of lots of rendering stuff, eg lighting/transformation etc, because the programmer can do this in the shader (+ with a lot more flexibility besides). i can even see them being able to read from the framebuffer (not glReadPixels; hmmm, does this invalidate some time-travel law?)
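eg a minimal ES-2.0-style vertex shader just does its own transform with app-supplied uniforms (the names here are illustrative only, it's just a sketch):

/* No fixed-function matrix stack: the app uploads its own MVP. */
static const char *es2_style_vs =
    "uniform mat4 u_modelViewProjection;\n"
    "attribute vec4 a_position;\n"
    "void main()\n"
    "{\n"
    "    gl_Position = u_modelViewProjection * a_position;\n"
    "}\n";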

Once again, GL3 is not about D3D10 features.
true, but u can learn from the failure of d3d10. whilst partly due to being vista only, i believe its also cause it doesnt offer that much more to the programmer. it shows the dangers of jumping in too quick (d3d11 is just around the corner, no doubt invalidating all d3d10 code)

ATM i'd like to see ogl es on the desktop, this way our apps can run on the pc/iphone etc with no code changes at all.
the drivers should be more stable + perhaps even quicker (with all the fluff removed), wouldnt that make a lot of us (game orientated ppl) happier than ogl3.0?

Sik the hedgehog
06-18-2008, 09:36 PM
look at d3d10, how many games are pure d3d10? i dont think theres any. how many are planned (+ its been out for over a year now), are there any?
Believe it or not, Microsoft actually suggests making games that can still work using DirectX 9 (though they should be able to use DirectX 10 when possible). This may be partly because the "everybody switch to Vista" idea failed, but it may also be because the 360 isn't capable of DirectX 10 (probably it could be done but would run too slow to be feasible - I have to check Alby's DX10 driver though to see if this is false).

pudman
06-18-2008, 10:24 PM
ogl has become more general purpose over the years + no doubt the trend will continue.

That still has no bearing on the relationship between OpenGL and CUDA/OpenCL. Chances are that a starting point for OpenCL is CUDA. What similarities are there between OpenGL and CUDA? Or better, what feature of CUDA do you think should be a core part of GL?


true, but u can learn from the failure of d3d10. whilst partly due to being vista only, i believe its also cause it doesnt offer that much more to the programmer. it shows the dangers of jumping in too quick (d3d11 is just around the corner, no doubt invalidating all d3d10 code)

GL3 is not going to be Vista only, nor will there be a GL4 in two years that rewrites the API. Whether or not it offers a significant advantage over GL2 we'll hopefully discover at SIGGRAPH (my faith in what was presented in the Pipeline has eroded). Therefore I don't see your comparison as valid.

Sure, it will be a new API and there will be a gradual uptake of usage. But so what? That doesn't mean it shouldn't be done.


the drivers should be more stable + perhaps even quicker (with all the fluff removed), wouldnt that make a lot of us (game orientated ppl) happier than ogl3.0?

What if GL3 drivers proved more stable than GL2 drivers? What if they're planning a GL ES 3.0?

Mars_999
06-19-2008, 11:14 AM
I agree. One major problem I had with GLSL is indexing arrays with variables. Not being allowed to index arrays in a loop with a variable, without unrolling it, makes it unusable. This could work on nvidia cards using special profiles, but on some ATI cards it is not allowed, too bad...
If they don't allow it then you should be glad, because their implementation is obviously not going to be very fast anyway. In this case you should have an automatic way of building different instances of the shader with constants instead of variables. There's always work-arounds for these things, and they're usually much faster executing than the naive approach.

Hence why unrolling the loop is faster. I used loops on my GF7(?) cards; not sure I have tried it on my GF8 card, but I can tell you there was a slowdown using loops.

knackered
06-19-2008, 11:26 AM
only loops controlled by uniforms are slower - constant-controlled (literals/#defines) loops are just as fast as unrolled loops on every nvidia card I've used them on. Having said that, there was a weird GLSL compiler bug a while ago that required me to unroll a loop just to get the damn thing working.

zed
06-19-2008, 12:28 PM
That still has no bearing on the relationship between OpenGL and CUDA/OpenCL. Chances are that a starting point for OpenCL is CUDA. What similarities are there between OpenGL and CUDA? Or better, what feature of CUDA do you think should be a core part of GL?
actually CUDA doesnt go far enuf, eg it doesnt support recursion, and its pretty obvious in a year or two this limitation will be removed. the line between cpus/gpus is becoming blurred + will ultimately disappear. GL3 should aim for this (futureproof): make it so u can do graphics without restrictions, like u can on the CPU. we dont want them having to bring out gl4.0 in a couple of years time


Sure, it will be a new API and there will be a gradual uptake of usage. But so what? That doesn't mean it shouldn't be done
What if GL3 drivers proved more stable than GL2 drivers? What if they're planning a GL ES 3.0?
like the analogy i gave, u shouldnt build in the midst of an earthquake.
im sure the main thing wanted by GL users with opengl3.0 is a cleanup/removal of the legacy stuff.
ogles has this already, thus personally i believe it is an ideal stopgap + will satisfy a lot of punters

pudman
06-19-2008, 03:20 PM
actually CUDA doesnt go far enuf, eg it doesnt support recursion

The usefulness of recursion would apply to GLSL, not the core GL3. GL isn't GLSL. GLSL is an API for the programmable portion of the pipeline. There still is the underlying API that allows GLSL to have usefulness.


im sure the main thing wanted by GL users with opengl3.0 is a cleanup/removal of the legacy stuff.

Go read the Pipeline(s) again. The main thing they discuss is the new object model. API cleanup isn't even mentioned (I believe it's simply taken for granted). The feature that will be most of interest to developers will be this new object model. Search through this topic (and I think there's another similar one) for comments made by Korval on this subject.

Technically I could care less about a cleanup as it doesn't affect me if they remove functions I don't use.


like the analogy i gave u shouldnt build in the midst of an earthquake.

I'm still not sure what you believe is so uncertain that it would require a rewrite of GL3 were GL3 to be released today. The only thing I can think of that hasn't been discussed in any formal manner is blend shaders. But if we simply take that as an example, how would it be affected by CUDA/OpenCL? I just don't see a relation.


the line between cpus/gpus is becoming blurred + ultimately disappear, GL3 should aim for this (futureproof) make it so u can do graphics without restrictions like u can on the CPU

So you would be arguing for, say, the ability to write an OS in GL once GPUs support that level of capability?

knackered
06-19-2008, 03:50 PM
we'd only be left with one single OpenGL function - glRunProgram.
Anyway, we're not in the middle of an earthquake, an earthquake is merely forecast. If builders took your attitude zed, we wouldn't have the golden gate bridge.

zed
06-19-2008, 03:59 PM
The usefulness of recursion would apply to GLSL, not the core GL3. GL isn't GLSL. GLSL is an API for the programmable portion of the pipeline. There still is the underlying API that allows GLSL to have usefulness.
shaders/glsl is GL, theyre replacing most of the fixed function stuff with shaders



Go read the Pipeline(s) again. The main thing they discuss is the new object model. API cleanup isn't even mentioned (I believe it's simply taken for granted). The feature that will be most of interest to developers will be this new object model. Search through this topic (and I think there's another similar one) for comments made by Korval on this subject.

Technically I could care less about a cleanup as it doesn't affect me if they remove functions I don't use.
perhaps youre right about ppl being more interested in the objectmodel, but a cleanup affects everyone, even u. it makes the drivers leaner + meaner + less likely to contain bugs


So you would be arguing for, say, the ability to write an OS in GL once GPUs support that level of capability?
yes, if they so want + are insane enuf.
the major thing in graphics over the last decade has been the adoption of shaders right? i assume the biggest thing over the next 5-10years is totally programable gpus/cpus. now even if hardware today doesnt support it the api should (ie it should be supported in software), ie make the API as future proof as possible, that was part of the genius of the original opengl


If builders took your attitude zed, we wouldn't have the golden gate bridge.
but we got the original opengl didnt we (ok designed from learning from another API), which does show that forward thinking does sometimes happen

pudman
06-19-2008, 06:28 PM
shaders/glsl is GL, theyre replacing most of the fixed function stuff with shaders

Until they give shaders the capability to load and configure resources (most likely never) there will always be the "core" GL. Even in CUDA, there is a lot to set up (buffers, etc) before the "shaders" can be executed. Neither GL nor CUDA/OpenCL will be purely shaders.


perhaps youre right about ppl being more interested about the objectmodel, but a cleanup affects everyone even u, it makes the drivers leaner + meaner + less likelyhood of containing bugs

I believe one of the advantages of the new object model is also driver "leanness". Even so, less buggy software doesn't directly affect me as a developer except tangentially in debug time. Therefore, I would get the same effect from an API cleanup as from simply a less buggy driver. So there's no real advantage to me as purely a developer.


yes, if they so want + are insane enuf.

I'd prefer them to stick to enhancing C/C++ with GPU capabilities (e.g. CUDA libraries) rather than pollute a graphics API with non-graphics cruft. Remember that the only reason people used GL for general purpose coding was because there simply was no other API to utilize the GPU. Now there is, so as developers we can easily take advantage of graphics and GP coding in their own dedicated APIs. That separation is really a Good Thing.


i assume the biggest thing over the next 5-10years is totally programable gpus/cpus.

I personally believe that the abstractions for graphics and general purpose programming are diverse enough that they will always justify dedicated APIs. There will likely be some similarity in various parts (a shader can do a texture lookup/a kernel can do a memory access) but there won't be a one-size-fits-all API.

Timothy Farrar
06-20-2008, 09:04 AM
Seems to me that the primary point of the "cleanup" and "object model" is so that GL3 is a better/faster match for GPU command buffer creation. Throw in good threading support (the ability to easily generate parts of the command buffer in separate threads user side), and better control of gpu<->cpu data transfers, and GL3 could be quite impressive.

As for those who strongly believe that a graphics API doesn't need "compute shaders" or general purpose CUDA like functionality, IMO you are dead wrong.

What you want is for graphics functionality to keep on scaling with the hardware, at some point (arguably almost now) display resolution/AA processing requirements will begin to taper off, and performance can again be applied to something beyond just pushing more pixels. Obvious places: more complex 2ndary effects (reflection maps, realtime GI/envmaps, etc), in triangle fragment level raytracing (ie cone tracing, etc), in GPU LOD/PVS, post processing, etc. Many of which require GPU side GP computing to not be bottlenecked by either CPU or GPU<->CPU data transfers.

If you look at the current evolution of GPU functionality this point should be obvious. Why texture arrays, why transform feedback, why geometry shaders, why integer math, why R2VB, etc. GPGPU computing is extending what we can render on the GPU in a huge way.

The only reason this isn't mainstream yet is because a majority of developers are targeting consoles which have no or limited support for this functionality, and PC is currently effectively a dead platform for DX10 ONLY graphics engines because of a lack of market penetration of Vista and PC game piracy (what publisher is going to fund something expensive for a tiny target market).

dletozeun
06-20-2008, 11:36 AM
GPGPU computing is extending what we can render on the GPU in a huge way.


Yes, but all examples you have taken are for graphic purpose, not very general.

I think that opengl should always be an API for graphic purpose, if you want to do other things use CUDA or OpenCL, we don't need to weigh down opengl with it.

pudman
06-20-2008, 05:38 PM
Yes, but all examples you have taken are for graphic purpose, not very general.

This is OpenGL we're talking about so they should be related to graphics.


As for those who strongly believe that a graphics API doesn't need "compute shaders" or general purpose CUDA like functionality, IMO you are dead wrong.

Just to be sure, I was arguing that OpenGL's evolution as an API should not be dependent on a CUDA/OpenCL API. Just because a GPU can do GP programming doesn't mean it shouldn't have an API that specializes in rendering.


if you want to do other things use CUDA or OpenCL, we don't need to weigh down opengl with it.

Because of the similarities of computations (GL and CUDA are using the same hardware) you will likely always be able to do GP in GL. The more programmable/extensible GL becomes the more this will be true.

Zengar
06-20-2008, 05:42 PM
After thinking a bit... I guess the best course of action would be to have a graphics API which abstracts important graphics principles like primitive setup, depth test etc., with a general GPGPU API in parallel. As already discussed, it would be rather difficult to write your own optimal rasterizer using a GPGPU API. I really like the idea of OpenGL and OpenCL working in parallel, where you can share some data structures (buffers and textures) and use both approaches where appropriate.
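
To sketch what I mean by sharing (treating the interop calls - clCreateFromGLBuffer, clEnqueueAcquireGLObjects and friends - as an assumption until the spec is actually public; context/queue/kernel setup and error handling are omitted):

#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <GL/gl.h>

void run_kernel_on_vbo(cl_context ctx, cl_command_queue queue,
                       cl_kernel kernel, GLuint vbo, size_t vertex_count)
{
    cl_int err;

    /* Wrap the existing GL vertex buffer as a CL memory object. */
    cl_mem cl_vbo = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);

    /* GL must be finished with the buffer before CL touches it. */
    glFinish();
    clEnqueueAcquireGLObjects(queue, 1, &cl_vbo, 0, NULL, NULL);

    /* Let the compute kernel deform/animate the vertices in place. */
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &cl_vbo);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &vertex_count, NULL,
                           0, NULL, NULL);

    /* Hand the buffer back to GL for rendering. */
    clEnqueueReleaseGLObjects(queue, 1, &cl_vbo, 0, NULL, NULL);
    clFinish(queue);

    clReleaseMemObject(cl_vbo);
}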

Mark Shaxted
06-20-2008, 05:51 PM
Yes, but all examples you have taken are for graphic purpose, not very general.

I think that opengl should always be an API for graphic purpose, if you want to do other things use CUDA or OpenCL, we don't need to weigh down opengl with it.

I disagree. Graphics cards are no longer 'graphics cards'. They're extra computers added into a slot on your motherboard. I think it's about time we treated them as such.

The next 'next gen' API has to address this fact - and open up all the possibilities that GPUs have to offer, namely massive computational power. GL3.x won't be doing this, which is a great shame. I for one would love to see all future GPUs provide exactly the same precision/results [ie 32 bits - not 24] as current SSE. I would love to see GPUs as a natural extension to multicore processors. At the end of the day, having multiple CPUs/GPUs on which to work, each of which is specialised in certain areas, is a great thing to have. If we can improve the byte throughput overall, then everyone is happier.

bobvodka
06-20-2008, 06:32 PM
*grumble* OK, yes, I know I said I was out, however I feel the need to come back and address an important point which has apparently got lost in the shuffle; OpenGL is a Specification for a Graphics API.

This is a fact which has been the corner stone of OpenGL.

GPGPU is, by definition, not directly graphics related. It is general purpose and as such requires a general purpose API.

OpenGL should stick to what it is designed for; providing a cross platform graphics API.

Right now the underlying hardware, certainly DX9 hardware, isn't this fully programmable GPGPU wonderland where we can reimplement everything the driver programmers do for us (</sarcasm>).

The API should reflect the hardware AND perform its stated purpose: to be a graphics API.

In time, the hardware will change, at which point (shockingly, I know) an API change will be required. OpenGL needed the API change back at 2.0, but this was fumbled by the ARB at the time, and since then the hardware and how OpenGL interacts with it have been drifting further apart. Stability is all well and good but not when it comes at the cost of performance.

However, many pipeline changes will probably be expressible as extra programmable pipeline stages (something I believe the original GL3.0 teaser would allow for), so you'd get your programmable blender, programmable tessellator, and these just drop into the pipeline.

If you want a GPGPU API then go use one, but OpenGL should be left as a Graphics API, regardless of what guesses people have about the future; designing an API now for hardware which may not exist for another 5 years would just be insane and cripple the API now when it was most needed.

I now fully expect a series of rants about how OpenGL should embrace GPGPU and other such rubbish... as you can probably tell by that sentence you won't convince me and I've already decided your arguments are ill formed and dumb. Save us all a hassle and don't bother writing them; the chances are you'll fail at critical thinking while doing it anyway, simply because you want to push your pet interest into a place it has no business being.

Mark Shaxted
06-20-2008, 07:03 PM
to bobvodka...

Personally, I couldn't give a rat's arse about 3D features. I haven't played a game on a PC for about 10-15 years. I have a commercial interest in a high performance graphics API which isn't specific to Windows. This means OpenGL. All logic says go the D3D route - but I really WANT to use OpenGL - simply for cross platform support, thereby avoiding multiple rendering paths.

I'm writing an image processing library (think digital SLRs) which needs every performance advantage possible (GPU). I've already separated my app into UI (which runs on the CPU) and GUI (which runs on the GPU). The last 5 years of GPUs are very capable of doing this - yet the APIs seriously lag behind. This is why I feel the current APIs need to be incorporated into one GPU API. With my sentiments above, even a spreadsheet/word processor could be GPU enabled - and this IS the future of high performance computing.

I'm well aware that modern GPUs are game/scene graph biased. But they don't NEED to be. The future is in exposing every tiny performance characteristic they have. And OpenGL should/must be part of that.

From MY point of view, there are two perfectly capable APIs which handle each part of MY equation - but none yet exists which addresses both in the same API. And that could be the future of OpenGL. Open Graphics Language - NOT 3D gaming language. NOT medical science language. NOT anything else. Just an open, cross platform generic graphics API.

zed
06-20-2008, 07:36 PM
some are arguing openGL should be for graphics right,ok fair enuf, but by graphics u in fact mean the subset 'z buffered shaded polygon rasterization'
ie ignoring other graphics methods voxels,raytracing/radiosity etc

pudman
06-20-2008, 07:45 PM
Personaly, I couldn't give a rat's arse about 3D features.

So we can drop that z component in all of your code? Sweet!

Seriously, it sounds like you're coming from a pure CUDA camp. If you want to treat the GPU as purely a co-processor, by all means, do so. If you're doing graphics that will at some point be rendered, OpenGL seems like it would be good fit. If you're "processing images" where throughput matters more than framerate then maybe all you need is OpenCL.


Just an open, cross platform generic graphics API.

You've completely confused me. You want a graphics API with no 3D features? Just vanilla/generic "graphics" stuff? What would this API look like and how is this at all related to OpenGL?


With my sentiments above, even a spreadheet/word processor could be GPU enabled - and this IS the future of high performance computing.

Very likely. But what does that have to do with OpenGL?

Mars_999
06-20-2008, 08:06 PM
only loops controlled by uniforms are slower - constant-controlled (literals/#defines) loops are just as fast as unrolled loops on every nvidia card I've used them on. Having said that, there was a weird GLSL compiler bug a while ago that required me to unroll a loop just to get the damn thing working.

Thanks for clearing that up! ;) It's been awhile since I have used loops in the shaders, so I couldn't remember which situations warranted it.

Jan
06-21-2008, 01:24 AM
bobvodka: Thanks for stating my opinion exactly!

I think the whole discussion is rubbish, since most people here confuse "the API" with "the hardware". OpenGL is one specialized (!) API. It is meant for graphics, nothing else. Now in the early days API and hardware were a perfect match. That's changed a long time ago. However, that doesn't mean OpenGL should expose the hardware as close as possible. It still is a SPECIALIZED API FOR GRAPHICS!

For a few years now, people have been abusing OpenGL to do OTHER SPECIALIZED tasks with greater performance. Using OpenGL to do such stuff is IMO idiotic, but who can blame them, there was no other choice: do it GP on the CPU but slow, or abuse a graphics API and do it fast.

Now even that is changing, at last! GPUs have become so GP internally that it now makes sense to expose their capabilities and give people more control. That is what CUDA, CTM and OpenCL are for. And the decision by IHVs to introduce new APIs for this task is great! They clearly understood that doing GPGPU stuff through OpenGL is a pain, and that it is a pain for GRAPHICS programmers if their API drifts further and further away from its intended purpose. Nobody wins.

Just see it this way: Having something like CUDA to FULLY exploit ALL features your big fat power hungry co-processor has, will be (part of) the future. THAT is the way to do whatever you like with it. That is the C++ for GPUs.
However, OpenGL (and D3D) will now not be the ONLY APIs to command the GPU (as it was, when GPUs were not more complicated than my washing machine), but they will provide you with a SPECIAL VIEW on the hardware. They are now a HIGH LEVEL abstraction, meant for a specific task (graphics) and nothing more.

Clearly, what we will definitely need (CUDA does allow that already to a certain degree) is to use OpenGL and OpenCL in parallel and to be able to map OpenGL buffers and textures in OpenCL, to allow using one "view" (OpenGL) to do all the graphics stuff, and another "view" (OpenCL) to do stuff that is general purpose (OR in cases where using OpenGL is just too much work, for example when doing complex image-processing, where several rendering-passes, setting up FBOs, etc. are simply not efficient enough).

So, what i propose is a CLEAR cut between the two APIs, which benefits OpenGL by not making it a half-baked solution for some GPGPU stuff, but really limiting it to graphics only. But then allow data to be interchanged between the two, to allow developers to always solve their problems with the API that fits their needs best.

Jan.

knackered
06-21-2008, 04:26 AM
mark, on the one hand you want to be able to treat the GPU like a CPU, and on the other you want OpenGL to become some kind of monstrous GDI-alike?
Why hijack OpenGL for this? Why not just champion some kind of GPU Open Graphics Utility library containing stuff like drawCircle()? The back-end of which could be implemented in CUDA or OpenCL.

OpenGL has quite rightly become a way of getting data across the bus to the GPU - primitive/vertex/texture/shader/drawcalls. That's all I use in OpenGL in its current state - so the rest of the crap can go. That's all we're asking for.

pudman
06-21-2008, 08:42 AM
I was just hunting the Khronos page for more info on OpenCL and found that nvidia is not on the (initial) OpenCL working group! Very interesting indeed. So I guess I'll stop using "CUDA/OpenCL" as though they were completely interchangeable.

Announcement (http://www.khronos.org/news/press/releases/khronos_launches_heterogeneous_computing_initiative/)

Zengar
06-21-2008, 09:03 AM
But it is: "Initial participants in the working group include 3Dlabs, AMD, Apple, ARM, Codeplay, Ericsson, Freescale, Graphic Remedy, IBM, Imagination Technologies, Intel, Nokia, *NVIDIA*, Motorola, QNX, Qualcomm, Samsung, Seaweed, TI, and Umeå University".

Still, it is Apple who proposed it (and they have superb compiler tools with LLVM).

dletozeun
06-21-2008, 09:08 AM
I disagree. Graphics cards are no longer 'graphics cards'. They're extra computers added into a slot on your motherboard. I think it's about time we treated them as such.


I did not say that GPUs are for graphics only, I totally agree with the idea of using them for general purpose, but not in openGL which is for me an API that should be used for computer graphics. But I don't reject the idea to use GPU for graphics and general purpose in parallel like Zengar said, I think it is a good thing and the future.
Like Jan said, I think that you confuse API with hardware Mark.



some are arguing openGL should be for graphics right,ok fair enuf, but by graphics u in fact mean the subset 'z buffered shaded polygon rasterization'
ie ignoring other graphics methods voxels,raytracing/radiosity etc


I was not thinking like this, but you may be right after all. Now, I don't think that it is a good idea to integrate all this advanced rendering stuff into OpenGL, because there are plenty of papers that propose all sorts of different techniques, which are specific to the final result that you need, the hardware, etc... In the end it would not be possible to satisfy most of the current opengl users.

I think it would be better to code them with GP APIs like CUDA or OpenCL and use these in parallel with OpenGL.

cass
06-21-2008, 09:20 AM
OpenGL is and should remain a graphics hardware abstraction. What that abstraction should look like changes as hardware evolves. If the hardware becomes more flexible (i.e. with shaders), the abstraction should expose that. If the hardware is capable of interacting with non-graphics abstractions like compute, then the graphics abstraction should support interaction with other abstractions.

Abstractions are still important, even on fully general purpose processors, which is why we have and use APIs even when they're not directly hardware accelerated.

As long as OpenGL continues to provide a meaningful graphics abstraction that facilitates hardware acceleration, I think we are doing well. It shouldn't try to be more than a graphics abstraction, but it shouldn't make it difficult to cooperate with other abstractions when the underlying hardware doesn't make it difficult.

Seth Hoffert
06-21-2008, 11:28 AM
Where is the line drawn? Transform feedback seems pretty GPGPU to me, but cannot do everything CUDA can do. When should one use transform feedback over CUDA (ignoring the performance hit when mixing CUDA and GL)? (Is the transform feedback extension even supported on implementations other than NVIDIA's?) :(
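
For reference, the EXT flavour boils down to roughly this (a sketch only: extension loading, a vertex shader that declares the captured varying, and error checks are all assumed, and the varying name is made up):

#include <GL/gl.h>
#include <GL/glext.h>

/* Capture a user-declared varying ("capturedValue") into a buffer object
   while skipping rasterization entirely. */
void capture_vertex_outputs(GLuint program, GLuint captureVbo, GLsizei count)
{
    const char *varyings[] = { "capturedValue" };
    glTransformFeedbackVaryingsEXT(program, 1, varyings,
                                   GL_INTERLEAVED_ATTRIBS_EXT);
    glLinkProgram(program);  /* the varying choice only takes effect on link */

    glBindBufferBaseEXT(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, 0, captureVbo);
    glEnable(GL_RASTERIZER_DISCARD_EXT);

    glUseProgram(program);
    glBeginTransformFeedbackEXT(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, count);
    glEndTransformFeedbackEXT();

    glDisable(GL_RASTERIZER_DISCARD_EXT);
}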

pudman
06-21-2008, 02:05 PM
But it is: ...NVIDIA...

D'oh! I had Match Case selected when I was searching. And for the life of me I just can't read.

zed
06-21-2008, 03:04 PM
Has anyone got any links of what's now done with graphics hardware.

Like before I think there was hardware devoted to the fixed function pipeline perhaps lighting,transformation etc as long as it followed the rules.
But now its all much more flexible, GPUs now are just basically number crunchers.
eg what about blending? is there special hardware that deals with this nowadays?

(excuse my terrible explaining)

personally i believe GPUs are fast enuf today (well of course theyre never fast enuf), but similarly to CPUs, once they got to ~1-2ghz it hasnt been so important to upgrade as it once was; its now time to utilize/be creative with that power.

what im suggesting is having a barebones API that exposes everything + is blindingly quick, perhaps like CUDA,
and then have opengl layered over the top (perhaps opensource).
ok sure theres gonna be a performance drop but it will be forward thinking + easily extensible.
the developer then ships whatever opengl version they built with, together with their application.

bobvodka
06-21-2008, 04:18 PM
OpenSource is a horrible idea, more so if it ends up as GPL, as that viral licence is a pox on the programming industry.

Also, the other problem with your idea is that there is no standard barebones API; NV have CUDA which is C-like, AMD have CTM which is assembler like and Intel have.. well.. nothing. So, not only do you have to write the whole thing but you have to write two backends (or more depending on how hardware changes) and you lose a whole segment of the market.

Finally; performance drop.
Unacceptable.
Things might be 'fast enough' for you but they aren't for everyone; you tell them 'hey, OpenGL is now just a wrapper over another API and it is slower than it was' people will lol and wander off and use something else (be it D3D, an older OpenGL or just write their own stuff).

But, you know what, instead of me telling you what a bad idea it is why not just go and do it? Put your money where your mouth is; all you guys who think it should be done this way go out and use CUDA (well ignore the fact it only works on NV GF8 and up hardware for now) to make a proof of concept API. A few of you think it's a good idea, shouldn't be too hard to find a few others to help out... go on... we'll wait.

Jan
06-21-2008, 05:03 PM
Open-source does have its benefits. But it is not the holy grail, that some people believe it is. Making OpenGL (or any such API) an open-source project layered upon some other API is such a stupid idea. We did have this very discussion already IN THIS THREAD and i already made my points, why it wouldn't work, so if you ain't got any new arguments, please stop bringing this up again and again.

Somehow these open-source-lovers always remind me of vegetarians, but i really don't know why. Maybe it's the boneheadedness. I love my juicy pork and i love my closed-source high-performance drivers, deal with it.

Jan.

Mark Shaxted
06-21-2008, 06:58 PM
OpenGL is and should remain a graphics hardware abstraction. What that abstraction should look like changes as hardware evolves. If the hardware becomes more flexible (i.e. with shaders), the abstraction should expose that. If the hardware is capable of interacting with non-graphics abstractions like compute, then the graphics abstraction should support interaction with other abstractions.

Abstractions are still important, even on fully general purpose processors, which is why we have and use APIs even when they're not directly hardware accelerated.

As long as OpenGL continues to provide a meaningful graphics abstraction that facilitates hardware acceleration, I think we are doing well. It shouldn't try to be more than a graphics abstraction, but it shouldn't make it difficult to cooperate with other abstractions when the underlying hardware doesn't make it difficult.




#1 Personal note to self - don't post when under the influence of red wine ;-)
#2 I've been accused of not wanting 3D graphics. Not true - it's just not what interests ME.

To cass - How exactly do you define the abstraction? From MY POINT OF VIEW I need a CUDA/OpenGL combination. I need an openGPU API. I want SSE + GPU math to be equivalent. I want to use the GPU to augment the CPU. And I want it to be cross platform.

The thing is, most of what I want is actually currently available in modern GPUs. It's just that no API can deliver it. Hence my suggestions for a future OpenGL which can facilitate this.

Finally, not everyone in the world wants a graphics API to maximise 3D performance. 2D + pixel processing/colour management is important too. And more relevant to many people.

Mark Shaxted
06-21-2008, 07:11 PM
I disagree. Graphics cards are no longer 'graphics cards'. They're extra computers added into a slot on your motherboard. I think it's about time we treated them as such.

I did not say that GPUs are for graphics only, I totally agree with the idea of using them for general purpose, but not in openGL which is for me an API that should be used for computer graphics. But I don't reject the idea to use GPU for graphics and general purpose in parallel like Zengar said, I think it is a good thing and the future.
Like Jan said, I think that you confuse API with hardware Mark.

No - I really don't confuse hardware with API. I think you confuse API with functionality. We've now reached a point where API's SHOULD access all current hardware. But they don't - not even close.

zed
06-21-2008, 07:33 PM
#1 Personal note to self - don't post when under the influence of red wine ;-)
if i stuck to this rule my post count would be around 50 (+ the forum would no doubt be richer for it)

WRT performance - with nvidia (+ i assume AMD et al) this is similar to what they do already with glsl or cg or HLSL, ie the shader source gets converted into machinecode, im pretty sure opengl + d3d operate in a similar fashion ie u shouldnt see any or bugger all performance drop.
also is performance that important? look at d3d9 vs opengl, d3d9 is ~5% slower, yet lots of ppl used it. same story with computer languages.

also im certainly not an opensource zealot (personally i think stallman etc are pillocks) but its a safety issue. look at two "graphics" apis, mesa + sdl, which have been going 10+ years; if they were closedsource theres always the danger that when the creator loses interest the thing will die, case in point 'glut'

still all this chewing of the fat is just entertainment as whatevers gonna happen with gl3.0 has long since been decided

bobvodka
06-21-2008, 07:44 PM
We've now reached a point where API's SHOULD access all current hardware. But they don't - not even close.

So, what functionality do you think is missing?

bobvodka
06-21-2008, 07:53 PM
WRT performance - with nvidia (+ i assume AMD et al) this is similar to what they do already with glsl or cg or HLSL, ie the shader source gets converted into machinecode, im pretty sure opengl + d3d operate in a similar fashion ie u shouldnt see any or bugger all performance drop.


Erm, yes you would. You are sitting an API on top of another API. Your calls are going to involve doing more work translating from your new API=>CUDA, which then talks to the graphics driver and has to do more work.

This has nothing to do with shaders; shaders are going to be the same whatever goes on. This is about how the data gets to the graphics card in the first place.



also is performance that important? look at d3d9 vs opengl, d3d9 is ~5% slower, yet lots of ppl used it. same story with computer languages.


Productivity and tools are key here.
The reason people use D3D, as stated earlier in this thread, is tools tools tools.
D3D9 is also only slower on small batch sizes, hence the invention of instancing and the push for more polygons per batch. Once you get over a lower limit of triangles per batch the difference becomes overshadowed.

When it comes to graphics and talking to the hardware you don't want layers and layers in the way; the idea is to abstract but stay as common as possible. With your method you have a constant cost and overhead, unless you feel like writing a backend for CUDA, CTM and whatever Intel bring out in the future. Unlike dropping a bit of C/C++/assembler into a program to take care of a bottleneck, this is a non-trivial thing.

Your idea is like telling someone they have to produce a program for a PowerPC and an Intel chip; they can either use Python or assembler but nothing in between. You either take the speed hit or waste man hours. OpenGL and D3D are designed to be the C equivalent; maybe it would be faster to talk directly to the hardware but this is more portable.

It remains a dumb idea.

pudman
06-21-2008, 08:25 PM
From MY POINT OF VIEW I need a CUDA/OpenGL combination. I need an openGPU API. I want SSE + GPU math to be equivalent. I want to use the GPU to augment the CPU. And I want it to be cross platform.

Maybe we should approach this from a different direction...

What features in OpenGL do you use? You say you don't care about 3D, just 2D + pixel processing. So what's preventing you from using straight CUDA? Possibly in the future this new "OpenCL" thing will be exactly what you need.


It's just that no API can deliver it. Hence my suggestions for a future OpenGL which can facilitate this.

Are you just suggesting OpenGL be that API simply because you're familiar with it? How about, instead of wishing GL3.0 included the features you require, you hope for OpenCL to be your Everything?

This is a silly discussion.

dletozeun
06-22-2008, 04:08 AM
No - I really don't confuse hardware with API. I think you confuse API with functionality. We've now reached a point where API's SHOULD access all current hardware. But they don't - not even close.


Please stop drinking. ^^

Do we have a single API to access all the current CPUs hardware? No. Why would it be different on GPUs?

pudman
06-22-2008, 06:46 AM
Do we have a single API to access all the current CPUs hardware?

Assembly language.

dletozeun
06-22-2008, 10:16 AM
Assembly language.


This is not an API and it depends on hardware.

pudman
06-22-2008, 12:37 PM
Assembly is an API, just a low level one. And of course it depends on hardware. You said there wasn't an API that accessed all current CPU hardware and assembly can do that. It's (kinda) cross platform if you're coding x86 hardware too!

I assumed you were talking about CPUs' SSE instructions and the like.

bobvodka
06-22-2008, 12:47 PM
No, Assembly is a Language, not an API.
And it varies between processor families quite significantly, even basic instructions.

ScottManDeath
06-22-2008, 01:14 PM
C++ is the universal "API". With it, you write code that runs pretty much everywhere.

bobvodka
06-22-2008, 02:16 PM
no, C++ is a language.

C would be the universal language as everything and it's dog can link to it. (no pro-C bias; I'm mainly a C++ programmer and don't technically know C).

Zengar
06-22-2008, 02:46 PM
Well, assembly can be called an "API" in a very basic way. The instructions are not executed directly by most CPUs, but are broken down into more primitive commands. The x86 architecture can be seen as an "API", with new instructions being "extensions". But this is a bit absurd, from my point of view. An API is way more abstract than that.

Zengar
06-22-2008, 02:50 PM
no, C++ is a language.

C would be the universal language as everything and it's dog can link to it. (no pro-C bias; I'm mainly a C++ programmer and don't technically know C).

But the problem with C is that it is almost too low level and based on a particular sequential programming model. In such a language, the compiler loses important information that can be used to optimize, as the abstractions are harder to capture. Still, if you look at what the LLVM guys managed to do... like recognizing memory access patterns and then rewriting the code to be optimal: for example, their algorithms can recognize linked lists (due to circular references) and optimize them to allocate the memory in a sequential pattern (basically an array).

CatDog
06-22-2008, 03:00 PM
Does bullshittin' about C-whatever have something to do with OpenGL?

This thread was an interesting read for the last ten pages or so. Please keep it up that way.

I've got some questions, because this GPGPU-thing disturbs me.

Graphics cards are no longer 'graphics cards'. They're extra computers added into a slot on your motherboard.
Is this the reason for GPGPU? If true, my feeling is that the new flexibility of current graphics cards is heavily misused to act like extra computers. Just because it can be done.

Isn't "GPGPU" a contradiction in terms, btw? Doing general purpose stuff on a graphics processing unit sounds a little bit odd, doesn't it? I thought the (only?) advantage of GPUs over CPUs is their ability to process data in a highly parallelized fashion. Doesn't this mean their practical use is restricted to algorithms that can be parallelized? But that isn't exactly what I would call general purpose. (?)

So, I'd prefer OpenGL to be what its name suggests: a graphics library. Special purpose! In case I want to misuse the GPU for other tasks, there should be a separate API.

CatDog

bobvodka
06-22-2008, 04:08 PM
Well, assembly can be called an "API" in a very basic way. The instructions are not executed directly by most CPUs, but are broken down into more primitive commands. The x86 architecture can be seen as an "API", with new instructions being "extensions". But this is a bit absurd, from my point of view. An API is way more abstract than that.

No, no it really can't be called an API;


API (http://en.wikipedia.org/wiki/API)
An application programming interface (API) is a set of declarations of the functions (or procedures) that an operating system, library or service provides to support requests made by computer programs



Programming Language (http://en.wikipedia.org/wiki/Programming_language)
A programming language is an artificial language that can be used to control the behavior of a machine, particularly a computer.[1] Programming languages are defined by syntactic and semantic rules which describe their structure and meaning respectively. Many programming languages have some form of written specification of their syntax and semantics; some are defined only by an official implementation.


Please note how they are totally different things and how the latter applies to languages (assembly, C, C++, C#, Java etc) and the former doesn't.

Mars_999
06-22-2008, 05:24 PM
Ok back on track: OpenGL should stay a graphics library IMO, and if they want GPGPU stuff, make an add-on library that integrates with GL, and call it OpenGPGPU or something. That way they can be used separately or together. BTW, 100 pages of bickering!!! Keep it rolling. ;P

zed
06-22-2008, 06:22 PM
Is this the reason for GPGPU? If true, my feeling is that the new flexibility of current graphics cards is heavily misused to act like extra computers. Just because it can be done.
not 'just because it can be done' (though i admit there is prolly some programmer wankery in there) but because it can offer performance an order of magnitude or 2 or even 3 greater than is available on a cpu

look at cell (used in the ps3)
its now used in the fastest computer in the world
http://www.pcworld.com/businesscenter/article/147222/looking_back_on_the_top500.html
btw amazing the growth, 15years ago the fastest 500 computers combined total were 0.1% of the speed of todays single fastest computer!!

btw 1 petaflop - i had to write it out to get an idea: 1,000,000,000,000,000

its obvious that the lines between gpu+cpu are blurred + becoming more so each year (like the cell). its very possible that the next families from nvidia/amd will be cell-like hybrids (nvidia must be keen on taking a chunk of the cpu dollars as its a far larger market than gpus)

i can take a C program from 40 years ago, compile it + run it on my machine + send that exe to anyone with a pc + they can run it also.
now with graphics cards each generation more has become available, requiring the programmer to rewrite their programs, which is a PITA.
wouldnt it be great to have a graphics language (C++ chosen as an example) that is gonna last for decades.
with C++ i can use boost or STL libraries for funcs that others have written (using boost or STL normally doesnt incur much of a performance penalty).
replace C++/boost with opengl-graphics-language/opengl3.0


well 51 days till siggraph

knackered
06-22-2008, 07:15 PM
....and another thing.....why oh why do people say Tannoy when they mean Public Address System? For heavens sake, Tannoy is a brand name. Same goes for Hoovers.

Oh look, a squirrel.

cass
06-22-2008, 07:31 PM
We say "PA system" and "vacuum cleaner" in the US. :)

Mars_999
06-22-2008, 10:11 PM
We say "PA system" and "vacuum cleaner" in the US. :)



LOL, I personally like my Kirby, built like a tank! ;)

Carl Jokl
06-23-2008, 01:59 AM
It is clear that OpenGL 3.0, even if it finally arrives this summer, will be over a year behind schedule. Can anyone surmise why this is without me having to dig through all 100 pages of ranting?

I must say that it is all well and good putting down Microsoft (and I am not a Microsoft fan), and I can imagine many like OpenGL because there isn't the Microsoft tie-in, as is the case with DirectX. The problem is that when the non-Microsoft alternative systems either screw up, are massively behind schedule, or are slow to make progress, it doesn't help in promoting them.

As regards the GPGPU: that is a bit of a contradiction in terms, at least as far as naming goes. I think it would be better called an HPPU (Highly Parallelised Processing Unit, or something like that spelled correctly).

I could see, from a technical standpoint, potential advantages of integrating GPU cores together with CPU cores on a single die. It seems after all that it is usually the graphics bus which demands the highest bandwidth from the CPU anyway, except maybe in servers. The problem with that is more business related than technical. Graphics cards allow mixing and matching of different CPU and GPU vendors, which would not be possible with everything integrated on one chip.

Zengar
06-23-2008, 02:18 AM
It is clear that OpenGL 3.0, even if it finally arrives this summer, will be over a year behind schedule. Can anyone surmise why this is without me having to dig through all 100 pages of ranting?

No reason. Well, there must be one, but they don't tell us.



As regards the GPGPU: that is a bit of a contradiction in terms, at least as far as naming goes. I think it would be better called an HPPU (Highly Parallelised Processing Unit, or something like that spelled correctly).


There is already that term "stream processor", which is pretty nice IMHO.

zed
06-23-2008, 02:59 AM
ive just been out for a run + im well + truly, knackered
u talking about the suck or blow variety?
well, off to bed, another exciting day awaits tomorrow (they all are when youre unemployed)