DX11 catch-up

Well, all I have to say is that the next release of OpenGL (and GLSL 1.6) must catch up with the current DX11 feature set, or we will (maybe) experience something worse than when DX7 came out :mad:

-Tessellation shaders (the two programmable stages + the fixed-function tessellator)
OR
-Geometry shaders with parallel triangle output (and of course a 16K GL_MAX_GEOMETRY_OUTPUT_COMPONENTS)
// Can you imagine that I can’t output 32 triangles with 2 texcoords?
// If I were rendering a water surface with a texcoord, tangent matrix, wave direction, screen-space vertex position, reflection-space vertex position and eye vector (I messed up the name; anyway, it’s the E part of the specular term),
ON SOME HARDWARE I WOULDN’T BE ABLE TO OUTPUT ANY ADDITIONAL TRIANGLES AT ALL (see the sketch just below this list).

-Multithreaded rendering
-Object-oriented GLSL (superior to the UberShader approach)
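
To put some numbers behind the geometry-shader complaint above, here is a minimal sketch of the arithmetic, assuming a GL 3.2 context and a loader such as GLEW; the per-vertex component counts for the water-surface example are my own guesses, not figures from the hardware:

```c
/* Rough arithmetic for how many vertices a geometry shader can emit once
 * per-vertex outputs add up. Assumes a GL 3.2 context is already current. */
#include <stdio.h>
#include <GL/glew.h>

void report_gs_output_budget(void)
{
    GLint maxPerVertex = 0, maxVertices = 0, maxTotal = 0;
    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_COMPONENTS, &maxPerVertex);    /* per-vertex limit          */
    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES, &maxVertices);       /* vertex-count limit        */
    glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, &maxTotal);  /* total across all vertices */

    /* Guessed per-vertex outputs for the water-surface example:
     * position (4) + texcoord (2) + tangent matrix (9) + wave direction (2)
     * + screen-space position (4) + reflection-space position (4) + eye vector (3) */
    const GLint perVertex = 4 + 2 + 9 + 2 + 4 + 4 + 3;   /* = 28 components */

    GLint budget = maxTotal / perVertex;   /* bounded by the total-components limit */
    if (budget > maxVertices)
        budget = maxVertices;              /* ...and by the vertex-count limit      */

    printf("per-vertex components used: %d (driver allows %d per vertex)\n", perVertex, maxPerVertex);
    printf("max emittable vertices: %d (about %d triangles as a strip)\n",
           budget, budget >= 3 ? budget - 2 : 0);
}
```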

It’s not that hard to implement the features the other API already has, because the underlying hardware already exists (Fermi and Radeon HD); the vendors just need to get their pointers/assembly right.

Seriously, next time we need to be ahead of DX12 and have features they don’t, features that will visually attract gamers.
But that is not likely to happen, because the vendors have to put those features into their hardware, and who listens to OpenGL when DX is EVERYWHERE?

P.S. I don’t know how hard a full tessellation shader is going to be, but if we could have at least a tessellate_displacement and a phong_tessellation extension (like AMD’s tessellation extension), that would be great.

I would also like to say that layout() qualifiers and the separate-shaders extension do not provide the functionality that SM 5.0 does.

Let’s see. When DX10 came out (even before!), OpenGL had the DX10 goodies first. Why? NVidia had a card on the market, and NVidia focuses on OpenGL support.

What’s different this time? Hmmm…

I’m expecting GL support by GDC, if not for GL (EXT/core) then definitely for NV (which hopefully will be out by then).

What first needs to be done is to sort out what OpenGL should look like in a few years, in areas like texturing, framebuffers and so on. In my opinion it needs to be unified and generalized a bit more, along the lines of the way things like VBOs are being handled now.

DX12 is years away, and if you consider that DX10 is still not really a requirement even today, it’s not a big problem.
So relax; tessellation will come, probably at the next GDC as an extension, and then go core somewhere around GL 3.5 or so, which is not that far into the future.

And I don’t think OpenGL should get creative until at least OpenGL 4.

Let’s see. When DX10 came out (even before!), OpenGL had the DX10 goodies first. Why? NVidia had a card on the market, and NVidia focuses on OpenGL support.

Allow me to disagree. It is true that they support OpenGL at the driver level, but it is a different story when you look at their tools.

FXComposer does not support GLSL. They only plan to add basic OpenCL support to Nexus.

In their tools they only give proper support to their own technologies (Cg, CUDA) and to DirectX.

They are within their rights to support whatever technologies they feel make business sense, but to me, as an OpenGL fan, it doesn’t feel quite right.

Well, all I have to say is that the next release of OpenGL (and GLSL 1.6) must catch up with the current DX11 feature set, or we will (maybe) experience something worse than when DX7 came out

Please. I don’t know what you’re talking about with DX7, but DX11’s features are pretty thin. There are basically 3 features of actual note: tessellation, multithreading, and compute.

Compute is not something OpenGL is ever going to handle; it’s being taken care of by OpenCL.

Tessellation of some form will be available eventually. Multithreading will be harder, simply due to the complexity of specifying what the feature means.

In short, there will be another revision of OpenGL in the near-ish future. It will have appropriate extensions and core features for this stuff. Stop worrying about it.

Personally, I don’t care about DX11 features. I’m more concerned about things that will be useful on DX10 and 9 hardware: shader separation, binary shaders, sampler state separation from texture objects, and so on. API cleanup work that OpenGL has long been needing.

-Object-oriented GLSL (superior to the UberShader approach)

Um, no. There is absolutely no point in adding classes and polymorphism to GLSL. Until you have pointers and recursion (the main things missing from GLSL that are present in C), it is totally useless to add classes.

When DX10 came out (even before!), OpenGL had the DX10 goodies first. Why? NVidia had a card on the market, and NVidia focuses on OpenGL support.

Again, NVIDIA != OpenGL. NVIDIA extension support != OpenGL support.

I’m glad that you operate in an environment where you can dictate what hardware your users use. That’s not everyone. That’s not most people. That’s not even the majority. And therefore, what NVIDIA does is not the same as what OpenGL does.

Oh, and it’s funny: DX11 hardware is available now. So where is that OpenGL “support” you’re talking about? Oh, that’s right: NVIDIA is 6 months behind ATI. So any OpenGL “support” is behind the hardware.

FXComposer does not support GLSL. They only plan to add basic OpenCL support to Nexus.

Of course not. GLSL is supported by ATI too. You can’t go around making tools that work on a competitor’s hardware. Same with OpenCL. No, you use your toolchain to provide vendor lock-in. That’s how businesses work.

Most games in 2009 were still released as DX9 titles.
DX9 seems to have the longest life of all the DX releases :slight_smile:

And I see DX has much bigger support from Microsoft and NVIDIA.
At the very least, the SDKs and docs are up to 10 times larger.
Most OpenGL examples and tutorials are outdated.

And now Nexus comes for VS 2008 with HLSL support first.

Alfonse, get off the caffeine already. You take things way too seriously.

OpenGL support via any means, core, extension or otherwise, is OpenGL support (not OpenGL “core” support, OpenGL “support”).

Any new functionality may not support all cards back to Radeon 9700 or GeForce FX (or even more than one vendor’s GPUs for that matter), but it’s still OpenGL support if you can get to it via OpenGL. That’s one of GL’s strengths.

Even if we had core/ARB/EXT support for tessellation now, it’d be basically the same as having only ATI extension support now, because nobody can hack HW tessellation at present but ATI. ATI was first to market and I applaud them. So where’s the OpenGL support, ATI?

Which was my “whole point”, and it apparently flew completely over your head. There is no GL support, via vendor extension or otherwise, right now, because NVidia doesn’t have a card on the table, and ATI isn’t yet devoting the same resources to GL. ATI, please take this as a gentle nudge. We’d love to use your products (and we use loads of boards per customer), but between this and driver-quality issues, we’re having a hard time making the business case for it.

And Alfonse, if you do like GL, I’d encourage you to tone it back a bit and treat others the way you’d like to be treated (I’m assuming you’re not a masochist). Sometimes you spread so much judgement and “one-right-way” ego on these forums that new folks reading you might just walk off and use D3D. If that’s not really your goal, take a chill pill, man, and stop ripping everybody around you a new one for not thinking exactly like you. I’m thankful most GL devs don’t treat others the way you do, or I sure wouldn’t hang out here. So relax, ease off the one-right-way ego, and let’s grow the user base here, not shrink it.

Even if we had core/ARB/EXT support for tessellation now, it’d be basically the same as having only ATI extension support now, because nobody can hack HW tessellation at present but ATI.

No, it wouldn’t. Because if we had core or ARB support for tessellation now, and you wrote code for it now, it would work on ATI hardware and on any future NVIDIA hardware. That’s the whole point of putting something in the core or giving it an ARB extension designation. Whereas if I write code against NV_conditional_render, it doesn’t automatically work on ATI hardware that supports conditional rendering; you have to use the core feature to get at it on ATI cards.
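
To make that concrete, here is a minimal sketch (assuming a GLEW-style loader and an existing occlusion query object; the draw calls are placeholders): the core GL 3.0 entry points work on any driver exposing that version, while the NV-suffixed ones exist only where NVIDIA’s extension is present.

```c
#include <GL/glew.h>

/* Core GL 3.0 path: vendor-neutral, works wherever the core feature is
 * supported, regardless of who made the GPU. */
void draw_if_visible_core(GLuint occlusionQuery)
{
    glBeginConditionalRender(occlusionQuery, GL_QUERY_WAIT);
    /* ... issue the expensive draw calls here ... */
    glEndConditionalRender();
}

/* NV_conditional_render path: the same idea, but code written only this
 * way will never run on hardware that exposes just the core feature. */
void draw_if_visible_nv(GLuint occlusionQuery)
{
    if (GLEW_NV_conditional_render) {
        glBeginConditionalRenderNV(occlusionQuery, GL_QUERY_WAIT_NV);
        /* ... same draw calls ... */
        glEndConditionalRenderNV();
    }
}
```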

The only objective reason to use OpenGL over D3D is the fact that it is cross platform. It works on Windows, MacOSX, and Linux, as well as ATI and NVIDIA. If you want to do graphics on all of these platforms, OpenGL will do the job. Indeed, OpenGL is the only choice.

Now, if you want to use NVIDIA_GL (the entire NVIDIA ecosystem, from Cg to NV assembly to bindless to whatever), you are able to do this through the OpenGL extension mechanism. But do not convince yourself that this is anything even remotely cross-platform. It isn’t OpenGL; it’s just NVIDIA’s API that they expose through OpenGL.
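
For what it’s worth, “going through the extension mechanism” just means querying at run time which extensions the driver exposes and branching on the result. A minimal sketch, GL 3.0 style, with the extension names below chosen purely as examples of vendor-specific functionality:

```c
#include <stdbool.h>
#include <string.h>
#include <GL/glew.h>

/* Returns true if the current context advertises the named extension.
 * Uses the GL 3.0 indexed query rather than parsing one giant string. */
bool has_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext != NULL && strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

/* Example use: keep vendor-specific paths optional, fall back to core GL. */
void pick_render_paths(void)
{
    bool bindless = has_extension("GL_NV_shader_buffer_load");          /* NVIDIA-only  */
    bool amdTess  = has_extension("GL_AMD_vertex_shader_tessellator");  /* ATI/AMD-only */
    (void)bindless;
    (void)amdTess;
}
```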

Making any extension does not constitute “OpenGL support.” EXT_separate_shader_objects, despite the good intentions behind it, is not “OpenGL support.” It only works with the built-in variables, and it clashes with the design ideals of GLSL 1.30+.

You can only call an extension “OpenGL support” if it exposes something in a way that works with OpenGL (rather than against it, like Cg), and can reasonably be implemented by someone else. If NVIDIA were to come out with a tessellation extension that only works with Cg and the NV assembly, this does not constitute “OpenGL support.”

hmm… nice discussion here.

but why is 3.5 planned and not 4.0???

C’mon, the last version before 3.0 was only 2.1, I think?

I cannot imagine we will see core tessellation in the OpenGL 3.x series, because this would mean OpenGL 3.0, 3.1 and 3.2 could run on DX10 hardware, but OpenGL 3.3 and above could not. As nVidia does not even have tessellation-capable hardware yet, I’m sure they will do everything to prevent tessellation becoming a core feature at this moment (and, as opposed to DirectX, they are likely to be able to prevent such things, because they have a strong voice in OpenGL specification creation and OpenGL definitely needs nVidia on board). However, I think an ARB extension could be a serious option for the 3.x series. As Alfonse said, an ARB extension would give the benefit of creating working code now for ATI hardware that also works on future hardware from all vendors.

On the other hand: nVidia is expected to have tessellation-capable hardware in March or so, if all goes well from now on (which remains to be seen; Fermi has had a lot of problems so far, and it is not guaranteed that no more problems will arise, I guess). In March we might also see a new OpenGL version, if the same six-month schedule is followed that we’ve seen since OpenGL 3.0 was released. With nVidia also having tessellation-capable hardware (or more generally: DX11-capable hardware), the doors are open for a new OpenGL version that gets on par with DX11.

Personally I’m looking forward to having ARB or core support for tessellation. I would also like OpenGL to gain multicore rendering support; I consider that a very important feature. I think it is too early for OpenGL 4. They might also decide to create OpenGL 3.5, with the version jump indicating that newer hardware is needed, but it would basically be a minor update from OpenGL 3.2, with just some stuff added to get on par with DX11.

OpenGL 4 could still be the API rewrite they promised a long time ago. I believe OpenGL 3.x as we see it now was just necessary to prepare us for an API change. I think an API rewrite is still a viable option, but since the uproar about OpenGL 3.0 they are just very quiet and don’t talk about such a thing until it is done and ready to be released (August 2010??). In the meantime they make sure, with the 6 month release schedule, that OpenGL is and stays ready for an API switch (ready means: on par with hardware capabilities).

I don’t believe there will be an API rewrite, ever.

OpenGL 3.0 introduced the deprecation system to be able to change the API gradually. Instead of one monolithic rewrite, they figured changing the API piecewise would be much easier to accomplish and easier to get vendors to support.

Also, I don’t think nVidia would deliberately prevent an extension such as tessellation from being created, even if they can’t support it right now. It would be extremely stupid, because with such a feature they know that they will have to support it eventually. I think it simply takes time to create a proper spec. They would actually benefit from creating the spec NOW, because that gives them time to implement it in their drivers and have it ready when their hardware ships. Whether ATI supports it now, earlier than nVidia, or ever doesn’t matter, because the extension is really not that important. Even in DirectX-land, apart from some tech demos, nobody uses it so far.

Every spec that deals with shaders is complicated in nature, and adding one (or two) entirely new shader stages has to be done carefully. And as far as I can see, the ARB wants to do things right from the start; everything else just means more work and more headaches in the long run.

Jan.

No, but at a point you come to a threshold where you have to take a larger step in order to say “here’s our new baseline”.

Just looking at the 3.x versions released so far, it feels like a steady march towards a definite goal, and once that is done you only have to state that “this is no longer 3.x but 4.0”.

I’m not saying nVidia will prevent making a spec for tessellation. What I’m saying is that I think they will prevent it becoming core until their hardware supports it.

This is one good reason, but not the only objective reason.

Another is that you don’t want the continuing costs/burdens/security problems/update nightmares/stability issues/etc. that go along with maintaining an embedded graphics system on Microsoft Windows. It’s a completely pointless headache.

Microsoft’s “Where Do You Want To Go Today” has gotten lost, and they ain’t going where we want to go. Except with D3D, but their aims are crystal clear. They alone control it, and they use it as a stick to make their user base jump through hoops (buy new OSs) when “they” want them to. Can you say, cattle prod? I feel sorry for ATI, NVidia, and Intel getting stuck in the middle of that mess.

So rejoice. There are multiple reasons to choose OpenGL, and we’ve only just touched on two of them here.

but why is 3.5 planned and not 4.0???

Who said it isn’t? And we’re only up to 3.2, so I don’t know where you’re getting 3.5 from.

If I were in charge of the ARB, I’d be looking at making 2 specs: 3.3 and 4.0. 3.3 would essentially be the important API improvements in core (there would be appropriate core extensions available too, for pre-DX10 hardware, though that’s becoming increasingly scarce). 4.0 would strictly be for DX11 hardware.

They could probably get away without the 3.3 from my plan, just releasing the API improvements as core extensions. But putting things in core has often been a way of kicking certain implementers (coughATIcough) in the pants to get implementing. And I want proper shader separation in every driver on the planet ASAP.

Another is that you don’t want the continuing costs/burdens/security problems/update nightmares/stability issues/etc. that go along with maintaining an embedded graphics system on Microsoft Windows. It’s a completely pointless headache.

And yet, almost every game developer is perfectly willing to put up with this “pointless headache”. Indeed, even cross-platform games that get released on MacOSX or Linux often have a D3D path that they use on Windows. That’s a maintenance nightmare. Yet they do it.

In fact, there’s a company with significant ARB influence that specializes in doing just this type of port (they’re one of the main motivators for extensions like ARB_provoking_vertex and ARB_vertex_array_bgra, which make it easier to use the same vertex data between GL and D3D). You couldn’t build an entire company around doing that kind of thing if D3D were a “completely pointless headache,” or anything else you describe it to be.
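
As an aside, here is a minimal sketch of what those two extensions buy a GL/D3D port; the attribute index, stride and buffer offset are hypothetical, and a bound VBO plus a GL 3.2 context are assumed:

```c
#include <stddef.h>
#include <GL/glew.h>

void setup_d3d_style_vertex_stream(GLuint colorAttrib, GLsizei stride, size_t colorOffset)
{
    /* ARB_vertex_array_bgra (core in GL 3.2): a size of GL_BGRA lets a
     * D3DCOLOR-style packed BGRA byte color be consumed as-is, so the same
     * vertex buffer can feed both APIs without CPU-side swizzling. */
    glVertexAttribPointer(colorAttrib, GL_BGRA, GL_UNSIGNED_BYTE, GL_TRUE,
                          stride, (const void *)colorOffset);
    glEnableVertexAttribArray(colorAttrib);

    /* ARB_provoking_vertex (core in GL 3.2): D3D takes flat-shaded outputs
     * from the first vertex of a primitive, while GL defaults to the last;
     * switching the convention makes flat attributes match across APIs. */
    glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
}
```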


And yet, almost every game developer is perfectly willing to put up with this “pointless headache”. Indeed, even cross-platform games that get released on MacOSX or Linux often have a D3D path that they use on Windows. That’s a maintenance nightmare. Yet they do it.

In fact, there’s a company with significant ARB influence that specializes in doing just this type of port (they’re one of the main motivators for extensions like ARB_provoking_vertex and ARB_vertex_array_bgra, which make it easier to use the same vertex data between GL and D3D). You couldn’t build an entire company around doing that kind of thing if D3D were a “completely pointless headache,” or anything else you describe it to be.

Many people criticize DX just because it is a Microsoft product tied to the Windows platform. Many of them have never even cared to learn about it.

It is true that its COM-based API and Hungarian notation look really bad compared with how nice OpenGL looks, but if you have ever looked at OpenVMS code, it is actually not that bad. :slight_smile:

And D3D is much more than OpenGL is: it is part of a complete gaming framework.

Not to forget that many gaming studios want to deliver their games to as many people as they can, and not everyone has an ATI or NVidia card.

Some of the people who complain about Microsoft assume that everyone but Microsoft is using OpenGL, and that is also not true. Even Apple used to have its own API; remember QuickDraw 3D?

The game consoles mostly have OpenGL-like APIs, but they are not OpenGL. The only one I know of that really has proper OpenGL support is the PS3, and there you have to use Cg for your shaders. And many PS3 AAA titles just use the native graphics API instead of OpenGL.

The gaming industry is what sells most of the graphics cards nowadays, and most developers tend to use whatever APIs the target system has. If they need to have it on different platforms, they get subcontractors to do the porting. This is how the industry works, and it won’t change.

We should all be thankful to the mobile industry, and especially to Apple, because OpenGL is now becoming relevant again for game development.

If ATI or NVidia lower their support for OpenGL, then the API will die on Windows, regardless of how important it is on other platforms.

Yes, of course. PC game devs have to, since their users choose the hardware, and the devs just have to deal with it or lose a sale.

As developers of embedded systems, we are definitely in an unusual (and ideal) situation, where we pick the hardware/software for our customer because it’s a full-system solution. They just don’t care which chips, boards, hard drives, network cards, power supplies, cases, GPUs, driver versions, OSs, APIs, dev languages, debugging tools, etc., we use. Their requirements on us are that the system operates per requirements, with a specified interface to the outside world, a specified uptime, and certain initial and continuing cost limits. So whatever of the above list we want to use to get there is fair game.

But then, most embedded systems that I am aware of make use of 3D chips that follow the OpenGL ES standard, not the usual OpenGL one.

The gaming industry is what sells most of the graphics cards nowadays, …

I’d like to challenge this assertion. I believe it was true maybe five years ago, but no longer true today. I’ve heard this as an excuse (games sell cards, games use D3D, so no reason to put a lot of effort into OpenGL) from employees at a major chipmaker whose OpenGL support is so crappy that most developers will revert to software OpenGL rather than use that vendor’s crappy drivers.

With the exception of MMOs and flight sims, a large number of PC games nowadays begin life as console games. The game is designed and developed on the 360 and then later ported to the PC and PS3. PC sales alone aren’t enough to support the cost of modern AAA games (http://www.shacknews.com/onearticle.x/53047).

There’s no longer any reason to buy the latest and greatest graphics card to play the latest PC game. You can just buy that same game and play it on your console on your 52" plasma and 5.1 surround sound.

I think marketing departments of the major chip vendors need to re-evaluate who is buying all their cards/chips. Perhaps now it’ll end up in OpenGL’s favor and maybe lead to a renewed focus on the API. Or maybe that’s just too optimistic.