Official feedback on OpenGL 3.1 thread



Khronos_webmaster
03-24-2009, 10:47 AM
The Khronos™ Group announced today it has publicly released the OpenGL® 3.1 specification that modernizes and streamlines the cross-platform, royalty-free API for 3D graphics. OpenGL 3.1 includes GLSL™ 1.40, a new version of the OpenGL shading language, and provides enhanced access to the latest generation of programmable graphics hardware through improved programmability, more efficient vertex processing, expanded texturing functionality and increased buffer management flexibility.

OpenGL 3.1 leverages the evolutionary model introduced in OpenGL 3.0 to dramatically streamline the API for simpler and more efficient software development, and accelerates the ongoing convergence with the widely available OpenGL ES mobile and embedded 3D API to unify application development. The OpenGL 3.1 specification enables developers to leverage state-of-the-art graphics hardware available on a significant number of installed GPUs across all desktop operating systems. According to Dr. Jon Peddie of Jon Peddie Research, a leading graphics market analyst in California, the installed base of graphics hardware that will support OpenGL 3.1 exceeds 100 million units. OpenGL 3.0 drivers are already shipping on AMD, NVIDIA and S3 GPUs.

Concurrently with the release of the OpenGL 3.1 specification, the OpenGL ARB has released an optional compatibility extension that enables application developers to access the OpenGL 1.X/OpenGL 2.X functionality removed in OpenGL 3.1, ensuring full backwards compatibility for applications that require it.

OpenGL 3.1 introduces a broad range of significant new features including:


* Texture Buffer Objects - a new texture type that holds a one-dimensional array of texels of a specified format, enabling extremely large arrays to be accessed by a shader; vital for a wide variety of GPU compute applications.
* Signed Normalized Textures - new integer texture formats that represent a value in the range [-1.0,1.0].
* Uniform Buffer Objects - enable rapid swapping of blocks of uniforms for flexible pipeline control, rapid updating of uniform values and sharing of uniform values across program objects.
* More samplers - now at least 16 texture image units must be accessible to vertex shaders, in addition to the 16 already guaranteed to be accessible to fragment shaders.
* Primitive Restart - to easily restart an executing primitive, for example to efficiently draw a mesh with many triangle strips.
* Instancing - the ability to draw objects multiple times by re-using vertex data, to reduce duplicated data and the number of API calls.
* CopyBuffer API - accelerated copies from one buffer object to another, useful for many applications including those that share buffers with OpenCL™ 1.0 for advanced visual computing applications.
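
For a rough idea of how a couple of these look in code, here is a minimal sketch (not part of the announcement; it assumes a 3.1 context with a vertex array and element buffer already bound, a vertex shader that reads gl_InstanceID, and indexCount standing in for however many indices you have):

/* Primitive restart: draw many triangle strips from one index buffer,
   using 0xFFFF as the "start a new strip here" marker. */
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(0xFFFF);
glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_SHORT, 0);

/* Instancing: draw the same mesh 100 times in one call;
   the vertex shader distinguishes copies via gl_InstanceID. */
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0, 100);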

Groovounet
03-24-2009, 11:04 AM
Oo!

Unexpected release ... so far so good. I was waiting for uniform buffers, I'm glad it's here! Still waiting for the actual specification now, though ...

It's not much, but if the idea is to release every 6 months ... ***Youhou***

Khronos_webmaster
03-24-2009, 11:11 AM
The specifications are now available in the OGL registry: http://opengl.org/registry/

Chris Lux
03-24-2009, 11:23 AM
i just looked over the specs. very nice and unexpectedly clean ;). if OpenGL 3.2 gets the direct state access stuff it will be great.

great work!

p.s. one question about the UBO extension: is it possible to have multiple uniform buffers at once?



uniform lights
{
...
}

uniform material
{
...
}


this is a question from my first quick look at the spec. what does this look like on the host side with multiple UBOs for the buffers?

3B
03-24-2009, 11:47 AM
How does the ARB_compatibility extension work?

If the extension is supported, does a 3.1 context still have all the deprecated stuff anyway? Or does an application need to explicitly request backwards compatibility somehow?

barthold
03-24-2009, 12:13 PM
> If the extension is supported, does a 3.1 context still have all the deprecated stuff anyway?

That is correct. Just check for GL_ARB_compatibility in the extension string if you need to use any of the deprecated features.
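
For example, under a strict 3.1 context (where the old glGetString(GL_EXTENSIONS) call is itself among the removed features) the check might look roughly like this; a sketch only, assuming <string.h> and a loaded glGetStringi entry point:

/* Scan the extension list for GL_ARB_compatibility. */
GLboolean hasCompat = GL_FALSE;
GLint numExtensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
for (GLint i = 0; i < numExtensions; ++i) {
    const char *ext = (const char *) glGetStringi(GL_EXTENSIONS, i);
    if (strcmp(ext, "GL_ARB_compatibility") == 0)
        hasCompat = GL_TRUE;
}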

Chris Lux
03-24-2009, 12:59 PM
this seems odd. so if GL_ARB_compatibility is in the string all functionality is still there? wouldn't it be more logical to use a backward compatibility flag during context creation like with the forward compatibility flag for OpenGL 3.0?

Rob Barris
03-24-2009, 01:08 PM
> p.s. one question about the UBO extension: is it possible to have multiple uniform buffers at once (e.g. separate "lights" and "material" blocks)? [...] what does this look like on the host side with multiple UBOs for the buffers?

Should be completely doable, but there are hardware limits to be aware of. Also, note that the UBO extension was written so that it could apply to some pre-GL3 hardware as well, such as Radeon X1000 and GeForce 7, though availability of the extension on those parts is up to each vendor to decide. On those parts you could quite likely be limited to a single UBO bound per draw; this is my initial guess. But on GL3 hardware the limits are higher... 16, I think?
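
To make the host side concrete, a minimal sketch of binding the two blocks from the question to two different buffers might look like this (illustrative only; prog, lightsUBO and materialUBO are assumed to be an already-linked program object and two already-filled buffer objects):

/* Look up the two named uniform blocks in the linked program. */
GLuint lightsIdx   = glGetUniformBlockIndex(prog, "lights");
GLuint materialIdx = glGetUniformBlockIndex(prog, "material");

/* Give each block its own uniform-buffer binding point... */
glUniformBlockBinding(prog, lightsIdx,   0);
glUniformBlockBinding(prog, materialIdx, 1);

/* ...and attach a buffer object to each binding point. */
glBindBufferBase(GL_UNIFORM_BUFFER, 0, lightsUBO);
glBindBufferBase(GL_UNIFORM_BUFFER, 1, materialUBO);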

Rob Barris
03-24-2009, 01:12 PM
this seems odd. so if GL_ARB_compatibility is in the string all functionality is still there?


wouldn't it be more logical to use a backward compatibility flag during context creation like with the forward compatibility flag for OpenGL 3.0?

Just a different way of doing it I suppose. But the deprecation model as written includes the concept of outgoing features being pulled back out to extension land. I have no prediction on which vendors will offer that ext and for how long. Ultimately it's going to be developer / app uptake that will affect the lifetime of outgoing extensions in the market.

Note that this process could occur again in the future, so the context-creation-flag approach might not scale as well. People know and understand the extension model already, it's just being used in a new way here.

ector
03-24-2009, 01:54 PM
Rob, this looks like a fine release. Uniform buffers are gold.

I now only have two big beefs left with OpenGL:

* Can you please push HARD for decoupling vertex and fragment shaders, so that you don't need to explicitly link them together and can mix & match as needed, like with ARB_programs or in DX?
* Full direct state access for all 3.1 features (and commitment to keep supporting it for every future feature, eventually phasing out indirect state access) would be awesome. Bind-to-modify is such a ridiculously horrible idea that it's absolutely unbelievable that it's still with us in 2009.

Oh yeah, and texture filtering state should be per sampler, not per texture. But you've heard that one to death and it's likely not a big performance win.

Stephen A
03-24-2009, 02:55 PM
ector++

UBOs were much needed and direct state access is what OpenGL *really* needs.

Rob Barris
03-24-2009, 04:59 PM
Can't make any specific promises since this is a group effort and it is too soon to say, really, but many of the suggestions in the last couple of posts carry some noticeable weight in terms of "working group interest level" for the next major revision. That said, we are trying to stay schedule-driven, and we might start out a plan with 5 major things on the list and then ship with 4 in order to avoid extensive schedule creep; that's why we don't carve it in stone here.

UBO was one that caused a bit of schedule elongation on the 3.1 release but we can see that it was worth the slight added wait.

scratt
03-24-2009, 07:21 PM
Will pull down and read the specs today, but the bullet points above seem to indicate you've covered most of the things I am immediately concerned about. Thanks.

Just have to wait for Apple to roll it out then.... ;)

Hampel
03-25-2009, 03:00 AM
Haven't had the chance to download the specs yet, but what about the fixed-function pipeline? Is it still in there, or has it finally been removed?

nosmileface
03-25-2009, 03:03 AM
Nice work! I hope vendors will eventually make us happy with good, stable drivers (which behave identically), at least for the 3.1 part. And *maybe* this will bring OpenGL back into the game development arena.

I'm the guy who cares about games on Linux (well, I hope to see them in the future), and without a good competing graphics API that's not even possible. The obvious way it will happen is: make a game for window$ and port it to linux/macos, since the API is OpenGL anyway. In other words, I hope we will see games for window$ built on top of OpenGL in the future.

Once again, great job!

ZbuffeR
03-25-2009, 03:11 AM
Nvidia has already released 3.1 drivers, nice work!

Groovounet
03-25-2009, 04:06 AM
They're supposed to be OpenGL 3.1 drivers, but they're definitely not there yet.
Anyway, drivers are on their way for both nVidia and ATI.

Auto
03-25-2009, 08:07 AM
Nice one - agreed with ector though: direct state access in 3.2 pls.

Good stuff.

ZbuffeR
03-25-2009, 08:17 AM
From http://developer.nvidia.com/object/opengl_3_driver.html#notes :

This driver implements all of GLSL 1.30 and all of OpenGL 3.0, and all of OpenGL 3.1 and GLSL 1.40, except for the following functionality:

* The std140, column_major and row_major layout qualifiers in GLSL 1.40
* The API call BindBufferRange() ignores the <offset> and <size> parameters. In other words, BindBufferRange() behaves the same as BindBufferBase()
Can't test on my oldish card, but this sounds already quite complete.
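
(For reference, the difference between the two calls mentioned in the note; a sketch only, binding a uniform buffer to binding point 0, where ubo is assumed to be an existing buffer object:)

/* Bind the whole buffer to uniform binding point 0 ... */
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

/* ... versus binding only a 256-byte window starting at byte offset 1024,
   which is the part the note says is currently ignored. */
glBindBufferRange(GL_UNIFORM_BUFFER, 0, ubo, 1024, 256);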

Eddy Luten
03-25-2009, 08:39 AM
Just dropped by out of self-inflicted exile to say: GOOD JOB, KHRONOS

Stephen A
03-25-2009, 09:39 AM
Can we please have an update for the man sources? (https://cvs.khronos.org/svn/repos/ogl/trunk/ecosystem/public/sdk/docs/man/)

They were made public a few months ago (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=245768) and we are using them to generate inline documentation for the .Net bindings. Unfortunately the public repository seems stuck with OpenGL 2.1.

Edit: Also gl.spec (http://www.opengl.org/registry/) doesn't seem to list any "version 3.1" functions, even though "3.1" has been added to the version string. Any timeframe for the update (so we can plan our release accordingly)?

spasi
03-25-2009, 09:58 AM
Great job Khronos!

I would just like to add that the EXT_copy_buffer extension is missing from the OpenGL registry. I wasn't able to find token values for COPY_READ_BUFFER and COPY_WRITE_BUFFER in the updated header files either (used in the new CopyBufferSubData).

skynet
03-25-2009, 11:34 AM
I'm horrified by ARB_compatibility.

Will that pre-GL3.0 zombie survive forever?

It means that even though I request a 'pure' GL3.1 context, all the old cruft will still be in the driver!? There won't be any of the performance benefits that a lean-and-mean 3.1 driver promises.
ARB_compatibility undermines the positive effects the deprecation model should have brought. I want a way to turn it off (maybe via a new flag in wglCreateContextAttribsARB). I think the ones who really need GL_SELECTION (and all that other old stuff) in a GL3.1 environment should be forced to deliberately activate it - and then pay for it.

Why not split driver development at this point? Why not provide a GL3.0+extensions driver and a separate GL3.1+new_extensions driver?
You want developers to take the pain and rewrite their engines for GL3.x? Fine, then reward them with better performance!


Another thing I'm missing is a proper glPushAttrib/glPopAttrib replacement. The push/pop mechanism made it easy to isolate independent rendering code. There should be some new and efficient way to set/change/restore state in GL3.x without having to make multiple calls to glGetXXX().

Dark Photon
03-25-2009, 12:17 PM
Many thanks to Khronos/ARB and NVidia for their great work on OpenGL 3.1 design and drivers. Impressive comeback after 3.0! Whatever you're doing different now, please stick with it!

Chris Lux
03-25-2009, 12:58 PM
I would just like to add that the EXT_copy_buffer extension is missing from the OpenGL registry. I wasn't able to find token values for COPY_READ_BUFFER and COPY_WRITE_BUFFER in the updated header files either (used in the new CopyBufferSubData).
+1 for updated headers.

ScottManDeath
03-25-2009, 01:43 PM
Btw, are there any plans to have an RSS feed for the registry?

I tried building one with an external website, but it was all screwbally.

Don't Disturb
03-25-2009, 01:58 PM
ARB_compatibility is absolutely necessary, and yes, on NVidia hardware at least, the pre-GL3 zombie will survive a long time. Other vendors don't have to support ARB_compatibility, so they're not being held back. A lot of time and money has been invested in writing pre-GL3 code; expecting that to be thrown away just to use uniform buffers is unreasonable.
Backwards compatibility is one of the main wins for GL vs D3D; throwing away that advantage is a definite no-no.

ZbuffeR
03-25-2009, 04:46 PM
<GL3/gl3.h>: good idea, especially to prevent use of the deprecated stuff, but why not <GL/gl3.h>?
And where/when can it be downloaded?

NocturnDragon
03-26-2009, 02:20 AM
Seeing how much of an improvement the experimental C++ bindings for OpenCL are compared to the C bindings (http://www.khronos.org/registry/cl/ - you need to be way less verbose), it would be awesome if we had something like that for OpenGL 3.

Ido_Ilan
03-26-2009, 02:27 AM
I'm horrified by ARB_compatibility.

I must second this.
We develop a medical device that relies heavily on OpenGL: volume rendering (ray casting/slices), huge meshes, numerous points and lines. We have been refactoring the code to remove some deprecated stuff in anticipation of the next release, but all in vain (maybe not: it will help in the transition to DX). Although we use a Quadro FX4600 (and above, costing many many $$$), the drivers are so shitty that you never know where the problem is.
We recently switched to a new driver/card and boom, the application loses texturing or gets stuck; going back to an older card works, restoring the old driver works.
We need a working OpenGL. I don't mind losing selection/wide lines/push-pop/fixed pipeline etc. for a working implementation.
To be honest, for months others have been pushing DirectX and I keep telling them no, no, GL3+ will be great!!!
Well, it doesn't seem so. I want two separate implementations: one for pre-GL3 and one for post-GL3. In OpenGL 3.1+ I don't want any old stuff, and I want drivers to be rock solid.
I'm starting to think Korval was right from the start: OpenGL is gone.
If something dramatic does not happen soon, everyone will switch to DirectX: CAD, medical and military. Even CAD users need solid drivers (see AutoCAD).

Ido

Heiko
03-26-2009, 03:18 AM
Well, perhaps the vendors will release something like 'legacy drivers' in the future? Say, every once in a while they release a driver that includes the ARB_compatibility extension, but their newest lean-and-mean driver won't contain it (like they already do for old graphics cards: every once in a while a new legacy driver is released for them, but the mainline drivers don't support them). My guess is that the old code won't need every newest driver anyway.

I think that could be a perfect way to phase out the pre-GL3 code and focus on an OpenGL 3.1 driver that is clean.

Personally I'm quite happy with the 3.1 release, I didn't expect the deprecated stuff really to be removed from this release. The uniform buffer is also quite a nice feature. Besides that: 9 months after the previous release, who would have expected that after what happened with OpenGL 3.0?

My guess (hugely based on hope) is that we will see an OpenGL 4.0 version that will be the new rewritten API sooner than most expect. The OpenGL 3.x line is needed for the transition to a new API. They just couldn't make that huge step at once.

Here is a nice blogpost written by Paul Martz on the topic:
http://www.skew-matrix.com/bb/viewtopic.php?f=3&t=4

Y-tension
03-26-2009, 03:46 AM
Cool! Great! But just 1 question: Why not geometry shaders?

Groovounet
03-26-2009, 04:40 AM
Because geometry shaders are going to be deprecated in a few years. It's just a good bad idea.

When I read how hard nVidia pushes "WE WANT BACKWARD COMPATIBILITY", I don't think this dreamy rewritten OpenGL will exist any time soon.

Groovounet
03-26-2009, 06:12 AM
This extension / feature seems to have been rushed out ^_^

skynet
03-26-2009, 07:22 AM
ARB_compatibility is absolutely necessary
No, it's not. You want to use ARB_uniform_buffer_object in immediate mode? No problem; ARB_uniform_buffer_object is written against the GL2.1 spec. Just open a GL2.1 context and use them together.


A lot of time and money has been invested in writing pre-GL3 code; expecting that to be thrown away just to use uniform buffers is unreasonable.
Just because GL3.1 came out doesn't mean the 2.x (and even 3.0) driver and functionality automatically go away. Your existing software will function as always. If you want to stay with the old functionality, just request a 2.1 context. There's absolutely no problem. But please don't penalize those who want to upgrade their software to benefit from faster and more stable drivers.


Backwards compatibility is one of the main wins for GL vs D3D,
IMHO backwards compatibility has become the major drawback of OpenGL. But this has been discussed at length already...

glfreak
03-26-2009, 08:26 AM
It's really a stripped-down version of GL that gets rid of all the CAD stuff. That's good if it makes GL implementations more stable and easier to write.


Because geometry shaders are going to be deprecated in a few years. It's just a good bad idea.

Why? It's now a standard feature of the modern graphics pipeline.
Why isn't it there?

Where's direct state access?

Now we have no reason not to prefer Direct3D.

Rob Barris
03-26-2009, 08:39 AM
IMO the distinction between making the geometry shader feature core or not would be more significant, if there were vendors that were avoiding implementation of the extension. i.e. if you as a developer wanted to use it but found out that a particular IHV had not implemented it, this could pose a real problem. But my understanding is that it is readily available on both AMD and NVIDIA implementations. So since the actual hard work of implementing it is complete, I would expect that extension to be around as long as the feature still exists in the hardware. Whether that interval is "forever" or "a couple of generations" I do not know.

Heiko
03-26-2009, 08:45 AM
> But my understanding is that it is readily available on both AMD and NVIDIA implementations. So since the actual hard work of implementing it is complete, I would expect that extension to be around as long as the feature still exists in the hardware. [...]


I was under the impression that AMD did not implement the extension for the geometry shader yet. Am I wrong?

Check this news post on geeks3d about the extensions supported by AMD hardware:
http://www.geeks3d.com/?p=3522

There are no extensions for the geometry shader. Also:
Geometry Shader Texture Units: 0
Max Geometry Uniform Components: 0
Max Geometry Bindable Uniforms: 0

I think putting them into core would finally make AMD implement the geometry shader as well...

glfreak
03-26-2009, 08:54 AM
Then, based on the assumption that a feature may or may not exist in hardware, we would end up having no core and everything as an extension,
unless GS is being reconsidered as to whether or not it's useful...

But with all these features deprecated, it becomes a bit more like Direct3D 10... with more overhead and backward compatibility issues.

ebray99
03-26-2009, 09:14 AM
I think the idea of the ARB_compatibility extension is a good one. However, I think this extension should be revised to contain a new token for "enabling" compatibility. For instance:

// enable compatibility for the life of the application.
glEnable( GL_COMPATIBILITY_ARB );

If the extension is enabled, then old GL calls would work fine. If the extension is disabled, then the old GL calls would generate an error. Also, having header files that remove deprecated functionality would be a great thing!

My biggest concern with this particular extension has to do with the fact that modern code bases are large. If you're porting an old code base to GL3.1, and you miss something, ARB_compatibility would make it "just work", and you'd end up with a pretty serious bug in your program that could go completely unnoticed until your product has long since shipped to the masses. I'd like to be able to avoid these kinds of scenarios to ensure the maximum longevity of my applications.

Kevin B

martinsm
03-26-2009, 09:16 AM
Heiko: you are not wrong. ATI drivers do not expose geometry shaders to OpenGL.

Groovounet
03-26-2009, 09:18 AM
ARB_compatibility is absolutely necessary
No, it's not. You want to use ARB_uniform_buffer_object in immediate mode? No problem; ARB_uniform_buffer_object is written against the GL2.1 spec. Just open a GL2.1 context and use them together.


I agree, and I think it's just a marketing issue: "We want our software to be compliant with the latest OpenGL version so we can put it on our feature list." :p

@glfreak: 1. What need do you have for geometry shaders? 2. What need do you have that the tessellator unit won't address better?

And the list is ... short!
Plus, geometry shaders are not really efficient on current hardware, on nVidia at least; it seems better with ATI hardware.

glfreak
03-26-2009, 12:09 PM
What's going to happen to big software like Lightwave 3D, Maya, Softimage, the Quake/Doom engines, and many others? Are they going to port to the compatibility extension?

OpenGL is supposed to be a CAD-oriented graphics API, and now it's missing its most important features.

I would like to see the ARB take care of implementing these deprecated features, since they are a heavy burden on IHVs and can make their implementations unstable, and let the IHVs implement the rest of the core functionality. Mind you, most of the deprecated core can be implemented on top of the current core specification; the rest doesn't need hardware-specific drivers either.

Just don't screw it up more, please.

M/\dm/\n
03-26-2009, 01:19 PM
Why the compatibility thing?!!!! WHHHHYYYY?

I so loved the idea that strict GL3.0 had all of the legacy stuff stripped, and that in 3.1 there could be a separate driver and separate header files if need be. And now?

If the extension is there in the specs, it means there is no way we will be able to remove all the legacy stuff, EVER!

The forward compatibility flag and new context creation function plus an old 2.1 context was the right way to go. :(

skynet
03-26-2009, 01:23 PM
Arrrghh, I just can't hear it anymore!
THESE POOR CAD COMPANIES DON'T NEED TO LIFT A FINGER IN ORDER TO KEEP THEIR CURRENT SW WORKING. It will just work as always, even if the driver provides an _additional_ GL3.1 context. Nobody ever wanted pre-GL3.0 drivers to disappear from one day to the next. And it won't happen for years; it would not have happened with Longs Peak either. You software developers are not forced to do anything just because a new GL spec appears.
We understand that; it's OK.
We just DON'T want pre-GL3.0 'features' to poison the lean-and-mean GL3.1 driver. WGL_ARB_create_context allows us to specifically decide to use a GL3.1 context over an older version. By doing that I _deliberately accept_ that the deprecated features are gone. But I also _expect_ that the reduced driver complexity pays off in the form of better stability and performance. With ARB_compatibility in the background (which basically enables all the old features by default), the driver _has_ to assume that I may use one of the old features and therefore can't enable any optimization that would only be possible without the old cruft.

This is what actually real CAD companies think about OpenGL today: http://www.mcadforums.com/forums/files/autodesk_inventor_opengl_to_directx_evolution.pdf

Admittedly, some of the arguments are a little off (they haven't evolved their rendering backend in ten years?!), but most of them are credible.

Lord crc
03-26-2009, 02:24 PM
*scratches head* So all the "goodies" from 3.0 are available as extensions in 2.1, and 3.0 still allows basically everything from 2.1. Now comes 3.1 which uses the deprecation mechanism to remove a lot from 3.0, except it doesn't, so in reality, the deprecation mechanism is currently just smoke and mirrors. Or?

The reason I stopped coding OpenGL stuff was because I never knew if it would run on anything but my machine (especially GLSL shaders). Has anything changed in this regard?

Rob Barris
03-26-2009, 02:52 PM
Take some time to consider the possibilities. For example, the default downloadable GL driver could be the base 3.1 version without ARB_compat. It's already commonly the case that vendors provide basic GL drivers as well as "workstation optimized" GL drivers as two separate things. Prior to 3.1, both such drivers had to have the whole enchilada, including all the legacy code. Post 3.1, this is no longer the case because the legacy functionality has been defined as optionally present.

The key message of 3.1 is highlighting / emphasizing the feature set you should code to going forward. It's yet to be seen what the adoption rate of 3.1 + ARB_compat will be - since that would only apply to ISV's upgrading their apps to use 3.1 features, while resisting elimination of legacy code. The answer for those apps might be "you need the workstation class drivers with the extra ARB compat support if you want to run that app." Not every such app ships to a wide audience, some are internal / custom apps.

This is all off the top of my head, but it's completely doable. Don't be distracted by the ARB_compat extension; focus on learning the 3.1 core feature set.

I think once you step back from the assumption that an IHV can only ship one flavor of their driver, you might see where I am coming from.

If you aren't in the group described above (migrating a legacy app up to 3.1 while wanting to continue using legacy func) then the story is very simple - focus on the core feature set and code to that. The spec is shorter, the coding choices are fewer. The driver will never spend any cycles wandering between fixed func and shader mode because your app won't be asking for that type of behavior any more.

M/\dm/\n
03-26-2009, 03:01 PM
It just takes one lazy game developer using the compat extension in their game and the good idea is ruined. We will have to support them forever, and no more lightweight GL3 drivers.

By the way, are those gl3.h headers available somewhere? And yes, why <GL3/gl3.h> and not <GL/gl3.h>, like <GL/glaux.h> and <GL/glu.h>?

Lord crc
03-26-2009, 03:55 PM
I think once you step back from the assumption that an IHV can only ship one flavor of their driver, you might see where I am coming from.

I must admit I have no idea how many flavors the various IHVs keep; however, I'm assuming that a driver is a non-trivial thing to write, and as such you'll want to minimize the number of parallel implementations. Is there really enough incentive for any significant number of IHVs to write optimized 3.1-only drivers?

In any case, 3.1 looks like a good step in the right direction. Most important, however, is (imho) that you managed to release it within the claimed timeframe.

Rob Barris
03-26-2009, 05:21 PM
Was thinking about this some more and just wanted to point out that all of us developers want a few key things:

a - correctness - features work as advertised
b - performance - things run fast
c - simplicity - how much spec material to wade through.

Spec writing can directly influence 'c', but really only has an indirect effect on 'a' and 'b'. Improving 'a' and 'b' requires a healthy and active feedback loop between the providers of the implementation and its clients.

i.e. if you are running into problems with some code of yours that doesn't work right, or has some performance quirk, etc - the first stop really needs to be the dev relations department of the affected implementor. This is a process we participate in on an ongoing basis, it takes time and elbow grease but we get results that help our products work better.

Plainly most developers would prefer it if everything were perfect out of the box. That said, bugs do exist, so a developer who encounters a problem and doesn't circle back to the implementor with a bug report breaks the feedback loop. But that loop provides the mechanism to really make progress on 'a' and 'b' for your app; you have to make contact and speak up if something isn't right for you.

Lord crc
03-26-2009, 06:03 PM
Spec writing can directly influence 'c', but really only has an indirect effect on 'a' and 'b'.

True, but IMHO the effect on 'a' and 'b' shouldn't be downplayed. For instance, here's a concrete issue I've had: ATI drivers would behave in "client-side array mode" when using VBOs to compile a display list, and consequently crashed when trying to access 0x00000000 (i.e. offset 0 into the VBO). While I guess this isn't a normal thing to do, it _is_ allowed according to ARB_vertex_buffer_object.

In any event, I'll most likely try to do some GPGPUish stuff soon, and will hopefully be able to exploit some of the new stuff like TBOs and instancing.

ScottManDeath
03-26-2009, 10:53 PM
http://www.opengl.org/registry/specs/ARB/copy_buffer.txt

... contains the enumerants which were apparently missing from the glext.h file

Simon Arbon
03-27-2009, 12:06 AM
If you aren't in the group described above (migrating a legacy app up to 3.1 while wanting to continue using legacy func) then the story is very simple - focus on the core feature set and code to that. The spec is shorter, the coding choices are fewer. The driver will never spend any cycles wandering between fixed func and shader mode because your app won't be asking for that type of behavior any more.
But the driver doesn't know that you won't use an old feature, so it still needs to make allowances in case you do, and still needs to load all the extra code into memory.
This could easily be solved by adding one bit to the attribute value for WGL_CONTEXT_FLAGS in <*attribList>:

We already have:
WGL_CONTEXT_DEBUG_BIT_ARB 0x0001
WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB 0x0002

To which we can add:
WGL_CONTEXT_PERFORMANCE 0x0004
which would exclude all of the removed features and would not advertise any legacy extension.

WGL_CONTEXT_PERFORMANCE would load only the core streamlined driver, with no legacy functions or software emulation, while the absence of WGL_CONTEXT_PERFORMANCE would load a separate DLL that implements all of the legacy functions and supports 2.1 contexts on top of the core driver.
WGL_CONTEXT_DEBUG_BIT_ARB would load an instrumentation layer on top of the driver.
WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB and WGL_CONTEXT_PERFORMANCE together would exclude both the removed and the deprecated functions.
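
To illustrate, creating such a context might look roughly like this (purely a sketch: WGL_CONTEXT_PERFORMANCE is the hypothetical bit proposed above, not an actual token; the other attributes come from WGL_ARB_create_context):

/* Hypothetical flag proposed in this post; 0x0004 is not a real WGL token. */
#define WGL_CONTEXT_PERFORMANCE 0x0004

int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 1,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB | WGL_CONTEXT_PERFORMANCE,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hDC, NULL, attribs);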

Stephen A
03-27-2009, 01:04 AM
What happens right now if someone enables WGL/GLX_CONTEXT_FORWARD_COMPATIBLE and tries to access a function from ARB_compatibility? Is an error generated?

If not, then ARB_compatibility is simply a copout from supporting forward-compatible contexts (and isn't that what Nvidia intended all along?)

Lord crc
03-27-2009, 01:13 AM
If you read the presentation from when 3.0 was released, you'll see that the intended "retirement plan" for features was "Core -> Deprecated -> Extension -> Gone/Vendor-specific extension".

So they're just following their plan.

RenderBuffer
03-27-2009, 01:27 AM
But the driver doesn't know that you won't use an old feature, so it still needs to make allowances in case you do, and still needs to load all the extra code into memory.


Couldn't the driver reconfigure itself / load the code for legacy APIs only after some call to them was made?

Regardless, I like being explicit in my code, and I would prefer what you've suggested. From an aesthetic point of view it seems cleaner, and perception is important.

Auto
03-27-2009, 02:48 AM
Ok some questions / thoughts re the migration of code to 3.1.

My engine is x-platform CgFX based with a GL back-end, generally it's pretty up to date and relatively clean, though there are of course <= 3.0 parts in there that I'd like to clean out now.

Ideally I'd like the compiler to tell me what's outdated; however, if ARB_compatibility is enabled by default and, afaik, not possible to disable, does that mean going through my code by hand, cross-referencing against the new spec, or is there some other method that will give me compile-time errors?

ZbuffeR
03-27-2009, 02:54 AM
The spec suggests using the <GL3/gl3.h> header instead of the classic gl.h header. But nobody seems to know when it will be available :(

Heiko
03-27-2009, 03:43 AM
The spec suggests using the <GL3/gl3.h> header instead of the classic gl.h header. But nobody seems to know when it will be available :(

This would cause the driver not to load the ARB_compatibility extension?

Perhaps it will be available when the vendors release their first true GL3.1 drivers? I can't recall them advising to use <GL3/gl3.h> when OpenGL 3.0 was released. Perhaps this include dir is new in OpenGL 3.1?

Jan
03-27-2009, 03:48 AM
The copy-buffer extension looks nice and very useful. However, I have a few questions. The main reason for this is obviously loading data in parallel directly into a GL buffer. But how would I do such a thing? If I have 2 threads, would thread 1 (rendering) need to create/map the buffer, get the pointer to it, and pass it to thread 2, which then fills it and returns "ok", after which thread 1 issues the copy? Or, less efficiently, would threads 1 and 2 need to do context switches? Which one would work? Can thread 2 write to a buffer that was mapped by thread 1 at all?

And another thing. Did I get this correctly, that GL provides us with an implementation-defined "write" (temp) buffer? So I can actually only prepare ONE buffer at a time, instead of several? It is a bit unclear to me. Why not just prepare your data in some user-defined buffer and then tell GL "copy this to that"? Why the extra READ/WRITE buffer semantics?

Jan.

Chris Lux
03-27-2009, 04:35 AM
If you read the presentation from when 3.0 was released, you'll see that the intended "retirement plan" for features was "Core -> Deprecated -> Extension -> Gone/Vendor-specific extension".

So they're just following their plan.
but an extension means you opt in to the features. with ARB_compatibility the features are just there...

skynet
03-27-2009, 04:40 AM
Why not just prepare your data in some user-defined buffer and then tell GL "copy this to that"? Why the extra READ/WRITE buffer semantics?
Because GL has ugly bind points instead of an "object oriented" API. One buffer object can be bound to several bind points at the same time. So instead of specifying that you have to re-use existing bind points (which already have their own semantics), they decided to introduce two new bind points with a clearly defined semantic. I think this is OK, because it allows you to copy data between two buffers without disturbing their current bindings.
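
A minimal sketch of such a copy (assuming srcBuf and dstBuf are existing buffer objects that both already have enough storage allocated):

/* Attach the two buffers to the dedicated copy bind points... */
glBindBuffer(GL_COPY_READ_BUFFER,  srcBuf);
glBindBuffer(GL_COPY_WRITE_BUFFER, dstBuf);

/* ...and copy sizeInBytes bytes from offset 0 of the source
   to offset 0 of the destination. */
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, 0, 0, sizeInBytes);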

Rob Barris
03-27-2009, 09:17 AM
The copy-buffer extension looks nice and very useful. However, I have a few questions. The main reason for this is obviously loading data in parallel directly into a GL buffer.


The central value is in being able to transfer chunks of data between buffers on the GPU efficiently, not necessarily using it as a tool for streamlining upload. Though you may have hit on an interesting idea here that should be tried.

ScottManDeath
03-27-2009, 09:53 AM
In D3D10, a common pattern is to create a resource with CPU write access, copy data from the CPU (either via mapping or resourcesubdata) and then copy it to a GPU-access-only resource.

So with the extension, one could have two textures (e.g. with the same size and the same number of bytes per texel), each backed by a pixel buffer object and then we could copy the PBOs into each other? So it would be possible to access a 32-bit float single channel texture as a 4 channel 8 bit fixed point texture to gain access to the individual bytes?

Chris Lux
03-28-2009, 04:46 AM
is there an ETA for the new glext.h and GL3/gl.h header files?

_SMK_
03-28-2009, 09:05 AM
Why exactly are geometry shaders expected to be deprecated soon?

ZbuffeR
03-28-2009, 09:15 AM
OpenCL ?

Ilian Dinev
03-28-2009, 10:28 AM
is there an ETA for the new glext.h and GL3/gl.h header files?

Uhm, glext.h version 48 is up there for grabs in the repository. It's just not a clean slate.

Eosie
03-28-2009, 05:00 PM
I'd like to see direct state access in the next version, but the question is whether it makes sense performance-wise. I was told that glBind* calls do a hash table lookup, which sort of supports the idea of bind-once-use-many-times.

Concerning this ARB_compatibility extension, I don't think removing it will have a great impact on the stability of drivers, since all the important IHVs have implemented it anyway. What OpenGL really needs is extensive unit testing and official certification based on it by an independent authority. Every time a new bug is found, a new test should be added to make sure the bug will never appear again in any implementation. Using OpenGL games and applications for driver testing is obviously not enough.

RickA
03-29-2009, 04:34 AM
I think there's a small error in the glspec31.20090324.pdf document regarding Uniform Buffer Objects. On page 50, near the bottom, it says:


Uniform buffer objects provide the storage for named uniform blocks, so the values of active uniforms in named uniform blocks may be changed by modifying the contents of the buffer object using commands such as BufferData, BufferSubData, MapBuffer, and UnmapBuffer. Uniforms in a named uniform block are not assigned a location and may _be_ modified using the Uniform* commands. The offsets and strides of all active uniforms belonging to named uniform blocks of a program object are invalidated and new ones assigned after each successful re-link.

I imagine this is supposed to be

Uniform buffer objects provide the storage for named uniform blocks, so the values of active uniforms in named uniform blocks may be changed by modifying the contents of the buffer object using commands such as BufferData, BufferSubData, MapBuffer, and UnmapBuffer. Uniforms in a named uniform block are not assigned a location and may _NOT_ be modified using the Uniform* commands. The offsets and strides of all active uniforms belonging to named uniform blocks of a program object are invalidated and new ones assigned after each successful re-link.

Jan
03-29-2009, 05:32 AM
I'd like to see direct state access in the next version, but the question is whether it makes sense performance-wise. I was told that glBind* calls do a hash table lookup, which sort of supports the idea of bind-once-use-many-times.


Direct state access should not need to do hash lookups anymore, because GL3 introduced the requirement that objects be generated with the glGen* calls. It is an error to create a handle by simply using it. Therefore hash tables are not necessary anymore.
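
(For instance, a minimal sketch of the difference, assuming a strict 3.1 context without ARB_compatibility:)

/* A name must come from glGen* before its first bind... */
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);   /* fine */

/* ...whereas binding an arbitrary unused name, which legacy GL allowed,
   now generates an INVALID_OPERATION error. */
glBindBuffer(GL_ARRAY_BUFFER, 42);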


In practice that doesn't mean it isn't done anymore. It is hilarious how quickly nVidia can claim to support GL3+ simply by adding a "compatibility" extension to their list. It should be called NV_we_dont_need_to_change_anything.

Jan.

Chris Lux
03-29-2009, 06:48 AM
In practice that doesn't mean it isn't done anymore. It is hilarious how quickly nVidia can claim to support GL3+ simply by adding a "compatibility" extension to their list. It should be called NV_we_dont_need_to_change_anything.
their forward-compatible context seems to expose the extension but is missing the functionality, so i think/hope that they are taking advantage of the cleaner interface.

Rob Barris
03-29-2009, 09:52 AM
> I think there's a small error in the glspec31.20090324.pdf document regarding Uniform Buffer Objects. [...] I imagine this is supposed to be "... Uniforms in a named uniform block are not assigned a location and may _NOT_ be modified using the Uniform* commands."

Your assessment of the typo is correct. Uniforms in a UBO may not be accessed with the old API.
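
(In other words, to change a value that lives in a named block you go through the buffer object. A rough sketch, assuming materialUBO is the buffer backing the block and colorOffset is the byte offset previously queried with glGetActiveUniformsiv(..., GL_UNIFORM_OFFSET, ...):)

/* Update a single vec4 inside the uniform buffer at its queried offset. */
GLfloat color[4] = { 1.0f, 0.5f, 0.25f, 1.0f };
glBindBuffer(GL_UNIFORM_BUFFER, materialUBO);
glBufferSubData(GL_UNIFORM_BUFFER, colorOffset, sizeof(color), color);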

Bruce Merry
03-29-2009, 01:16 PM
By the way, are those gl3.h headers available somewhere? And yes, why <GL3/gl3.h> and not <GL/gl3.h>, like <GL/glaux.h> and <GL/glu.h>?

The name for the header file is largely an arbitrary choice. <GL3/gl3.h> matches OpenGL ES conventions (the header file for OpenGL ES 2.0 is <GLES2/gl2.h>).

ScottManDeath
03-29-2009, 02:29 PM
Your assessment of the typo is correct. Uniforms in a UBO may not be accessed with the old API.

Is there a specific reason for this? In D3D10, it is possible to use both buffer access api and the effect framework to update uniforms which are in constant buffer blocks in the shader.

Auto
03-30-2009, 02:05 AM
The spec suggests using the <GL3/gl3.h> header instead of the classic gl.h header. But nobody seems to know when it will be available :(

OK, that sounds a bit more like it, though I'm still a bit unclear: if, for example, I set up my code to include the GL stuff from GL3/gl3.h but then include glext.h for some other extensions, wouldn't that mean I'd implicitly be pulling in the ARB_compatibility declarations, and therefore the compiler would just accept all my old code?

Is that right, or does anyone know whether there's going to be something within gl3.h that we can #define, like a GL_DISABLE_COMPATIBILITY?

Thanks

Chris Lux
03-30-2009, 03:38 AM
a header alone will not do it. for example, in GL3.1 you cannot call glGetString with GL_EXTENSIONS, but the GL_EXTENSIONS token is still in the gl3.h header because you can use it with another function (glGetStringi). so including the gl3.h header does not ensure that your code is GL3.1 compliant.

ZbuffeR
03-30-2009, 04:24 AM
The spec refers to both gl3.h and gl3ext.h

Auto
03-30-2009, 04:49 AM
The spec refers to both gl3.h and gl3ext.h

Ah great - thanks

glfreak
03-30-2009, 10:48 AM
OpenGL has doomed itself, to be honest. With its current direction it will be neither the preferred CAD API nor a good competitor to Direct3D 10/11, or even 9, which are already established in the game industry and in video card standards.

OpenGL should at least stick to the playground where it rules: the CAD application arena!

Now GL is put face to face with D3D, and either game creators start adopting it soon, or CAD companies will start porting their stuff to D3D.

Ido_Ilan
03-30-2009, 11:51 AM
The interest in OpenGL is declining, the community has lost faith in the Khronos group regarding OpenGL. Just check the size of this thread vs the size of the OpenGL 3.0 release.

Just to put things in perspective, check the following link:
http://web.archive.org/web/19970707113513/http://www.opengl.org/
It says "Fast Gaming Graphics"!!! I just hope Khronos understands this: gaming is the driving force.

My two cents.
Ido

glfreak
03-30-2009, 02:06 PM
Well, no doubt OpenGL outperforms Direct3D in some areas. We just don't want a bare-minimum API with no good support. At least 2.1 has all these nice CAD features that we all loved. And don't tell me the way hardware works has changed and these nice features no longer exist in the new hardware arena. If that's the case, then it's another API that imposed and forced these changes on the HW. And if that's true, then there's no hope.

Stephen A
03-30-2009, 03:11 PM
GlFreak, what are you talking about? Of course the hardware has changed. The API *needs* to change to reflect that, otherwise it's useless.

Do you remember Quake 2? Its OpenGL renderer pushed single triangles in immediate mode. This is simply no longer possible.

OpenGL is not playing solely for the CAD market, either (where, interestingly, D3D is making good inroads). Its main strengths are a) cross-platform support and b) extensions for large-scale rendering: VR, CAVE systems, scientific visualizations, that kind of thing. And, frankly, those are the only advantages it has over D3D. Do you seriously think anyone would use OpenGL if D3D could do quad-buffered stereo (and swap groups) and was available on Unix?

Maybe I'm overreacting, but I think the trend is evident on these boards and elsewhere. Drivers are the weakest spot: the API is so bloated and the interactions between extensions so complex that it is next to impossible to write a solid driver (it's 2009 and we *still* don't have stable FBOs - just try to blit a 24/32-bit depth buffer). What sane game developer would use an API with problems like that? (1. Blizzard and 2. id. Hmm...)

OpenGL 3.0 was the first step on the correct path, 3.1 added much-needed functionality, there are promising features available as extensions (DSA), and OpenCL interop is going to be great. We are still missing (the oft-requested) shader blobs, a reliable way to issue commands from secondary threads (e.g. compile a shader without blocking the main thread) and a way to optimize state changes (something like lightweight display lists). And we are sorely lacking in driver stability.

Ok, end of rant. There's progress now and I would dearly love to see OpenGL make a comeback. :)

glfreak
03-30-2009, 10:29 PM
I've said that before and I will keep saying it in hope of a change, though currently all my work is being ported to D3D for the stability and access to contemporary features.

OpenGL should be more than just a specification. 80% of its implementation should be taken care of by the group responsible for the spec: implement the deprecated features on top of the accelerated core, provide a full software fallback, and implement the core on top of minimal IHV drivers that expose the functionality the core requires.

No IHV is willing to spend its life implementing a full, robust API when it could instead just expose its HW through drivers that an API operates on.

This is the way D3D works.

Even if you bring the spec to version 10, the problem will remain in the drivers, and in stability on both sides: performance and specification.

Ido_Ilan
03-30-2009, 10:38 PM
I love OpenGL. I started working with DirectX (5-7) and quickly moved to OpenGL because I love open standards, loved the ARB discussions that were posted on the site, and love the API.
I know "love" is a strong word for a tool that you use for work, but I truly love it. I just want it to work.

OpenGL 3.1 is better than 2.1/3.0, but still: if we move to a "new" API, I want the old one dead and not influencing me.
Some deprecated stuff in OpenGL should not have been removed, thick lines and display lists to name a few, but we must move to a cleaner API.

A utility kit like DXUtil that supports math, basic shaders, meshes etc. is also badly needed, especially for beginners and for those who want to migrate to OpenGL 3.1 with ease (maybe nvMath could be extended and standardized?).

I truly hope OpenGL can have a comeback.
Ido

innuendo
03-31-2009, 12:41 AM
I love OpenGL. I started working with DirectX (5-7) and quickly moved to OpenGL because I love open standards, loved the ARB discussions that were posted on the site, and love the API.

I know "love" is a strong word for a tool that you use for work, but I truly love it. I just want it to work.



Thank you, I very much agree with you. DX5-7 was terrible: no order, no direction, only a crazy zigzag :)


For those who praise DX10/11: do you remember the year 2000? There were 5-6 vendors on the PC market! Bugs, bugs, bugs...

Microsoft made such a stable product (DX9, DX10) for one reason: there are ONLY TWO VENDORS on the PC market. If there were 3-5 vendors, I think it would not be so easy to reach agreement :)

Auto
03-31-2009, 02:11 AM
IMO it seems pretty fair to say that most of the issues people face are indicative of the overriding issue in commercial software development per se.

Meaning that, more often than not, commercial software is released (often costing > £1000 per license in the case of many 3D animation packages) but is generally far from stable. GL is no different.

Just about every artist and animator that I know all have their own mystic methods to circumvent crashes and bugs within the software they use, as do many graphics programmers working with drivers on various cards. How the software industry carries on like this I have no idea, however that's another issue entirely...

MS are also well known for releasing 'Flagship Operating Systems' with similar issues, however with DX there is generally very strong support from all IHVs, and their API has without doubt become very competitive over the last few years.

DX support is a pretty unique situation in graphics software, and is a bit of a shining light in that respect. That's perhaps one reason why everyone wants these qualities in GL. However what it comes down to is the bottom line - cash. Until there is a really good reason for all IHVs to pump more money into their GL development, to the point where they have parity with DX in terms of features and stability, it will likely conform to the same cycle of dropping behind, then gaining a slight lead in features but loss of parity, dropping behind, etc.

MS have an extremely strong position in this respect, and IHVs have to conform to the new spec (for the most part) when it's released. There simply doesn't seem to be the same amount of urgency with OpenGL. I am generally impressed with Nvidia's support; however, the same can't be said for the others.

For me as I write cross platform, there simply is no choice at the moment. If the future (re)turns to hardware accelerated software rasterizers as Intel are pushing all this may change, however for the moment, and it looks like the medium term future at least, there will likely be niggles with GL for many people working on the bleeding edge.

Having said all that I am still happy with the new release, whether it will inspire more games developers to get on board, which would help instigate IHVs to compete with said features and stability remains to be seen. However, there is more hope now.

delighter
03-31-2009, 02:12 AM
Hello, this is my first post here in these forums.

I think the main reason for OpenGL's decline is the main vendors' decision to apply market control through the API. How can you expect people to trust Khronos when the main hardware vendors behave like this towards their customers?

There are vast differences between pro and consumer boards in terms of performance and behaviour. How can you expect developers to trust an API that behaves differently based on the amount of money you _pay_, and _not_ based on the _chips_ you use?

Please, be honest and play nice (this goes to the main hardware vendors). Don't release two OpenGL drivers anymore. If you want to sell workstation products, do it by providing better support and hardware, not by destroying an API. This will allow faster and more valid feedback to the driver teams and restore people's trust in OpenGL and Khronos.

Small companies that don't have the resources to maintain QA departments for testing drivers will not use OpenGL anymore. Even some major CAD developers are moving away from OpenGL with exactly this argument. OpenGL now exists only for Linux/Mac. Especially in the context of the global economic crisis, where companies are firing people and cutting expenses, cross-platform development is a luxury now, not a choice. If you compare the performance of libraries that use both OpenGL and D3D on the most widely used boards (and these boards are consumer boards, _not_ workstation ones), you will see that Direct3D performs better than OpenGL (see irrlicht and hoops3d).

I have used OpenGL for 5 years now, and I'm fed up with the state OpenGL has got itself into. I don't want to cope with bugs anymore, and I want an API that is predictable in terms of performance on any supported hardware. I'm learning D3D10 now. This is not whining nor trolling; this is just how the situation is, and it is not going to change if there is no honesty between all the involved parties.

Heiko
03-31-2009, 02:25 AM
The interest in OpenGL is declining, the community has lost faith in the Khronos group regarding OpenGL. Just check the size of this thread vs the size of the OpenGL 3.0 release.


The main reason this thread is much smaller is that people who are visiting this forum are (for the largest part) not as outraged by the new specification as they were when OpenGL 3.0 arrived. People tend to complain massively on the internet when they don't like something (like getting an API that didn't look like anything they were promised after a year of silence). But when something good happens, or something on which people don't have a strong opinion, they are mostly silent.

Like now: OpenGL 3.1 has arrived, for most of us a surprise because we weren't expecting it this soon. So thats the first positive point. Second, the changes were not very big compared with OpenGL 3.0, so little to complain about. Some nice new features were put in the API, so another positive point. By putting all deprecated stuff in an extension, Khronos has shown they are moving away from the old stuff. Most people will like that, only some of them think putting it in an extension is not the way to go and some others don't like removing the old features. But these people are a minority I think. So, only a few people don't like this move, others are positive about it and some others don't care that much. Again, nothing to rant on on a forum.

Basically: OpenGL 3.0 had a lot of bad in it according to many people and it wasn't what they expected. So lots of things to rant about. OpenGL 3.1 has some nice new features, it is in line with what most people expected, little to rant on, a much smaller thread on the forum.

That is how the internet works...
Nothing to do with less interest in OpenGL.

My personal opinion is that OpenGL is moving in the right direction. The deprecation model clears the way for a clean API. A clean API clears the way for a rewritten API (the one that was promised). But even if that rewritten API never comes, I am happy with the clean API we have now (there is still room for some nice features, though; all have been mentioned already in this topic).

innuendo
03-31-2009, 03:56 AM
I want an API that is predictable in terms of performance on any supported hardware. I'm learning D3D10 now.

There is truth in your words :)
But explain to me: why doesn't NVIDIA support DX10.1?
Do you believe there will not be a DX11.1, DX11.2, etc.?
Do you believe Microsoft won't require a new OS for new DX versions?
Do you believe there will be no problems with notebooks and D3D10/D3D11?

I don't.

delighter
03-31-2009, 05:31 AM
I don't know why NVIDIA does not support d3d10.1 and really I don't care. I want a modern api that works the same way on the same chips _now_. And OpenGL does not fit my needs any more. That's all.

I really don't like vista (performance-wise), but it has working d3d10 since 2007, and i'm typing now from my laptop with an ati 2600 card, where my d3d10 shaders, my heavy meshes and everything d3d that i throw at it just works (and actually not only works, but it works predicable). This is also true for my desktop that has nvidia.

I don't have the money to buy a workstation GPU (I'd rather spend it on more RAM and a bigger CPU for rendering) and I don't want my clients to have to spend a fortune just to run my applications. That's all.
Let's hope Intel with its Larrabee won't behave the same way toward its customers, because it seems to me the two major vendors will not change their policies any time soon, unless someone else shakes up the GPU market. Until then, bye-bye OpenGL. It was good fun while it lasted.

innuendo
03-31-2009, 06:42 AM
I want a modern api that works the same way on the same chips _now_. And OpenGL does not fit my needs any more. That's all.



Fine. Just one question: why did you use OpenGL before? What was the reason?

glfreak
03-31-2009, 07:08 AM
Dead end OpenGL :) Now we can focus on one API.

Groovounet
03-31-2009, 07:31 AM
Sorry to say it, but this sounds like empty complaining: no ideas brought here, not enough real critique.

Yes, OpenGL 3.1 is not perfect, but the Khronos Group is working in the right direction. Evolution takes time, but what's going on is good; Khronos is aware of the issues and is working on them.

innuendo
03-31-2009, 09:32 AM
Dead end OpenGL :) Now we can focus on one API.

Fine. Just one question: how long will you wait for D3D10 on *nix? A month, a year, a decade... or forever? :)
Do you understand my point?

glfreak
03-31-2009, 10:38 AM
And how many commercial games or CAD packages support Linux?

If portability is the argument, then I would say it's not a strong one unless the other OSes have at least 50% of the commercial market share.

For god's sake, there's only one platform: MS Windows, whatever the version.

Jean-Francois Roy
03-31-2009, 11:08 AM
For god's sake, there's only one platform: MS Windows, whatever the version.

There's this company called Apple that makes computers and phones that don't use one of Microsoft's operating systems. Maybe you've heard of them? There's also a free operating system going around that's pretty popular for servers and compute clusters.

Sarcasm aside, as long as Microsoft doesn't license DirectX to other software vendors or release DirectX as an open specification, there will be a place for OpenGL. So what we should really focus on here is improving OpenGL and OpenGL implementations as much as possible, not posting more "Switching to DX!" or "There's only Windows in the world!" posts.

With that in mind, I'm pretty happy with the OpenGL 3.1 specification. It has some really good additions (UBOs and instancing for example) and is significantly smaller than previous versions, which makes it easier to implement correctly and easier to understand and use.

Just my opinion, of course...

innuendo
03-31-2009, 11:17 AM
And how many commercial games or CAD packages support Linux?



Look at the past... was OpenGL the best API for games 5-10 years ago?
I think not. Don't you know? DX is the best choice for games, just as OpenGL is for professional graphics :)

ector
03-31-2009, 11:58 AM
Stop whining and start coding. I'm also disappointed that OpenGL isn't the superb DX10-inspired design that was first proposed, but it's not absolutely terrible and is still a usable API, despite its MANY warts and pitfalls.

I've had to use Cg to compile ARB shaders instead of GLSL due to some extremely moronic design decisions on the part of whoever designed GLSL, but it nevertheless works, even if it's not ideal. This is what we have to live with, and it's moving in the right direction. GL 3.1 almost makes GLSL possible to use for practical work; maybe it will be usable in 3.2?

glfreak
03-31-2009, 12:12 PM
I'm not crying that OpenGL is bad and we should switch... I proposed a reasonable solution to make GL a real standard and keep it moving in the right direction. A spec is not enough; anyone can come up with amazing ideas and write the ideal spec. What matters is how it's implemented and modularized.

Rob Barris
03-31-2009, 01:51 PM
Ector, as always, if you have specific use cases or functionality that you can describe here that would make your app easier to develop (or perform better, etc.), please post them in the perpetual "talk about your applications" thread too.

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=246133#Post246133

ector
03-31-2009, 02:04 PM
I've already talked about my main issue - the requirement to explicitly link vertex and pixel shaders. It's just weird. Why? It's not necessary when using either ARB_*_program, Cg or HLSL. My app uses insane amounts of combinations.

I also share about 40 float4 constants between all pixel shaders, and over 200 between all vertex shaders (not all shaders use all of them, of course, but there's a lot of sharing going on). It's just too much overhead to keep uploading them once per shader. And since GLSL only supports uniform buffers on the newest hardware, using them isn't an option, while classic indexed constants work just fine.

You may ask, why this inefficient-looking design? Well, simply because my app is Dolphin, the Wii and Gamecube emulator, and I'm using shaders to emulate one of the most complex pieces of fixed function graphics hardware out there. It has MASSIVE configuration state, which is what we're using all these constants for.
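
To make the overhead concrete, here is a rough sketch of the re-upload pattern in C, under a couple of assumptions: the uniform array name and constant count are made up, and a loader such as GLEW is assumed.

#include <GL/glew.h>

#define NUM_SHARED_VS_CONSTANTS 200

static GLfloat shared_vs_constants[NUM_SHARED_VS_CONSTANTS][4];

/* Called on every switch to another linked program object. */
static void use_program(GLuint program)
{
    glUseProgram(program);

    /* With per-program uniform storage, the same shared block has to be
       re-sent for every program that references it. With classic indexed
       constants (ARB program env/local parameters) this upload happens once. */
    GLint loc = glGetUniformLocation(program, "cSharedVSConstants");
    if (loc >= 0)
        glUniform4fv(loc, NUM_SHARED_VS_CONSTANTS, &shared_vs_constants[0][0]);
}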

Rob Barris
03-31-2009, 02:17 PM
OK that is one that we are well familiar with and is near the top of the priority queue. It's an issue that I know Blizzard and TransGaming have both run into as well.

Ilian Dinev
03-31-2009, 03:34 PM
The need to link shaders into a program is simply because of the varyings. GLSL doesn't let you specify which slot each varying should take.

Try glProgramLocalParameters4fvEXT. Though it requires you to abandon GLSL: use NVIDIA asm on GeForce, and for ATI/Intel either
1) use the old ARB asm, or
2) use GLSL but have your materials upload uniforms manually in a non-brute-force fashion, or
3) use GLSL and uniform buffers/texture buffers/VTF if available.
IMHO, if you want speed and want to support SM2/SM3/SM4, that adds up to about 5 code paths.
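
For reference, a minimal sketch of the batched upload on the asm path, assuming EXT_gpu_program_parameters and a loader such as GLEW; the program id and constant data are placeholders.

#include <GL/glew.h>

void upload_vs_constants(GLuint asm_program, const GLfloat *data, GLsizei num_vec4s)
{
    /* One call updates a contiguous range of program local parameters,
       instead of one glProgramLocalParameter4fvARB call per vec4. */
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, asm_program);
    glProgramLocalParameters4fvEXT(GL_VERTEX_PROGRAM_ARB, 0, num_vec4s, data);
}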

innuendo
03-31-2009, 11:51 PM
Try glProgramLocalParameters4fvEXT.



Try void Uniform4fvARB(int location, sizei count, const float *value)

or void UniformMatrix4fvARB(int location, sizei count, boolean transpose, const float *value)

with

uniform vec4 StreamVec4[ SIZE1 ];
uniform mat4 StreamMatrix[ SIZE2 ];

instead of ARB asm (which is obsolete).

Does anyone here understand Russian? :)

Ilian Dinev
04-01-2009, 02:14 AM
ARB asm is obsolete, but fast and supported correctly on all gpus.
NV asm is more modern than GLSL could hope to be; and loads/runs fast and without nV's shader-recompilation hiccups.
Afaik GLSL and its glUniform*() are fast on ATi, without hiccups - provided that they compile under a given card/driver combo.

glProgramLocalParameters4fvEXT's speed on E8500 @3.8GHz, SysRAM DDR3 @1.6GHz (timing 7-7-7-20), GF8600GT:
144 bytes: 108 cpu cycles
288 bytes: 224 cycles
1728 bytes: 607 cycles

So yes, gpu asm is as obsolete as C++ is to Java...

innuendo
04-01-2009, 02:54 AM
ARB asm is obsolete, but fast and supported correctly on all gpus.


Fine. If I write a shader for DX10, will it run on DX9 hardware?
Why are GL and DX held to such unequal standards?

Dark Photon
04-01-2009, 05:10 AM
The interest in OpenGL is declining, the community has lost faith in the Khronos group regarding OpenGL.
s/the community/you/. Please speak only for yourself and stop spreading FUD.

glfreak
04-01-2009, 07:24 AM
Khronos could easily have added more core features, such as geometry shaders, direct state access, and the long-promised templated objects :)

However, it's crawling along, so Direct3D is guaranteed to stay years ahead.

JoshKlint
04-01-2009, 10:59 AM
I don't have much of an opinion because the only thing I see that affects me is the uniform buffer objects feature. I never liked bindable uniforms, and ATI's first attempt to implement it didn't work for me, so I guess this is a good thing.

I'm still using OpenGL 2.1, and I am mostly satisfied since ATI's drivers have gotten a lot better. I think an instance shader would be useful, that could discard entire instances of an object before they are drawn, for trees and grass.

Really, for me the whole issue has not been Khronos but the state of ATI's drivers. And they have improved to the point where they are now usable.

glfreak
04-01-2009, 12:17 PM
Goddammit, why do I have 2 stars now instead of 3? For telling the truth? Or because someone did not like my opinions and arguments? I thought these forums were open to ideas and to anything "open", as long as we stay polite and within the boundaries of interpersonal communication. Whatever...

Jan
04-01-2009, 03:48 PM
How would such an instance shader work? Just curious how the GPU should be able to decide to render an instance or not (occlusion query?)

innuendo
04-02-2009, 12:50 AM
How would such an instance shader work? Just curious how the GPU should be able to decide to render an instance or not (occlusion query?)

You should read about NV_conditional_render...

Jan
04-02-2009, 01:49 AM
I know conditional render and use it myself. The thing is, if i render 100 instances, i can do an occlusion query to reject ALL of them. I cannot, however, somehow reject only single instances, because when i render them instanced, there is no way to do separate occlusion queries for each instance. It's always an all-or-nothing decision.

I agree, that such a feature would be interesting and useful, but having dealt with that problem myself, i wonder how EXACTLY such a feature should work.

Jan.

innuendo
04-02-2009, 02:35 AM
I know conditional render and use it myself. The thing is, if i render 100 instances, i can do an occlusion query to reject ALL of them.

Does DX10 support per-instance occlusion queries with instancing?

EvilOne
04-02-2009, 04:11 AM
3.1 is a nice release... and the spec is cleaner than I expected.

Just one thing: clean headers, please. I've been waiting for them since the 3.0 forward-compatible context.

Until then, staying with D3D9.

Jan
04-02-2009, 04:49 AM
I know conditional render and use it myself. The thing is, if i render 100 instances, i can do an occlusion query to reject ALL of them.

Does DX10 support per-instance occlusion queries with instancing?



No, OQ and conditional render support is identical on OpenGL and D3D.

Stephen A
04-02-2009, 06:04 AM
3.1 is a nice release... and the spec is cleaner than I expected.

Just one thing: clean headers, please. I've been waiting for them since the 3.0 forward-compatible context.

Until then, staying with D3D9.


Can you define clean? Because I *might* be able to do that.

bertgp
04-02-2009, 06:49 AM
I know conditional render and use it myself. The thing is, if i render 100 instances, i can do an occlusion query to reject ALL of them. I cannot, however, somehow reject only single instances, because when i render them instanced, there is no way to do separate occlusion queries for each instance. It's always an all-or-nothing decision.

I agree, that such a feature would be interesting and useful, but having dealt with that problem myself, i wonder how EXACTLY such a feature should work.

Conditional rendering relies on occlusion queries and could work pretty well here. This could be implemented with a per-instance occlusion query on each instance's bounding sphere. It could maybe be coupled with testing the bounding sphere against the frustum planes before issuing the actual occlusion query, to save the rasterization step. I think only complex meshes would benefit from this, though, since it could be faster to directly instantiate simple meshes and let them go through normal frustum clipping and early-Z tests.

After writing all this however, I wonder: why not do coarse culling on the CPU and send the visible instance positions directly to the shader?
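
For what it's worth, the per-instance-query idea can be sketched today with GL 3.0 occlusion queries plus conditional render, at the cost of giving up the single instanced draw call. draw_bounding_box() and draw_instance() are hypothetical helpers, and the query objects are assumed to have been created with glGenQueries beforehand.

#include <GL/glew.h>

extern void draw_bounding_box(int i);
extern void draw_instance(int i);

void draw_with_per_instance_culling(const GLuint *queries, int num_instances)
{
    /* Pass 1: rasterize each instance's bounding proxy into its own query,
       without touching the color or depth buffers. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    for (int i = 0; i < num_instances; ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        draw_bounding_box(i);
        glEndQuery(GL_SAMPLES_PASSED);
    }
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    /* Pass 2: draw each instance only if its proxy produced any samples. */
    for (int i = 0; i < num_instances; ++i) {
        glBeginConditionalRender(queries[i], GL_QUERY_NO_WAIT);
        draw_instance(i);
        glEndConditionalRender();
    }
}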

Ilian Dinev
04-02-2009, 07:02 AM
I agree, that such a feature would be interesting and useful, but having dealt with that problem myself, i wonder how EXACTLY such a feature should work.


Hmm. Scene graphs and their natively sequential traversal can be troubling. But if the CPU makes traversal linear and parallelisable (quite possible), then, considering what GPUs can and should do, here's an idea of how an instance shader could realistically (IMHO) look on a next-gen GPU:



// this instance-shader is called once per instance
// all of these uniforms below are user-specified, not expected by GL
// added tokens: gl_OcclusionBBMin and gl_OcclusionBBMax

uniform mat4 uniMVP; // matrix projection-view, or projection-view-world (in case of portals, clustering)
uniform vec4 uniFrustumPlanes[6];
uniform float uniBoundingSphereRadius;


bindable uniform vec3 buniInstancePosition[]; // element at index gl_InstanceID is used here
bindable uniform mat3 buniInstanceRotation[]; // element at index gl_InstanceID is used here


uniform vec3 uniBoundingVolumeVerts[3*12]; // a convex box in this case. Could be something more obscure. Could be dependent on gl_InstanceID.




void main() {
    vec4 pos = uniMVP * vec4(buniInstancePosition[gl_InstanceID], 1.0);

    if (m_ClipSphereByFrustumPlanes(pos)) {
        clip();
        return;
    }

    mat3 rot = buniInstanceRotation[gl_InstanceID];
    mat4 nodeTransform = uniMVP * m_Make4x4FromPosAndRot(pos, rot);

    vec4 minXYZW = vec4( 1.e+5,  1.e+5,  1.e+5,  1.e+5);
    vec4 maxXYZW = vec4(-1.e+5, -1.e+5, -1.e+5, -1.e+5);

    //------[ secondary rough occlusion test via a lowest-poly mesh ]--------[
    // a box consisting of 12 triangles is used here, and 12 could be the
    // imposed maximum count of triangles to test occlusion with.
    // Uses ZCULL and optionally EarlyZ
    // (ZCULL being the roughest, fastest z-culling test,
    // EarlyZ being a fast but less rough z-culling test)

    for (int tri = 0; tri < 12; tri++) {
        for (int v = 0; v < 3; v++) {
            vec4 vpos = nodeTransform * vec4(uniBoundingVolumeVerts[tri*3 + v], 1.0);
            gl_Position = vpos;
            minXYZW = min(minXYZW, vpos);
            maxXYZW = max(maxXYZW, vpos);
            EmitVertex();
        }
        EndPrimitive();
    }
    //----------------------------------------------------------------------/

    //----[ primary, roughest occlusion test via a screen-aligned quad ]---------[
    // uses only ZCULL. If it doesn't pass ZCULL, the secondary test is skipped.

    gl_OcclusionBBMin = minXYZW;
    gl_OcclusionBBMax = maxXYZW;
    //---------------------------------------------------------------------------/
}


bool m_ClipSphereByFrustumPlanes(in vec4 pos) {
    // here use uniFrustumPlanes and uniBoundingSphereRadius to do preliminary frustum culling
}

mat4 m_Make4x4FromPosAndRot(in vec4 pos, in mat3 rot) {
    // some maths
}

Notes:
The triangles from the secondary rough-occlusion test do not modify the z-buffer, color buffers or stencil buffer. Those triangles are not further transformed by the currently-bound vertex shader, and do not use the currently-bound fragment shader. The result of the shader is a single bool (stored internally in a bit, byte or int). The shader is executed before drawing the mesh instance, or preemptively executed for several mesh instances. The latter variant improves parallelism, but can give false positives (e.g. instance 4 occludes instance 7, but #7 is still regarded as visible because we batch-computed the visibility of instances 0..10).

Further improvement:
Addition of "int gl_IBO_FirstIndex=0", "gl_IBO_Length" and "int gl_VBO_FirstIndex=0", to specify what range of the VBO and IBO (index buffer) this mesh-instance should use.
This can be used to let the shader select a LOD version of a model, or use a different model altogether (but still with the same bound shaders, render-states and render-targets).

Further optional improvement:
Have the gpu write results from the instance-shader to a byte-buffer-object, created by the user. That buffer is initially reset to "true" for all instanceIDs, and is required to be at least NumInstances big. If an object is occluded (as decided roughly by the instance-shader and its querying of ZCULL via those triangles), then gl_InstanceVisible[gl_InstanceID]=false; . The user can then use glMapBufferRange to retrieve occlusion info.
This can be used as feedback on which instances were drawn, and to do cpu-side computation regarding the result.

Btw, http://www.delphi3d.net/forums/viewtopic.php?t=183 . 13 million unique triangles in a very complex mesh, running at 200fps even on a GF8600GT. Sure, it's just one material, but awesome results nonetheless. Other than that, useful techniques are portals with hidden encapsulated volumes (i.e. approximations of big occluders like walls as quads) drawn before everything else, and using many octrees to group stuff, with the occlusion results deciding which instances you put in a rough bucket sort (PSX ordering-table style) by material.

EvilOne
04-02-2009, 09:07 AM
By clean I mean only the tokens and functions of 3.0 and higher. It's a rather annoying mess to do this by hand... Although I think doing it ourselves is a bad idea; surely there will be an official release, and on the Khronos timeframe we'll maybe get it in 2010.

A nice thing would be a clean enum.spec and gl.spec so everyone can reparse them into their preferred language.

Stephen A
04-02-2009, 09:26 AM
I agree that an official release would be much better, preferably in an XML format. Unfortunately, it's also a fact that we may have to wait a long time for such a release.

As it happens, I've spent quite a lot of time tidying up the 3.0 and 3.1 .spec files and I've been toying with the idea of a) writing a simple XML exporter for the existing specs and b) removing all deprecated functions from the result. I already have a (rather messy) .spec -> C# exporter and it would be relatively easy to modify it to generate C or even C++ headers (with overloaded functions).

If anyone wishes to help, just send a PM and we can work this out.

innuendo
04-02-2009, 11:40 AM
How long will we wait for gl3.h ? I have no patience :)

glfreak
04-02-2009, 02:15 PM
What about a Khronos OpenGL.dll, instead of the OpenGL32.dll that comes with Windows? It would support 3.1 and provide a software implementation too. This DLL could contain more of the plumbing, making things easier for driver implementers :)

M/\dm/\n
04-03-2009, 03:17 AM
What about a Khronos OpenGL.dll, instead of the OpenGL32.dll that comes with Windows? It would support 3.1 and provide a software implementation too. This DLL could contain more of the plumbing, making things easier for driver implementers :)
That would make redistribution harder.

Now you can just compile the program and ship it to users. If they introduce a separate opengl.dll, you will have to redistribute it, and there will be problems with linking and so on.

I really like that you don't have to download a "Khronos OpenGLX 11" redist and pack it with your app.

Those ARB and non-ARB function names are confusing for GetProcAddress, but other than that, I like the current approach.

glfreak
04-03-2009, 07:11 AM
That would make redistribution harder.


How? The OS could ship with this DLL preinstalled. And even if that were a problem, you already have to install the D3D runtime alongside the drivers :) and people are okay with that!

Chris Lux
04-03-2009, 12:18 PM
from section 4.2.1 from the OpenGL 3.1 spec:


If a fragment shader writes to gl_FragColor, DrawBuffers specifies a set of draw buffers into which the single fragment color defined by gl_FragColor is written. If a fragment shader writes to gl_FragData, or a user-defined varying out variable, DrawBuffers specifies a set of draw buffers into which each of the multiple output colors defined by these variables are separately written. If a fragment shader writes to none of gl_FragColor, gl_FragData, nor any user-defined varying out variables, the values of the fragment colors following shader execution are undefined, and may differ for each fragment color.


now with this i have a problem. gl_FragColor and gl_FragData are deprecated and removed from OpenGL 3.1 and GLSL 1.40. i understand how this works with a single render target (specify something like 'out vec4 output_color;' and write to it). BUT how do i alias my multiple render targets with my output variables? is it the order in which i define them? if so, then this is weak...

i think this issue needs some work. i do not understand why this binding was removed, gl_FragDepth etc. is still there and for me it makes no sense to remove the binding to the fixed function raster operations that are still there in the hardware without giving us a better way to define where the output goes.

so please could someone explain how this MRT business is supposed to work (or correct me if i am wrong)...

-chris

S.Seegel
04-03-2009, 12:47 PM
You can use the function

void glBindFragDataLocation( GLuint program, GLuint colorNumber, const char *name );

to define which fragment shader out variable is written into which color buffer.
I'm not quite sure about it, but I would expect that even in your one-out-var-only example you should call glBindFragDataLocation!
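
For illustration, a minimal sketch of how that looks for two render targets; the output variable names and the program handle are made up, and note the bindings only take effect at the next glLinkProgram.

#include <GL/glew.h>

void setup_mrt_outputs(GLuint program)
{
    /* Fragment shader declares: out vec4 out_albedo; out vec4 out_normal; */
    glBindFragDataLocation(program, 0, "out_albedo");   /* color attachment 0 */
    glBindFragDataLocation(program, 1, "out_normal");   /* color attachment 1 */
    glLinkProgram(program);
}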

Jan
04-03-2009, 01:36 PM
Yes, you _should_ call it, even in the one-output example, but somewhere the spec mentions that with only a single out variable it will be written to either the first or all color attachments (I'm not sure exactly which), so anyway, it would "work".

Jan.

Jan
04-05-2009, 02:58 AM
Since i never got any "official" feedback on this idea, i'd like to bring it up once again:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=251694

Is there a possibility that we might get such a feature in GLSL 1.5? It should not be too difficult to support.

Jan.

JoshKlint
04-05-2009, 01:00 PM
Regarding my instance shader suggestion:
When you are dealing with vegetation, why is there a need to send the data of each instance to the GPU at all? Why not just draw all you need and let a pre-pass instance shader discard instances based on parameters like vegetation density, etc.? The terrain heightmap could be uploaded as a texture and used to make grass appear only within certain height or slope constraints. You could add a density texture to paint grass onto a terrain with no overhead, and without ever storing all those 4x4 matrices in memory. You could just evaluate 100,000 trees using the parallel pipeline and do a frustum test for each one, discarding those instances that are not in the camera view.

It makes sense to let the parallelized hardware handle this because a large number of concurrent simple operations are what a GPU is best at.

The instance shader would get run for the number of instances you specify in glDrawElementsInstanced(). It might look something like this:

#define terrainwidth 1024

uniform sampler2D heightmap;
uniform sampler2D bumpmap;
uniform sampler2D densitymap;
uniform vec3 terrainscale;
uniform vec4 frustumplane[6];
uniform float maxslope;
uniform float aabbradius;

varying mat4 instancematrix;

void main( void ) {

    //Get the x and y grid positions and texture coordinate (integer division intended)
    float x = gl_InstanceID / terrainwidth;
    float y = gl_InstanceID - x * terrainwidth;

    vec2 terraincoord = vec2( x / float(terrainwidth), y / float(terrainwidth) );

    x = x - terrainwidth/2.0;
    y = y - terrainwidth/2.0;

    //Get terrain height
    float terrainheight = texture2D( heightmap, terraincoord ).x * terrainscale.y;

    //Create 4x4 matrix for instance varying, gets passed to vertex program
    instancematrix[3][0] = x;
    instancematrix[3][1] = terrainheight;
    instancematrix[3][2] = y;

    //Check density map
    if ( texture2D( densitymap, terraincoord ).x < 0.5 ) {
        discard;
    }

    //Check terrain slope
    float slope = texture2D( bumpmap, terraincoord ).y * 2.0 - 1.0;
    if ( slope > maxslope ) {
        discard;
    }

    //Culling test against the six frustum planes
    for ( int i = 0; i < 6; i++ ) {
        if ( PlanePointDistance( instancematrix[3].xyz, frustumplane[i] ) < -aabbradius ) {
            discard;
        }
    }

    //If the instance hasn't been discarded by this point then it will be drawn!

}

Brolingstanz
04-06-2009, 06:21 PM
IMO the distinction between making the geometry shader feature core or not would be more significant, if there were vendors that were avoiding implementation of the extension. i.e. if you as a developer wanted to use it but found out that a particular IHV had not implemented it, this could pose a real problem. But my understanding is that it is readily available on both AMD and NVIDIA implementations. So since the actual hard work of implementing it is complete, I would expect that extension to be around as long as the feature still exists in the hardware. Whether that interval is "forever" or "a couple of generations" I do not know.


Yes, and perhaps it's safe to say the idea of data amplification will be with us indefinitely, whereas the form it may take from one hw generation to the next is up for grabs, and relatively unimportant to some of us besides. If we get attached not to the form but to the overall function, there's less to lose when the form does finally change; needless to say it will, eventually. ♫

Brolingstanz
04-06-2009, 08:34 PM
Kudos to the ARB on the 3.1 spec!

Really digging the uniform blocks in particular, some really nice changes and additions there.

LaBasX2
04-08-2009, 02:07 PM
Hmm, I really like OpenGL but I think we are in a quite bad situation now. For me the worst thing is that OpenGL 3 supports exclusively D3D10 level hardware. It is nice to see that there are some improvements to the API coming up now but we can't really profit from them if we want to continue supporting "older" (mainstream) hardware. If I want my engine to run on an actually great GF 7900 I need to stick to GL 2.1 with all its flaws. Well, as you all know one of the most serious of them is the lack of binary shaders which makes using übershaders quite inefficient since you can't create a shader cache.

I must also say that I think this deprecation thing will not bring anything in practice. None of the current applications would run with a driver which does not expose the deprecated stuff. And which hardware vendor would write a driver which is useless in practice?

The original idea was great and well reasoned: make a GL 3 which runs on all shader-compatible hardware just with a better API and provide a compatibility layer for legacy applications. But now things appear like a real muddle to me :(

Rob Barris
04-08-2009, 02:32 PM
LaBasX2, which features do you see in OpenGL 3.x that you would want to use in your applications ?

Some of them which can be implemented on pre-GL3 hardware are being released and implemented as ARB extensions. So if you can be more specific about which ones you want to use on the older hardware, you might find there's an answer that will work for your code.

edit: I see you explicitly mentioned binary shader caching, you're quite right in that this is not possible with 3.x yet.

LaBasX2
04-08-2009, 07:17 PM
Rob, thanks for your reply.

One of the most important topics is certainly the shader system. As others have stated before, it should be possible to bind vertex and fragment shaders separately. This, in combination with binary shaders, would help fight the permutation explosion in visually complex games. In theory it would be best to have a common compiler which compiles code for a profile (as in D3D) so that a prebuilt shader cache can be shipped with the title. But I see that a centralized compiler is difficult with OpenGL's philosophy, so it would also be OK if the shader cache were compiled on the user's machine.

Another thing very useful for deferred shading would be the depth bounds test promoted to a core feature (although I'm not sure if ATI can support it natively).

Support for geometry instancing and more control over maximum mipmap levels (for texture streaming) would be other things.

I understand that the features can be exposed as ARB extensions to pre-GL3 hardware but then we will continue having all the problems that we have now: extensions not implemented on some hardware and unstable drivers. Probably it will become even worse since vendors need to support a GL2 and a possibly quite different GL3 path.

The idea of introducing the new API in GL3 and adding D3D10 features in GL4 was a very natural thing. You know that D3D11 is adding backwards compatibility for older hardware again. I think the same needs to happen with the next version of GL. Different hardware generations should be supported by profiles which define a certain feature set. If GL3 will remain limited to D3D10 hardware I don't see how it should have any chance to compete against D3D11 for professional game development.

skynet
04-08-2009, 07:50 PM
If I want my engine to run on an actually great GF 7900 I need to stick to GL 2.1 with all its flaws.

Just don't use the "bad" stuff that is marked as deprecated and you'll get decent performance. It seems most recent extensions will also be available in a 2.1 context, so people can stick with 2.1 for a long time if they really want to.


None of the current applications would run with a driver which does not expose the deprecated stuff.

I'm getting tired of repeating this over and over again. This statement is just plain wrong. When will people finally understand that 3.0+ (formerly "Longs Peak") has NEVER been about ditching GL 2.1 from one day to the next? Both versions would have co-existed in peace for quite a while. While the old one would mainly get bug fixes, keeping the old API stable, the new one would have got the nice new features.

I don't know how well the deprecation model will work out, but probably not as well as a new API would have. I'm currently trying to port an engine over to "pure" GL 3.1, and I'm facing these problems:

1. The spec language gets confusing and inconsistent. There are dozens of binding points for everything. A more "object oriented" approach would have been better. Sometimes there are several ways to retrieve the same information. As a reader, one clearly sees that the spec is a patchwork of many former extensions. It just doesn't look "monolithic". The PDF needs many more hyperlinks within the document. It is currently very hard to gather information on specific commands, because it can be spread throughout the whole document. A second, MSDN-like documentation would be very helpful.

2. The headers/libs are not separated from the pre-GL3.0 ones. This means that a) the compiler won't moan about using deprecated commands/enums, and b) code-suggestion tools like Visual Assist are much less effective (sometimes you have to choose between a vendor/EXT/ARB/core version of _one_ enum!).

3. Even though I requested a forward-compatible 3.1 context, I still get a huge list of extensions, where most of them refer to either deprecated (GL_NV_register_combiners or GL_EXT_texture_env_add, anyone?) or superseded-by-core functionality. What is the advice here? Use them? Do not use them? Prefer the corresponding core functionality? It would be good to cut the extension list down as much as possible in order to push people toward the preferred way of doing things.
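
For reference, this is roughly how a forward-compatible 3.1 context is requested on Windows via WGL_ARB_create_context; error handling is omitted, and a wglext.h header plus an existing legacy context to bootstrap from are assumed.

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

HGLRC create_gl31_fc_context(HDC dc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 1,
        WGL_CONTEXT_FLAGS_ARB,         WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };
    /* The driver still decides which extensions the resulting context advertises. */
    return wglCreateContextAttribsARB(dc, NULL, attribs);
}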

LogicalError
04-09-2009, 12:25 AM
LaBasX2 ... I see you explicitly mentioned binary shader caching, you're quite right in that this is not possible with 3.x yet.

... yet?
Hmmm, interesting ...

Brolingstanz
04-09-2009, 02:16 AM
I thought so too ;-)

Brolingstanz
04-09-2009, 02:19 AM
Skynet, I just put on my 3.1 blinders and that's all I see. Anything not in the 3.1 frustum gets culled, simple as that.

And don't tell me register combiners have been deprecated - I'm just now getting used to them.

Ilian Dinev
04-09-2009, 04:15 AM
Aren't register combiners generating nvASM shaders anyway? Why not go for the real deal - nvASM4?

Brolingstanz
04-09-2009, 05:32 AM
Basically because I don't have the good sense to do so.

Besides, RCs are way cool.

Dark Photon
04-09-2009, 05:47 AM
Besides, RCs are way cool.
ROTFL! :D (:p)

Jan
04-09-2009, 05:49 AM
RCs were a pain in the butt back in the GeForce 3 era and i ditched them as soon as possible. Someone describing them as "way cool" is the last thing i expected.

Apart from that, i would expect that the RC path hasn't been maintained in drivers for years, so i wouldn't use them simply because of that.

Jan.

Brolingstanz
04-09-2009, 05:55 AM
lol

Well, I missed the RC boat when it sailed, but I'm sure it was all the rage at the time. *ducks and runs away*

JoshKlint
04-11-2009, 11:29 AM
You know that D3D11 is adding backwards compatibility for older hardware again.
I wouldn't worry about that too much until they actually deliver something. They're not going to magically turn everyone's X1500s into GeForce 8800s.

EvilOne
04-13-2009, 10:41 AM
<GL3/gl3.h>

EvilOne
04-13-2009, 10:48 AM
Hmmm, and please do us all a favour and remove the Pipeline Newsletter(s) from the front page. Outdated information about an epic fail is not that useful...

ScottManDeath
04-13-2009, 03:24 PM
No, they are not, and that is not the point of D3D9_on_D3D10/11.

The point is to have a unified API covering recent hw generations, exposing features at a coarse granularity. This makes it easy to write apps for shader-centric hw.

And so far, regarding D3D, MS has continuously been delivering, while keeping people informed relatively far ahead.

The D3D11 tech preview with documentation has been out since last November, a year before the release of the final version (assuming Win7/D3D11 gets released in the fall).

At GDC, Khronos hinted vaguely that GL 3.2 will be out in a year or so, but no specifics on a level comparable to D3D11 have been released.

kRogue
04-14-2009, 04:39 AM
In defense of Khronos: Microsoft has the ability to dictate the API since it is developed by Microsoft; Khronos has to work in co-operation with more parties and cannot just dictate. If they went that route, GL would not look very attractive. Additionally, all is not roses in DX land either: not all hardware supports the DX10.1 features (in fact I think no NVIDIA hardware supports most of what DX10.1 adds over DX10.0; someone correct me if I am wrong, please). And as far as Khronos delivering goes, the GL 3.1 spec was a pleasant surprise for a lot of people in that it came out within 6 or so months of 3.0, with really useful stuff in core (just try to get over GL 3.0 being late, ok?).

I would put even money on GL exposing everything DX10/11 does by the middle of next year AND working in places besides just Vista/Windows 7. As of now the main gripes are direct state access (which is nowhere near as big a deal as it used to be, since the fixed pipeline is bye-bye), separating texture data from filtering, mixing and matching of vertex/fragment/geometry shaders (which is possible via the asm interfaces, though), pre-compiling of shaders and some finer control of MSAA (for me, there is one more: NV_depth_clamp).

JoshKlint
04-15-2009, 11:24 AM
I agree that precompiled shaders would be nice.

I don't really have a lot of complaints. ATI's drivers are pretty good now, and shader model 4 does just about everything I want.

Just cull the old crap and make sure ATI is following the spec.

Stephen A
04-16-2009, 12:07 AM
As of now the main gripes are direct state access (which is nowhere near as big a deal as it used to be, since the fixed pipeline is bye-bye), separating texture data from filtering, mixing and matching of vertex/fragment/geometry shaders (which is possible via the asm interfaces, though), pre-compiling of shaders and some finer control of MSAA (for me, there is one more: NV_depth_clamp).
I agree with all of your points, except for DSA. While the current DSA extension interacts with everything (and looks very difficult to implement), a core DSA spec targeting the forward-compatible GL3.2 API would be very welcome.

No one in their right mind would rewrite their GL 1.1 or even GL 2.0 codepaths to use DSA. On the other hand, I don't think anyone would object to replacing the following code:


int previous;
GL.GetInteger(GetPName.TextureBinding2D, out previous);
GL.BindTexture(TextureTarget.Texture2D, current);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, TextureMagFilter.Linear);
GL.BindTexture(TextureTarget.Texture2D, previous);

with this:


GL.TexParameter(current, TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, TextureMagFilter.Linear);

(The code is C# via the OpenTK bindings)
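
For comparison, roughly the same thing through the C API with the existing EXT_direct_state_access entry point, assuming the extension is available:

#include <GL/glew.h>

void set_mag_filter(GLuint texture)
{
    /* No save/bind/restore dance: the texture object is named directly. */
    glTextureParameteriEXT(texture, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}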

On another note, are there any plans to update the reference pages to OpenGL 3.1? They are very useful, but are still stuck at OpenGL 2.1.

Brolingstanz
04-16-2009, 07:15 AM
A nice thing would be a clean enum.spec and gl.spec so everyone can reparse them into their preferred language.

No doubt this would be very cool. As the registry page suggests, XML-based versions are en route, but in the meantime we are cordially invited to exercise patience.

P.S. For those short on patience there's a gem of a Perl script in this thread (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=244439#Post244439) that'll get you started. All you need to do is rework the output to the flavor of your choice; most everything else is in the bag.

Y-tension
04-26-2009, 09:30 AM
GL3/gl.h GL3/glext.h?

Zonkie
05-14-2009, 07:30 AM
Does anyone know what the status of

glEnable/Disable(GL_AUTO_NORMAL);

is? It's not listed among the deprecated things, nor is it included in the core spec.

Regards,
Z.

Brolingstanz
05-14-2009, 07:53 PM
It's not listed because it's part of the NURBS evaluator thing, which is deprecated.

Random
05-22-2009, 12:24 AM
In defense of Khronos: Microsoft has the ability to dictate the API since it is developed by Microsoft; Khronos has to work in co-operation with more parties and cannot just dictate. If they went that route, GL would not look very attractive.
Then why is DirectX looking so attractive?


Additionally, all is not roses in DX land either: not all hardware supports the DX10.1 features (in fact I think no NVIDIA hardware supports most of what DX10.1 adds over DX10.0; someone correct me if I am wrong, please).
Nope, you are correct there, sir. NVIDIA has not jumped on the 10.1 train, but why would they? DirectX 11 will be coming out this year along with Vista SP2.


And as far as Khronos delivering goes, the GL 3.1 spec was a pleasant surprise for a lot of people in that it came out within 6 or so months of 3.0, with really useful stuff in core (just try to get over GL 3.0 being late, ok?).
But that's the whole issue. There shouldn't be a surprise associated with this. There should be a long period of communication, dialog, and testing.

The fact of the matter is, while consensus is great, it's rarely ever reached. Compromise is the enemy of progress, and the Khronos Group is at the whim of the people paying the money rather than the people using the API. Microsoft has a good API because they can take input from everyone, but they still get the last say, and they get to decide the direction of the API.

Edit: I would just like to mention, I'm rooting for OpenGL here, really am. But unless something happens very quickly, I don't think it's going to be able to recover.

mdriftmeyer
06-01-2009, 08:44 PM
Won't recover? What are you talking about? Linux and OS X both feature OpenGL as their 3D API of choice.

The last time I checked, Microsoft's growth is declining, not holding steady or increasing.

With the expansion of OpenGL on both Linux and OS X, not to mention OpenCL (with Cocoa APIs on OS X), you can bet there will be a shot in the arm for OpenGL.

Executor
06-20-2009, 12:43 AM
Why is anisotropic filtering not in the GL3 core?

Ilian Dinev
06-20-2009, 05:29 PM
The last time I checked, Microsoft's growth is declining, not holding steady or increasing.
Realistically, I don't see serious professionals or gamers abandoning Windows, and Win7 is looking good. Can you imagine a professional or a gamer throwing away a library of software (that they use for work or entertainment) costing thousands of dollars, and plunging into an OS where their tools are simply unavailable (or have gimped-down alternatives)?
Fortunately there's virtualisation.

Executor
07-06-2009, 11:58 PM
Up... Can somebody explain about anisotropic filtering? Why is it not in the GL3 core?

dorbie
07-16-2009, 11:06 AM
* Can you please push HARD for decoupling vertex and fragment shaders, so that you don't need to explicitly link them together and can mix & match as needed, like with ARB_programs or in DX?

The driver cannot know what to optimize out of a vertex & fragment shader until they are linked. The linking & combination of shaders is an area of significant optimization in drivers.

You may be more careful about what you put in your shaders, but some developers exploit link-time optimization to reduce the amount of shader code they have to write.
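
A small illustration of that point, sketched as GLSL 1.40 sources in C strings: the vertex shader computes an output the fragment shader never reads, and only once both stages are linked into one program object can the driver prove it dead and drop the normal-matrix work.

static const char *vs_src =
    "#version 140\n"
    "in vec3 position;\n"
    "in vec3 normal;\n"
    "uniform mat4 mvp;\n"
    "uniform mat3 normal_matrix;\n"
    "out vec3 v_normal;\n"
    "void main() {\n"
    "    v_normal = normal_matrix * normal;  // dead code if the FS ignores it\n"
    "    gl_Position = mvp * vec4(position, 1.0);\n"
    "}\n";

static const char *fs_src =
    "#version 140\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    frag_color = vec4(1.0);  // never reads v_normal\n"
    "}\n";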

HenriH
07-30-2009, 11:13 PM
I like the look of OpenGL 3.1 and where it is going, but one important thing is still missing: the unified object model that was originally planned.

Brolingstanz
07-31-2009, 03:22 PM
I'm guessing they'll sneak the object model in at some point along the line, one piece at a time. Though probably not exactly in the form originally planned for LP, it might just be a form we can live with.

I for one have no complaints at this point, besides - 3.1 is a heck of a nice API to use.

HenriH
08-02-2009, 03:26 AM
Well yes, 3.1 is obviously an improvement over 3.0, but the API is not yet as clean and streamlined as I would hope it to be. At the moment it uses multiple object paradigms (glGenXXX and glCreateXXX) along with many, many entry points for setting up parameters. Also confusing is the need to bind objects to the state system before you can do anything with them. The unified object model would clean things up, but I guess it would break compatibility too much. It will be interesting to see how things evolve from this point on, though.

Brolingstanz
08-02-2009, 08:12 AM
Bind-to-edit is a bit of a sore spot, but it is something that can be hidden from view with a clever wrapper (e.g. DSA).

We're closing in on having all of SM4 in 3.2, which, aside from some hand waving and a bit of aesthetic ivory-towering, is what I'm all about these days.