View Full Version : OpenGL 3 announced



Khronos_webmaster
08-09-2007, 05:44 AM
The OpenGL ARB officially announced OpenGL 3 on August 8th 2007 at the Siggraph Birds of a Feather (BOF) in San Diego, CA. OpenGL 3 is the official name for what has previously been called OpenGL Longs Peak. OpenGL 3 is a great increase in efficiency in an already excellent API. It provides a solid, consistent and well thought out basis for the future. OpenGL 3 is a true industry effort with broad support from all vendors in the ARB. The OpenGL 3 specification is on track to be finalized at the next face-to-face meeting of the OpenGL ARB, at the end of August. This means the specification can be publicly available as soon as the end of September, after the mandatory 30 day Khronos approval period has passed. Also presented today were the changes to the OpenGL Shading Language that will accompany OpenGL 3.

We look forward to your discussions in this thread.

bobvodka
08-09-2007, 05:59 AM
While it's nice to finally have a version number (I always suspected it would be 3.0, major changes and all that), until the SIGGRAPH slides with the GL3 and GLSL details appear online there isn't a great deal to discuss.

Roll on the release of the spec however :)

elFarto
08-09-2007, 06:15 AM
Woohoo :cool:

End of September though :(

Regards
elFarto

Zengar
08-09-2007, 06:22 AM
Very nice! Keep on the good work!

Khronos_webmaster
08-09-2007, 07:08 AM
Most of the slides are now online here (http://www.khronos.org/library/detail/siggraph_2007_opengl_birds_of_a_feather_bof_presentation/) . My apologies for the delay on making those available. Two more slides coming later this morning.

Nighthawk
08-09-2007, 08:10 AM
Nice :)

I wonder when working implementations of GL3 are expected in Nvidia/ATI drivers? End of september already, or will it take some months more?

ebray99
08-09-2007, 08:28 AM
I wonder when working implementations of GL3 are expected in Nvidia/ATI drivers? End of september already, or will it take some months more?
I'm interested in this too. Is there any chance we can get someone to give us a "rough and guestimated" time frame? =)

Chris Lux
08-09-2007, 09:55 AM
It is very good to know all this, but i was really hoping for more at this point. But hey we waited this long, two more months is not that bad.

What i really hope for is that the new promises on OpenGL 3 reloaded and Mount Evans (3-5 months) can be kept.

Really looking forward to seeing more of this OpenGL 3 ;)

-chris

Seth Hoffert
08-09-2007, 10:56 AM
Nice work everyone!!!

I immediately read all of the PDFs and boy am I excited. Can't wait for the new object model!

-HexCat

Khronos_webmaster
08-09-2007, 12:13 PM
The final 2 PDFs are now uploaded. They are:
siggraph-2007-bof-shading-language.pdf
siggraph-2007-bof-SPECviewperf.pdf

Enjoy!

Overmind
08-09-2007, 12:53 PM
From the GL3 presentation:

Program Environment object
Immutable reference to program object

Program Object
Fully immutable
Great :D

Rolf Schneider
08-09-2007, 12:59 PM
Ok, it's good that OpenGL will be improved!
However, what about existing applications? What does it mean in practice, when glBegin/glEnd will no longer exist in OpenGL3? Will there be 2 different versions of OpenGL that come with a vendor's driver (e.g. libGL.so & libGL3.so)?
To be honest, I'm afraid that at some point in the future new drivers will only support OpenGL3, which means old applications will not work anymore! I think that this is not a good development for OpenGL...

elFarto
08-09-2007, 01:00 PM
Originally posted by Khronos_webmaster:
The final 2 PDFs are now uploaded.
Thanks,

Good things:
in/out/inout, much clearer than attribute/varying
#include, although I don't see how this will work
switch statement

Iffy things:
common blocks not being able to take bools/ints and samplers..?

Not sure:
What do the multiple in's/out's in the last diagram slide mean?
"No state tracking" for built-in uniforms? Does that mean they are going away?

Regards
elFarto

Michael Gold
08-09-2007, 01:15 PM
Originally posted by Rolf Schneider:
To be honest, I'm afraid that at some point in the future new drivers will only support OpenGL3, which means old applications will not work anymore!
I don't see vendors dropping support for GL2.x anytime soon, if ever. However there will come a time when it is no longer enhanced (i.e. no new versions or extensions) and primary development will focus on the new version.

Korval
08-09-2007, 01:16 PM
Hmm. The GL3 overview was too... overviewish. We've had this same overview for the past year. I wanted to see something more in-depth.

The two significant new concepts added are the Program Environment objects and the Push/PopAttrib stuff.

First, the last. This is a great feature, and I'm glad to see it in 3.0.

As for Program Environment objects... I've been saying since day 1 (go ahead. Do a search on the forums and check) that Program objects needed to be instanced. That the Program was just the data and the attachments to it needed to come from an external object that referenced the program. I'm glad to see that the ARB finally saw the light. Now we won't have to have innumerable copies of program objects running around just because we use them with different buffers and such.

elFarto
08-09-2007, 01:21 PM
Originally posted by Overmind:
From the GL3 presentation:

void DrawElements( enum mode, const sizei *count, const sizeiptr *indices, sizei primCount, sizei instanceCount );
Why is the index array in client memory? When we get rid of client side vertex arrays, why not get rid of client side index arrays as well?
Perhaps you can optionally pass an array, and NULL to use the indices in the VAO?

Regards
elFarto

Korval
08-09-2007, 01:25 PM
And there's still certain questions I'd like answered.

Format Objects. Probably the most elusive objects in the whole thing, because we know practically nothing about them.

Are format object specifications sufficient to be able to tell whether a floating-point texture will be able to be bilinearly filtered? That is, if we request a format object for floating-point, can we say that it needs bilinear, and if the implementation can't handle it, it will simply fail to create the format object?

Because that's what I've been hoping for from these objects, and there's never been any confirmation one way or the other on it.


"No state tracking" for built-in uniforms?Well, there's no state for them to track, since all that FF-pipeline stuff is gone in 3.0. And good riddance.

Chris Lux
08-09-2007, 01:39 PM
Originally posted by elFarto:

"No state tracking" for built-in uniforms? Does that mean they are going away?
weren't they supposed to 'be' away in gl3? i thought all this was supposed to be cut from the 'core' functionality and be brought back by the compatibility layer?

Michael Gold
08-09-2007, 02:02 PM
Originally posted by Korval:
Are format object specifications sufficient to be able to tell whether a floating-point texture will be able to be bilinearly filtered? That is, if we request a format object for floating-point, can we say that it needs bilinear, and if the implementation can't handle it, it will simply fail to create the format object?
Yes, that is exactly the intent. It's unfortunate we have such non-orthogonalities but this is probably better than pretending it all works and then falling back to software.

Format objects describe internal formats and the corresponding capabilities, including dimensional limits, allowable usage (e.g. texture, render buffer) and other caps such as filtering and blending. They are also used to enforce compatibility of FBO attachments, so there is no need to query completeness - an FBO is always complete if its attachments are populated, and you cannot attach an image with a different format than expected.
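For contrast, this is the try-and-check pattern today under GL 2.1 plus EXT_framebuffer_object, which the format objects described above are meant to make unnecessary. A rough sketch using only existing EXT entry points (no Longs Peak calls are public yet); the size and format here are arbitrary:

/* GL 2.1 + EXT_framebuffer_object: you only find out after the fact whether
   an attachment combination works, by querying completeness. */
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 512, 512, 0,
             GL_RGBA, GL_FLOAT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* unsupported combination: fall back to another format and try again */
}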

Michael Gold
08-09-2007, 02:04 PM
Originally posted by Chris Lux:

Originally posted by elFarto:

"No state tracking" for built-in uniforms? Does that mean they are going away?
weren't they supposed to 'be' away in gl3? i thought all this was supposed to be cut from the 'core' functionality and be brought back by the compatibility layer?
Built-in uniforms are gone, and they are not. In order to support existing shaders, built-in uniforms are "pre-declared" but have no value except that which you set. In other words, they work just like user-defined uniforms.

Brolingstanz
08-09-2007, 02:17 PM
Format objects describe internal formats and the corresponding capabilities, including dimensional limits, allowable usage (e.g. texture, render buffer) and other caps such as filtering and blending.
Will these new format objects allow for reinterpretation of type, like say from D32 to R32, RGBA16X to RGBA16F, or other such "casts"?

The GLSL updates look sweet.

W. Gerits
08-09-2007, 02:17 PM
I just hope that, at least on windows, there will be a new (standard) non-microsoft controlled library that contains all the new exports and that can just be installed (and updated with newer functions) instead of having to get every single >= 3.0 function via a GetProcAddress.

Even if it's not directly in 'Longs Peak', I think it would be strange to have the 'clean new api' that only runs on DX10 class hardware (Mount Evans) intermixed with all the old legacy stuff.

Overmind
08-09-2007, 02:34 PM
Prehaps you can optionallly pass an array, and NULL to use the indicies in the VAO?That would be a really ugly solution, especially the "optionally" part...

Korval
08-09-2007, 02:49 PM
That would be a really ugly solution, especially the "optionally" part...I imagine that the way it works is just like for glDrawElements now.

If the VAO has an index array bound, then the "sizeiptr" is an offset into that index array to start from. So you don't pass NULL so much as 0. If the VAO has no index array bound, then it reads the indices from the client, and the value is a pointer.

{edit}

I just noticed something. Where are the objects that store the scissor and viewport boxes? Or is that in the context?

Humus
08-09-2007, 06:26 PM
Originally posted by elFarto:
in/out/inout, much clearer than attribute/varying
Hmmm, I'm not sure I like this. Currently the varying declaration is the same for both vertex and fragment shaders. So you can quickly copy'n'paste between the shaders, or better yet, like I just implemented in my framework, declare varyings in a separate shared section in my shader file so I don't need to type it twice. With the new syntax it would no longer be possible to share, plus I can imagine the number of copy'n'paste errors will go up quite a lot. You'll forget to change "in" to "out" and vice versa when copying between vertex and fragment shaders.

Another thing I noticed:

common blocks - uniform buffers
common myPerContextData {
    uniform mat4 MVP;
    uniform mat3 MVIT;
    uniform vec4 LightPos[3];
    // ONLY uniforms, but...
    // no samplers
    // no int types
    // no bool types
};
I understand the "no samplers" part, but what's up with the "no int types" and "no bool types"? Are we not supposed to be able to store integer and bool uniforms in a uniform buffer? Either I'm misunderstanding this or the ARB has made a huge mistake.
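To make the copy'n'paste hazard concrete, here is roughly what the proposed syntax looks like when the shader sources live as C strings; the in/out qualifiers follow the BOF slides, the rest of the snippet is just a sketch:

/* The same interface variable is declared "out" in the vertex shader and
   "in" in the fragment shader, so the declaration block can no longer be
   pasted verbatim between the two stages. */
static const char *vs_src =
    "in  vec4 position;\n"
    "in  vec3 normal;\n"
    "out vec3 litColor;\n"   /* must become "in" on the fragment side */
    "void main() {\n"
    "    litColor    = normal * 0.5 + 0.5;\n"
    "    gl_Position = position;\n"
    "}\n";

static const char *fs_src =
    "in  vec3 litColor;\n"   /* same variable, opposite qualifier */
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = vec4(litColor, 1.0);\n"
    "}\n";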

Humus
08-09-2007, 06:27 PM
Originally posted by Korval:
I just noticed something. Where are the objects that store the scissor and viewport boxes? Or is that in the context?
I would expect it to work like in DX10, and thus be in the context.

Korval
08-09-2007, 06:41 PM
Are we not supposed to be able to store integer and bool uniforms in a uniform buffer? Either I'm misunderstanding this or the ARB has made a huge mistake.
I understand this.

The presumption with buffer uniforms is that the driver can just copy them directly into the place where they go when the program object gets used.

However, for hardware that does not directly support integers and bools, there would need to be a translation step to convert integers/bools into floats. Since part of the point of uniform buffers is to make uploading fast, there's no point in allowing uniform buffers to do this.

Well, until Mt. Evans, which will relax this restriction.

Basically, you'll have to do all the int/bool-to-float conversion yourself.
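A sketch of the kind of conversion Korval means, assuming only the existing buffer-object API; how such a buffer gets attached to a program as a common block is new GL3 functionality and is not shown, and numLights/fogOn are made-up application variables:

/* Pack int/bool parameters as floats before uploading, since the proposed
   uniform buffers only hold float data on pre-G80 hardware. */
struct PerContextData {
    float mvp[16];
    float lightCount;   /* conceptually an int, stored as float  */
    float fogEnabled;   /* conceptually a bool, stored as float  */
};

struct PerContextData data;
/* ... fill data.mvp ... */
data.lightCount = (float)numLights;     /* int  -> float */
data.fogEnabled = fogOn ? 1.0f : 0.0f;  /* bool -> float */

GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);     /* upload through any existing binding point */
glBufferData(GL_ARRAY_BUFFER, sizeof(data), &data, GL_DYNAMIC_DRAW);
/* Binding 'buf' as a common block would be Longs Peak API and is not shown here. */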

Rob Barris
08-09-2007, 09:47 PM
Korval is right, and this is the price of supporting GPUs that predate the GeForce 8800.

V-man
08-10-2007, 12:22 AM
Originally posted by Korval:
However, for hardware that does not directly support integers and bools, there would need to be a translation step to convert integers/bools into floats. Since part of the point of uniform buffers is to make uploading fast, there's no point in allowing uniform buffers to do this.
I thought this was supported in the VS since SM 2, and SM 3 can do it in both VS and FS.

jkolb
08-10-2007, 10:11 AM
So what's the difference between dx10 and ogl3? Will there be an advantage (besides cross platform) of using ogl3? Will it still have extensions?

Jeremy

Korval
08-10-2007, 10:20 AM
So what's the difference between dx10 and ogl3?
DX10 exposes more features than 3.0. Indeed, 3.0 doesn't expose features at all; it's just an API change. Stated otherwise, anything you can do in 3.0 you could do in 2.1.

Mt Evans is where DX10 features will show up.


Will there be an advantage (besides cross platform) of using ogl3?
What, isn't cross platform enough?


Will it still have extensions?
Almost assuredly.

jkolb
08-10-2007, 10:42 AM
Sorry, to clarify: what besides being a cross platform architecture would be an incentive for someone to choose ogl over d3d? Will ogl3 feature things like hardware accelerated features which d3d9/10 do not offer?

Lindley
08-10-2007, 11:06 AM
Originally posted by Korval:
Stated otherwise, anything you can do in 3.0 you could do in 2.1.

You can do render-to-VBO directly in 2.1?

Korval
08-10-2007, 11:08 AM
what besides being a cross platform architecture would be an incentive for someone to choose ogl over d3d?
Like I said, isn't that enough?

There's also the fact that it works on all versions of Windows without having to deal with the DX9 limit on XP and the DX10-only Vista.

Brolingstanz
08-10-2007, 11:10 AM
My feeling is that if you have to ask, you probably wouldn't understand the answer. That is to say, if you have to ask how much the Ferrari is, you probably can't afford it.

jkolb
08-10-2007, 11:14 AM
Originally posted by Korval:

what besides being a cross platform architecture would be an incentive for someone to choose ogl over d3d?
Like I said, isn't that enough?

There's also the fact that it works on all versions of Windows without having to deal with the DX9 limit on XP and the DX10-only Vista.
Well in the past some of the reasons have been:
1. Cross platform
2. Backwards compatible
3. Extensions (nifty new features)
4. HW accelerated lines (d3d9 does not have this)

So with OpenGL 3.0 I'm only seeing 1 as an answer (maybe 3 as well?).

tinyn
08-10-2007, 11:31 AM
Originally posted by jkolb:
So what's the difference between dx10 and ogl3? Will there be an advantage (besides cross platform) of using ogl3? Will it still have extensions?
The hardware is still being driven by Direct X. As long as that is true, OpenGL improvements are basically exposing DirectX functionality to OpenGL. Unless OpenGL becomes the driving force behind the hardware, it will not have major features that DirectX does not, except for cross platform. Which is more than enough.


OpenGL is still backwards compatible. I believe the new OpenGL 3 stuff will require the program to request an OpenGL 3 context at creation, while it can still request an old-fashioned OpenGL context and keep the OpenGL 2.1 features.


OpenGL will always have extensions. They are just too useful to drop.

Overmind
08-10-2007, 11:35 AM
Certainly 3 as well. It's one of the biggest advantages of OpenGL.

For the next generation of hardware (and with this I mean the next after DX10/Mt.Evans class hardware), there will certainly be extensions. With DX10, it just wouldn't be possible to use these features, you'd have to wait for the next revision of DX.

And with cross-platform recently also meaning Win2k/XP support, this point gains in importance, too ;)

And while being only a minor point, I don't think they will drop hardware accelerated lines in GL3...

Overmind
08-10-2007, 11:39 AM
The hardware is still being driven by Direct X.
I don't think that's actually true. It's just that the latest hardware features were exposed by DX10 sooner than by DX9. But it was still DX that exposed features of the hardware, not the other way round ;)

The GF8 was out way before DX10 was released...


it will not have major features that DirectX does not
This will never be the case. The only thing either API can hope for is having the features sooner, but you can pretty much count on the fact that any hardware feature that's ever going to exist is going to be usable with both APIs eventually.

bobvodka
08-10-2007, 12:14 PM
Yes, the GF8 was out before DX10 was released to the public, but the API was designed well in advance and MS basically said 'you will expose this, this and this to be DX10 compliant'; the hardware had to obey or risk problems (note the initial DX9 spec, which favoured ATI more than NV and gave birth to the R300).

So, I would say the base feature set is very much driven by MS; anything outside of DX10 just simply won't be worth including on silicon (see the R600's programmable tessellator, which while great and all, is practically a waste of silicon as no one can use it).

Robert Osfield
08-10-2007, 01:19 PM
Originally posted by jkolb:
Sorry, to clarify: what besides being a cross platform architecture would be an incentive for someone to choose ogl over d3d? Will ogl3 feature things like hardware accelerated features which d3d9/10 do not offer?
I think one should ask this question the other way around. What possible reason would there be for choosing D3D9/10 over OpenGL 2.x/3.0, given the latter exposes equal or more hardware functionality, and runs on all Windows platforms, and all other major desktop/workstation platforms.

The portability issue isn't just a nicety, if you are serious about graphics quality and performance then the rest of the platform is important - how well does the OS handle multiple processors, or multiple graphics cards, how well does the file system perform? The bottom line is you really should be choosing the platform that provides the best overall capabilities for real-time graphics.

There are certainly better alternatives than Windows for doing multi-thread, multi-GPU work and file system intensive applications (think database paging); neither Windows XP nor Vista is a real contender in this arena, so you are shooting yourself in the foot by choosing an API that only runs under Windows.

Robert.

zeoverlord
08-10-2007, 01:27 PM
Originally posted by bobvodka:
Yes, the GF8 was out before DX10 was released to the public, but the API was designed well in advance and MS basically said 'you will expose this, this and this to be DX10 compliant'.
It's the other way around i think: MS has some knowledge of what future generations of hardware can do and what is currently planned, and they write the specs accordingly.
Geometry shaders are certainly one of these things. i know that long before DX10 ATI had something cooking with R2VB and nvidia probably had similar plans; it was just one of these naturally evolving things (decided in a meeting held at an unusually long table within a mountain top fortress in the swiss alps).

Often there are things that DX just simply misses; there are certainly things the G80 can do that are exposed in OpenGL but not in DX (not that i can recall any at this moment).

So no, Microsoft does not exactly drive hardware; it just kinda evolves in the direction graphics demands.

Korval
08-10-2007, 01:30 PM
note the initial DX9 spec, which favoured ATI more than NV and gave birth to the R300
You're looking at it from the wrong direction. ATi was 6 months ahead of nVidia with R300. So Microsoft either had to expose R300 pretty much as it was, or nobody could use ATi's hardware except GL programmers.

The same is true, to a degree, of DX10. Microsoft probably asked nVidia, "So, this G80 thing... what's it going to do?" and then made an API for it.


if you are serious about graphics quality and performance then the rest of the platform is important - how well does the OS handle multiple processors, or multiple graphics cards, how well does the file system perform?
If you're programming for yourself, or someone whom you expect to purchase whatever hardware and OS you tell them to, sure.

This, however, is something that very few people can do. Dictating hardware and OS is not something that most people using graphics APIs can actually accomplish.

To me, the main thing is crossing the DX9/Vista gap without having to code to a new API.

knackered
08-10-2007, 01:50 PM
The extension mechanism has given us as GL programmers first dibs on many new hardware features over the years, leaving D3d to play catch up. Even now all dx10 features are available as vendor-specific GL extensions, plus some that simply don't exist in dx10. It's funny that someone's under the impression it's the other way round.
Also, you've got to remember that d3d is much slower for scenes of great complexity, such as engineering scenegraphs full of DCS'. To get reasonable performance from d3d all your data has to be static and/or dramatically pre-processed into the most batch optimal layout. In my business, that makes d3d a non-starter. I used to have 2 renderer implementations, GL and d3d9, but the d3d one performed so poorly with the same data as the GL implementation that it was never used and I stopped maintaining it.

Korval
08-10-2007, 02:20 PM
On the glslang in/out issue:

There is one case to be made for it: Geometry shaders.

When a geometry shader is bound, the semantic concept of "varying" for vertex shaders is wrong. No longer is this value going to vary for the next shader in the pipeline. Semantically, the better value would be "output" from the vertex shader, and "input" to the geometry shader.

PaladinOfKaos
08-10-2007, 02:21 PM
I'd be willing to bet that ATI is holding off on releasing a tessellator extension until they can write it against GL3. R600 was released in the spring, so why would they write an extension that would be out of date in less than six months? I expect we'll see the tessellator before we see '08.

Seth Hoffert
08-10-2007, 08:26 PM
Originally posted by Lindley:

Originally posted by Korval:
Stated otherwise, anything you can do in 3.0 you could do in 2.1.

You can do render-to-VBO directly in 2.1?
Yes, this can be accomplished via the NV_transform_feedback extension if your card supports it (e.g., NVIDIA's 8800).

-Seth

V-man
08-11-2007, 02:12 AM
I don't think the hw is driven by DX.

If MS dictated features, I can imagine vendors complaining: "But it would cost too much", "But it would be too slow to be usable".

It's necessary to design hw simulators and see how it would behave, estimate costs. Then doing an API is the easy part.

Komat
08-11-2007, 02:18 AM
Originally posted by Robert Osfield:
What possible reason would there be for choosing D3D9/10 over OpenGL 2.x/3.0, given the later is exposes equal or more hardware functionality, and runs on all Windows platforms, and all other major desktop/workstation platforms.
For example better development tools (e.g. PIX, Nvidia PerfHUD, ATI PerfStudio & Shader Analyzer). Or better quality of drivers from "smaller" IHVs.

xen2
08-11-2007, 05:21 AM
One problem with DX9 on XP vs DX10 on Vista was the number of draw calls per second, since the driver is implemented in userspace in Vista (no limitation) vs kernel space in XP (the CPU was stressed by even a low number of draw calls, forcing programmers to batch geometry very aggressively).

So, I was wondering if OpenGL performance will be able to stay at the same order of "magnitude" on XP. While it seems a nifty feature to have XP support for DX10-class features, it would be really bad if in the end kernel-space drivers would limit it (that's why Microsoft was reluctant to port D3D10 to XP, I think).

However, I think it's no problem since OpenGL batches commands and issues them rarely in order to reduce context switches, but if someone could confirm this...

V-man
08-11-2007, 06:47 AM
However, I think it's no problem since OpenGL batches commands and issues them rarely in order to reduce context switches, but if someone could confirm this...
GL runs in userspace and always has on Windows since Win 95.
It doesn't mean you shouldn't care at all. Keeping a low number of GL calls and state changes is always a good idea.

Using 1 context is best. That's what GPUs were designed for. It's the same issue with D3D : 1 context is best.

bobvodka
08-11-2007, 09:13 AM
I believe xen2 meant 'context' as in 'user to kernel mode switch', which is less a GPU problem and more a CPU/Kernel 'problem'.

Brolingstanz
08-11-2007, 11:49 AM
From the end of the BOF.pdf presentation:

OpenGL Longs Peak Reloaded
- might contain:
- Attribute index offsetting
- Compiled shader readback for caching
- CopySubBuffer to copy data between buffer objects
- Frequency dividers
- Display list like functionality
- 2-3 months after OpenGL 3
What's attribute index offsetting?

How mighty is the emphasis on "might"? ;-)

P.S. Reloaded: A peak enshrouded in clouds and mystery, located somewhere between Longs and Evans.

Korval
08-11-2007, 11:59 AM
What's attribute index offsetting?
It's that thing that everybody's been asking for but the ARB has been somewhat reluctant to implement for reasons that they have yet to explain. It allows you to apply an offset to each attribute index when using a DrawElements call.

Smokey
08-11-2007, 08:00 PM
I may have missed this, or it may be blazingly obvious and I've just overlooked it...

But the transformation stack that OpenGL implements, is this considered a legacy feature, and will it be removed from OpenGL LP/ME?

If it is to remain (which I'm assuming not?), how would this be exposed with the current object model?

Rob Barris
08-11-2007, 09:15 PM
There is no matrix stack in Longs Peak (GL3).

Chris Lux
08-12-2007, 01:12 AM
Originally posted by Smokey:
If it is to remain (which I'm assuming not?), how would this be exposed with the current object model?
The matrix stack has nothing to do with the object model.

you can simply implement your own matrix stack, which i think should be very easy...

Overmind
08-12-2007, 03:03 AM
If it is to remain (which I'm assuming not?), how would this be exposed with the current object model?
This is not a separate feature in the new object model. The matrices are just another uniform variable in the shader.
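For the record, "roll your own matrix stack" really is little more than the following; a minimal sketch using only GL 2.0 uniform calls (the struct and function names are invented for illustration):

#include <string.h>
#include <GL/glew.h>   /* or any other header/loader exposing GL 2.0 entry points */

#define STACK_DEPTH 32

typedef struct {
    float m[STACK_DEPTH][16];   /* column-major 4x4 matrices */
    int   top;
} MatrixStack;

static void stack_push(MatrixStack *s)   /* duplicate the current top */
{
    memcpy(s->m[s->top + 1], s->m[s->top], 16 * sizeof(float));
    s->top++;
}

static void stack_pop(MatrixStack *s)
{
    s->top--;
}

/* Hand the current top to the shader as an ordinary uniform; in GL3 the old
   built-in matrices are just pre-declared uniforms with no fixed meaning.
   Assumes 'program' is currently in use (glUseProgram). */
static void stack_upload(const MatrixStack *s, GLuint program, const char *name)
{
    GLint loc = glGetUniformLocation(program, name);
    glUniformMatrix4fv(loc, 1, GL_FALSE, s->m[s->top]);
}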

Brolingstanz
08-12-2007, 11:22 AM
Maintains backward compatibility within the shader code while neatly shifting the responsibility to the developer... nice touch, and it's one less thing for the driver guys to worry about.

Lindley
08-13-2007, 11:34 AM
What will be the word on rendering to the same texture you're reading from in OGL3?

Being able to do this would be incredibly useful, but I hear it's supposedly a bad thing right now which doesn't always work.

Korval
08-13-2007, 11:38 AM
What will be the word on rendering to the same texture you're reading from in OGL3?
I'm guessing it'll be exactly what it was for 2.1. Reading and writing to the same image and mip-level/array/3D slice/etc is undefined.

It's a hardware issue, not software. And I doubt it'll be going anywhere.

Michael Gold
08-13-2007, 04:27 PM
Originally posted by Lindley:
What will be the word on rendering to the same texture you're reading from in OGL3?
Korval is correct. This can only be made 100% reliable if you can guarantee the order in which fragments are processed, and ensure that the rendering of each fragment is complete before processing the next one. I don't expect you'll ever see this supported.

The official position of OpenGL is: we won't throw an error if you do this, but neither will we define the result.
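For completeness, the defined way to get this effect today is still to ping-pong between two images; a rough sketch with EXT_framebuffer_object, where fbo, the two textures, numPasses and drawFullscreenQuad are assumed to be set up elsewhere by the application:

/* Ping-pong: sample one texture while rendering into the other, then swap.
   This stays inside defined behaviour, unlike sampling the render target. */
GLuint fbo, tex[2];     /* two same-sized colour textures, created elsewhere */
int src = 0, dst = 1;

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
for (int pass = 0; pass < numPasses; ++pass) {
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex[dst], 0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);   /* source for the shader */
    drawFullscreenQuad();                     /* app-provided helper */
    src ^= 1;
    dst ^= 1;
}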

oc2k1
08-13-2007, 06:05 PM
There are two problems:

The first is that two fragments from two different polygons with the same position can be in the pipeline at the same time. The first fragment will be overwritten by the second. (A simple multithreading problem with a missing mutex.)

The second problem is the texture cache:
The texture cache won't be updated on writes. If a pixel is rendered a second time, it could be that the cache contains a stale value.

Interesting is that both cases can't affect the first overdraw: things like tonemapping or updating particle systems work without problems. It would be nice if this were supported officially. In many cases it could replace ping-pong buffering.

Another useful application would be a replacement for depth peeling. For example:
A fragment shader reads 4 layers, inserts a new one at the right position, and writes all 4 layers back to a multiple render target.
At first glance it looks more efficient than depth peeling, but we have to transform the geometry once, with 4 buffers to read and 4 to write.

For depth peeling we could use the stream-out feature, so that the geometry only has to be calculated once as well. We have 4 render passes, but only 1/4 of the data has to be read and written.

Another important advantage is that no sorting code in the shader is required and many fragments can be discarded in the 2nd, 3rd and 4th pass.
Depth peeling can be faster than the fragment shader with sorting, if the polygon order isn't completely bad.

I would say moving the blending stage into the fragment shader could be useful for some algorithms, but it would accelerate only special cases and make the GPU design more complex.

Zengar
08-14-2007, 04:09 AM
Hm... I am not sure if I understood it correctly, but are you basically saying that rendering a full-screen sized quad textured with texture A to the same texture is not a problem(provided all depth/stencil tests are disabled)?

oc2k1
08-14-2007, 05:53 AM
Yes, but only if no neighboring texels are accessed. But you have to test it on each card that you want to support, because officially the result is not defined.

Lindley
08-14-2007, 08:02 AM
Yes, that's more or less the use I was thinking of as well.

Particularly, if a fragment shader only uses the r channel from the texture it's reading, and writes back to the same texture in the g, b, and a channels with glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE), there's no reason that shouldn't work.

Granted, it's a special case, but it can be useful in some situations.

The goal of most OpenGL improvements is efficiency, and that's really all this is about. If at some future point in a program you're always going to need the r value of a texture, and only sometimes need the other values, then computing the other values is a waste of time where they won't be needed. However, currently the only three supported options are:

-Compute g,b,a everywhere anyway, just to make sure we take the r value along. (Waste of time.)
-Use two textures, one for r's, and one for the other values where needed. (Waste of space.)
-First copy all the Rs to another texture, then do the partial render computing g,b,a. (Waste of time again.)

In an isolated situation, you could just use a 1-channel texture for the r's, and a 3-channel texture for the others, thus (mostly) negating the waste-of-space. But when you've got a set of 10 or so RGBA textures that you're reusing for many different things anyway, adding such special-use textures is still a waste.
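For what it's worth, the masked pass Lindley describes boils down to the snippet below; per the replies above the result is left undefined by the spec, so this is a "test it on every card" sketch, with fbo, tex and drawFullscreenQuad assumed to exist in the application:

/* Sample only the R channel of 'tex' in the shader, write only G/B/A back
   to the same attachment. Not guaranteed by the spec. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);      /* 'tex' is attached here   */
glBindTexture(GL_TEXTURE_2D, tex);                  /* ...and sampled from here */
glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);   /* never overwrite R        */
drawFullscreenQuad();                               /* app-provided helper      */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);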

Jan
08-14-2007, 10:07 AM
Especially for post-processing effects it can be useful to sample a texel once (nearest neighbor filtering), modify it and write it back into the same texture.

Maybe this could be added as an extension. Cards that can guarantee correctness for such use cases could thus officially allow it.

I would assume that reading once, writing once, with filtering disabled, should remove any possible race conditions on most cards, though i am just guessing. But if so, exposing this "functionality" through an extension would be nice.

Jan.

Korval
08-14-2007, 05:05 PM
Interesting is that both cases can't affect the first overdraw:
The first one does. If you have two fragments competing for the same pixel, it very much does affect what the results are.

Furthermore, you leave out other problems. For example, it is entirely possible that some of the pixel data has not left the blend-stage cache by the time someone starts pretending it's a texture. That's going to cause a problem, since the texture fetch will read from memory (which has old data), not the pixel cache (which has new data).

My point is this: it is left undefined for a reason. I'm really sure that the guys making these decisions know much more about this than you (or I) do.

Zengar
08-14-2007, 06:49 PM
Korval, no it doesn't. If you just render a fullscreen quad, no fragments will compete. It may be true that it was left undefined for a reason, but it is also a fact that rendering to texture while reading it works perfectly for G7x and G8x.

Korval
08-14-2007, 07:44 PM
but it is also a fact that rendering to texture while reading it works perfectly for G7x and G8x.
No, it does not.

It works under certain very specific circumstances (rendering a large quad). But in the general case, no.

And what should the spec say to this? That a piece of hardware should be able to provide the ability to read and write to the same location if you happen to render a full-screen quad? What about when the quad isn't quite full-screen? What if you tessellate the quad?

Trying to specify limitations like that is just crazy. Better to wait for blend shaders.

Lindley
08-14-2007, 10:25 PM
It seems to me that viewing this issue as a traditional "render" is looking at it the wrong way. The intent is simply to do some additional processing on portions of a texture; the fact that the easiest OpenGL method to do this is a full-screen quad render, with or without depth masking of some pixels, is incidental.

The notion of blend shaders does have me curious, but I don't know enough about that idea to say whether it would do what I'm looking for.

The question of how to automatically determine when a one-to-one correspondence between read locations and write locations exists is difficult; probably too much so to be worthwhile, until and unless a dedicated "draw full screen quad (optionally scissored)" function is introduced.

However, it seems like it would be fairly straightforward to take an RGBA texture and, so far as render source/destination is concerned at least, "pretend" it's two separate textures, each of which contain a nonoverlapping set of channels. I think this is something that an extension would be very appropriate to do.

I don't know much about the assembly that shaders compile to, but it should be possible to send up an error if a shader attempts to read a given channel; and of course, glColorMask already handles writing. Combining those two notions into a simple interface that guarantees behavior would be a good thing.

Chris Lux
08-15-2007, 01:45 AM
Originally posted by Jan:
Especially for post-processing effects it can be useful to sample a texel once (nearest neighbor filtering), modify it and write it back into the same texture.

Maybe this could be added as an extension. Cards that can guarantee correctness for such use cases could thus officially allow it.
Yes, i think this could be an EXT_blend_shader extension ;)

i think it is a logical step to go from fixed function blending to programmable. but i do not think the next generation of hardware will bring this. maybe if the demand is there from the gaming corner, but as consoles are the least common denominator i think this will take a while until this is the widespread case.

zeoverlord
08-15-2007, 01:04 PM
I am still rooting for the GF10800 to bring full head-on hardware blend shaders, but it really depends on how the hardware is made up. In theory the 8800 (6800?) could possibly support some blend shader functionality through the fragment processors, but it's unlikely they will walk that path.

PaladinOfKaos
08-15-2007, 03:37 PM
Doing programmable anything on G80 or R600 should be relatively easy - just route the data through another set of stream processors, and bypass whatever fixed-function stage is being used.

Now, the hardware logic to do this bypassing for the blend stage might not be 100% complete, thus requiring driver intervention, but blending can be seen like this:
Draw to texture A
Draw to texture B
Blend A + B to texture C
copy texture C to texture A

or, if the hardware supports it:
Draw to texture A
Draw to texture B
blend A + B to texture A


In both cases, it's not that hard to emulate it with FBOs, but flipping FBOs around every time an object gets rendered would be slow. The thing is, most of that slowness is from app->driver->hardware->driver->app jumps, not from the actual logic involved in changing the render target. So having it be automated, even at the driver->hardware->driver level, would give a huge performance boost. The real question is whether or not AMD and nVidia consider it important enough to try implementing it.

Korval
08-15-2007, 04:18 PM
The thing is, most of that slowness is from app->driver->hardware->driver->app jumps, not from the actual logic involved in changing the render target.
No, it is not.

Changing an FBO into a texture (or, really, unbinding the FBO at all) requires a hardware stall. It must flush the entire pipeline, flushing the pixel cache (so that it can write everything out to the texture). Only then can any texture accesses (like vertex texture accesses, which are now quite possible) be made from that texture.


So having it be automated, even at the driver->hardware->driver level, would give a huge performance boost.
How would you even define an "object"? Even a primitive (triangle-strip/fan, etc) can still be overlapping, which would pose a problem.

Jon Leech (oddhack)
08-15-2007, 09:13 PM
Originally posted by Overmind:
From the GL3 presentation:

void DrawElements( enum mode, const sizei *count, const sizeiptr *indices, sizei primCount, sizei instanceCount );
Why is the index array in client memory? When we get rid of client side vertex arrays, why not get rid of client side index arrays as well?
The index array is not in client memory, it's attached to the VBO. For each specified group of elements 0..i, DrawElements will pull count(i) elements whose indices are specified in the index array bound to the VBO, starting at offset indices(i) (sorry, I'd use C notation except that C notation for 'sub i' happens to also be valid UBBcode :-)

indices probably isn't the best name for this parameter, since what it really is, is a list of base offsets within the index array.
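The closest existing analogue is glDrawElements with a bound ELEMENT_ARRAY_BUFFER, where the "pointer" argument is already really a byte offset; the proposed GL3 call simply takes an array of such offsets, one per group. A sketch against plain GL 1.5 (indexVBO is assumed to be an index buffer created earlier):

/* With an ELEMENT_ARRAY_BUFFER bound, the last argument of glDrawElements is
   interpreted as a byte offset into that buffer, not as a client pointer. */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO);

/* first group: 36 indices starting at byte offset 0 */
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (const GLvoid *)0);

/* second group: 24 indices starting right after the first group */
glDrawElements(GL_TRIANGLES, 24, GL_UNSIGNED_SHORT,
               (const GLvoid *)(36 * sizeof(GLushort)));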

PaladinOfKaos
08-15-2007, 10:02 PM
Originally posted by Korval:

The thing is, most of that slowness is from app->driver->hardware->driver->app jumps, not from the actual logic involved in changing the render target.
No, it is not.

Changing an FBO into a texture (or, really, unbinding the FBO at all) requires a hardware stall. It must flush the entire pipeline, flushing the pixel cache (so that it can write everything out to the texture). Only then can any texture accesses (like vertex texture accesses, which are now quite possible) be made from that texture.
The FBO thing was just to explain how to emulate blend shaders in software - I wasn't very clear on how that translates into an actual implementation.

When a primitive goes through a VS, it doesn't necessarily go straight to a FS - there might not be any execution units available, so it gets buffered somewhere. Same between VS and GS, or GS and FS. That same buffering mechanism can be used post-FS and pre-BS, passing each fragment (or set of fragments) to the blend shader once an execution unit is available to do the processing.


Originally posted by Korval:
How would you even define an "object"? Even a primitive (triangle-strip/fan, etc) can still be overlapping, which would pose a problem.
Objects are whatever you want. My blend shader method ignores depth issues, just like the current fixed-functionality blending does. I have no idea how to efficiently implement that on current hardware.

elFarto
08-16-2007, 01:25 AM
Originally posted by Jon Leech (oddhack):
indices probably isn't the best name for this parameter, since what it really is, is a list of base offsets within the index array.
It's a very bad name. Try 'offset', 'baseOffset', 'start' or 'startIndex'. I prefer the latter 2, as the first 2 give the impression they are for the long sought after feature of adding a specific value to each index.

Regards
elFarto

Overmind
08-16-2007, 02:27 AM
const sizeiptr *indices
If this is really an offset, wouldn't it be better to use an integer type (e.g. sizei) instead of the pointer type sizeiptr?

Humus
08-16-2007, 09:03 AM
Originally posted by PaladinOfKaos:
Doing programmable anything on G80 or R600 should be relatively easy - just route the data through another set of stream processors, and bypass whatever fixed-function stage is being used.
I think you have an overly optimistic view of the flexibility of hardware.


Originally posted by PaladinOfKaos:
In both cases, it's not that hard to emulate it with FBOs, but flipping FBOs around every time an object gets rendered would be slow.
It would be far worse than "slow", because you'd have to repeat this procedure for every triangle. We would probably be talking about minutes per frame in many cases.

Humus
08-16-2007, 09:09 AM
Originally posted by Overmind:

const sizeiptr *indices
If this is really an offset, wouldn't it be better to use an integer type (e.g. sizei) instead of the pointer type sizeiptr?
Actually, sizeiptr is an integer type. It's an integer the size of a native pointer on that system. So on a 32bit system it would be a 32bit integer, and on a 64bit system it would be a 64 bit integer. So this choice makes perfect sense to me.

SLeo
08-16-2007, 09:09 AM
shaders/GPU :)

Overmind
08-16-2007, 12:15 PM
Actually, sizeiptr is an integer type.
Ah, ok, I misinterpreted the type name. I thought "sizeiptr" means "sizei*". Of course, if it really means "intptr_t", then it makes sense ;)

Humus
08-16-2007, 02:39 PM
From glext.h:


#ifndef GL_VERSION_1_5
/* GL types for handling large vertex buffer objects */
typedef ptrdiff_t GLintptr;
typedef ptrdiff_t GLsizeiptr;
#endif

Cgor_Cyrosly
08-16-2007, 03:27 PM
Is there a mechanism to avoid recomputing a constant expression in a GLSL shader for every vertex, geometry primitive or fragment? For example: when we drive an animation with the cos function through a uniform variable named "frame", the value just cycles through the range [-1,1]. Does it have to be recomputed every frame for every vertex, or is there a constant stack/register that saves these results for each cycle period?
Also, will OpenGL 3.x or OpenGL 4.0 continue to appear in the future?

Cgor_Cyrosly
08-16-2007, 03:31 PM
Will it be possible to map GLSL code to assembly code in GLSL 3.0, or can the "asm" keyword be used? Thanks.

Korval
08-16-2007, 03:53 PM
Is there a mechanism to avoid recomputing a constant expression in a GLSL shader for every vertex, geometry primitive or fragment? For example: when we drive an animation with the cos function through a uniform variable named "frame", the value just cycles through the range [-1,1]. Does it have to be recomputed every frame for every vertex, or is there a constant stack/register that saves these results for each cycle period?
Let me make sure I understand what you're asking.

You have a uniform named "frame". You use that uniform to compute another variable using the "cos" function.

I don't understand what it is that you expect glslang to do for you. The uniform "frame" is not constant; it simply happens to not be varying for the current frame. If you want to have a value that only gets updated every frame, you should compute it on the CPU. So you should have two uniforms: "frame" and "cosOfFrame", where you do the math to compute it yourself on the CPU.

No, glslang is not going to figure out that some value happens to be constant right now and precompute it. That's your responsibility.
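In code, Korval's suggestion is simply the following, using plain GL 2.0 calls (the uniform names are made up for the example):

#include <math.h>
#include <GL/glew.h>   /* or any other header/loader exposing GL 2.0 entry points */

/* Fold the per-frame constant on the CPU and hand the shader the result,
   instead of recomputing cos(frame) for every vertex/fragment. */
void update_frame_uniforms(GLuint program, float frame)
{
    glUseProgram(program);
    glUniform1f(glGetUniformLocation(program, "frame"), frame);
    glUniform1f(glGetUniformLocation(program, "cosOfFrame"), cosf(frame));
}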

Cgor_Cyrosly
08-16-2007, 04:17 PM
I mean that when I update the uniform variable on the CPU, e.g. when frame goes from [0,2PI] to [2PI,4PI], ... over these periods the values of cos(frame) repeat. Could a value sequence for one period automatically be applied to every following frame range after the first computation? Thanks.

dorbie
08-16-2007, 09:42 PM
D3D 10 is dead on arrival, because it is tied to Vista and Vista is a failure on par with Windows Me. Today version 9 is the highest revision of DirectX you can get on Windows XP as Microsoft increases its forecasts for XP sales this year and companies like Dell rapidly retreat from their original Vista everywhere strategies.

Microsoft has overplayed its hand and gotten bitten this round and I have to laugh at anyone who hypes D3D 10 in this environment.

I almost feel sorry for the IHVs who've been completely shafted by Microsoft's manipulations on D3D 10 designs, but then I remind myself that they did it to themselves by allowing Microsoft to dominate the API essential to their own future as they alternately curried favor with the behemoth.

At least they can still use OpenGL to ship advanced features on the largest platform out there for the foreseeable future; Windows XP.

As for the D3D developers, you'd better prioritize that D3D9 code-path for now, that's where your customers will be and of course Vista REQUIRES D3D 9 for the desktop, not version 10. I really have to take my hat off to Microsoft, not even I thought they could stuff things up like this, it would have been SO easy to avoid this, like NOT trying to artificially leverage the future of their graphics platform to sell their flawed operating system. But they did it, it's almost reminiscent of the time they shafted hardware designers, developers and their own customers by yanking the MCD.

Perhaps the IHVs will finally understand the position they are in and take control of their own future. Getting used as lubricant just so Microsoft can increase Vista penetration can't be much fun, but just remember NVIDIA (et.al.), you volunteered for this.

wizard
08-16-2007, 11:55 PM
On the spot there dorbie. There's this general opinion that one isn't better than the other, but in the light of the direction MS is taking, times might be a-changing :) But we may hope they learn from this...

Komat
08-17-2007, 01:20 AM
Originally posted by Korval:
You have a uniform named "frame". You use that uniform to compute another variable using the "cos" function.

I don't understand what it is that you expect glslang to do for you.
From the description it looks to me like he expects something similar to preshaders (http://msdn2.microsoft.com/en-us/library/bb206299.aspx#PreShaders_Improve_Performance) from the DX effect framework. When they are used, the shader compiler will pull out calculations which depend only on uniform values (e.g. the cos(frame)) from the GPU shader and generate CPU code which will calculate them into new uniform values (e.g. uniform float cos_frame) which are used by the GPU shader instead of the calculation. The CPU code is run during the draw call before the GPU shader is executed.

Korval
08-17-2007, 02:18 AM
Vista is a failure on par with Windows Me.
It most certainly is not.

According to the Valve survey, approximately 5% of Valve gamers have Vista. That's a huge amount (particularly for gamers, who don't tend to upgrade their OS unless they must), considering that it's been out for less than a year. And that it's the first revision of a Microsoft OS.

Nobody was terribly keen on getting XP before Service Pack 1 either. Adoption rates were initially rather slow. As time progressed however, and the various SP's for XP were released, and the advantages of XP were made both apparent and obvious, people switched when the opportunity presented itself.


like NOT trying to artificially leverage the future of their graphics platform to sell their flawed operating system.
Far be it from me to defend Direct3D, but your facts are grossly in error.

First, D3D 10 isn't just D3D 9 with more stuff. Microsoft made fundamental changes in the very nature of its driver architecture. And they made them for very good reasons. Vista has a very good graphics driver architecture, while XP's graphics driver architecture was basically no different than whatever Win95 was doing.

So there was nothing artificial about the restriction of D3D 10 drivers to Vista. Now obviously, the featureset of D3D 10 could have been backported to D3D 9 (or a version of the D3D10 API could run on XP). And it is just as obvious that Microsoft ought to have done that. But the drivers themselves could not have been made to run on XP.

Second, Vista is no more flawed than XP. Indeed, it's a lot less flawed. Yes, it's slower, but that's on mid-grade hardware. Drop in 4GB of RAM, and you'll find that the Vista 4GB machine runs rings around the XP 4GB machine. Vista was designed to be a great OS for the average computer in 2009. Which makes it an OK OS for the average computer in 2007. Personally, I'm glad that Vista will be waiting for me when the time comes to adopt it.

Does it have bugs? Of course it does. That's what service packs are for.


Perhaps the IHVs will finally understand the position they are in and take control of their own future.
I don't know what IHVs have to do with any of this. They were a third party to what was going on. A Vista graphics driver requires X (in this case, D3D 10 support). What, did you expect them to simply not offer D3D 10 support? That'd be silly and stupid.

The only valid complaint you have is that D3D 10-specific hardware features should be available on XP through D3D. Anything else is a combination of bile and misinformation.

Zengar
08-17-2007, 02:44 AM
Must agree with Korval on this. I have used Vista since it appeared, and it is by far the best OS I have ever seen. Still it is true, it may take some time till everyone has switched from XP. I believe that GL3.x has very good chances to become very popular, and I hope the release date won't be too late. IMHO, another big advantage over D3D is the clearer and easier to use API.

Jan
08-17-2007, 03:01 AM
Have you actually tried Vista? The user interface is a mess, totally inconsistent. It gave me headaches navigating through this completely idiotic system. For a simple user, there might not be that much of a difference. For an administrator (and we all are), who tries to set it up, it's a nightmare.

Even Linux was more intuitive to me and i'm a hardcore XP user.

Jan.

Zengar
08-17-2007, 03:04 AM
I have no idea what you are talking about. The user interface is very well done IMO, when compared to XP. The search function is a life saver.

zed
08-17-2007, 03:14 AM
now of course if www.opengl.org (http://www.opengl.org) had a (non-GL 'my pick'/lounge/offtopic) forum this part could be shifted there without disrupting this topic (btw good to see you here as well wizard, 2 posts in one night; i'd say once in a blue moon but of course this is a rarer event)

Zengar
08-17-2007, 04:24 AM
It is true, the forum badly needs restructuring...

k_szczech
08-17-2007, 04:24 AM
Yeah, Zed is right. Whenever there's discussion over OpenGL 3.x there will always be the same discussion DX vs GL, XP vs Vista, MS vs free world, and so on.
When ARB members started this topic I believe they hoped for constructive discussion of the OpenGL API. What they got is mostly offtopic.
Think for a moment how long we would have to wait for OpenGL 3.x if ARB meetings looked like this.
So, let's discuss OpenGL 3.x from now on, shall we? :)
I'm not going to be part of this discussion anytime soon since I had no time to read anything about GL 3.x recently :( - my current job is an overkill.

wizard
08-17-2007, 05:03 AM
Korval: The driver architecture doesn't actually have much to do with D3D10. D3D is still just an API. The graphics memory is virtualized even for OpenGL drivers (which is the biggest gain + user mode drivers). I too like the Vista driver architecture and Vista in general. Thus I have to take back a bit of my ranting comment up there.

zed: I'm gaining on you now ;)

dorbie
08-17-2007, 09:47 AM
Korval, you offer opinion and say my facts are in error then make some silly remark about DX10 being radically different. That does not fundamentally contradict anything I said.

Why don't you go ask NVIDIA what they think of the situation. Or ship a D3D10 exclusive title and get back to us on the sales numbers.

The reason I posted my comment was the DirectX 10 comments that this thread attracted. The fact is that the D3D 10 situation is an unnecessary and intentionally inflicted disaster for graphics card makers and graphics software developers. It makes the case for OpenGL like no other incident in recent memory.

They've all had their wagon hitched to Vista for no sound reason and with absolutely no choice in the matter, and the only way around this is OpenGL.

Jan
08-17-2007, 10:56 AM
k_szczech: I'd LOVE to discuss the OpenGL 3.0 API with you. Give me a spec and i will discuss it to death!

In the meantime i really don't care what this thread is actually about.

Jan.

Korval
08-17-2007, 11:04 AM
From the description it looks to me like he expects something similar to preshaders from the DX effect framework.
Well, there is the glFX framework, due to arrive sometime in 2008. Which I should probably comment on, since the rest of this post is about D3D 10 and Vista.

Having looked at the glFX slides, I have to say that this is almost certainly the first FX framework that I have ever even considered wanting to use. The reason: no file format.

Most FX frameworks are predicated on some kind of file. They really like them for some reason. Oh, you might be able to back-end file creation and hook into the API manually. But rarely is the API actually well-designed for this sort of thing; usually, you'd be better off creating a text file in memory and letting it parse it itself.

Now, we actually have an FX framework designed for runtime use. So I'm no longer "encouraged" to use their files. Which means I get to use my shaders and programs the way I want to, with a minimum of interference. This is good.


The fact is that the D3D 10 situation is an unnecessary and intentionally inflicted disaster for graphics card makers and graphics software developers.
And this is why I bothered replying to your hate-filled vitriol to begin with, throwing the thread off-topic.

If I can prevent even one person from buying into this ridiculous line of thinking, then it was worth my time.

The only thing IHVs have to deal with is a new driver model. Hardly an insurmountable challenge. Unless you can show me actual proof that the new Vista graphics driver model is actually bad, or that the D3D 10 API is actually sufficiently painful to implement that it causes IHVs significant difficulties, I will consider your commentary in this direction to be nothing more than mindless Microsoft/Vista hate.

As for game developers, yes, they have to develop for D3D9 and D3D10 as entirely separate entities. Then again, if they're doing cross-platform development, they'll have to throw OpenGL onto the pile too. Having to develop for two API's isn't the end of the world. Console developers have it much worse when porting between 360 and PS3. They not only have to deal with two different APIs, they have to deal with two different CPUs that want to work in two entirely separate ways.

In short, it's nothing game developers haven't done before. Unnecessary complexity? Sure. But that's the way it works. And it's hardly an onerous burden.

So unless you have some real proof that splitting the API like this actually qualifies in a reasonable person's rubric as a "disaster", I will consider your commentary on the subject as overblown rhetoric from someone with a preconceived notion to dislike the product from day 1.

Leadwerks
08-17-2007, 11:20 AM
I agree that recent developments with DirectX are going to make OpenGL 3 much more appealing to developers over the next year or so. I believe we will see a shift away from DirectX back to OpenGL, though I can't say just how strong the effect will be. To be blunt, I think this is more due to MS screwing up than OpenGL doing anything new.

The recent announcement that all DX 10 hardware is incompatible with DX 10.1 is what really drives it home. At first I thought, "Oh DX 10.1 isn't going to be much different, it's not a big deal". But the important point is current DX10 hardware is incompatible with all future developments of the DirectX API. Any graphics card a consumer buys right now is forever stuck in 2007 technology, as far as DirectX goes. And DX10 was sold as a major upgrade in hardware & software that would be a little expensive, but worth it, because this was something new and different that everyone needed to upgrade to.

I'm still not clear on exactly what OpenGL 3 adds to our toolset. It's good that outdated ideas like vertex arrays are getting culled. Maybe it means ATI will get their act together and produce a set of working drivers. I also trust the design has been well thought-out to handle graphics in 2007 and onwards. It's amazing that the API remained mostly the same since 1992.

I am very happy about the news, but I still don't see what OpenGL 3 adds for us.

dorbie
08-17-2007, 11:59 AM
The fact is that the D3D 10 situation is an unnecessary and intentionally inflicted disaster for graphics card makers and graphics software developers.

And this is why I bothered replying to your hate-filled vitriol to begin with, throwing the thread off-topic.

If I can prevent even one person from buying into this ridiculous line of thinking, then it was worth my time.

The only thing IHVs have to deal with is a new driver model. Hardly an insurmountable challenge.

Nonsense, DX10 was already under discussion and don't call the kettle black.

You've completely missed the point. It's not about the specifics of the driver model, it's about locking DX10 to the next generation of operating system and ending XP's life as a cutting-edge graphics platform.

Any fool can see what is going on here, it's as plain as the nose on your face. As for the merits of Vista, the market has already decided.

Instead of posting more nonsense, why don't you go ask NVIDIA what they think of being locked to Vista for D3D 10. Seriously, stop posting rubbish and go ask them.

Korval
08-17-2007, 12:56 PM
I am very happy about the news, but I still don't see what OpenGL 3 adds for us.

In terms of core hardware features, it adds nothing. Then again, that was never its purpose.

The purpose of it was to get OpenGL back to its roots as a hardware library. The purpose was to make a new API that is much easier to write functioning drivers for (well, except for glslang). The purpose was to make a new API that makes it impossible to stumble onto a suboptimal path.

Think about image format objects. Now, you can actually tell if the hardware can let you make a bilinearly filtered floating-point image. Either the format will be created or it won't.

Something similar goes for vertex array objects. If the hardware doesn't support unnormalized unsigned shorts as a vertex format, then the VAO won't be created.

What you get is the absolute certain knowledge that you are on the fast path. That the hardware is there to do what you asked.

I think that's pretty worthwhile.
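
For comparison, here is a minimal GL 2.1 sketch of the guesswork being described: today you can only probe a format indirectly, for example via a proxy texture, and even a "yes" says nothing about whether sampling or filtering that format stays in hardware. The proposed GL3 format objects replace this with create-or-fail. GL_RGBA16F_ARB comes from ARB_texture_float (glext.h); the function name below is purely illustrative.

#include <GL/gl.h>
#include <GL/glext.h>

bool floatRGBA16Accepted()
{
    GLint width = 0;
    // Ask the driver whether it would accept a 256x256 RGBA16F texture.
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA16F_ARB,
                 256, 256, 0, GL_RGBA, GL_FLOAT, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    return width != 0;   // the driver zeroes the size if it rejects the format
}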


Seriously, stop posting rubbish and go ask them.

Please. They would reply with a PR-based response, "We are happy with Microsoft and Vista... yadayadayada".

Furthermore, you're the one making the ludicrous claims. Why don't you ask them and tell us what they say?

Brolingstanz
08-17-2007, 12:59 PM
Sometimes I really long for the simplicity of Pong.

Komat
08-17-2007, 02:01 PM
Originally posted by Leadwerks:

The recent announcement that all DX 10 hardware is incompatible with DX 10.1 is what really drives it home.
The DX 10 hardware can be used by the DX 10.1 API; only some new features will not be available, because the hardware does not support them. This will almost certainly be the same for future DX versions coming during the next few years.

wizard
08-17-2007, 02:35 PM
First of all. Please, Korval and dorbie, it's no use arguing about this.

Secondly. Komat: Do you know whether they are taking the same path with 10.1 that they said they're taking with 10, which is that the hardware has to match a baseline of the API capabilities? The idea behind this was that developers could expect the system to support the whole set of capabilities of the API (excluding a few minor things like some color formats) without having to do a huge number of tests. If this is the case I can actually see 10.1 (or at least 11) not supporting "old" hardware... But then, I'm not a DX decision maker ;)

bobvodka
08-17-2007, 02:39 PM
Originally posted by Leadwerks:

The recent announcement that all DX 10 hardware is incompatible with DX 10.1 is what really drives it home.
Which was always the plan, and has been since before DX10 was released.

Each DX release from here on out makes guarantees about a set of core functionality that the hardware can support, beginning with DX10, which sets the benchmark. DX10.1 adds a few extra features to the mix, DX10.2 will do the same, and so on.

API-wise, nothing changes (afaik, there might be some minor changes I guess); all it gives you is a guarantee that certain features are there if you detect a DX10.1 driver. Consider it a more strictly enforced version of the caps bits from DX9.

So, games will still be DX10; it's just that with DX10.1 and onwards certain features can be used which don't exist in non-DX10.x hardware. Given the amount of shouting about this you'd think the sky was falling or something... :rolleyes:

MZ
08-17-2007, 03:12 PM
Originally posted by Korval:
First, D3D 10 isn't just D3D 9 with more stuff.

No more, no less than D3D 9 is just D3D 8 with more stuff.

Microsoft made fundamental changes in the very nature of its driver architecture.

Why should I care? 3D graphics is not about "driver architectures". You're just echoing Vista marketing messages.

So there was nothing artificial about the restriction of D3D 10 drivers to Vista

Who cares about the drivers? W2K drivers didn't work with W95 either. I can't remember anyone complaining about that. The API is all that matters.

Now obviously, the featureset of D3D 10 could have been backported to D3D 9

It should not have been "backported".
It should have been developed from the beginning to work on both XP and Vista. With D3D9 it succeeded. There are no excuses for D3D10.

MZ
08-17-2007, 03:15 PM
Originally posted by Komat:

Originally posted by Korval:
You have a uniform named "frame". You use that uniform to compute another variable using the "cos" function.

I don't understand what it is that you expect glslang to do for you.

From the description it looks to me like he expects something similar to preshaders (http://msdn2.microsoft.com/en-us/library/bb206299.aspx#PreShaders_Improve_Performance) from DX effect framework.

Just what I wanted to say.

Originally posted by Korval:
No, glslang is not going to figure out that some value happens to be constant right now and precompute it. That's your responsibility.

Says who? This may be true for the glslang in your parallel universe, but not for the glslang I know. Here, we don't have a "Thou shalt not employ preshaders in your GLSL implementation" law in our spec.
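
Whether or not a driver folds this for you, the "preshader" work can always be done by hand: anything that depends only on uniforms is evaluated once per frame on the CPU and uploaded as its own uniform. A minimal sketch follows; the names "frame" and "cosFrame" follow the example under discussion and are purely illustrative, and GLEW is assumed only to reach the GL 2.0 entry points.

#include <cmath>
#include <GL/glew.h>

// The GLSL side would declare:  uniform float cosFrame;  and use it directly,
// instead of computing cos(frame) per vertex or per fragment.
void updatePerFrameUniforms(GLuint program, float frame)
{
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "cosFrame");
    if (loc != -1)
        glUniform1f(loc, std::cos(frame));
}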

Leadwerks
08-17-2007, 03:21 PM
The headlines about "All DX hardware is now obsolete" did seem a bit exaggerated. Thank you for explaining the situation.

I guess Shader Model 4.0 is really the technology to be excited about, and OpenGL 3 is just a clean-up that will ensure better drivers in the future.

Sounds good to me.

dorbie
08-17-2007, 03:24 PM
korval, what ludicrous claims? The facts are there for anyone to see.

Fact: D3D 10 is Vista only.

Fact: Windows XP is now, and is projected to remain, the dominant OS, with Microsoft increasing sales projections for XP as Dell et al. reintroduce the OS due to massive demand.

Fact: Vista's minimum requirements are actually D3D 9.

Fact: NVIDIA et al. have been forced into this situation by our favorite OS monopoly player; they'd love to have the option of leveraging D3D 10 on XP.

You know, it was bad enough when D3D was Microsoft-specific, but now D3D 10 is Vista-specific, and by far the most pernicious aspect of this is Microsoft's use of their driver control to restrict new functionality to Vista in an attempt to drive customers to Vista at the expense of 3D hardware and software developers.

The only surprising thing about this is that ANYONE would be oblivious to what's going on.

If you take issue with these facts then state with specificity where you disagree.

When I say ask them, I don't mean call up a spin doctor in their PR department.

Komat
08-17-2007, 03:34 PM
Originally posted by wizard:
Do you know whether they are taking the same path with 10.1 that they said they're taking with 10, which is that the hardware has to match a baseline of the API capabilities?
Yes. In the 10.1 API there are feature levels (currently 10.0 and 10.1, as expected) which you request during device creation and which give you guarantees on supported features.



If this is the case I can actually see 10.1 (or at least 11) not supporting "old" hardware.

There are two versions: one is the version of the API and one is the feature level.

Obviously 10.0-level hardware cannot support the 10.1 feature level, so if the application requires it, it will not run. This is the same as an application requiring some feature (e.g. Shader Model 3) in DX9.

The other thing is that an application written using the DX10.1 API can use DX10 hardware. It only needs to avoid using unsupported parts of the API. This is an important difference from the DX9 to DX10 boundary, where the DX10 API cannot use DX9 hardware.
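
A hedged sketch of what that looks like in practice (written against the announced 10.1 interfaces, not verified against a shipping SDK): the feature level is requested at device creation, so a 10.1 application can still drive first-generation DX10 hardware by asking for the lower level. Error handling trimmed.

#include <d3d10_1.h>

ID3D10Device1* CreateDeviceAtLevel10_0(IDXGIAdapter* adapter)
{
    ID3D10Device1* device = NULL;
    HRESULT hr = D3D10CreateDevice1(
        adapter,
        D3D10_DRIVER_TYPE_HARDWARE,
        NULL,                       // no software rasterizer module
        0,                          // creation flags
        D3D10_FEATURE_LEVEL_10_0,   // run on DX10.0-class hardware
        D3D10_1_SDK_VERSION,
        &device);
    return SUCCEEDED(hr) ? device : NULL;
}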

Komat
08-17-2007, 03:52 PM
Originally posted by MZ:
No more, no less than D3D 9 is just D3D 8 with more stuff.
Yes, and OGL3 is just a slightly modified OGL2. DX10 is a significant change in approach to DX, just as OGL3 is a significant change in approach to OGL.



Why should I care? 3D graphics is not about "driver architectures".
3D graphics is about performance and efficiency, and the driver architecture is an important part of that.



It should have not been "backported".
It should have been developed from the beginning to work on both XP and Vista.

And possibly sacrifice performance or features of the new API to have compatibility with both systems? Additionally, no one is paying MS to make such big improvements to a 6-year-old OS, so why would it do that? They are not a charity.

Korval
08-17-2007, 04:43 PM
And possibly sacrifice performance or features of the new api to have compatibility with both systems.

It's only performance. Obviously the features work in XP, as GL 2.1 has extensions to expose them all. But the D3D 9 API still runs under the WinXP driver model, which has certain issues in it.


Windows XP is now and is projected to remain the dominant OS with Microsoft increasing sales projections for XP as Dell et.al. reintroduce the OS due to massive demand.

For how long? A year? 2?

Nobody expected Vista to appear and, inside of 9 months, own the world. Not even Microsoft. It took 3 years before XP topped 98 as the primary Windows OS. At worst, Vista is underperforming; calling it a failure is hyperbolic.


NVIDIA et.al. have been forced into this situation by our favorite OS monopoly player, they'd love to have the option of leveraging D3D 10 on XP.

You haven't explained what this mysterious "situation" is. What situation? In what way does this harm them?

bobvodka
08-17-2007, 05:58 PM
I would argue a lot of Vista's sales under-performance is down to a lot of arm waving and 'the sky is falling' from various tech reviews ("wah! it's not like XP wah!") and the belief that Vista NEEDS SP1 to be usable; someone said pretty much that on a forum, which surprised me as I'd been using Vista as my primary OS for about 4 months at that point and now it is my ONLY OS.

(and for the record I'm not some n00b to all this, I've used 95, 98, ME, 2K and XP, all pretty much from launch; Vista has been the nicest of all for me in that time)

I'm done derailing the thread for now :D

dorbie
08-17-2007, 06:35 PM
korval,

In saying I have not explained the situation NVIDIA et al. are in, and in pretending they are not harmed, you are either pretending to be oblivious to what I have written or your powers of comprehension exclude you from the discussion. Either way, you've just shot your credibility.

The period of time IHVs have to endure their role as Microsoft's lubricant for Vista market penetration is hardly the issue. They are being used and are suffering for it. Hopefully they will learn the lesson.

Next time they work for a couple of years on a hardware design hopefully they won't be artificially feature restricted to a disappointing OS launch.

dorbie
08-17-2007, 06:43 PM
(and for the record I'm not some n00b to all this, I've used 95, 98, ME, 2K and XP, all pretty much from launch; Vista has been the nicest of all for me in that time)

Your Vista experience is absolutely irrelevant to IHVs being feature locked to a launching OS because the company that sells that OS controls the API used to expose those features.

Even if a poster to this thread would like to pretend otherwise, hardware makers do not appreciate the feature set of their next generation of high end graphics hardware design getting exclusively tied to a particular operating system release with limited uptake for no compelling reason, other than Microsoft's bottom line.

Unfortunately that is where they find themselves today.

bobvodka
08-17-2007, 07:08 PM
That's good, because I wasn't commenting on any such thing; instead I was commenting on its sales performance, as referenced in Korval's post before. Honestly, I thought the opening 8 words would have given that away; however, apparently saying 'sales under-performance' wasn't clear enough. You have my apologies and I will try to be clearer in future.

dorbie
08-17-2007, 07:21 PM
No, no, allow me to apologize for assuming your post was at least peripherally related to OpenGL.

Korval
08-17-2007, 07:31 PM
They are being used and are suffering for it.

In what way?

Because "nobody" is using their advanced hardware? Of course not. Do you really think that sales of GeForce 8xxx's are in any way tied to DX10 or Vista?

The Radeon 9700 didn't give ATi a tremendous advantage in the marketplace because it was the first DirectX 9 card. It gave them a tremendous advantage in the marketplace because it absolutely murdered nVidia's best (at the time) almost 2:1 in performance. Performance on Direct3D 8 games.

Gamers ultimately care about one thing: how fast does it go. G80-class hardware goes faster, for the equivalent money, than most other hardware. It will be sold more than other hardware, regardless of the linking of Direct3D 10's featureset to Vista. It will be so because these cards are simply faster than previous generation's cards.

So, while I don't imagine that IHVs are enamored with Microsoft's decision, they are also far from up in arms over it. They're not losing sales because of it. They're not losing developers because of it. So there's no significant harm.


No, no, allow me to apologize for assuming your post was at least peripherally related to OpenGL.

Do I really need to quote the relevant parts of your prior posts where you formally introduced the concept of Vista being "flawed" and a failure on the order of Windows ME? That made it a material point of discussion, allowing evidence (anecdotal or otherwise) to be brought to counter your baseless assertion.

You brought any off-topicness to the thread, so you have no right to be sarcastic about someone continuing the discussion you created.

dorbie
08-17-2007, 08:06 PM
Korval, instead of talking crap why don't you try discussing it with the IHVs, ohh, say an NVIDIA employee or two? Oh wait, by your own admission you have no contacts there who won't blow marketing smoke up your ass. You really don't know what the heck you are talking about.

I'm not interested in your opinion on what IHVs think of D3D 10; you've shown what that's worth. D3D 10 is important to IHVs; that's why they spent 2-3 years and hundreds of millions of dollars investing in it. It is telling that your one sound argument is that D3D 10 features are irrelevant. I'll only concede that it is rapidly being made irrelevant by Microsoft's bungling, which is MY original point.

As for my contributions to this thread, they are directly related and relevant to graphics APIs. When I made the same assumption about bobvodka's intent, he, not I, claimed his post wasn't.

The Windows Vista launch has been a massive disappointment, and this has a direct impact on D3D 10's viability as an API. You can further illustrate how disconnected you are and argue against this, or you can learn something and understand that the Vista debacle and the D3D10 lockout on XP are hurting companies like NVIDIA, and it didn't have to be this way.

Korval
08-17-2007, 10:05 PM
Korval, instead of talking crap why don't you try discussing it with the IHVs, ohh, say an NVIDIA employee or two?

Because I'm not the one making the claim.

You're basically claiming that nVidia and ATi (though I've noticed that you pretend they don't actually exist) are being brutally reamed by Microsoft's decision to only support D3D 10 features in Vista. This claim requires support.

Not, "Hey, I'm an insider, I've talked to some people, so I know what I'm talking about." That's not actual support. That's just a statement. You fail to state who these people are, what they do, and what department they represent. More importantly, you don't even provide their statements.

No publicly available information supports your assertion that this situation is significantly bad for IHVs. So, either you can provide support for your position, or your position stands as an unsupported assertion that goes in contravention of established facts in evidence.

It is not the job of the one side of a debate to find information to support the other side. If you cannot or will not provide actual support for your claim, that is your choice. It makes your claim baseless. But continuing to suggest that your claim is more factual than any other person who has a keyboard and can therefore claim to have inside knowledge is of no value to this forum.

Brolingstanz
08-17-2007, 11:32 PM
Perhaps if the IHVs were left to their own devices they'd prefer to compete on features rather than price/performance, which again might lead to a new batch of semi-related extensions from each house and more work for developers to implement several code paths, as opposed to a guaranteed feature set in Mt. Evans. Now I know from frequenting these boards that most folks would sooner volunteer for shoveling horse**** than string that banjo ;-)

Whatever one may think of Microsoft, their position of influence has led to a pretty standard feature set for the newest hw, and it's not just for Vista. With the G80, R600 and the arrival of Mount Evans, we'll all be able to enjoy the allness of some mighty fine hardware and one hell of a graphics API (not that we don't have that already).

V-man
08-18-2007, 12:43 AM
Originally posted by dorbie:
Vista debacle and D3D10 lockout on XP is hurting companies like NVIDIA and it didn't have to be this way.

I bet there has been lots of argument between these companies and MS about this decision.
Blame it on game companies for sticking with DX. They are the ones kissing MS's ass

wizard
08-18-2007, 01:35 AM
Originally posted by V-man:

Originally posted by dorbie:
Vista debacle and D3D10 lockout on XP is hurting companies like NVIDIA and it didn't have to be this way.

I bet there has been lots of argument between these companies and MS about this decision.
Blame it on game companies for sticking with DX. They are the ones kissing MS's ass

Now there's a good comment :) How successful an API is really depends on its usage. MS has had such good learning resources associated with DX that it's hard to beat. But to bring the discussion more on track, we're seeing this happening with OGL as well, with the SDK and all. I really have high hopes that OGL will see much broader use in the future (other than CAD and DCC). Also, GL3 caters to people who are used to OO-style programming (which most of us should be), which is a good selling point.

zed
08-18-2007, 01:44 AM
It took 3 years before XP topped 98 as the primary Windows OS

3 years!!! i didn't think it took that long, but whatever. the scary thing (for some) is users had an extremely valid reason for migrating from win9x->winxp: stability.
the benefits of upgrading from winxp->vista are far far less. d3d10 is one artificially induced benefit.
now if it took 3 years to go from 98->xp im guessing itll take at least 5 to change over to vista (+ by then a newer windows will be out, with d3d11/12 which perhaps wont be backward compatible)

after the sales fiasco of halo2 for the pc (in case u didnt hear, it bombed). remember MS hyped it to buggery a year or so ago as a vista-only title. im sure many game companies around the world are re-evaluating d3d10-only titles; now if only we could convince them of the value of gl.


Blame it on game companies for sticking with DX. They are the ones kissing MS's ass

it all comes down to image + having that d3d10 sticker, gl needs to market itself the same way

dorbie
08-18-2007, 01:45 AM
korval, I'm done with you. Your position is rather silly and there are enough well-established facts for all to see. But keep offering your opinion in defiance of them if it puffs your ego.

V-man, in the sense that the face argues with the jack-boot yes, lots of argument :-).

The issue is one of leadership: yes, the developers followed MS's lead, but largely because the people who had the absolute capacity to take control of the interface abdicated, even when they were doing most of the heavy lifting anyway. Now they're paying the price.

Zengar
08-18-2007, 03:49 AM
Originally posted by Korval:

Because "nobody" is using their advanced hardware? Of course not. Do you really think that sales of GeForce 8xxx's are in any way tied to DX10 or Vista?
Yes, it actually is. Not for the 8800, whose raw performance is strong enough, but for the 8600 (and lesser) series, which need the new API to show their advantages.

Jan
08-18-2007, 03:55 AM
If the two egos continue arguing like 5-year-olds, I'd suggest closing this thread.

Dorbie, you are a moderator and you were elected for it for good reasons, a few years ago. Maybe you should take a break from this nonsense and start acting like a grown-up again.

Jan.

Cgor_Cyrosly
08-18-2007, 08:19 AM
Will physics acceleration functions be added to OpenGL 3.0? Or had the ARB best create a new API with the name "Open PL" (Open Physics Library)? ^-^

Jan
08-18-2007, 10:28 AM
You confuse OpenGL with things like DirectX. DirectX is a set of APIs, of which Direct3D is the graphics API. OpenGL is only one API, a graphics API. It is not a set of APIs for game development. It has never been the intent to do physics with OpenGL, and it never will be.

If you want a physics library, there are plenty out there; most are open source or at least free to use. Use PhysX if you need a full-blown commercial API (it's free of charge, too).

Jan.

Korval
08-18-2007, 12:04 PM
im sure many game companies around the world have reevaluating d3d10 only titles

I seriously doubt they're reevaluating that. Only because I seriously doubt any of them actually seriously considered making Vista-only games anytime in the next 3-4 years.

Oh, they'd use D3D 10, sure. They'd have a D3D 10 version of their build and all. But a Vista-only game? That was stupidity even before Vista shipped and underperformed.


Yes, it actually is. Not for the 8800, whose raw performance is strong enough, but for the 8600 (and lesser) series, which need the new API to show their advantages.

Not really. According to Anandtech (http://www.anandtech.com/video/showdoc.aspx?i=3029&p=1), basically every card that isn't an 8800 sucks at D3D10 performance. Microsoft certainly did nothing to cause this to come into being; the fact that D3D10 is limited to Vista doesn't cause nVidia and ATi's hardware to not be terribly performant at using D3D10 features.

So really, it's the IHVs' fault for making cards that are neither terribly good at D3D10 features nor terribly more performant than the previous generation's cards.

pudman
08-18-2007, 12:44 PM
The Windows Vista launch has been a massive disappointment, and this has a direct impact on D3D 10's viability as an API. You can further illustrate how disconnected you are and argue against this, or you can learn something and understand that the Vista debacle and the D3D10 lockout on XP are hurting companies like NVIDIA, and it didn't have to be this way.

Two comments: 1) Vista's launch doesn't affect d3d10's 'viability as an API', and 2) d3d10's unavailability on XP does *not* hurt IHVs. Why?

1) I believe you're conflating d3d10's 'viability' with widespread use. Sure, there won't be as many d3d10-only applications, but not because developers shun the API; rather because there's no pressing incentive to write them. The difference I imply is that d3d10 could be the best API ever (highly viable) but, in the immediate future, not highly useful due to limited proliferation.

The benefit is the potential increased adoption of gl for those who wish to utilize advanced features without regard to which OS they use. For that reason I'm not sure why you're so adamant about continuing this particular part of the discussion. What's the point?

2) IHVs couldn't care less about which API is dominant as long as people buy their product. Sure, an IHV has to spend considerable effort developing drivers for a new OS, but they do so to broaden their products' acceptance. By supporting d3d10 they can stamp their products with a 'look at me, I'm new and cool' marketing bullet. I fail to see the harm/downside to IHVs.


The issue is one of leadership: yes, the developers followed MS's lead, but largely because the people who had the absolute capacity to take control of the interface abdicated, even when they were doing most of the heavy lifting anyway. Now they're paying the price.

I'm not sure if you shifted focus here. Do you mean the general developer populace or IHV developers? From a general developer's standpoint I can see it as a pain in the ass to have to develop for d3d9 and potentially d3d10 to take advantage of certain features. Kudos to the developers that chose gl during these times. But who exactly is paying the price, and how?

I think you're seeing more controversy than actually exists.

Korval: I'm impressed. You sound like a lawyer.

zed
08-18-2007, 01:19 PM
I seriously doubt they're reevaluating that. Only because I seriously doubt any of them actually seriously considered making Vista-only games anytime in the next 3-4 years.

halo2 + shadowrun are vista only (true, MS published)
but there's also alan wake that is vista only.
perhaps others?

scary stat from valve
DirectX10 Systems (Vista with DirectX10 GPU) - 2.22% of users

2.22%!!! now these i assume are mostly hardcore gamers

personally only Longs Peak interests me, because of the backwards compatibility; this IMO is the single most important factor.
OTOH mount evans or d3d10, whilst great for experimenting/demos etc, still won't be commercially viable until 2010 at least

oc2k1
08-18-2007, 05:35 PM
There is only one question: How long is the time to plan and produce a game? Often it's much more than a year.

For developers, speed is more or less important; what really matters are only commonly available features. Exactly that is the current problem: OpenGL 2.x doesn't include the same feature set as DX9, and many extensions aren't supported even though the hardware supports them.

OpenGL 3.0 will require completely rewritten drivers, so this unspoken excuse won't be a problem in future:
"Why should we do anything for OpenGL drivers? All current games are working..."

For Nvidia cards OpenGL 3.0 will be an API cleanup, but for ATI/AMD cards there is a little hope of getting more than 0.1 FPS in future :p

The Valve statistic is completely unimportant for developers: either a project is in the starting phase and all commonly available features can be used, or a project is nearly complete and the gamers already have the needed hardware.

Please stop the DX10 vs DX9 and Vista vs XP flame wars.

Humus
08-18-2007, 06:07 PM
Originally posted by dorbie:
korval, I'm done with you. Your position is rather silly and there are enough well-established facts for all to see.

dorbie, I understand you feel strongly about this, but right now it's not Korval that appears silly. You're a respected member here, so I'm rather surprised to see you rant like this.

Now speaking as an IHV employee (at least for one more week) I can say that there's hardly any consensus about the positives and negatives of Vista, D3D10 and Microsoft. You can find opinions from all over the spectrum, everything from "yay, one driver less to write" to "OMFG, Microsoft is artificially tying our products to their broken OS!!!111oneone". Personally I'm not bothered much by their decision to tie D3D10 to Vista. D3D10 was supposed to be a clean start anyway, so if they clean-start it on the OS side too then that's a fair deal to me. My only concern is where DX11 might be heading. If DX11 still doesn't have any caps bits and doesn't allow DX10 cards to run on it, that's when I'd start protesting. Because that would mean the current transition pain isn't a one-time issue, but a continuous one (and I suspect most developers, even though they might dislike caps bits, probably prefer caps bits to separate renderers).

Korval
08-18-2007, 09:10 PM
halo2 + shadowrun are vista only (true, MS published)

As you point out, they're both owned by Microsoft.


but there's also alan wake that is vista only.

Also Microsoft published.

There are precious few non-Microsoft published games that are Vista only.


OpenGL 3.0 will require completely rewritten drivers

Not really. Under the hood, 2.1 drivers are accessing an internally-designed interface (the same one that D3D drivers access) to talk to the hardware. That part doesn't need to be rewritten.

The primary thing that needs to be rewritten is the basic interface. How you specify vertices, buffers, textures, etc. As a first pass, just to get something out there, they could even write a 3.0 implementation that simply uses 2.1 under the hood. Then, as a later optimization path, they can make it talk to the metal.


For Nvidia cards OpenGL 3.0 will be an API cleanup, but for ATI/AMD cards there is a little hope of getting more than 0.1 FPS in future

Huh? You're going to have to explain that one.

Rob Barris
08-18-2007, 09:42 PM
It would be my hope that no-one actually writes GL3 drivers by layering on top of GL2. You would lose all the potential benefits of the new object model, and I don't see an easy way to implement the new buffer object semantics on top of GL2.

Leadwerks
08-18-2007, 10:14 PM
I think he was saying the driver clean-up would hopefully lend itself towards ATI making better drivers than some of the things they have released in the past. 0.1 FPS IS a framerate I have gotten before when a shader runs in software on ATI every third Thursday of the month every other leap year.

This is a good move, it needs to be done, and it's like they're sort of meeting ATI halfway and saying "Okay, we're going to make your job easier, now please make good drivers from now on".

Hi Rob, I am in Huntington Beach.

dorbie
08-18-2007, 11:02 PM
@ Humus,

It wouldn't have been a multi-post rant if there wasn't an inane argument over the absolutely obvious. Go back to my first post and you'll see it was just a tongue-in-cheek dig at the situation (lubricant to penetration?), and there's barely anything in there to take substantive issue with (IMHO).

If you're happy with an extended wait for developers to mainstream your features, good luck; Vista takes the usual cycle and puts it on the rack. As you know, plenty of people inside IHVs take the view I've espoused, and it IS complete manipulation of the situation by everyone's favorite 800 lb gorilla.

As for D3D 10, get back to me in a couple of years when it will be both more relevant and deprecated.

For anyone to come to a thread on OpenGL 3 and boost D3D 10 under these circumstances is a joke, that is all.

Korval
08-18-2007, 11:31 PM
0.1 FPS IS a framerate I have gotten before when a shader runs in software on ATI every third Thursday of the month every other leap year.

That's the thing: glslang hasn't gotten simpler. It's still C, it still involves multiple shaders being combined into a program. It still doesn't have very many ways to tell if a shader will run in software or how to reliably prevent it on various hardware. And so on.

So, while it's definitely going to help companies like Intel support most of OpenGL better, it's not going to be a great solution for the problems most people are having that stem from glslang. Because that hasn't changed much.


It wouldn't have been a multi-post rant if there wasn't an inane argument over the absolutely obvious.

So, you're basically saying that, if nobody disagreed with you, there wouldn't have been an argument?


IMHO

Please remove the H; your opinion has not been stated in a way that is in any way, shape, or form "humble". After all, you consider your opinion to be "obvious", which hardly denotes humility.

dorbie
08-19-2007, 12:14 AM
Your picks are getting more nitty by the post.

Go pester someone else.

dorbie
08-19-2007, 02:05 AM
One of the biggest boosters of Vista has abandoned his support. I'm linking to the /. story that hit today because it has a choice quote to summarize his comments:

http://slashdot.org/articles/07/08/18/1512243.shtml

The guy even blames his previous Vista support on "something in the water" and details the numerous flaws. This is the editor of PC Magazine, and a former major Vista supporter, talking about the only operating system with support for the latest generation of graphics features via Microsoft's APIs.

It helps explain Vista's sales under-performance, but is only really germane to this discussion because D3D 10 and the associated advanced graphics features have been artificially locked to Vista by Microsoft. Ordinarily I wouldn't bother but amazingly (it seems to me) this has been denied?! :-/

This is a critical issue for graphics APIs and it underscores OpenGL's relevance as an independent cross platform graphics API. These APIs are about exposing and abstracting graphics features, it's at the very core of their purpose.

Here's a direct link to the article where a zealous Vista supporter now hammers the OS, a bitter pill for some and again reminiscent of the Windows Me debacle:

http://www.pcmag.com/article2/0,1895,2171472,00.asp

Roderic (Ingenu)
08-19-2007, 05:01 AM
If there's anyone here from the ARB/Khronos, could we have specification drafts of both Longs Peak and Mount Evans "very soon", please?

A draft watermarked as such, so that no one could confuse it with the final spec, would be better than the (interesting) snippets we've had so far, and I bet lots of people in here would appreciate it.

("very soon" = within 2-3 weeks)

knackered
08-19-2007, 09:53 AM
Originally posted by Korval:
Only because I seriously doubt any of them actually seriously considered making Vista-only games anytime in the next 3-4 years.

The upcoming Crysis will be DX10 and therefore Vista only. I know a lot of people who will be upgrading to Vista when that's released.
Anyway...carry on.

Jan
08-19-2007, 10:54 AM
No, Crysis is DX9 and DX10. They only claim it will look better with D3D10 (which is certainly done artificially, to get some bonus from MS).

That doesn't mean you would not need to at least upgrade your graphics card for Crysis ;)

Jan.

Michael Gold
08-19-2007, 11:45 AM
I'm not going to touch the debate on the quality of Vista or its driver model, except to say this: even if the first release was perfect and the driver model was perfect, it would still take years for the market to migrate. This is the reality of any product upgrade in any industry, and doubly so for operating systems, where customers need to rigorously test the entire suite of applications on which their daily productivity depends. By necessity IT departments are conservative. They live by the motto: if it ain't broke, don't fix it.

OpenGL supports the full capabilities of the latest batch of GPUs via a suite of extensions. In a sign of increased collaboration between ARB member companies, these extensions have been cross-vendor since their initial release which was concurrent with the availability of the hardware. Applications which target such functionality therefore have a platform on which to address the full market of these GPUs.

zed
08-19-2007, 01:45 PM
Personally I'm not bothered much by their decision to tie D3D10 to Vista. D3D10 was supposed to be a clean start anyway, so if they clean-start it on the OS side too then that's a fair deal to me. My only concern is where DX11 might be heading. If DX11 still doesn't have any caps bits and doesn't allow DX10 cards to run on it, that's when I'd start protesting.

i remember d3d through the years, it was always like 'the next version will just be a minor upgrade requiring a few slight changes' + then it would come out + it was practically rewritten, yet they would spin the line again: 'the next version will just be a minor upgrade requiring a few slight changes', repeat ad infinitum. so their track record aint good WRT d3d

i believe what u forecast for dx11 is how it will happen. MS need to persuade users to upgrade OSs; win3x->win9x->winXP were easy choices for the user, so tying new versions of dx or internet explorer (whatever) to the OS is a good move for them.


By necessity IT departments are conservative. They live by the motto: if it ain't broke, don't fix it.

it may be the motto but from what ive seen, IT guys love to tinker, often leading to downtime on working systems (mutter)potential improvements etc(/mutter)


Korval: I'm impressed. You sound like a lawyer.

yes well he is an alternative lifeform :)
but seriously i agree with both Korval + Dorbie (i just wish i had either's literary skills :( )

wizard
08-19-2007, 02:15 PM
Originally posted by zed:
but seriously i agree with both Korval + Dorbie (i just wish i had either's literary skills :( )

You're being very diplomatic zed :)


Originally posted by Michael Gold:
OpenGL supports the full capabilities of the latest batch of GPUs via a suite of extensions. In a sign of increased collaboration between ARB member companies, these extensions have been cross-vendor since their initial release which was concurrent with the availability of the hardware. Applications which target such functionality therefore have a platform on which to address the full market of these GPUs.

OGL just has to be made media sexy, that's all :)

Simon Arbon
08-21-2007, 12:56 AM
I have a dual-boot XP/Vista system so just to get some actual figures to compare them i ran 3DMark06 on both, before and after upgrading my RAM.
(Core2Duo, DDR2 ram, with 8800GTX)
Vista64 with 512MB: 6651
Vista64 with 4096MB: 8062
XP32 with 512MB: 8124
XP32 with 4096MB: 8244

3DMark06 uses DX9.

Looking through Futuremark's website shows a very obvious trend, with XP systems way ahead of every Vista system with similar hardware.
The individual tests show XP ahead on frame rate for SM2, SM3 and even CPU rendering.

Our software boxes are going to be saying "Recommended operating system: XP" for the foreseeable future, which is just one of the many reasons we will be sticking with OpenGL.

glDesktop
08-21-2007, 02:05 AM
I know this may sound stupid or crazy.

But if there is one very basic thing I really want this time round, it is proper cross-platform vertical sync.

This has always been a problem with ATI and other cards. NVIDIA cards always provide a solution to this.

But the manufacturer of the card should not really dictate the behaviour with OpenGL; ideally they should all behave the same.

Without vertical sync, the CPU usage goes through the roof and power is drained. A workaround to this is using a short sleep call after every frame, but this can interfere with the smoothness of the frame-rates.

Direct3D has always had excellent vertical sync, even with ATI cards; why can't OpenGL?

Yes, I know it is very basic, but I think this needs to be desperately sorted out this time. The future of OpenGL is not just cutting edge graphics.

We need the very, very basics put right!
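
For reference, a sketch of the only cross-vendor control we have today on Windows: the WGL_EXT_swap_control extension. Whether the driver exposes it, and how it behaves, is exactly the inconsistency being complained about here. A current GL context is assumed.

#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

bool enableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (!wglSwapIntervalEXT)
        return false;                     // extension not exposed by this driver
    return wglSwapIntervalEXT(1) == TRUE; // at most one buffer swap per vertical retrace
}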

Simon Arbon
08-21-2007, 03:13 AM
Getting back to OpenGL 3,
my next project needs to support Mt Evans features for customers with the latest & greatest, but i also need it to run on the kids' old third-hand computer
(using the fixed-function pipeline if that's all it supports).
Pipeline 002 says "OpenGL 3.0 will be implementable on current and last generation hardware".
Could someone be a little more specific than this, ie. will it be available on everything that has both vertex & fragment shaders?

Also, how will we know if OpenGL 3 is available on the system we are running on and how do we link to its functions?
Do we still need to determine the GL_Version or extension strings and then use GetProcAddress, or will it be something simpler like a separate DLL that we can load instead of Opengl32?

Simon Arbon
08-21-2007, 03:38 AM
glDesktop: But if there is one very basic thing I really want this time round is proper cross platform vertical sync.
I strongly agree; my application is CPU-limited and i can't afford to waste cycles sending hundreds of frames a second to a monitor that is only physically capable of displaying 75.
In fact i would prefer to only update the framebuffer at 37.5Hz, showing each frame twice.
Movie film works perfectly well at a 24Hz frame rate (showing each frame at least 3 times to stop flicker).
Massive frame rates look good in magazine reviews but serve no practical purpose.

Of course we still want to keep the rendering pipeline full, perhaps a frame or 2 ahead, but we need the rendering to wait for VSync at each glFinish, and we need to know when we are far enough ahead that the CPU can go away and spend some time processing some complex AI routines.

Overmind
08-21-2007, 03:40 AM
will it be available on everything that has both vertex & fragment shaders?

AFAIK, the requirements for OpenGL 3 will be the same as for OpenGL 2, so pretty much anything supporting vertex and fragment shaders will support OpenGL 3.

I'm not sure about NPOT textures, but I expect it'll be the same as now, that is, software emulation when they are not directly supported by the hardware.

So my guess for the minimum is Geforce FX or Radeon 9500. Of course that's only a guess, I don't have any insider information ;)

Lindley
08-21-2007, 04:29 AM
Originally posted by Simon Arbon:
Massive frame rates look good in magazine reviews but serve no practical purpose.

Of course we still want to keep the rendering pipeline full, perhaps a frame or 2 ahead, but we need the rendering to wait for VSync at each glFinish, and we need to know when we are far enough ahead that the CPU can go away and spend some time processing some complex AI routines.

Not entirely true. There will definitely be cases where waiting for vsync is a bad idea: GPGPU applications, or any multipass applications, really. A single render does not necessarily correspond to a single frame.

Currently, vsync limitations appear to be handled at a level above OpenGL; maybe something in the windowing system. I'm fine leaving it like that, so long as it's not difficult to control when it should and should not be used.

Jan
08-21-2007, 04:38 AM
The problem is, OpenGL 3.0 will be _implementable_ on all hardware with vertex&fragment shader support.

That does not mean that vendors will actually do this. I could imagine that ATI and NV might only implement it for the most recent SM3 hardware and later. Why waste time implementing a new API for "old" hardware, if all games actually using OpenGL will most certainly demand recent hardware anyway (e.g. id Tech 5, IF they port it to OpenGL 3.0 at all)?

And IF you need to support this old hardware, you CAN use OpenGL 2.x. You could even support both and select the appropriate renderer at run-time.

I hope otherwise, but i don't think that OpenGL 3.0 will be a good choice for commercial apps in the next 2 years, as long as you can't dictate the required hardware for your users.

Jan.

Korval
08-21-2007, 10:27 AM
I expect it'll be the same as now, that is, software emulation when they are not directly supported by the hardware.

I wouldn't.

All an IHV has to do is fail to create an image format object for a format that you specify as requiring NPOT texture sizes.

After all, one of the primary goals of the GL 3.0 API is to stop software fallback. Either it works in hardware or it fails.


I could imagine that ATI and NV might only implement it for the most recent SM3 hardware and later.

Why would they do that?

ATi and nVidia have the most to gain from pushing GL 3.0 onto as many platforms as possible. After all, I imagine Blizzard wouldn't be too happy if people with ancient Radeon 9500's or something were unable to play StarCraft 2, even though the hardware could support GL 3.0.

Considering how much apparent effort nVidia and ATi have made in getting GL 3.0 done, it seems rather silly for them to say, "Well, it's only showing up on the higher end hardware."

Besides, the older hardware supports GL 2.0, so why not GL 3.0?

Jan
08-21-2007, 11:16 AM
Well, it's more about resources than about will. Of course they would LIKE to support OpenGL 3.0 on ALL their hardware. But then the question is how much effort that takes. And why should you invest in "old" hardware that you don't sell anymore, especially when there already is a way to use it (2.x, that is)?

They want to sell NEW hardware, so "supports OpenGL 3.0" is actually a good PR argument. And we all know that StarCraft and all the other games use 2.x, even if it will take 2 more years for them to be released. Blizzard will only switch to 3.0 when it is really well supported.

In the graphics-card market there are really more important things than supporting legacy hardware. The only thing we can hope for is that the "unified driver model" actually makes it easy enough for ATI and NV to bring OpenGL 3.0 to older hardware nevertheless.

Jan.

V-man
08-21-2007, 12:50 PM
Originally posted by Simon Arbon:
Also, how will we know if OpenGL 3 is available on the system we are running on and how do we link to its functions?
Do we still need to determine the GL_Version or extension strings and then use GetProcAddress, or will it be something simpler like a separate DLL that we can load instead of Opengl32?

Nothing has been said about this yet, or at least I haven't heard.
I can imagine that it will go through opengl32.dll with wglGetProcAddress. I say this because it seems as if there will be backwards compatibility. I think we'll still be able to use old GL calls with a GL 3.0 context.
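
To make the question concrete, this is how it works in GL 2.x today: the version and extension strings come from a current context, and post-1.1 entry points are fetched through wglGetProcAddress. Whether GL3 keeps this mechanism or ships as a separate DLL is exactly the open question; the sketch below only shows the existing baseline.

#include <cstdio>
#include <cstring>
#include <windows.h>
#include <GL/gl.h>

typedef void (WINAPI *PFNGLUSEPROGRAMPROC)(GLuint program);

void probeCurrentContext()
{
    std::printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));

    // Crude substring check of the extension string (fine for a sketch).
    const char* exts = (const char*)glGetString(GL_EXTENSIONS);
    bool hasFBO = exts && std::strstr(exts, "GL_EXT_framebuffer_object") != NULL;
    std::printf("FBO extension present: %s\n", hasFBO ? "yes" : "no");

    // Returns NULL if the entry point is not exported by the current driver.
    PFNGLUSEPROGRAMPROC pglUseProgram =
        (PFNGLUSEPROGRAMPROC)wglGetProcAddress("glUseProgram");
    std::printf("glUseProgram entry point: %s\n", pglUseProgram ? "found" : "missing");
}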

PaladinOfKaos
08-21-2007, 02:54 PM
According to the pipeline newsletters, a GL3 context can only perform GL3 calls, but you can create an old-style GL context, and share certain things (FBOs were mentioned specifically, perhaps VBOs as well) with that old context. So you can create a GL3 context, and if you, say, want to do something with a GL2 extension that isn't yet available in GL3, you can do that in your old-style context, and access the data in GL3.

Unless they changed it and I missed something.

knackered
08-21-2007, 03:58 PM
If they're sticking with this idea of supporting a GL2 and GL3 context operating on the same DC, I can see the new GL3 drivers taking a lot longer to write than they should.

Korval
08-21-2007, 05:11 PM
The last info on GL 2-3 interfacing was from here (http://www.opengl.org/pipeline/article/vol003_1/). Basically, they're thinking that you should be able to render to the screen destination in both. That is, you create a context for each, do some GL 2 rendering, do some GL 3 rendering, etc.

I have no idea how complex or simple it would be to do that in terms of driver logic.

The only actual API-level interoperability they discussed was being able to use texture objects from GL 3 in GL 2. That shouldn't be too difficult in terms of driver logic. And it's an extension (to GL 2.1), so it should happen after they have solid GL 3 drivers.
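
For reference, the GL2-to-GL2 baseline that exists today: two contexts on one DC with object sharing via wglShareLists. How a GL3 context would be created, and whether it could participate in something like this, is exactly what is still unspecified; the sketch below only shows the existing mechanism.

#include <windows.h>

bool createSharedPair(HDC dc, HGLRC& first, HGLRC& second)
{
    first  = wglCreateContext(dc);
    second = wglCreateContext(dc);
    if (!first || !second)
        return false;
    // Textures, display lists and buffer objects become shared between the two;
    // best done before the second context owns any objects of its own.
    return wglShareLists(first, second) == TRUE;
}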

microwerx
08-22-2007, 01:02 AM
Well, it's more about resources, than about will. Of course they would LIKE to support OpenGL 3.0 on ALL their hardware. But then the question is, how much effort that takes. And why should you invest in "old" hardware, that you don't sell anymore, especially, when there already is a way to use it (2.x that is).
Jan has a point, and it flows downstream to software developers. There aren't going to be commercial applications out in the near future using OpenGL 3.0 until there is a decent number of graphics cards/drivers out there that support OpenGL 3.0. It would be like the Unreal 3 engine: we heard lots and saw lots about the engine, but it took several years before there was a production game out using it. It'll be the same with OpenGL 3, because of the development time needed to ship stable code. This of course doesn't prevent independent developers from adding the feature as a patch to their existing product, but in general, it will take some time before it is out in the mainstream.

We have a different perspective though: we get our hands on the coveted specification as soon as it is hot off the press, so this is a life-changing deal, but not so for the consumers who ultimately buy the products. They have to wait until we figure out how to use the technology and incorporate it in our products.

This is the same with Vista IMHO. I hear lots of skeptics that are worried about jumping on the bandwagon because of the slow adoption. But it's not necessarily because it has huge problems, but because of the way that Vista has fundamentally changed. I think that the changes from XP to Vista are bigger than the changes from 2000 to XP. Users want to make sure that what they have will "just work" and that is the big problem right now.

Personally, after using Vista for a while, there are definitely some advantages to using it, but there is software/hardware that I currently have to wait for/upgrade to before I can really 100% utilize it. Right now, I have two machines--one using Vista and one using XP.

In a sense, XP was to 2000 what OpenGL 2 was to OpenGL 1. Everything you could do before was supported and still worked, but lots of new features were there to enhance and make things better. However, there was still some gunk that needed to be redesigned to make it better. This is how I see Vista. Vista will be to XP what OpenGL 3 is to OpenGL 1/2. It'll just take a while before everyone is on the same page.

As far as OpenGL 3 goes, I'm glad about the direction it is heading. There are a lot of things that I've heard so much about. I'm definitely excited about the simplicity of the proposed API. It'll definitely change the way I approach writing OpenGL applications. In fact, I'm going to focus my energy on OpenGL 3 when it comes out (and when I have a suitable driver to test my software with!).

-- microwerx

ZbuffeR
08-22-2007, 04:58 AM
microwerx, I agree with most of your post, but not with "Vista will be to XP what OpenGL 3 is to OpenGL 1/2":
If you have a DX10-class video card:
- you need Vista for full access to its hardware features with DirectX10
- you have access to DX10-level hardware features with OpenGL 2, whatever your OS. GL 3 would only bring a cleaner API, not really new features, if I understood correctly.

PaladinOfKaos
08-22-2007, 09:18 AM
GL3 will have fewer features than GL2, since you won't be able to use a lot of the new features in the G80/R600. The extensions are based on GL2, after all.

On the other hand, MtE is expected within 5 months, so the wait isn't too bad. I'm actually glad there are three GL3 steps getting to MtE - it'll give me time to get used to doing things in the new API design before I get started on any real GL3.X coding.

Simon Arbon
08-22-2007, 10:14 PM
Jan: IF you need to support this old hardware, you CAN use OpenGL 2.x. You could even support both and select the appropriate renderer at run-time.

Yes, i will be writing my rendering engine around the 3.0/Mt Evans API (everything is an object),
i will then have separate modules for each API that will take the output from the engine and send the appropriate OpenGL commands.
For old hardware the 2.x interface will simply ignore any shader requests and generate fixed-function shading instead.
However, if we have shader-capable hardware that is not using 3.0 then i need to write yet another interface that reads extensions and has complex conditional logic to control what it can do (and to prevent software emulation kicking in).
This will add a lot of time to the project just to support a few 'in-between' cards.
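
A minimal sketch of that layering, purely as an illustration (all names are made up, not from any real engine): the engine talks to an abstract renderer interface, with one backend per API generation selected at run-time.

struct Mesh;        // engine-side geometry, defined elsewhere
struct Material;    // engine-side shading description, defined elsewhere

struct IRenderer {
    virtual ~IRenderer() {}
    virtual void drawMesh(const Mesh& mesh, const Material& material) = 0;
};

// 2.x backend: extension checks, conditional paths, fixed-function fallback.
class GL2Renderer : public IRenderer {
    void drawMesh(const Mesh&, const Material&) { /* GL 2.x path */ }
};

// 3.0 / Mount Evans backend: the new object-model path.
class GL3Renderer : public IRenderer {
    void drawMesh(const Mesh&, const Material&) { /* GL 3.0 path */ }
};

IRenderer* createRenderer(bool gl3Available)
{
    if (gl3Available)
        return new GL3Renderer();
    return new GL2Renderer();
}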


Jan: I could imagine that ATI and NV might only implement it for the most recent SM3 hardware and later.

From nvidia documentation it sounds like they have separate OpenGL and DX client drivers which talk to a common kernel-mode driver, which then seems to communicate with a driver that is running on the actual GPU.
Hence they only need to write the one 3.0 driver which should then just work with ANY nvidia card that has compatible hardware.
Ref: Unified driver architecture (http://www.nvidia.com/object/feature_uda.html)

Jan
08-23-2007, 02:54 AM
I didn't say that it is easy to support older hardware by using 3.0 and 2.x; i just said that it is _possible_, and thus an argument that nVidia and ATI can rub in your face whenever you complain about missing API support.

I didn't post my opinion of how it _should_ be, but of how it _could_ be and why.

Hopefully the unified driver architecture is a good enough abstraction, so that we really see 3.0 on all possible hardware.

Jan.

wizard
08-24-2007, 10:00 AM
Are the blend state and depth/stencil operations all packed into the same per-sample operations object? DX10 makes a split here, which I think is a good thing, resulting in less duplication.
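
A hedged sketch of the split being referred to in D3D10 (field names from memory of the public headers, so treat it as illustrative): blend state and depth/stencil state are two separate immutable objects, created once and bound independently. Only a few fields are filled in; the device pointer is assumed to exist and error handling is omitted.

#include <d3d10.h>

void createStateObjects(ID3D10Device* device,
                        ID3D10BlendState** blend,
                        ID3D10DepthStencilState** depthStencil)
{
    // Immutable blend state object.
    D3D10_BLEND_DESC blendDesc = {};
    blendDesc.BlendEnable[0] = TRUE;
    blendDesc.SrcBlend       = D3D10_BLEND_SRC_ALPHA;
    blendDesc.DestBlend      = D3D10_BLEND_INV_SRC_ALPHA;
    blendDesc.BlendOp        = D3D10_BLEND_OP_ADD;
    blendDesc.SrcBlendAlpha  = D3D10_BLEND_ONE;
    blendDesc.DestBlendAlpha = D3D10_BLEND_ZERO;
    blendDesc.BlendOpAlpha   = D3D10_BLEND_OP_ADD;
    blendDesc.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;
    device->CreateBlendState(&blendDesc, blend);

    // Separate immutable depth/stencil state object.
    D3D10_DEPTH_STENCIL_DESC dsDesc = {};
    dsDesc.DepthEnable    = TRUE;
    dsDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ALL;
    dsDesc.DepthFunc      = D3D10_COMPARISON_LESS;
    dsDesc.StencilEnable  = FALSE;
    D3D10_DEPTH_STENCILOP_DESC defaultOp;
    defaultOp.StencilFailOp      = D3D10_STENCIL_OP_KEEP;
    defaultOp.StencilDepthFailOp = D3D10_STENCIL_OP_KEEP;
    defaultOp.StencilPassOp      = D3D10_STENCIL_OP_KEEP;
    defaultOp.StencilFunc        = D3D10_COMPARISON_ALWAYS;
    dsDesc.FrontFace = defaultOp;
    dsDesc.BackFace  = defaultOp;
    device->CreateDepthStencilState(&dsDesc, depthStencil);
}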

knackered
08-24-2007, 12:11 PM
Originally posted by Simon Arbon:
Hence they only need to write the one 3.0 driver which should then just work with ANY nvidia card that has compatible hardware.
Ref: Unified driver architecture (http://www.nvidia.com/object/feature_uda.html)

No, from my experience the UDA is just an installer that contains all drivers for all nvidia hardware, and which automatically selects the correct driver out of the archive for your hardware. It's just so joe bloggs doesn't have the hassle of figuring out which driver he needs to download... he downloads the whole lot and lets the installer work it out for him.
Nothing to do with what you suggest is happening.

Simon Arbon
08-25-2007, 04:34 AM
NVIDIA TB-00875-001-v1-UDA.pdf (http://www.nvidia.com/object/LO_20030522_4779.html)
With the NVIDIA UDA approach, the NVIDIA driver software focuses on implementing the driver API functionality instead of tracking hardware differences. All hardware control operations are handled through the class-based object-oriented programming model. Performance-sensitive functionality takes advantage of a hardware-implemented HAL (hardware abstraction layer) residing on NVIDIA chips.
Driver functionality in hardware delivers the best possible performance and boosts all applications.
They have been using the UDA since 1998.

dorbie
08-27-2007, 01:22 PM
More on the D3D10 Vista-only disaster:

http://www.heise.de/english/newsticker/news/94869

Tough to argue with those kinds of numbers and Valve.

Korval
08-27-2007, 02:42 PM
Tough to argue with those kinds of numbers and Valve.

I'm not sure what you're trying to prove here.

All that comment shows is that Gabe Newell thinks it was a mistake for Microsoft to make DX10 Vista-only. Well duh.

However, there's a big difference between mistake and "disaster".

Furthermore, Newell went off on a totally meaningless tangent about how DX10 isn't available on PS3 or 360, despite the fact that neither has the hardware to handle it, and one of them Microsoft has no control over.

By even bringing that up, Newell minimizes his comments. Yes, it'd be great if there was one API that worked on every piece of hardware and every OS. Well there isn't. Tough; deal with it.

dorbie
08-27-2007, 03:03 PM
He called it a "terrible mistake".

I'm not sure what your point is other than to cover your ass after your posting record. Even in the face of damning informed opinion and evidence from Valve you still post trying to spin (and minimize) what the guy said.

What part of "terrible mistake" don't you understand?

Nobody likes the taste of crow pie, but you chose the role of D3D apologist here; it wasn't forced on you. Careful you don't get typecast.

Cross platform availability is not a meaningless tangent. Not all relevant factors are under MS control, but they've certainly stuffed up the ones that are, mainly to the detriment of others.

Komat
08-27-2007, 04:00 PM
Originally posted by dorbie:
evidence from Valve

I assume that you are talking about the 2.3% of users who have both Vista and a DX10 card.

If you look at the rest of the Steam statistics you might find that only about 5% of all users who play Valve games have a reasonably capable DX10 card (GF8800), so even if DX10 were available on XP, the number of users able to utilize it would currently be small. Actually, there are more users who have Vista (7.9%) than users with a capable DX10 card, and about 20% of the Vista users in the survey have such a card.

The number of users having capable DX10 card will increase in the future as will the number of users having the Vista. I think that many people are upgrading by buying a new computer which will come with both DX10 card and Vista so I do not think that the number of capable DX10 cards will increase much faster than number of Vista users.

Komat
08-27-2007, 04:32 PM
Originally posted by dorbie:


Originally posted by Korval:

Furthermore, Newell went off on a totally meaningless tangent about how DX10 isn't available on PS3 or 360, despite the fact that neither has the hardware to handle it, and one of them Microsoft has no control over.
Cross platform availability is not a meaningless tangent.
I think that what Korval was talking about is: if it is true that developers will avoid DX10 features because they are not supported on consoles, and developers are aiming at the smallest common denominator as the article suggests, then that reason will not change even if DX10 were supported on XP. Especially because you need DX10 hardware to use the DX10 API, so if you need to support older hardware, you need to write a DX9 rendering path anyway.

However, unless the site publishes a transcript of the interview I would be skeptical about what Gabe Newell actually said and in what context. From my experience, news sites are capable of taking things out of context and twisting them based on the assumptions of the article's editor.

dorbie
08-27-2007, 04:55 PM
@Komat, on evidence, yes, but that sidesteps the point. I have said from my first post that the accompanying issue is that Vista only requires DX9 support (not a BAD decision, just a factor), meaning that not all Vista users will have DX10 cards, and this confirms the anticipated outcome. You can spin it any which way; the outcome is the same, and Gabe summarises it nicely I think: "terrible mistake".

If you look at the growth rate, Vista may outstrip XP in terms of gamers' bums on seats in about 5 years, and only some of those will be running D3D 10 or higher. That is a problem for anyone who cares about graphics features (it has never taken 5 years to make a transition like this; it's completely artificial and caused by Vista). Thank God for OpenGL.

This whole situation is intentionally engineered by Microsoft to drive Vista sales at the expense of graphics, but they have overplayed their hand. If Vista were a roaring success without the documented flaws (which the same apologist denied before I posted concrete evidence), OR if DirectX 10 were available on XP, this would be a non-issue. Unfortunately there's a convergence of factors here that undermine D3D 10, to the point of making it irrelevant for the time being.

Why support a D3D 10 code path? You'd be better off with a Mac port, and you get advanced OpenGL features as an option on XP & Vista as a consequence :-).

dorbie
08-27-2007, 05:00 PM
news sites are capable of taking things out of context and twisting them based on the assumptions of the article's editor.
So I'm left to guess which is doing the greater twisting: the posts reinterpreting the article, or the article that unequivocally tells us Gabe called D3D10 being tied to Vista a "terrible mistake", with stats to back the claim up.

Maybe he said it was a "great idea" as he presented evidence that only 1 in 50 gamers are capable of running D3D 10.

Korval
08-27-2007, 10:01 PM
Nobody likes the taste of crow pie, but you chose the role of D3D apologist here, it wasn't forced on you, careful you don't get typecast.
Apologist? What conversation were you taking part in?

See, I was taking part in one where I was taking the stand that your hyperbolic assertions, like "Microsoft is sodomizing IHVs" or "D3D 10 being limited to Vista is a disaster" or other such statements, were, in fact, hyperbolic nonsense.

I never thought that limiting the D3D 10 API to Vista was a good idea. What I take issue with is your rabidly anti-Microsoft stand that says it's a horrific state of affairs that's destroying an industry.

Opposition of an assertion or position is not support of the opposite.


So I'm left to guess which is doing the greater twisting: the posts reinterpreting the article, or the article that unequivocally tells us Gabe called D3D10 being tied to Vista a "terrible mistake", with stats to back the claim up.
First, the article did not quote Newell as saying "terrible mistake". If it were a quotation, it would have been in quotes, which it clearly wasn't.

Second, the stats you cite do not suggest "terrible mistake". Objectively, they suggest, "Uptake of Vista plus a D3D10 card is rather small among gamers at the present time." These are two entirely different things. One is a fact, the other is a hyperbolic opinion based more on hatred of Microsoft than any facts in evidence.


Maybe he said it was a "great idea" as he presented evidence that only 1 in 50 gamers are capable of running D3D 10.
How many gamers are capable of running D3D 10 isn't the issue.

If you want evidence to support your position, what you need is evidence that a large portion of gamers could have D3D 10 but can't because they run WinXP.

The number that have both D3D 10 hardware and Vista is meaningless, because the size of that number could just as easily reflect the fact that the G80 and R600 lines of hardware are crap.

Outside of the GeForce 8800 line of cards, there is not one reason to buy a GeForce 8xxx or an ATi 2xxx card. Even in D3D 10 benchmarks, these cards offer incredibly poor performance. So regardless of whether someone has Vista or not, or whether D3D 10 features were supported in XP or not, there's no compelling reason to buy these over last generation's hardware.

dorbie
08-28-2007, 02:12 AM
You're clutching at straws.

The fact is that the substance of my original post, which was rather obvious even then and which you've been attacking ad nauseam, has been vindicated by the public information presented, both by the PC Mag editor and by Gabe.

Your fig leaf argument that the cards suck at D3D 10 is flimsy. Suck compared to what?

Try sending micropolygons down the pipe and get back to me on relative performance.

Jan
08-28-2007, 03:56 AM
Opposition of an assertion or position is not support of the opposite.
<offtopic>

Lol. Some presidents beg to differ.

</offtopic>

Jan.

elFarto
08-28-2007, 03:58 AM
Originally posted by Korval:
If you want evidence to support your position, what you need is evidence that a large portion of gamers could have D3D 10 but can't because they run WinXP.
According to the Valve survey, 61,180 users have an NVIDIA 8800 or 8600 card, and only 21,492 of them have Vista; that means about 65% of them have the hardware but no ability to use it (via DirectX at least).

(I've ignored ATI DX10 cards in those numbers as they don't appear in the video card list; I assume they're under 'Other'.)
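As a quick sanity check on that 65% figure, the arithmetic from the two survey numbers above (nothing more than a throwaway sketch):

#include <cstdio>

int main()
{
    const double dx10Cards = 61180.0;  // NVIDIA 8800/8600 cards in the survey
    const double onVista   = 21492.0;  // of those, also running Vista
    // Share of DX10-card owners still on XP, i.e. unable to use the DX10 API.
    std::printf("%.1f%%\n", 100.0 * (dx10Cards - onVista) / dx10Cards);  // prints 64.9%
    return 0;
}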

Regards
elFarto

Michael Gold
08-28-2007, 07:37 AM
Can we lock this thread now?

Humus
08-28-2007, 08:45 AM
That would be a good idea. This thread is clearly not about OpenGL 3 anymore.

knackered
08-28-2007, 01:27 PM
I think open and frank discussion is healthy. Just lock the thread in your own mind, and don't revisit it.

dorbie
08-28-2007, 02:50 PM
@knackered, I tend to agree, but when things degenerate to blatant misquotes, like claims I said "Microsoft is sodomizing IHVs" (it's not even vaguely comedic), even as legitimate quotes from the likes of Gabe are picked apart, maybe it's time to stick a fork in it.

The viability of D3D 10 as an API has direct bearing on OpenGL 3.0's (etc.) future success as an API. Anyone who doesn't see that just doesn't get it. Gabe's info has direct bearing on this (as did the PC Mag Ed.). This is not just about engineers writing drivers.

It's genuinely saddening the reaction my comments provoked, but it takes two to tango and I'm not one to walk away from a blunt exchange.

Brolingstanz
08-28-2007, 03:31 PM
Gentlemen, let's try to focus on the positive, shall we?

0:-)

Jan
08-28-2007, 04:34 PM
Yeah, lock it already, dorbie.