ARB June Meeting

Well, the June meeting notes are up. Comments:

NV_occlusion_query and NV_point_sprite get into the core, straight from NV extensions. Cool.

VBOs remain an extension. Also fine. It’s best to take it slow with such a fundamental API (and performance) issue; you don’t want something in the core that you’ll have to replace because you missed something in the design.

What I want to know is: what is “ARB_texture_non_power_of_two”? Can hardware even implement it?

Also, they had a quick line about the super-buffers WG, but they didn’t say whether or not the extension is coming together. Obviously, it’s not ready yet, but is it close?

What I want to know is: what is “ARB_texture_non_power_of_two”? Can hardware even implement it?

Probably similar to NV_texture_rectangle.

I’m quite curious about the HLSL they put in OpenGL 1.5 now. I’m guessing it’s the same as the OpenGL 2.0 glslang. I just hope I can use it on GeForce 3 and 4 Ti level hardware, perhaps by setting some profile similar to Cg’s. If not, well, I’ll just continue to use Cg.

-SirKnight

VBOs remain an extension

That wasn’t my impression of it…

ARB_vertex_buffer_object

ISVs are using this and seem happy with it. Only negative feedback was from someone using thousands of tiny VBOs and unhappy with performance implications on some platforms.

VOTE for inclusion in the core: 10 Yes / 0 Abstain / 0 No, PASSES unanimously.
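For what it’s worth, basic usage is about as simple as extensions get. A rough sketch (the entry points are the ones from the ARB_vertex_buffer_object spec and would normally be fetched with wglGetProcAddress/glXGetProcAddress; error handling omitted):

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: create a buffer object, upload static vertex data, and point
   the vertex array at it. */
void upload_vertices(const GLfloat *verts, GLsizei bytes)
{
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, verts, GL_STATIC_DRAW_ARB);

    /* While a buffer is bound, the pointer argument is a byte offset. */
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glEnableClientState(GL_VERTEX_ARRAY);
}

The thousands-of-tiny-VBOs complaint presumably comes down to per-buffer overhead; batching into fewer, larger buffers is the usual advice.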

What I want to know is: what is “ARB_texture_non_power_of_two”? Can hardware even implement it?

It’s basically a texture with non-power-of-2 dimensions. The extension is actually a group of 4 mini-extensions (each can be separately supported or not) for non-power-of-2 1D, 2D, 3D and cube map textures.
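If you want to check for it at runtime, the usual extension-string test should do. A sketch, assuming it ends up advertised under a single “GL_ARB_texture_non_power_of_two” string (the per-target split could change that):

#include <string.h>
#include <GL/gl.h>

/* Sketch: naive extension-string check. A careful version would match
   whole space-delimited tokens rather than substrings. */
int has_npot_textures(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;
}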

Edit: typo

[This message has been edited by al_bob (edited 07-25-2003).]

Nothing new about superbuffers/uberbuffers… oh, I’m just waiting for those.

VOTE for inclusion in the core: 10 Yes / 0 Abstain / 0 No, PASSES unanimously.

Well, I guess it’s unanimous: I’m an idiot. I have no idea what I was even looking at that made me think they hadn’t moved VBO into the core.

Though I still think they should probably wait a few more months. Just to be safe.

The extension is actually a group of 4 mini-extensions (each can be separately supported or not) for non-power-of-2 1D, 2D, 3D and cube map textures.

Well, yes, but NV_texture_rectangle defines a special type of texture. It specifies that texture coordinates are not normalized to [0, 1], but instead run from [0, w] and [0, h]. Also, since the texture rectangle target is separate from cube maps and other textures, you can’t have a cube map that is non-power-of-2. And texture rectangles can’t have mipmaps.
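To make the contrast concrete, here’s roughly what the rectangle path looks like today (a sketch; the size and the pixels pointer are placeholders):

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: NV_texture_rectangle uses its own target, allows only level 0
   (no mipmaps, so the default mipmapped MIN_FILTER must be replaced),
   and takes texcoords in texels instead of [0, 1]. */
void draw_rect_textured(GLuint tex, const void *pixels, int w, int h)
{
    glEnable(GL_TEXTURE_RECTANGLE_NV);
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f,     0.0f);     glVertex2f(-1.0f, -1.0f);
    glTexCoord2f((float)w, 0.0f);     glVertex2f( 1.0f, -1.0f);
    glTexCoord2f((float)w, (float)h); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f,     (float)h); glVertex2f(-1.0f,  1.0f);
    glEnd();
}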

That’s why I asked if this was implementable in hardware. It’s clear that the NV_texture_rectangle extension defines behavior that nVidia hardware seems to like. After all, if those restrictions weren’t necessary, nVidia wouldn’t have placed them there. So, can this extension be implemented in modern hardware?

It’s easy enough to write a spec for texturing that transparently supports non-power-of-two textures. But does such hardware exist? Or is it going to be another of OpenGL’s wonderful games of “find the set of state that works on your hardware of choice”?

The ARB_texture_non_power_of_two extension basically just relaxes the “width and height must be powers of 2” restriction for all texture targets. Texture coordinates are still in the [0, 1] range and mipmapping is still supported.
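In other words, on hardware that exposed it, the code would look like perfectly ordinary texturing, just without rounding the size up. A sketch (the 640x480 size and the pixels pointer are placeholders):

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: a plain GL_TEXTURE_2D upload at a non-power-of-two size, with
   GL 1.4 automatic mipmap generation and normalized coordinates as usual. */
void make_npot_texture(GLuint tex, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    /* 640x480 instead of rounding up to 1024x512. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}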

No current hardware (that I know of) supports this extension; it’s meant for future hardware.

I’d like to know who voted against the shading language as an ARB extension. Depending on the answer it might cause me some concern.

Originally posted by dorbie:
I’d like to know who voted against the shading language as an ARB extension. Depending on the answer it might cause me some concern.

I apologize if I’m wrong, but I have a hunch it was NVIDIA. Simply because of one thing: Cg. They want everyone to use Cg (I do, and I think it’s great, btw). So I can see them not liking an HLSL that is different from Cg being ratified as an ARB extension. Then again, maybe I’m totally wrong, but this would be my first guess.

-SirKnight

No current hardware (that I know of) supports this extension; it’s meant for future hardware.

So, what you’re saying is that the ARB wasted some time that could have been spent on the uber-buffers extension?

If somebody writes an extension spec, but it is not implemented, then they have wasted their time. They should have just promoted EXT_texture_rectangle to ARB status, and when the new non-power-of-2 stuff is actually available, then release the extension spec (along with appropriate drivers that implement it).

I’d like to know who voted against the shading language as an ARB extension. Depending on the answer it might cause me some concern.

Oh, come on. Didn’t you read some of the issues (and who brought them up)? It’s obvious who voted against glslang.

Granted, I do agree with some of their reasons. I seriously doubt it is an attempt to do anything underhanded.

This, coupled with the non-power-of-2 thing, shows a shift of the ARB back towards what makes OpenGL bad: not knowing the right API to use.

Should I really need a document somewhere to tell me, “Yes, I know glslang says that feature X exists, but never use it because nobody implements it at anything anyone would reasonably call ‘fast’”? Such things are a major impediment to the growth of the API’s usage.

Sure, it’s nice to see “texture” access facilities in vertex shaders (though I would have used a very different kind of API, one quite distinct from textures, since uploading modifications is more likely for vertex shaders than for regular textures in fragment shaders), but there’s no reason to require it at a point where no hardware can use it. If hardware can’t use it, it is a mis-feature; that is, a feature that exists technically, but not in any usable fashion. The same goes for this non-power-of-2 extension; there’s no reason to even have the spec if nobody can use it.

Meanwhile, we’re still waiting on the super-buffers extension, which is set to provide real, useful power and functionality that the API really needs.

It’s this kind of backwards thinking that allows an API like Direct3D to become more efficient; at least it doesn’t have 1001 functions that you should never use but that have to be there because some moron on the ARB wanted them.

[This message has been edited by Korval (edited 07-25-2003).]

If somebody writes an extension spec, but it is not implemented, then they have wasted their time.

Extensions aren’t only written because current hardware can expose the functionality. It’s nice to be forward-looking every now and then.

It’s clear that the NV_texture_rectangle extension defines behavior that nVidia hardware seems to like. After all, if those restrictions weren’t necessary, nVidia wouldn’t have placed them there.

Looking at the date at the top of the extension spec, NV_texture_rectangle was first drafted in 2000. In that time frame, the GeForce 2 was released. Needless to say, hardware has improved a bit since then.

Meanwhile, we’re still waiting on the super-buffers extension, which is set to provide real, useful power and functionality that the API really needs.

Better to have a good ARB extension than a set of not-so-good vendor-specific extensions, no? I’d rather they take their time and get it right than have to deal with 3 versions of the same extension. Besides, if the extension is an ARB one, there’s a good chance it gets promoted to the core, which is why it’s all the more important to get it right the first time.

Originally posted by jra101:
No current hardware (that I know of) supports this extension; it’s meant for future hardware.

Doesn’t D3D allow non-power-of-2 textures with mipmapping?

Originally posted by Korval:
NV_point_sprite get into the core, straight from NV extensions. Cool.

That’s not what I read; they say it’ll be modified before entering the core.

SirKnight, I can speculate as well as the next guy, but I’d like to know.

dorbie, whatever the answer actually is, why would you be concerned?

-Mezz

PS: If you don’t want to say then I apologise for asking.

To me it really doesn’t matter who it was, because it’s not like we HAVE to use it. There is always Cg and the other HLSLs out there. Actually, I’m quite happy with the way Cg is right now, except for the fp20 profile: it doesn’t optimize well enough. I sent an email to NVIDIA showing this, though I never heard back. What it will do is use a general combiner for an operation when it doesn’t need to; for example, it could perform the op in the final combiner, yet it chooses not to. But other than that, Cg is fine, and you can use it on any other card as long as the card supports ARB_vp and/or ARB_fp. Anyway, however this new HLSL in OpenGL 1.5 turns out, I’m sure it’s pretty good. I’m hoping it’s the same as the OpenGL 2.0 glslang. Maybe someone who knows for sure could come on here and talk about it?

-SirKnight

Ahhh SirKnight, you’re warming the cockles of NVIDIA’s heart.

Of course it matters; this goes to the heart of shader compilation and optimization for different targets.

If NVIDIA voted no, it wouldn’t bode well for ARBslang support from them; it becomes optional because of the two related votes in that meeting. We may end up with Cg vs. ARBslang (we’re already there, but only by default).

In case you missed the recent debacle w.r.t. cheats and optimization of shaders, this stuff is important, and control of legitimate compiler optimization requires that manufacturers be active in developing the compiler you are using.

We could end up in the middle of a shader ‘war’, and that’s not good for developers.

Of course we still don’t know where the no vote came from, or what NVIDIA ARBslang support will look like either way.

SirKnight, this is the glslang of OpenGL 2.0, although it may change by then.

The issue of 1.5 vs. 2.0 is misleading; 1.5 has many 2.0 features. Major features of 2.0 have been included in 1.5 as extensions, and this was the intent of some people trying to head off a monolithic 2.0 introduction that would have broken compatibility.

There was another vote on glslang in that meeting that kept it out of the 1.5 core. The hope was that this would make it into 1.5 core. Holding out the mythical GL2 as a carrot for glslang core inclusion is a joke. GL2 is a name, a concept, and 1.5 has some of its features; because of this vote it has one less feature, although it’s there as an ARB extension. The distinction will ultimately depend on whether it’s supported where you need it.

The hope was that this would make it into 1.5 core.

Why? The language isn’t even in real use yet. As long as it is an extension, there is the opportunity to change it if it turns out that it doesn’t provide the functionality in a reasonable way (I’m still uncertain as to how the whole shader-linking thing works out, and I really don’t like the idea of ‘texture’ accesses in vertex shaders). Once it makes it into the core, you have to live with it, no matter what. Changing core features is a different matter from changing extensions.

Features should be extensions before becoming core so that you can make sure that they serve everyone’s needs correctly.

[This message has been edited by Korval (edited 07-26-2003).]

Dorbie, I think you misunderstood what I said. I said it didn’t matter to me which company it was that voted no. The reason is that it passed, so now this extension is in the core and all hardware companies have to support it, even the company that said no. If not, then we can say screw you and not support them, and we all know they don’t want that. And the way things are right now, we don’t have to use it if it’s not all that great. But since it’s the same as OpenGL 2.0’s glslang, I’m quite sure I’ll like it.

Now, I don’t know the details of why, according to the notes, NVIDIA said they would vote no if some things stayed the way they were, but if it has anything to do with having an HLSL “built in” to the OpenGL core, then I would have to agree with them. I don’t think it makes much sense to have an HLSL built into OpenGL. There should only be assembly-language-like shading extensions (like ARB_vp and ARB_fp) in the core, and the HLSL should be “outside,” like Cg is. To me this seems like the obvious thing to do, but some don’t see it that way. OpenGL should be kept a “low level” graphics API, and anything else you need, any kind of helpers like HLSLs, should be utilities outside the API that compile to what is in the core. I’m no expert on Direct3D, but from what I have seen, that’s pretty much how D3D works: all of these extra things are part of the D3DX library of helpers.

Having an HLSL built into the core makes about as much sense as having C++ built into our CPUs. What we have instead is an assembly language defined for our processors with a 1:1 mapping to machine code instructions, which is what the CPU understands; it doesn’t know what “mov” is, but it does know what a number is and what to do with it, and luckily there is a number that corresponds to “mov” (most instructions have this mapping anyway, afaik). Then we have all of these high-level languages that compile down to that assembly language, and then to machine code, so the program can run.

The way HLSLs are right now, i.e. Cg, is how it should be. You have an assembly language defined for the GPU with a 1:1 mapping (or close enough for it to work) to the GPU’s machine code, just like with our CPUs, and an HLSL like Cg compiles to that assembly language, which then gets turned into the GPU’s machine code to be executed. Of course, we need a standard assembly language that all graphics hardware will work with, i.e. ARB_vp. Which it turns out we do, so it’s all good.
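Just to illustrate the split I’m talking about: whatever HLSL you like spits out ARB_vertex_program text, and the application hands that text to the driver through the low-level interface. A rough sketch (the program text is trimmed to a bare transform, and prog would come from glGenProgramsARB; error checking omitted):

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: text produced by some front-end compiler (Cg or otherwise),
   loaded through the ARB_vertex_program interface. */
static const char *vp_text =
    "!!ARBvp1.0\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "DP4 result.position.x, mvp[0], vertex.position;\n"
    "DP4 result.position.y, mvp[1], vertex.position;\n"
    "DP4 result.position.z, mvp[2], vertex.position;\n"
    "DP4 result.position.w, mvp[3], vertex.position;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

void load_vertex_program(GLuint prog)
{
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(vp_text), vp_text);
    glEnable(GL_VERTEX_PROGRAM_ARB);
}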

Also, what Korval said is 100% correct. All of these new features should be out and in use for a while, six months to a year I’m guessing, to get all the issues ironed out and get to a point where everyone, or mostly everyone, is happy with them. Only THEN should they become part of the core. How things have been going in the ARB just doesn’t make sense. What are these guys thinking? Don’t make something part of the core before we developers have been using it for a while, to allow these potential core features to be “broken in,” if you will.

-SirKnight

[This message has been edited by SirKnight (edited 07-26-2003).]