poll - visit the home page

I think some people don't often visit the main page, so I'm posting this. There is a nice poll there where you can voice your opinion.

I guess most people have bookmarked the forums, so it's easy to forget to check the front page for updates, but I usually check it once a day. I've got to say, this is a funny poll though.

Regarding the poll:
I'd prefer the extension method. I don't really like the idea of checking the GL version, as it leads to assumptions. I hate assumptions.
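
For what it's worth, the kind of check I mean is the usual string query. A minimal C sketch, assuming a current GL context; the has_extension() helper is my own name for it, and GL_ARB_vertex_program is just a stand-in for whatever the shading extension ends up being called:

#include <string.h>
#include <GL/gl.h>

/* Returns 1 if `name` appears as a whole token in the extension
   string. A bare strstr() can false-positive on prefixes, e.g.
   "GL_EXT_texture" matching inside "GL_EXT_texture3D". */
int has_extension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    const char *p = all;
    size_t len = strlen(name);

    if (all == NULL)
        return 0;
    while ((p = strstr(p, name)) != NULL) {
        if ((p == all || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}

/* Usage: if (has_extension("GL_ARB_vertex_program")) ... */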

An extension is necessary for the beginning, but actually, from my point of view, HLSL is somewhat 'a miss'. Cg has been out for a while; yes, it's a bit unoptimized, but it works with DX+OGL, and I don't see why I should jump to HLSL.

I saw the poll, but I don’t have enough information.

In general I'm of the opinion that an ARB extension is no more mutable than something in the spec. Some of the more confusing problems in OpenGL have been caused by extensions that changed when they moved to the core spec, made worse by deliberate attempts by some vendors to keep conformance tests slack, and by implementors who don't always fully appreciate the motivations behind the changes.

ARB -> core at some point could be nasty IMHO if the spec changes in the process, unless the changes are significant and you end up with something clearly differentiated.

On the other hand, is this a political attempt to put something in the core spec prematurely and force NVIDIA to support something they may want to keep optional, since they're focusing on Cg? Will it simply mean that NVIDIA is non-compliant and will therefore drop support for GL2 / OpenGL 1.4, or whatever it will be called?

I don't think it's right to force something as significant as HLSL down any vendor's throat when they have an alternative strategy and technical direction, even if I'm disappointed at the lack of uniformity.

I think in the end it should probably be an ARB extension simply because it should remain optional for now, not just because of potential future changes.

NVIDIA's argument that a compiler should be a separate library (like glu) has a lot of merit, and they've proven that it is technically feasible, at least in one form. It would also make it easier to get broad compliance with a 'standard implementation', where a compiler could be used on anyone's implementation while leaving the door open for vendors to improve on the baseline compiler technology or replace it. I haven't heard the counter-argument to this, and glslang in the core spec closes the door on it, but that depends on the particulars of the final spec.

It’s one thing to say vendors have enough experience with shaders now, but it’s pretty meaningless when those experienced vendors have fundamental disagreements on technical issues.

[This message has been edited by dorbie (edited 05-09-2003).]

I wonder how windoze programmers would access this kind of programmability if it is put directly in the core… I guess M$ won't update its OpenGL headers and libraries, for some obvious reasons.
So, I fear there’s really no choice.

Now, I could effectively check the GL version and assume (I don't like the idea, but I'll do it if I need to). The problem is that I am unable to figure out how to fetch the extension function pointers without having the extension spec at hand. The problem is even more tragic when it comes to #defining the new tokens, since their hex values will never be included in the GL spec (at least, not the publicly available one)…
Sure, vendors may just ship an updated glext.h and leave everything as it is; that would work fine, but I probably wouldn't like it.
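
To be concrete, this is the dance I mean, sketched with the real ARB_vertex_program extension as a stand-in. The token value and the pointer typedef have to be copied from the extension spec (or a vendor-supplied glext.h), because Microsoft's stock GL 1.1 headers will never carry them:

#include <windows.h>
#include <GL/gl.h>

/* Token value copied from the extension spec; not in MS headers. */
#ifndef GL_VERTEX_PROGRAM_ARB
#define GL_VERTEX_PROGRAM_ARB 0x8620
#endif

/* Function pointer typedef, likewise taken from the spec. */
typedef void (APIENTRY *PFNGLPROGRAMSTRINGARBPROC)
    (GLenum target, GLenum format, GLsizei len, const void *string);

static PFNGLPROGRAMSTRINGARBPROC pglProgramStringARB;

void load_entry_points(void)
{
    /* Needs a current context; returns NULL if the driver
       does not export the function. */
    pglProgramStringARB = (PFNGLPROGRAMSTRINGARBPROC)
        wglGetProcAddress("glProgramStringARB");
}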

Is it really possible to put new things in the core without them first passing through at least an extension? Can someone give me some examples?

I've also taken into account the idea of putting it directly in the core. Maybe it would be easier for vendors, maybe it would just be better; I don't know. What I really fear is the problem described above.

There will be a day when I code on *nix. There will be a day when most PCs run Linux and mostly open-source software, but I fear most people will have Windows on their PCs for a long time yet… Unluckily for us.

I think it will be an extension whichever way they decide to go, just for practical reasons. The poll question could be paraphrased as: should HLSL be mandatory now, or mandatory later? I personally prefer mandatory support sooner rather than later, so that games can rely on HLSL sooner rather than later.

Originally posted by Coriolis:
I think it will be an extension whichever way they decide to go, just for practical reasons. The poll question could be paraphrased as: should HLSL be mandatory now, or mandatory later? I personally prefer mandatory support sooner rather than later, so that games can rely on HLSL sooner rather than later.

How many vendors actually support OpenGL 1.4 today?

(There's no convenient way to check this on Tom Nuydens' page.)
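
A minimal sketch of how an application can test that at run time, assuming a current context, is to parse the leading major.minor of the GL_VERSION string; the supports_gl_1_4() helper name is mine:

#include <stdio.h>
#include <GL/gl.h>

/* GL_VERSION is "major.minor", optionally followed by a release
   number and vendor text, so sscanf on the prefix is enough. */
int supports_gl_1_4(void)
{
    int major = 0, minor = 0;
    const char *ver = (const char *)glGetString(GL_VERSION);

    if (ver == NULL || sscanf(ver, "%d.%d", &major, &minor) != 2)
        return 0;
    return (major > 1) || (major == 1 && minor >= 4);
}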

Cass

Originally posted by dorbie:
I saw the poll, but I don’t have enough information.

Ditto.


In general I’m of the opinion that an ARB extension is no more mutable than something in the spec…

I agree. I usually take it to mean “this extension will be in the core the next time we decide to rev the version number”.


ARB -> core at some point could be nasty IMHO if the spec changes in the process, unless the changes are significant and you end up with something clearly differentiated.

I think I'd rather see it as EXT if it is going to be subject to drastic change, even though that isn't the usual use of the terminology.


I think in the end it should probably be an ARB extension simply because it should remain optional for now, not just because of potential future changes.

Maybe it should be in the 1.5 core. I definitely don't think we're 2.0-ready, though. Maybe we should wait a couple of hardware generations for 2.0, to see which way hardware evolves, then throw out all the crud that is no longer required.


NVIDIA's argument that a compiler should be a separate library (like glu) has a lot of merit, and they've proven that it is technically feasible, at least in one form.

They've also proved with their emulation driver that GL2-type functionality in the driver is possible, which may be an issue for Matrox, S3, and other vendors that don't have hardware that can implement vertex and pixel programs.


It would also make it easier to get broad compliance with a 'standard implementation', where a compiler could be used on anyone's implementation while leaving the door open for vendors to improve on the baseline compiler technology or replace it. I haven't heard the counter-argument to this, and glslang in the core spec closes the door on it, but that depends on the particulars of the final spec.

I’ve been thinking about this more since the last discussion on this board.

I am of the opinion that the driver and the compiler should be separate. The driver should implement an ABI.

This way I have a choice of compilers. I can see Intel developing better compilers than NVIDIA, and NVIDIA developing better drivers than Intel. Those are each company's strengths, and both win in the end.
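
To illustrate the split, here is a hypothetical sketch. The compile_hlsl() entry point and its "arbvp1" profile argument are invented stand-ins for a glu-style compiler library; the loading side is the real, already-standardized ARB_vertex_program interface:

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Invented interface: any vendor (or third party) could ship a
   better implementation of this library without touching drivers. */
extern const char *compile_hlsl(const char *source, const char *profile);

/* Fetched earlier via *GetProcAddress. */
extern PFNGLPROGRAMSTRINGARBPROC glProgramStringARB;

void load_shader(const char *source)
{
    /* The driver never sees the high-level text, only the
       standardized ARB assembly the compiler lowered it to. */
    const char *asm_text = compile_hlsl(source, "arbvp1");

    glProgramStringARB(GL_VERTEX_PROGRAM_ARB,
                       GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(asm_text),
                       asm_text);
}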


It’s one thing to say vendors have enough experience with shaders now, but it’s pretty meaningless when those experienced vendors have fundamental disagreements on technical issues.

Agreed. IMHO, a shading language is a shading language is a shading language. It's the glue between the language and the metal that is important. There needs to be a consensus among all involved parties. The interface needs to be stable and extensible.

One of the reasons I posted this was so that people would see the numbers for themselves, and also so they could influence them with a vote.

60% believe it should be in the core, but this would not be traditional.
Everything ends up being an extension first, then one or two versions later becomes part of the core.

But I don't see why some people want to keep it out of the core forever. GL should do whatever D3D is going to do, and do it first.

In any case, whether it's in the drivers or an external library, as long as it's stable, who cares?

Ei Vrej, can you contact me? I wanna discuss something with ya. 10x

V-man, you are correct; I too would like to see it in the core ASAP. The problem is that the two most important vendors have not agreed on what should go in. One side may be able to vote it through in the ARB, but that isn't the same as the big two agreeing on a single direction.

I'm not taking sides here; both seem intransigent. But frankly, I don't care if the world and its dog are siding with ATI; it's NVIDIA's agreement that counts right now, IMHO. Their market share, installed base, and control over their driver platform give them that position. The irony is that one of the cornerstones of NVIDIA's technical objections is the very thing that prevents the ARB, or anyone except NVIDIA, from providing ARB shader support on NVIDIA cards. OTOH, NVIDIA can supply shader support on ATI cards etc., but it's proprietary and unsupported by other card vendors.

I doubt it would be a good thing for the ARB to vote this into the core with NVIDIA voting against it for example, but I don’t know if that would happen or what the result would be.

It’s the OPTIONAL nature of an ARB extension that is the key here, and that optionality is the only reason ARB extensions were devised. ARB extensions are in the core spec, unlike EXT extensions, but they are optional w.r.t. claiming you support the applicable revision of OpenGL. My concern is that there are companies and individuals who hope to force NVIDIA to follow a strategy against their will by effectively threatening to block their ability to claim OpenGL support if they don’t fall into line.

Maybe it’s a good thing, maybe not, I can’t predict the outcome.

[This message has been edited by dorbie (edited 05-10-2003).]

I'm happy either way, though I would prefer to have an extension string to query, to be able to make a good (or half-decent) guess as to whether it's going to run in hardware.

Originally posted by Humus:
I'm happy either way, though I would prefer to have an extension string to query, to be able to make a good (or half-decent) guess as to whether it's going to run in hardware.

That's what I voted on. I'd much rather have an extension.

These motivations are interesting.

If this HLSL winds up running in software on NVIDIA even when they have ARB vertex and ARB fragment programmability, it would vindicate what they've been saying about compilers in the driver. Then again, it would be their own lack of support that caused it. I'd be surprised if it were this bad; I mean, it has to compile to something, and the baseline capability should be there in the existing ARB assembly. An entirely software implementation would look like blatant sabotage to me.

Can't we just lock NVIDIA & ATI in a room and not let them out until they agree? :)

Controversial suggestion: keep others the heck out of the room, to avoid the delusion that there's a meaningful majority on either side.

Originally posted by dorbie:
These motivations are interesting.
Controversial suggestion: keep others the heck out of the room, to avoid the delusion that there's a meaningful majority on either side.

LOL. You certainly said what has been in the back of my mind for a long time.

Probably 3Dlabs should be in as well, just to make sure one side wins.

I don't underestimate 3DLabs' core contribution, but 3DLabs in the room leaves you with a 2-vs-1 vote for glslang (no different from the ARB situation right now, really), and what we need is an agreement, not a majority decision. Sure, interested and informed parties should have their say, but when it comes to the actual decision, we need ATI & NVIDIA, as the dominant card providers, to work this out.

Agreement on this is way more important than the finer points of which API wins, or even some of the bigger engineering issues.

The overriding issue is a first-class, supported, uniform, non-proprietary API everywhere. Engineers with other priorities are, IMHO, missing the forest for the trees.

Ah well, I can dream. If only ARB stood for arbitration.

[This message has been edited by dorbie (edited 05-12-2003).]