Radeon and ARB_vertex_program

OK, I’ve got my shiny new Radeon 9500 Pro, with all kinds of nifty shaders.

If I were to use Direct3D, I would find that the 9500 Pro supports looping in the vertex shader. However, I’m an OpenGL user. So, can someone explain why this functionality mysteriously vanishes under GL?

Like nVidia or hate them for making nVidia-specific extensions, at least they bother to expose the vast majority of their cards’ functionality to us. Is it really that hard to just add a few opcodes to the ARB_vertex_program spec? Or is the philosophy of “design-by-committee” to blame for the delay yet again?

I can understand not having this functionality when ARB_vertex_program shipped. But it’s been 6 months now. The R300 has been out for 6 months too. There’s no excuse for us not having access to this functionality.
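
To make concrete what I mean (a rough sketch only; the two-bone layout and packing the weights into texcoord 1 are made up for illustration), ARB_vertex_program 1.0 is pure straight-line code with no branch or loop opcodes, so anything iterative, like blending several bone matrices, has to be unrolled by hand, where a DX9-class vertex shader could loop over a counter:

[code]
/* A minimal ARB_vertex_program 1.0 string (hand-written sketch).  The 1.0
 * opcode set (DP4, MUL, MAD, MOV, ...) has no branch or loop instructions,
 * so each extra bone means another copy of the same block, written out
 * by hand. */
static const char *skin_vp =
    "!!ARBvp1.0\n"
    "PARAM mvp[4]   = { state.matrix.mvp };\n"
    "PARAM bone0[4] = { program.local[0..3] };\n"
    "PARAM bone1[4] = { program.local[4..7] };\n"
    "TEMP  p0, p1, blended, clip;\n"
    /* transform by bone 0 -- and again for bone 1, bone 2, ...; no loop */
    "DP4 p0.x, bone0[0], vertex.position;\n"
    "DP4 p0.y, bone0[1], vertex.position;\n"
    "DP4 p0.z, bone0[2], vertex.position;\n"
    "DP4 p0.w, bone0[3], vertex.position;\n"
    "DP4 p1.x, bone1[0], vertex.position;\n"
    "DP4 p1.y, bone1[1], vertex.position;\n"
    "DP4 p1.z, bone1[2], vertex.position;\n"
    "DP4 p1.w, bone1[3], vertex.position;\n"
    /* blend by the per-vertex weights (packed into texcoord 1 here) */
    "MUL blended, p0, vertex.texcoord[1].x;\n"
    "MAD blended, p1, vertex.texcoord[1].y, blended;\n"
    /* and out to clip space */
    "DP4 clip.x, mvp[0], blended;\n"
    "DP4 clip.y, mvp[1], blended;\n"
    "DP4 clip.z, mvp[2], blended;\n"
    "DP4 clip.w, mvp[3], blended;\n"
    "MOV result.position, clip;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";
[/code]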

My impression is that there is an OpenGL working group that is designing ARB_vertex_program2, or whatever the next version of ARB_vertex_program is. This revision will expose the branch and jump functionality of DX9-class hardware.

This is similar to how NVIDIA released a separate NV_fragment_program extension, but ATI did not. Instead, ATI waited for ARB_fragment_program.

There are pros and cons of both approaches, naturally.

Eric

There’s also the possibility that OpenGL programmers will have to wait until OpenGL 2.0 arrives before such functionality becomes available to them.

My impression is that there is an OpenGL working group that is designing ARB_vertex_program2, or whatever the next version of ARB_vertex_program is. This revision will expose the branch and jump functionality of DX9-class hardware.

That much, I assumed. However, it doesn’t answer my question of why this hasn’t been done until now. Why is it that ARB_vertex_program_2 (or even an ATI_vertex_program_2 that uses the same binding interface as vp1) wasn’t available earlier? What is so bad about actually giving us this functionality, regardless of the method of doing so?
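
For reference, the vp1 binding interface I mean is just a handful of entry points; a hypothetical “_2” extension could reuse them and mostly just need a new program header string. A rough sketch of loading a program through it (in practice you fetch these ARB functions with wglGetProcAddress / glXGetProcAddressARB rather than linking them directly; the error handling here is the bare minimum):

[code]
#define GL_GLEXT_PROTOTYPES   /* or fetch the ARB entry points at runtime */
#include <GL/gl.h>
#include <GL/glext.h>
#include <stdio.h>
#include <string.h>

/* Load and bind an ARB_vertex_program 1.0 program string. */
static GLuint load_vertex_program(const char *src)
{
    GLuint prog;
    GLint  err_pos;

    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(src), src);

    glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &err_pos);
    if (err_pos != -1) {
        fprintf(stderr, "vertex program error at %d: %s\n", err_pos,
                (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
        glDeleteProgramsARB(1, &prog);
        return 0;
    }

    glEnable(GL_VERTEX_PROGRAM_ARB);   /* route vertices through the program */
    return prog;
}
[/code]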

There are pros and cons of both approaches, naturally.

The only real ‘con’ of nVidia’s method is that it bloats the number of extensions. I would much rather have the functionality I [b]bought[/b] than have to wait on some silly committee, one that doesn’t understand that time matters, to come up with a suitable lowest common denominator (or to avoid Microsoft IP issues) for looping vertex shaders.

Is this how GL1.x is going to progress? Every card release that significantly improves functionality is going to cause a one-year wait before that functionality can be used in OpenGL programs? That’s ridiculous; you don’t see Microsoft doing anything like that with D3D.

No, but you do see Microsoft leaving a year or two between DX releases anywho, so what difference does it make?

Remember, kids: D3D didn’t have a stencil buffer for a LONG time after OpenGL had it. A lot of the time, new features are accessible under OpenGL long before they’re accessible under D3D, thanks to OpenGL’s extension mechanism.

Well, ATI has an ATIX concept where they’ll develop experimental extensions which may simply go away in the next driver revision. They didn’t use this for looping and conditional features though…
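
Of course, if you do use one of those experimental extensions, you want to check for it at runtime every release rather than assume it’s still there. A rough helper (a proper version should match whole space-delimited tokens, since strstr will also match prefixes):

[code]
#include <string.h>
#include <GL/gl.h>

/* Crude test for an entry in the GL_EXTENSIONS string.  Caveat: strstr()
 * also matches prefixes (e.g. "GL_EXT_foo" matches "GL_EXT_foobar"), so
 * production code should compare whole space-delimited tokens. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}
[/code]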

Any time you write an extension, it consumes resources. You have to specify it, code it, and test it. Writing the specification in an intelligent way that addresses all the issues of interacting with the rest of the OpenGL system is not easy. Doing all this for throw-away code is a bit wasteful.

If you aren’t writing throw-away functionality, then you also have to add ‘code maintenance for the lifetime of the product’ to resource consumption.

The fun bit is that, even though you went to all this effort, your extension will only see limited use because some people prefer industry-wide standards. And regardless of whether the software developers support your specialized extension or not, any /good/ feature will be standardized by the industry…

So if the feature is valuable, you’ll have to rewrite support for it. If it’s not valuable, why did you bother?

On the ISV side, you have a slew of new extensions coming out. There’s not much point in your product supporting VEND_SPIFFY1 when VEND_SPIFFY2 is coming out in two months’ time.

Then again, if your software drives the feature set before it’s in hardware, you get problems with the spec not reflecting the way the actual hardware winds up needing to work. You get lots of cards only supporting a feature in software, so it’s too slow to be of general interest. You have a specification without a user base… wasted effort.

In each of these situations, the effort wasted COULD have been spent on coming up with an industry-standard specification with broad IHV and ISV support. One that could be rolled into the core in a future release. One with longevity.

Then again, when you do that, you get complaints about high-end features not being exposed quickly enough 8P It’s a no-win situation, eh?

Have fun,
– Jeff

Korval, I agree that NVIDIA’s choice to expose their mechanisms of the hardware through extensions like NV_fragment_program and NV_vertex_program2 makes it easier to get access to hardware features earlier.

The drawback is not the extension proliferation, but the headache of writing portable OpenGL code. In his QuakeCon 2002 keynote, John Carmack discussed how the Doom 3 engine supports 5 different rendering paths depending on the available features of the hardware. This is a pain. In recent .plan files, he also discusses some of the subtler issues between ATI’s and NVIDIA’s programmable features.
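
The shape of it is something like the following, where each path gets its own shader setup and the engine picks the most capable one the driver advertises (the path names and the exact checks here are just my illustration, not Doom 3’s actual logic):

[code]
#include <string.h>
#include <GL/gl.h>

enum render_path { PATH_BASIC, PATH_NV20, PATH_R200, PATH_ARB2 };

static int gl_has(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* Pick the most capable rendering path the driver exposes. */
static enum render_path pick_render_path(void)
{
    if (gl_has("GL_ARB_vertex_program") && gl_has("GL_ARB_fragment_program"))
        return PATH_ARB2;                 /* DX9-class programmable parts */
    if (gl_has("GL_ATI_fragment_shader"))
        return PATH_R200;                 /* Radeon 8500-class */
    if (gl_has("GL_NV_register_combiners"))
        return PATH_NV20;                 /* GeForce3/4-class */
    return PATH_BASIC;                    /* plain multitexture fallback */
}
[/code]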

The purpose of the ARB extensions is to help standardize on common, useful features in a way that will help developers write portable code. To realize this goal, working groups consisting of many different parties are formed to resolve tons of issues. All of this takes a lot of time, discussion, and revision to ensure a clean API.

I would argue in general that while adding new features to the hardware (e.g. programmability, floating-point fragment pipelines) is challenging, it remains a well-defined engineering task. However, exposing these features to the programmers in a well-thought-out software API is just as hard, if not harder.

Eric

OpenGL ARB extensions (and the core) will be better for the experience gained with vendor (NV/ATI, even EXT) extensions. It’s usually not obvious what the best API will be, and the successes and failures of vendor extensions are the best lessons to show us what a good, cross-platform API should look like.

I agree it’s something of a trade-off in terms of resources, but vendor extensions are a good thing for OpenGL, IMO.

Cass