
View Full Version : 3dLabs OpenGL 2.0 Specs!!!



no-one
09-24-2001, 08:16 PM
this could be an excellent idea.

you'll need a .PDF reader. http://www.3dlabs.com/opengl/ogl2.pdf

so what do you think about this?

BTW: i posted this on the beginner forum so that anyone could reply...

[This message has been edited by no-one (edited 09-24-2001).]

zed
09-24-2001, 09:50 PM
personally it doesn't look like they extended it far enough to me. i hope they don't rush it and cock it up

ffish
09-24-2001, 10:25 PM
I like the looks of it. Especially the C-based shader language pseudocode. Imagine real-time RenderMan style stuff on consumer hardware! I'm picturing a simple framework like current OpenGL programs but where all of the work is done in strings of shader language programs. Or, even better, compilers for these shader languages that do it all for you, you just write the shader program.
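A toy sketch of that idea, where all the per-vertex work lives in a shader-language string that the framework compiles and runs (plain Python, and the shader syntax, variable names, and `compile_shader` function are invented for illustration, not taken from the 3dlabs proposal):

```python
# toy framework: the application only supplies shader programs as strings
shader_src = "scale * position"  # hypothetical shader: uniform * attribute

def compile_shader(src):
    """Compile a shader-language string into something the framework can run."""
    code = compile(src, "<shader>", "eval")
    def run(env):
        # evaluate the shader with the given uniforms/attributes bound
        return eval(code, {"__builtins__": {}}, env)
    return run

shader = compile_shader(shader_src)
# the framework invokes the compiled shader once per vertex
out = [shader({"scale": 2.0, "position": p}) for p in (1.0, 2.0, 3.0)]
```

The point is the shape of the program: the host code is a thin loop, and the interesting work is in the string, which a real driver would compile for its own hardware.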

I don't like the idea of a huge SDK, DX style but maybe that's what we need. Like zed, I hope they don't rush it, but on the other hand, it's long overdue and if something like this isn't done soon, it may be too late. OpenGL 2.0 may be the kick in the teeth to DirectX that we all hope for.

BTW, what sort of further work would you like to see, zed?

harsman
09-24-2001, 11:02 PM
I like it. I also agree with them that it would be nice to have something to work towards rather than extending OpenGL with vendor specific stuff ad nauseam. I also think better memory management and more fine grained synchronisation (like NV_fence) is desperately needed. And a standard fragment shading language would be really cool, although it might not be feasible right now, since the hw is so diverse. For vertex shading, it probably is though, a standard vertex shader extension would be nice. I would guess that the driver writers don't really like the timing parts of the memory management though, having to report semi accurate timing estimates will probably be a pain for them.

stefan
09-25-2001, 03:09 AM
I'm not sure if hardware independence is really what most HW vendors would like to have. If you look at the cards available now you'll see that there are quite a number of features unique to each card (e.g. HILO textures on the GeForce3; Matrox has this strange head-casting thing).
Will there be a time when we'll reach the end of new features and we'll just see faster cards? If not, the suggested API will have to be extended and once again we'll have lost HW independence.

rIO
09-25-2001, 07:28 AM
It is surely true that we will NEVER have a full software implementation of the hardware capabilities of all the cards around, but we must admit that a base implementation from 10 years ago, filled out with card-vendor-specific extensions, is a damn pain!

Let's say OK to implementing (maybe all of) today's hardware functions in a transparent way; it's a pain to check whether a card supports an NV_ or SGI_ or WHATEVER_ extension.


rIO.sK

barthold
09-25-2001, 11:19 AM
A hardware independent shading and fragment language, like we are proposing, will obsolete the need for a lot of extensions. It'll be up to the application developer to use the power that this gives you creatively, and use the hardware in ways we (IHVs) cannot possibly dream of.

Yes, there will always be a need for some vendor specific extensions, and that is good. In fact, that used to be the power of OpenGL, that it is extensible. New hardware features can quickly be added to the API in a consistent manner. Unfortunately, today this has gone overboard, there are too many extensions that are not cross-platform and cross-IHV. OpenGL 2.0 will drastically reduce the need for a lot of vendor specific extensions. If the ARB then keeps up and promotes cool extensions into OpenGL 2.1, 2.2 etc, at a reasonable pace, it'll continue to be a great API.

Barthold

Zengar
09-26-2001, 03:34 AM
I liked that. Not bad, not bad. The only thing I don't like so much is the shader language.
I think it's too high-level. Won't it lose a lot of FPS? And to me the DX8 assembler is more logical. For example, I didn't understand how the 3dlabs fragment processor is supposed to work. In the sample program they import data from the vertex shader. But the vertex shader works on vertices, for example 3 points for a triangle. So only 3 points will be passed to the fragment processor? Will it create a Gouraud-shaded image, huh?

marcus256
09-26-2001, 05:11 AM
Originally posted by Zengar:
The only thing I don't like so much is the shader language.
I think it's too high-level. Won't it lose a lot of FPS? And to me the DX8 assembler is more logical.


Not many people realise this, but the higher-level the API/language, the better the possibilities for an IHV to optimise it for their underlying hardware. An example (not very realistic, but it proves the point):

If I want to calculate 1/sqrt(x) in a language which only supports +, -, * and /, I would have to implement the 1/sqrt(x) function with something like a look-up table and Taylor polynomials. If I have a high-level language which allows me to write 1/sqrt(x) in plain code, the driver can do that for me, OR, if the hardware supports it, it can do it with a single instruction.
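A concrete version of that trade-off, sketched in Python: approximating 1/sqrt(x) with nothing but +, -, * (plus the classic magic-constant bit trick for the initial guess) and Newton-Raphson refinement, which is the kind of routine you are stuck hand-rolling when the language can't express 1/sqrt(x) directly:

```python
import struct

def approx_rsqrt(x):
    """1/sqrt(x) built from +, -, * only, as one might lower it by hand."""
    # reinterpret the float's bits as an integer to get a crude first guess
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    i = 0x5F3759DF - (i >> 1)              # the well-known magic constant
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    # two Newton-Raphson steps: y <- y * (1.5 - 0.5 * x * y * y)
    for _ in range(2):
        y = y * (1.5 - 0.5 * x * y * y)
    return y
```

If the shader language lets you write 1/sqrt(x) in plain code, the driver can replace this whole routine with one hardware instruction; written out as arithmetic like this, it could not recover that.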

And in general, the people who designed the hardware have a pretty good idea about how to optimise their drivers to do the best on that particular piece of hardware.

Zengar
09-26-2001, 06:39 AM
Originally posted by marcus256:

Not many ppl realise this, but the higher level the API/language, the better are the possibilities for an IHV to optimize it for their underlying harware.

I mean that this sort of language must be designed so that the driver can easily do that.
In DX shaders you also have all the needed functions (well, not all, but lots), like sqrt and so on; what matters is the way it's done.
If you write something like
Vector1 = (Vector2 + Vector3) cross Vector4
or, even better,

Texture1 = ...
Texture2 = ... + ...
I can't imagine how a driver can put that into hardware. If you have an assembly language, a set of virtual registers and so on, it's much easier to implement in hardware. Of course, there is a solution: create a GPU that will process that script, but... is it really needed?

marcus256
09-26-2001, 09:53 PM
I mean that this sort of language must be designed so that the driver can easily do that.


Of course! And I think that a C-like language is the way to go (C isn't very high level).



If you write something like
Vector1 = (Vector2 + Vector3) cross Vector4
or, even better,
Texture1 = ...
Texture2 = ... + ...
I can't imagine how a driver can put that into hardware.


Can you imagine how C/C++ is compiled into x86 assembly language? I can't, since the x86 ISA is not much better than that of the 6502 (in the C=64), and on the C=64 no one would ever even think about writing in anything other than assembler.

What I mean is that compilers are fairly clever these days.
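To make that concrete, here is a toy sketch (plain Python, everything invented for illustration) of what a driver's compiler does with the quoted expression: lower `(Vector2 + Vector3) cross Vector4` into a flat stream of virtual-register instructions and execute it:

```python
# vector primitives the "hardware" supports
def vadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

def vcross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# what a compiler might emit for "(Vector2 + Vector3) cross Vector4":
# a list of (op, dest, src1, src2) instructions over virtual registers
program = [("ADD", "r0", "v2", "v3"),
           ("CROSS", "r1", "r0", "v4")]

def run(program, regs):
    """Execute the instruction stream against a register file."""
    ops = {"ADD": vadd, "CROSS": vcross}
    for op, dest, s1, s2 in program:
        regs[dest] = ops[op](regs[s1], regs[s2])
    return regs

regs = run(program, {"v2": (1.0, 0.0, 0.0),
                     "v3": (0.0, 1.0, 0.0),
                     "v4": (0.0, 0.0, 1.0)})
```

The high-level expression and the register program compute the same thing; the driver just has to do the (mechanical) lowering, which is exactly what C compilers have done for x86 for years.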



If you have an assembly language, a set of virtual registers and so on, it's much easier to implement in hardware. Of course, there is a solution: create a GPU that will process that script, but... is it really needed?



Well, in my opinion, it is not a bad thing. It's exactly what they are trying to do with OpenGL: set the standard for the future rather than trying to catch up with the current technology. OpenGL 1.x did that. 3D cards today are designed to meet the OpenGL 1.x architecture. Having a standard "shader" language makes it easy to implement hardware that the language maps easily onto (provided that the language is well designed).

If I am allowed to make a bold technology forecast (?): I would think that eventually all graphics cards will have several integrated "full-blown" RISC-like CPUs, with instruction and data caches larger than 1 Kword, and they will be able to use the shared onboard memory for storing large programs. Infinite possibilities emerge: real hardware-accelerated ray tracing, real-time on-chip procedural/algorithmic texture generation, true per-pixel lighting calculations (normal calculations per pixel), etc. Then it would make sense to use a high-level language like C, wouldn't it?

/Marcus

john
09-26-2001, 10:11 PM
Of course! And I think that a C-like language is the way to go (C isn't very high level).

it's hard to work out if this is said in jest or not. C barely scrapes its knuckles above assembly.

cheers,
John

john
09-26-2001, 10:15 PM
[QUOTE]Can you imagine how C/C++ is compiled into x86 assembly language? I can't, since the x86 ISA is not much better than that of the 6502 (in the C=64), and on the C=64 no one would ever even think about writing in anything other than assembler.[/QUOTE]

huh? the x86 ISA is a freak of a lot more complicated than the C=64 ISA, and that's even when you remove things like, oh, I don't know... MMX instructions, cache-fetching stuff and so on. The C=64 DID have BASIC, and I think some other languages too (including Pascal). The MAJOR stumbling block of the C=64 was the amount of memory it had.

cheers,
John

Zengar
09-27-2001, 03:18 AM
To have one shader language, and to produce GPUs that can process it: that's a great idea.

marcus256
09-27-2001, 03:55 AM
huh? the x86 ISA is a freak of a lot more complicated than the C=64 ISA, and that's even when you remove things like, oh, I don't know... MMX instructions, cache-fetching stuff and so on.


Ok, I exaggerated a little bit, hrm. I was mostly thinking of the register situation on the x86 (I'm talking i386 code, excluding SSE etc.). Any decent CPU has at least 32 general-purpose registers and an equal number of floating-point registers. The x86 has 4 GPRs and 8 stack-based FPU "registers", horror! Ok, so the 6502 had 3 "GPRs" and no FPU, not the same.



The C=64 DID have BASIC, and I think some other languages too (including Pascal). The MAJOR stumbling block of the C=64 was the amount of memory it had.


Well, speed was of course the driving issue - there was no way that any compiler could create as efficient code on the 6502.

Anyway, I feel that we have come a tad off from the topic here. I'm sorry I brought it up in the first place.

But the fact remains: the x86 ISA is a hack-upon-hack extended 8-bit ISA which isn't exactly optimised for easy compiler design. So my point remains valid (?): if a C compiler can create good x86 code, why shouldn't a C-like OGL 2.0 "compiler" be able to produce excellent code for a RISC-like GPU architecture which has been optimised for it?

/Marcus