Nvidia Cg toolkit

http://www.nvnews.net/articles/cg_toolkit/cg_toolkit.shtml

Looks V. interesting… Any idea when it’s coming out?

Nutty

First I want to know “What the hell is it?”

If it’s a shader compiler (likely, but still no conclusive descriptions out there), then, ummm, what’s the point?

What about OGL 2.0 shading language? Will NVIDIA once again try to impose its own standard while other IHVs are trying to agree on a common solution?

Not at all interesting… why can't nvidia follow the same standards as everyone else? They couldn't do it with the extensions, and now they can't with GL 2.0…

I guess that's what nvidia meant when they said they were developing “their own Glide, a modern one for modern GPUs”.

Well… imho, it's bull****

I suggest this would not be the case. It is entirely reasonable to expect the compiler to spit out GL2.0 shader symbols rather than NVIDIA ones. The interesting thing about this is that it sits on top of DirectX and OpenGL, rather than just OpenGL.

It doesn't sit on top, it sits beside them. And I don't need a third API, really not. I don't get more features, I don't get more power, I don't get anything I couldn't get before.

Just support GL2.0. Why? Because then we can code for EVERYONE, not just for nvidia. If I have to code for nvidia only, I want to see money from nvidia, for sure.

http://www.cgshaders.org/shaders/VertexNoise/

I see that a new dedicated site, cgshaders.org , is up already. The .org extension I suppose indicates a non-proprietary leaning.

By strange coincidence, the ‘feature shader’ is a vertex noise shader!

I wonder if Cg is flexible enough to allow me to port my “128 instruction Two Octaves 3D Noise with Surface Normals” vertex prog!

Remember folks, if anyone offers you a sub-standard 1-octave 3D noise shader, tell them you can get better elsewhere.

Rob J.

Nutty, you beat me to it

Anyway, I think Cg looks halfway interesting. From reading about it, my take is that it will theoretically let you write a shader once, in a single language, and compile it to DirectX 8, DirectX 9, register combiners/vertex programs, and OpenGL 2. That would be a tremendous improvement over the current state of things. But in order for this to become something more than a sort of NV-GLIDE, we need other IHVs to support it. Until that happens, nvidia calling this “the new industry-standard Cg language” is nothing but a joke. Unfortunately, a glance at their list of “Companies Supporting Cg” does not yet include any of the other IHVs.

I haven't had time to look at it in detail yet. I did look at a few source files and it looks pretty nice. What I'm not yet sure about is how this all goes into your app. Does it stay as source code and get compiled by the runtime? Does it get compiled to a platform-neutral bytecode? Will the code be able to dynamically upgrade to new platforms (for example, if you ship it in an app today, will it automatically use OpenGL 2 or DirectX 9 when those become available)? And how do you deal with things like differences in instruction sets and instruction counts between hardware (say I ship a program using it today, and another IHV suddenly supports it a few months from now, but with a fixed instruction set and instruction count)?

It looks like we still end up writing for multiple targets. The main advantage (assuming other IHVs support it) is just that we can use the same language for nvidia/ATI/Matrox/etc. OpenGL and for various versions of DirectX too. For the time being (until DX9/OGL2 support in hardware is mainstream) we won't get to write fewer code paths, we'll just have to learn fewer languages/instruction sets.

[This message has been edited by LordKronos (edited 06-13-2002).]

Thinking more about it, I am actually starting to realize that perhaps the largest benefit of Cg is that you can write shaders in a high-level language. I know that's pretty obvious, and it's something nvidia is pointing out, but I guess it didn't sink in for me because I personally am pretty comfortable with things like assembly language. Writing shaders in assembly or using register combiners is not a hold-up for me. However, for a lot of programmers, it is. I always thought register combiners were perfectly fine, but in speaking to some nvidia guys a few years ago, they said their biggest complaint was that a lot of developers were having trouble learning or getting comfortable with combiners. The same thing happens in assembly: a lot of programmers don't get the concept of a limited number of registers. The hardware has X registers, but they need 5X temporary variables, and sharing the registers among their “variables” just doesn't click with them. Being able to program in a high-level language will let a lot more people do it.

Wait a minute. I don't want that to happen… it makes my skills LESS valuable.
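To put the register point another way, here's a tiny analogy in plain C. Nothing Cg-specific, and purely illustrative; it's just the bookkeeping a compiler's register allocator does for you:

#include <math.h>

/* Four named temporaries, but a compiler tracks which values are still
 * live and reuses registers once a value is dead: here 'diffuse' and
 * 'specular' are dead after 'lit' is computed, so 'clamped' can reuse one
 * of their registers. A shading-language compiler does the same thing with
 * the handful of temp registers a vertex program gives you, instead of
 * making you juggle R0..Rn by hand. */
float shade(float n_dot_l, float n_dot_h, float shininess, float ambient)
{
    float diffuse  = (n_dot_l > 0.0f) ? n_dot_l : 0.0f;  /* temp #1 */
    float specular = (float)pow(n_dot_h, shininess);     /* temp #2 */
    float lit      = ambient + diffuse + specular;       /* temp #3 */
    float clamped  = (lit > 1.0f) ? 1.0f : lit;          /* temp #4 */
    return clamped;
}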

Hey, NVIDIA, what about the OGL 2.0 shading language? A deal to impose your standards through a .ORG, WITHOUT the consent of other companies like ATI, SGI or Matrox???

I think the shader-language war has started…

Originally posted by LordKronos:
What I'm not yet sure about is how this all goes into your app. Does it stay as source code and get compiled by the runtime? Does it get compiled to a platform-neutral bytecode? Will the code be able to dynamically upgrade to new platforms (for example, if you ship it in an app today, will it automatically use OpenGL 2 or DirectX 9 when those become available)?

Currently you can do both. The SDK includes a Cg compiler that you can use for offline compilation; it outputs a vertex program for GL, or a vertex/pixel shader for DX, which you can then load as you normally would.

There is also a runtime you can use to dynamically compile your shaders and allocate constant register memory. So if you are using the runtime your shaders would automatically take advantage of future hardware.
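For the curious, the runtime path looks roughly like this from C. Caveat: this is just a sketch, the exact entry points may not match this beta's runtime exactly (check cg.h/cgGL.h in the toolkit), and the file name, entry point and parameter name below are made up.

#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* Sketch: compile "diffuse.cg" at load time for whatever vertex profile the
 * driver reports as best, so the same source can pick up better hardware
 * later without recompiling the app. File/entry/parameter names here are
 * hypothetical. */
static CGcontext ctx;
static CGprogram prog;
static CGprofile vp_profile;

void init_shader(void)
{
    ctx        = cgCreateContext();
    vp_profile = cgGLGetLatestProfile(CG_GL_VERTEX);   /* e.g. vp20 today */
    cgGLSetOptimalOptions(vp_profile);

    prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "diffuse.cg",
                                   vp_profile, "main", NULL);
    cgGLLoadProgram(prog);
}

void draw(void)
{
    cgGLBindProgram(prog);
    cgGLEnableProfile(vp_profile);

    /* Constant registers are addressed by name instead of by index. */
    cgGLSetStateMatrixParameter(cgGetNamedParameter(prog, "ModelViewProj"),
                                CG_GL_MODELVIEW_PROJECTION_MATRIX,
                                CG_GL_MATRIX_IDENTITY);

    /* ... issue geometry ... */

    cgGLDisableProfile(vp_profile);
}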

[This message has been edited by jra101 (edited 06-13-2002).]

I think the shader-language war has started

I can see it now. Jen-Hsun Huang, in his evil alter ego… Darth Graphious. With his sinister plan to build the most powerful army of graphics shaders, and to construct a new super-powerful home base from which to run his empire (cass, matt, did you guys ever move into that HUGE new building that was being built?)

I think that if this ‘tool’ allows compilation to OpenGL 2.0 shader code, then it will be compatible with other cards that support OpenGL 2.0; and if it compiles to D3D shader code, then once again, cards that support D3D will/should end up with the same results.

I cannot see how this is a bad thing!

daveperman, your posts of late have begun to get really negative and boring real fast.

because then we can code for EVERYONE, not just for nvidia

If you’d actually taken the time to read any of it, then you’d see that they want it to be an industry standard and work not just for NV hardware.

The fact that it helps developers produce graphics for both OpenGL and DX means it's better for everyone; perhaps it'll even tempt more games developers to use OpenGL instead of just DX.

I don't get more power, I don't get anything I couldn't get before.

Well, actually you do. Although OpenGL 2 has a very nice shading language, it doesn't exactly help systems that don't have an OpenGL 2 implementation. Cg, on the other hand, works with OpenGL 1.4 (ARB_Vertex_Program/Shader), with DX8, and even with vendor-specific extensions like NV_Vertex_Program.

I’ll have a look at the demo stuff later.

Nutty

[This message has been edited by Nutty (edited 06-13-2002).]

Isn't the Cg toolkit supposed to spit out “standard” code? I thought I read that somewhere. I took that to mean it would output, or at least have the ability to output, standard OpenGL calls. If it does do that, then it's just a nice tool for creating shaders; otherwise Nvidia is trying to create a whole different standard, which would be bad.

I've downloaded the toolkit and read the specs…
The Cg language is a C-like language which produces vertex shaders/fragment shaders that you can use in your own application.
What it produces depends on the so-called “profile”. If you use a DX8 profile, it produces DX8 shaders; if you use an OpenGL vertex-program profile, it produces OpenGL vertex programs. So it's designed to support any hardware/API that has a corresponding profile. When ATI decides to make an ATI profile, the Cg compiler will produce ATI code (and I hope they will do it).
So the produced output is always bound to specific hardware, but the Cg sources are not (in most cases, because the capabilities of the language can be limited by a profile).
That's good enough for me. It's like coding for different platforms: my C code can be compiled for any CPU too.
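To make the offline path concrete: you run the same source through cgc with a different profile and load the output exactly like a hand-written program. The command lines below are from memory (check the toolkit docs), and the loading code assumes the NV_vertex_program extension entry points are already set up:

#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

/*
 * Offline, something like:
 *   cgc -profile vp20   simple.cg   ->  "!!VP1.0 ..."  (NV_vertex_program text)
 *   cgc -profile vs_1_1 simple.cg   ->  "vs.1.1 ..."   (DX8 vertex shader asm)
 * Same Cg source, only the profile changes.
 */

/* Loading the vp20 output is the usual NV_vertex_program routine: */
void load_vertex_program(GLuint id, const char *vp_text /* the cgc output */)
{
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id, (GLsizei)strlen(vp_text),
                    (const GLubyte *)vp_text);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
    glEnable(GL_VERTEX_PROGRAM_NV);
}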

I’ll give it a chance.

What bothers me is this quote posted on the cgshaders forum:
“Nvidia agrees strongly with advancing OpenGL, but thinks the Cg approach is better. We need to advance the existing OpenGL, not create a radical new OpenGL.”

3D Labs and ATI are behind the OpenGL 2.0 spec, and it is most likely that they (especially 3D Labs) will not support Cg shaders.

I've looked at the spec and like it, but I don't see anything in it that can't be done with OGL2.0. I would REALLY like to hear what exactly is so different about the Cg shaders. Do they have some functionality that OGL2 does not have? Do they expose some functionality in a way that makes it much easier to use than in OGL2.0?

Sure, some next-gen hardware may not be able to support all of the vertex/fragment shader functionality in OGL2, but that's fine: if I want to use that hardware, I will restrict my shaders to the functionality it supports, and if my shader does not compile on certain hardware, I will use a fallback shader.

I really hope that NVidia will expose the Cg shaders using the standard OGL2 function calls (e.g. Create/Use/DeleteShaderObject, VertexAttrib, LoadParameter…), so that programmers can use whichever shaders they want (Cg AND OGL2!). I would be very disappointed if they give us a different API that does essentially the same thing, just because they like it that way.
The same is true for the other parts of OpenGL2: there's no reason why NVidia should not be able to implement most of the OpenGL objects, the synchronisation and the memory management, even on their old hardware. I do not want to code two codepaths just to store my vertex arrays in card memory, no matter how confident the NVidia engineers are that their own extensions for doing the same thing are vastly superior to everything else.

I don't mind the Cg shaders at all, especially if they allow us to take better advantage of NVidia hardware and enable better interoperability between OGL and D3D apps. BUT if NVidia tries to implement OGL2 functionality using ONLY their own functionally identical but proprietary extensions, I (and probably any other OGL developer who values his time) am going to be very irritated.

From the article Nutty referred to:

Cg was developed with participation from Microsoft and the initial release will work with all programmable graphics processors that support DirectX 8 or OpenGL 1.4.

Reading between the lines, it’s clear ARB_vertex_program is now complete and shipping…

Julien.

Hi

I read the Cg spec, and it sounds interesting. I think this is the ideal intermediate step before GL2.0 comes out. I will be able to write my shaders and know that they will run on anything from a GF1 (vp20) up to a GF4. When the other vendors also support Cg, it will be no problem to run them on other hardware such as ATI's.

But what about when there is only a TNT2 as the graphics card? Will it be supported at least at the vertex-programming level? I think this could be done on the CPU (as on a GF1), by setting the modelview/projection matrices to identity and doing the per-vertex math yourself (see the sketch below), or maybe there will be an extension for it.
What do you think?
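Something like this, maybe. An untested sketch of the CPU fallback idea; the mul_mat4_vec4 helper is made up, and the rest of the per-vertex “shader” math is left as a comment:

#include <GL/gl.h>

/* Leave the GL matrices at identity and feed the pipeline vertices that
 * have already been run through our own "vertex program" on the CPU. */
extern void mul_mat4_vec4(const float m[16], const float in[4], float out[4]);

#define MAX_VERTS 65536

void draw_cpu_fallback(const float *obj_verts /* xyz */, int count,
                       const float mvp[16]    /* our modelview*projection */)
{
    static float clip_verts[MAX_VERTS * 4];
    int i;

    for (i = 0; i < count && i < MAX_VERTS; ++i) {
        float in[4] = { obj_verts[i * 3 + 0], obj_verts[i * 3 + 1],
                        obj_verts[i * 3 + 2], 1.0f };
        /* ...plus whatever other per-vertex math the shader would do... */
        mul_mat4_vec4(mvp, in, &clip_verts[i * 4]);
    }

    /* Identity matrices: the vertices we submit are already clip-space. */
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(4, GL_FLOAT, 0, clip_verts);
    glDrawArrays(GL_TRIANGLES, 0, count);
    glDisableClientState(GL_VERTEX_ARRAY);
}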

Bye
ScottManDeath

I am downloading the toolkit, so I don't know right now where I stand, but one thing I am sure of is that OGL2.0 is not just around the corner. If you are doing game development, in 2-3 years you will still need to support the GeForce4, maybe the GeForce3, so if there is a nice common language for the meantime (before OGL2.0 is on most video cards), then it is potentially a good thing.