
View Full Version : Nvidia Cg toolkit



Nutty
06-13-2002, 04:15 AM
http://www.nvnews.net/articles/cg_toolkit/cg_toolkit.shtml

Looks V. interesting.. Any idea when it's coming out? http://www.opengl.org/discussion_boards/ubb/smile.gif

Nutty

zeckensack
06-13-2002, 04:25 AM
First I want to know "What the hell is it?"

If it's a shader compiler (likely, but still no conclusive descriptions out there), then, ummm, what's the point?

kehziah
06-13-2002, 04:33 AM
What about OGL 2.0 shading language? Will NVIDIA once again try to impose its own standard while other IHVs are trying to agree on a common solution?

davepermen
06-13-2002, 04:38 AM
not at all interesting.. why can't nvidia follow the standards like everyone else? they couldn't do it with the exts, and now they can't with gl2.0..

i guess this is what nvidia meant when they once said "we'll develop our own glide, a modern one for modern gpu's"

well.. imho, it's bull****

Robbo
06-13-2002, 04:40 AM
I suggest this would not be the case. It is entirely reasonable to expect the compiler to spit out GL2.0 shader symbols rather than NVIDIA ones. The interesting thing about this is that it sits on top of DirectX and OpenGL, rather than just OpenGL.

davepermen
06-13-2002, 04:44 AM
it doesn't sit on top, it sits beside. and i don't need a 3rd api. really not. i don't get more features, i don't get more power, i don't get stuff that i could not get before.

just support gl2.0. why? because then we can code for EVERYONE, not just for nvidia. if i have to code for nvidia only, i want to see money from nvidia, for sure.

pocketmoon
06-13-2002, 05:39 AM
http://www.cgshaders.org/shaders/VertexNoise/

I see that a new dedicated site, cgshaders.org, is up already. The .org extension, I suppose, indicates a non-proprietary leaning.

By strange coincidence the 'feature shader' is a vertex noise shader!

I wonder if Cg is flexible enough to allow me to port my "128 instruction Two Octaves 3D Noise with Surface Normals" vertex prog!

Remember folks, If anyone offers you a sub-standard 1 Octave 3D noise shader tell them you can get better elsewhere http://www.opengl.org/discussion_boards/ubb/wink.gif

Rob J.

LordKronos
06-13-2002, 05:44 AM
Nutty, you beat me to it http://www.opengl.org/discussion_boards/ubb/smile.gif

Anyway, I think Cg looks halfway interesting. From reading about it, my take is that it will theoretically let you write a single shader source and compile it to DirectX 8, DirectX 9, register combiners/vertex programs, and OpenGL 2. That would be a tremendous improvement over the current state of things. But in order for this to become something more than a sort of NV-GLIDE, we need other IHVs to support it. Until that happens, nvidia calling this "the new industry-standard Cg language" is nothing but a joke. Unfortunately, a glance at their list of "Companies Supporting Cg" shows it does not yet include any of the other IHVs.

I haven't had time to look at it in detail yet. I did look at a few source files and it looks pretty nice. What I'm not yet sure about is how this all goes into your app. Does it stay as source code and get compiled by the runtime? Does it get compiled to a platform-neutral bytecode? Will the code be able to dynamically upgrade to new platforms (such as if you ship it in an app today, then when OpenGL 2 or DirectX 9 ships it will automatically use whatever is available)? How do you deal with things like instruction set/number differences between hardware (say I ship a program using it today, and another IHV suddenly supports it a few months from now, but has a different fixed set/number of instructions)?

It looks like we still end up writing for multiple targets. The main advantage (assuming other IHVs support it) is just that we can use the same language for nvidia/ATI/matrox/etc OpenGL and for various versions of DirectX too. For the time being (until DX9/OGL2 support in hardware is mainstream) we won't get to write fewer code paths, we'll just have fewer languages/instruction sets to learn.
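To make the "write once, compile to many targets" idea concrete, here is a rough sketch of what a Cg vertex shader looks like going by the language spec (the struct and parameter names are just made up for illustration, not taken from the toolkit):

// minimal Cg vertex shader sketch: transform the position, pass a colour through
struct appin {
    float4 position : POSITION;
    float4 color    : COLOR0;
};

struct vertout {
    float4 position : POSITION;
    float4 color    : COLOR0;
};

vertout main(appin IN, uniform float4x4 modelViewProj)
{
    vertout OUT;
    OUT.position = mul(modelViewProj, IN.position); // clip-space transform
    OUT.color    = IN.color;
    return OUT;
}

The same source would then be handed to whichever back end (DX8, vertex programs, eventually OpenGL 2) the compiler has a profile for.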

[This message has been edited by LordKronos (edited 06-13-2002).]

LordKronos
06-13-2002, 06:01 AM
In thinking more about it, I am actually starting to realize that perhaps the largest benefit of Cg is that you can write shaders in a high level language. I know that's pretty obvious, and it's something that nvidia is pointing out, but I guess it didn't sink in for me because I personally am pretty comfortable with things like assembly language. Writing shaders in assembly or using register combiners is not a hold up for me. However, for a lot of programmers, it is. I always thought register combiners were perfectly fine, but in speaking to some nvidia guys a few years ago they said their biggest complaint was that a lot of developers were having trouble learning or getting comfortable with combiners. The same thing happens in assembly: a lot of programmers don't get the concept of a limited number of registers. They have X registers, but need 5X temporary variables. Sharing the registers among their "variables" just doesn't click with them. Being able to program in a high level language will let a lot more people do it.

Wait a minute. I don't want that to happen...it makes my skills LESS valuable http://www.opengl.org/discussion_boards/ubb/smile.gif

santyhamer
06-13-2002, 06:01 AM
Hey, NVIDIA, what about the OGL 2.0 shading language? Are you trying to impose your standards via an .ORG WITHOUT the consent of other companies like ATI, SGI or Matrox???

I think the shader-language war has started...

jra101
06-13-2002, 06:25 AM
Originally posted by LordKronos:
What Im not yet sure about is how this all goes into your app. Does it stay as source code and get compiled by the runtime? Does it get compiled to a platform-neutral bytecode? Will the code be able to dynamically upgrade to new platforms (such as if you ship it in an app today, then when OpenGL 2 or DirectX9 ships it will automatically use whatever is available).

Currently you can do both. The SDK includes a Cg compiler that you can use for offline compilation; it outputs a vertex program for GL or a vertex/pixel shader for DX, which you can then load as you normally would.

There is also a runtime you can use to dynamically compile your shaders and allocate constant register memory. So if you are using the runtime your shaders would automatically take advantage of future hardware.
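For the offline path, the idea is roughly this (a sketch; the -profile/-entry/-o switches are how the shipping cgc command-line compiler works, and the exact spelling may differ in this release):

cgc -profile vp20 -entry main -o shader.vp shader.cg       <- OpenGL (NV2x) vertex program
cgc -profile vs_1_1 -entry main -o shader.vso shader.cg    <- DirectX 8 vertex shader

Same .cg source, two different outputs, selected purely by the profile.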



[This message has been edited by jra101 (edited 06-13-2002).]

LordKronos
06-13-2002, 06:34 AM
I think the shader-language war has started

I can see it now. Jen-Hsun Huang, in his evil alter ego...Darth Graphious. With his sinister plan to build the most powerful army of graphics shaders, and constructing a new super-powerful home base from which to run his empire (cass, matt, did you guys ever move into that HUGE new building that was being built?)

Robbo
06-13-2002, 06:38 AM
I think that if this `tool' allows compilation to OpenGL 2.0 shader code, then it will be compatible with other cards that support OpenGL 2.0; and if it compiles to D3D shader code then, once again, D3D-supporting cards will/should end up with the same results.

I cannot see how this is a bad thing!

Nutty
06-13-2002, 07:14 AM
davepermen, your posts of late have begun to get really negative and boring real fast.


because then we can code for EVERYONE, not just for nvidia

If you'd actually taken the time to read any of it, then you'd see that they want it to be an industry standard and to work on more than just NV hardware.

The fact that it helps developers produce graphics for _both_ OpenGL and DX means better for everyone; perhaps it'll even tempt more games developers to use OpenGL instead of just DX.


i dont get more power, i dont get stuff that i could not get before.

Well actually you do. Although OpenGL 2 has a very nice shading language, it doesn't exactly help systems that don't have an OpenGL 2 implementation. Whereas Cg works with OpenGL 1.4 (ARB_vertex_program/shader), DX8, and even vendor-specific extensions like NV_vertex_program.

I'll have a look at the demo stuff later.

Nutty

[This message has been edited by Nutty (edited 06-13-2002).]

BillyBOb
06-13-2002, 07:22 AM
Isn't the cg toolkit supposed to spit out "standard" code? I thought I read that somewhere. I thought that meant it would output, or at least have the ability to output, standard opengl calls. If it does do that then it's just a nice tool to create shaders, else Nvidia is trying to create a whole different standard--which would be bad.

AdrianD
06-13-2002, 08:07 AM
i've downloaded this toolkit and read the specs...
The Cg language is a C-like language which produces vertexshaders/fragmentshaders, which you can use in your own application.
What it produces depends on the so called "profile". if you use a DX8 profile, it produces DX8 shaders; if you use an opengl-vertexprogram profile it produces opengl vertexprograms. So it's designed to support any hardware/api with the corresponding profile. when ATI decides to make an ATI profile, the cg-compiler will produce ATI code. (and i hope they will do it)
So the produced output is always bound to specific hardware, but the cg sources are not. (in most cases, because the capabilities of the language can be limited by a profile)
that's good enough for me. it's like coding for different platforms: my c code can be compiled on any CPU too.

I'll give it a chance.

GeLeTo
06-13-2002, 08:14 AM
What bothers me is this quote posted on the cgshaders forum:
"Nvidia agrees strongly with advancing OpenGL, but thinks the Cg approach is better. We need to advance the existing OpenGL, not create a radical new OpenGL."

3D Labs and ATI are behind the OpenGL 2.0 spec and it is most likely that they (especially 3D Labs) will not support Cg shaders.

I've looked at the spec and like it, but I don't find anything that can't be done with OGL2.0. I would REALLY like to hear what exactly is so different about the Cg shaders. Do they have some functionality that OGL2 does not have? Do they expose some functionality in a way that makes it much easier to use than in OGL2.0?

Sure, some next-gen hardware may not be able to support all of the vertex/fragment shader functionality in OGL2, but that's fine, if I want to use their hardware I will restrict my shaders to the functionality that their hardware supports. If my shader does not compile on that certain hardware I will use a fallback shader.

I really hope that NVidia will expose the Cg shaders using the standard OGL2 function calls (e.g. Create/Use/DeleteShaderObject, VertexAttrib, LoadParameter...), so that programmers can use whatever (Cg AND! OGL2) shaders they want. I would be very disappointed if they give us a different API that does essentially the same thing, just because they like it that way.
The same is true for the other parts of OpenGL2 - there's no reason why NVidia should not be able to implement most of the OpenGL Objects, the synchronisation and memory management - even on their old hardware. I do not want to code two codepaths to store my vertex arrays in card memory, no matter how confident the NVidia engineers are that their extensions for doing the same thing are vastly superior to everything else.

I don't mind the Cg shaders at all - especially if they will allow us to take better advantage of the NVidia hardware and will enable better interoperability between OGL and D3D apps. BUT if NVidia tries to implement OGL2 functionality using ONLY their own identical but proprietary extensions, I (and probably any other OGL developer who values his time) am going to be very irritated.

Julien Cayzac
06-13-2002, 08:17 AM
Read in the article Nutty referred to:

Cg was developed with participation from Microsoft and the initial release will work with all programmable graphics processors that support DirectX 8 or OpenGL 1.4.

Reading between the lines, it's clear ARB_vertex_program is now complete and shipping... http://www.opengl.org/discussion_boards/ubb/smile.gif

Julien.

ScottManDeath
06-13-2002, 08:23 AM
Hi

I read the cg spec, it sounds interesting. I think this is the ideal intermediate step before gl2.0 comes out. So I will be able to write my shaders and I'll know that they will run on anything from a gf1 (vp20) up to a gf4. When the other vendors also support cg it will be no problem to run them on other hw such as ati's.

But what happens when there is only a TNT2 as the graphics card? Will it be supported at least at the vertex programming level? I think this could be done on the CPU (as on a GF1) by setting the modelview/projection matrices to identity and doing the per-vertex math yourself, or there will be an extension.
What do you think?

Bye
ScottManDeath

Gorg
06-13-2002, 08:28 AM
I am downloading the toolkit, so I don't know right now where I stand, but one thing I am sure of is that OGL2.0 is not just around the corner, and if you are doing game development, in 2-3 years you will still need to support Geforce4, maybe 3, so if there is a nice common language for the meantime (before OGL2.0 is on most vid cards) then it is potentially a good thing.

LordKronos
06-13-2002, 09:31 AM
After reading through some more stuff, I think my second post hit it on the head. The main advantage is the high level aspect of things rather than assembly. Looking through some of the testimonials from developers that have been working with Cg, half of them go something like this:
"Right now we have 1 or 2 people that can read and write shaders, but with Cg everyone will be able to work with it"

Of course, that's a bit of an optimistic statement. There is still the high bar of having to understand a lot of the math that goes on. Then again, if one developer writes some Cg subroutines for transforming into tangent space and so on, then all that the other developers need to know is that they need to transform into tangent space. They won't need to know how to do it, and they won't need to worry about taking the assembly instructions someone else provided, mixing those in with other instructions, worrying about temp register collisions, etc.
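As a sketch of the kind of subroutine I mean (just an illustration, not code from the toolkit), a little Cg helper that takes a vector into tangent space hides all of that from the caller:

// rotate a vector into tangent space, given the per-vertex basis
// (hypothetical helper, all names made up)
float3 to_tangent_space(float3 v, float3 T, float3 B, float3 N)
{
    return float3(dot(v, T), dot(v, B), dot(v, N));
}

// in a vertex shader:
// OUT.lightTS = to_tangent_space(lightVec, IN.tangent, IN.binormal, IN.normal);

The other developers just call the function; the compiler worries about which temporaries end up in which registers.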

Now another thing is that this is Cg 1.0, and I read that they will be shipping Cg 2.0 with the launch of the NV-30. Why is this? If they are just upgrading it to add an NV-30 profile, that's well and fine. But if they will be adding new features to the language to support the NV-30, then things are entirely wrong and it's not likely other IHVs will adopt Cg, because it would favor nvidia too much. Basically, the only way I see it succeeding as an "industry standard" will be if it is fully featured enough in version 1 to support several years' worth of graphics cards.

That said, even if it doesn't become an industry standard, at least it will ease development on the nvidia side of things, having one language for OpenGL 1.4/2.0 and DirectX 8/9.

One thing I noticed though...I read the specification and it mentions the "fp20 [profile] for compiling fragment programs to NV2X's OpenGL API". Then the spec goes on to give detailed descriptions of DirectX 8 vertex shaders, DirectX 8 pixel shaders, and OpenGL vertex programs. No detailed description of the fragment programs. What happened?

zed
06-13-2002, 09:54 AM
Originally posted by LordKronos:
In thinking more about it, I am actually starting to realize that perhaps the largest benefit of Cg is that you can write shaders in a high level language. [...] Being able to program in a high level language will let a lot more people do it.


i thought the main benefit of a high level language is 'speed benefits'. eg with cpus it's interesting to note that a lot of the assembly 'tricks' from a couple of years ago actually run slower than the C version written then + compiled today (ie the asm is set in stone, the higher level language ain't).
multiply that by about a factor of 5 (about the rate graphics hardware seems to be advancing compared to cpu's http://www.opengl.org/discussion_boards/ubb/smile.gif )

btw i've said this at least 10x in the last couple of years: don't bother learning pixelshaders cause the syntax will soon be superseded. well, here's further proof.


Wait a minute. I dont want that to happen...it makes my skills LESS valuable

im right behind u ehor http://www.opengl.org/discussion_boards/ubb/frown.gif

edit- i agree with davepermen (+ others) this being a 'standard' is a bad idea

BIG QUESTION how much input from sources outside of nvidia went into the design of this?

also, something no one's mentioned: i don't think ms will be too happy about this wrt d3d. i know relations between ms + nvidia ain't been going too good recently (perhaps because of the xbox failure?)

to lighten the topic i just madeup a joke http://www.opengl.org/discussion_boards/ubb/smile.gif

Q whats the difference between the xbox + the dodo?
A the dodo managed to hold out for a few years

ok ok feel free to shoot me (offer only open for 24 hours)


[This message has been edited by zed (edited 06-13-2002).]

Nutty
06-13-2002, 11:19 AM
BIG QUESTION how much input from sources outside of nvidia went into the design of this?

also, something no one's mentioned: i don't think ms will be too happy about this wrt d3d. i know relations between ms + nvidia ain't been going too good recently


AFAIK, it was a joint development between Nvidia and M$.


edit- i agree with davepermen (+ others) this being a 'standard' is a bad idea

Why? Standards are good. Ppl seem to imply that NV is trying to steal ppl away from GL2's shading system. It isn't. Cg allows you to create a single "shader" that compiles down to GL2, GL1.4, DX8, DX9, and across _multiple_ architectures, simply by changing the profile at compile time. The original source shader stays the same. Not all hardware is going to be able to support GL2 completely. Cg allows you to still utilize a single shading language for current hardware, which will be mainstream for quite a while.

Nutty

ScottManDeath
06-13-2002, 11:50 AM
Hi

I think I will play with it; perhaps I will use it for my game, where I'm currently writing the base. It would be great if ATI wrote a profile ASAP so I could raise the minimal graphics card to a GF3/Radeon 8500 http://www.opengl.org/discussion_boards/ubb/wink.gif without writing multiple code paths.

As mentioned by others before, I'm missing a profile for register combiners/texture shaders (I could write a pixel shader and pass it to nvparse, but that is not the point of the exercise). Perhaps they should make the profile system open source so others [not only IHV's] can write a backend for their purposes.
Perhaps it is a little bit silly, but what about writing a fragment backend for ARB_texture_env_combine so you could write a shader that makes TNT2/Rage users happy http://www.opengl.org/discussion_boards/ubb/wink.gif ?

Bye
ScottManDeath

newt
06-13-2002, 12:31 PM
This does mean that you have to compile your shader for each target graphics card, not to mention for each platform (Win32, Linux, Mac etc). You'd also have to compile for nVidia, ATI, Matrox, 3Dlabs and any other new vendors on the horizon, for each card model which you intend to support.

OpenGL 2.0 sounds like the way to go in the future.

There are other high level shading languages out there. Take your pick.

Nutty
06-13-2002, 12:35 PM
No, you compile your shader at run-time, not development time.

You can compile at development time if you wish, but you then have to compile for all the various hardware like you said.

Run-time compilation is suggested as the preferred method; then you don't have this problem. Also, when new hardware ships, your shader will automatically compile to the new hardware, instead of being hard-coded into using older features.
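A minimal sketch of what that looks like in C (the function names here are from the Cg runtime headers as they shipped later; the beta runtime may spell things differently): the program asks for the best profile the driver supports right now and compiles the source against it.

#include <Cg/cg.h>
#include <Cg/cgGL.h>

void load_vertex_shader(void)
{
    CGcontext ctx  = cgCreateContext();
    CGprofile prof = cgGLGetLatestProfile(CG_GL_VERTEX);   /* best vertex profile available today */
    CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "shader.cg",
                                             prof, "main", NULL);  /* compiled at run-time */
    cgGLLoadProgram(prog);      /* hand the compiled result to the driver */
    cgGLEnableProfile(prof);
    cgGLBindProgram(prog);
}

Ship the .cg source (or an encrypted copy of it), and a future driver/runtime simply reports a newer profile.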

Nutty

SirKnight
06-13-2002, 12:57 PM
I've been playing with this new Cg stuff for a while now and all i have to say is GOOD JOB NVIDIA!! I think it's great. I mean writing the vertex programs in assembly and stuff wasn't too bad, i did like it a lot. But heck, now with Cg, i like writing shaders even more. http://www.opengl.org/discussion_boards/ubb/smile.gif The only thing that interested me in OpenGL 2.0 was how shaders were written (in a C like language), but now with this, i couldn't care less about OpenGL 2.0. Cg will work from a GF3 on up RIGHT NOW, and hopefully on other cards (like ATI and stuff), which is what i think they were also trying to get at with this. It looks like some here are not as excited about it as i am, but i guess you can't please everyone. I am actually surprised some have said negative things about it; this is a pretty darn powerful thing here. Once the next gen cards come out that have a more powerful programmable GPU, this Cg language will be even more awesome. http://www.opengl.org/discussion_boards/ubb/smile.gif

-SirKnight

Nutty
06-13-2002, 01:04 PM
Nice to see someone who thinks it's good! http://www.opengl.org/discussion_boards/ubb/smile.gif

I really hope ATI, Matrox and 3DLabs get their profiles built for this sharpish. No more writing vast amounts of GPU assembler for every target card out there.

All we need now is a Cg debugger. http://www.opengl.org/discussion_boards/ubb/smile.gif

Nutty

zeckensack
06-13-2002, 01:15 PM
So the conclusion is, they upgraded nvparse and turned it into big news. Exciting, isn't it ...

I suspect at least some 'inspiration' for that has been gleaned from the GL2 shader compiler sources.

[flame bait]
They'd better fix their 'fragment shaders'. If they had an interface that could come even close to ATi's, there wouldn't be any need for this anyway.
[/flame bait]

WhatEver
06-13-2002, 01:18 PM
OMG, I just found out about this on another site...and just yesterday I was talking about C style shader programming and BAM! They release this! I'M VERY VERY EXCITED!!!

If any of you are interested, I have put up a resource section at Spider3D just for this sort of thing: http://www.spider3d.com/html/resources__.html

I can't wait to try out this new vertex program generator http://www.opengl.org/discussion_boards/ubb/biggrin.gif LOL

Mazy
06-13-2002, 01:34 PM
Well, i like it, but the shader language that will come with ogl2.0 is also meant to be able to compile to dx.. (they announced a 'competition' about that on the suggestions forum a while ago)

it would have been better to use that syntax to the max instead of a new one..

SirKnight
06-13-2002, 03:21 PM
All we need now is a Cg debugger. http://www.opengl.org/discussion_boards/ubb/smile.gif


YES! That is exactly what we need now. I have always wanted a debugger for vertex_programs and the register_combiners but now with Cg, I think a debugger is more likely.



So the conclusion is, they upgraded nvparse and turned it into big news. Exciting, isn't it ...


Um...no. Cg is much more than an "upgraded nvparse."

-SirKnight

SirKnight
06-13-2002, 03:30 PM
Hmmm...I'm not sure if this would be possible, but it would be pretty cool if we could somehow use the VC++ debugger with Cg. But then again, I don't know if that's possible or, even if it was, how it would work out.

-SirKnight

WhatEver
06-13-2002, 05:03 PM
That would be VERY cool SirKnight!

I thought the nvbrowser(or whatever it's called) had debugging capabilities.

V-man
06-13-2002, 05:07 PM
So how useful do you think Cg will be anyway? Does this mean you can write something in a couple of lines instead of 50?

I will have to download and try this one out for sure. Looks very attractive.

Also, I think the idea is pretty obvious: being API independent, GPU independent and being free of extensions. gl2 is supposed to solve the latter two, but being API independent is kind of new.

V-man

JD
06-13-2002, 06:49 PM
Where's Ati with their plugin? Or are radeons 8500 extreme rarity around here http://www.opengl.org/discussion_boards/ubb/smile.gif

zeckensack
06-13-2002, 07:02 PM
Originally posted by JD:
Where's Ati with their plugin? Or are radeons 8500 extreme rarity around here http://www.opengl.org/discussion_boards/ubb/smile.gif

Present! http://www.opengl.org/discussion_boards/ubb/smile.gif

JD
06-13-2002, 08:22 PM
LOL http://www.opengl.org/discussion_boards/ubb/smile.gif

ffish
06-13-2002, 08:42 PM
Originally posted by GeLeTo:
What bothers me is this quote posted on the cgshaders forum:
"Nvidia agrees strongly with advancing OpenGL, but thinks the Cg approach is better. We need to advance the existing OpenGL, not create a radical new OpenGL."


Here's a link to where this quote probably came from:
http://www.extremetech.com/article/0,3396,apn=8&s=1017&a=28051&app=6&ap=7,00.asp

NVIDIA don't like OpenGL 2.0? I don't like the way Kurt talks about OpenGL 1.4 and OpenGL 1.5 like they are official releases. Am I wrong in thinking that they are just NVIDIA releases or is the ARB really moving to progress the 1.x line? IMHO effort should be targeted towards OpenGL 2.0 rather than 1.4,5,6 etc. I really liked the way OpenGL 2.0 was progressing and believed that was the future. Of course, the interview may not express official NVIDIA views but I'm guessing it probably does.

Don't get me wrong, I really like NVIDIA products and especially developer support and I will certainly play around with Cg. However, after reading all of the (numerous) articles and discussion online, I have a feeling of unease. I think I would have preferred Cg was never released - I was happy to wait for OpenGL 2.0.

Anyway, the future's looking interesting!

davepermen
06-13-2002, 09:41 PM
Originally posted by SirKnight:
YES! That is exactly what we need now. I have always wanted a debugger for vertex_programs and the register_combiners but now with Cg, I think a debugger is more likely.
nvparse debugged vertexprograms and registercombiners..

Um...no. Cg is much more than an "upgraded nvparse."
why? it's a high-level runtime shader compiler for currently nvidia-only hardware, soon for all gl1.4 hw and afterwards for the rest.
but nvparse did the same, just less "complete".
(i'm not trying to knock it down; it's a great nvparse http://www.opengl.org/discussion_boards/ubb/wink.gif)

to all, just one thing about the runtime shader compiler:
do you really want as a game developer that all your sources are opensource? if you compile at runtime everyone can read your whole gpu-part of the engine. while it would be great to see the source of a doom3, for example, i dunno how you would like your shaders getting copied around everywhere.. you create a famous effect, and before you can say "piepiep" everyone uses it with no effort of their own.
i'm not against open source but well.. it's just a thought..


another thing.
nvidia should learn how to create small stuff.. 80mb this time.. how fun with 56k..

davepermen
06-13-2002, 11:31 PM
so. now i'm allowed to make some statements.
i've now read the 150-page spec.
never thought i could read through 150 pages that fast..
have to admit, i like the design. it's quite okay. i would prefer if they supported the gl2.0 language directly and made a compiler for vp instead of writing their own, but well..

another thing i would love:
a programmable pixelshader in gl on nv10 hw.. because rc's could be used there anyway and it would be nice if they supported this. oh, and the nv10 vp could be much better than the nv20 vp: as it is in software it could support branching as well, and huge arrays for doing texture lookups and so on http://www.opengl.org/discussion_boards/ubb/wink.gif oh no, that's matrox, isn't it? http://www.opengl.org/discussion_boards/ubb/smile.gif

well then..
will there be a fragmentToPixel program as well? a programmable blending unit would be sweet, including stenciling, depth testing etc.. i know there is not much programmability there, but it's enough to be worth supporting. (it's about the same amount of "programmability" as the texture shaders, so what?)

anyone having a cheap gf3 around?

Nutty
06-13-2002, 11:40 PM
if you compile at runtime everyone can read your whole gpu-part of the engine.

There's nothing stopping you from encrypting your shader files and decrypting them on load.


anyone having a cheap gf3 around?

I'll sell you my GF4, when Nv30 comes out http://www.opengl.org/discussion_boards/ubb/smile.gif

Nutty

davepermen
06-14-2002, 12:20 AM
yeah, encrypting and decrypting.. it would be sweet to compile down to an intermediate language, like java, like .net, like gl2.0 does, which could then be shipped.. so i only have to store that binary intermediate language..

anyways, it's not important for me, as i'm not a big paid game company.. if someone steals my stuff that could only be good (say if carmack does http://www.opengl.org/discussion_boards/ubb/wink.gif)

when you get your nv30 i won't need a gf4 anymore, cause then i'll have a) an r300 or b) an nv30 or c) something else myself..

i just need something to bridge the gap..
nvidia: if i say i'll buy an nv30, would you give me a gf3 till then?

zed
06-14-2002, 12:40 AM
>>nvidia should learn how to create small stuff.. 80mb this time.. how fun with 56k..<<

i should email this to nvidia but it tends to fall on deaf ears there,

but anyways i agree with dave. 80mb (what's the date today, june 14th? hopefully i will have it downloaded by the end of the month)
2 suggestions

A/ break the file up into smaller pieces 20mb is ok (getting to the limit though)
B/ i assume it uses zip, use some other better compression method eg bz2 then i will only have to download 60-65mb, when is zip gonna die http://www.opengl.org/discussion_boards/ubb/frown.gif

davepermen
06-14-2002, 12:45 AM
what's the use of zipping fat stuff? it's fat anyway, even if not quite as fat anymore.
better they should try not to blow up their stuff in the first place.. all those sdk's and such are just plain stupid imho. they could step back and code demos, as they did if you look at the stuff from 2000 or so. that was fine: a demo with 100k, one with 5mb (because of the models and textures) etc..

what we need in the cg toolkit is a .lib, a .h and the installation file. that should fit in a meg or two. we need the documentation which is 1mb, and the helpfiles which should not exceed one or two megs and could even be online html. the demos should not use fat textures and only some models, which means the data package is possibly another 5mb, and the demos themselves can be small exes and small sources.

it should fit in 20mb, the whole thing. at most.

i dunno as i haven't downloaded it yet. we'll see what's all in there..

EDIT: actually it takes quite long to download even here in the company, where we have a pretty fast connection.. longer than downloading the old demos with a modem.. that's the part of evolution i hate.. we have slow connections => everyone tries to make small stuff.. connections get faster => stuff gets fatter. in the end, no difference..
stop that please!
i have ice age on 200mb currently, a whole 80min movie. i don't want a simple compiler to be the same size.. (and yes it looks quite okay, VHS quality it is, and yes i know it's illegal but a) i watched it in the cinema already and b) i'll buy the dvd when it's out, so what?)

[This message has been edited by davepermen (edited 06-14-2002).]

LordKronos
06-14-2002, 02:32 AM
Originally posted by davepermen:
do you really want as a game developer that all your sources are opensource? if you compile at runtime everyone can read your whole gpu-part of the engine.

Ever heard of GLTrace? With a little bit of extension, it would be completely trivial to upgrade GLTrace to intercept the commands that feed the shaders into the driver. That's all you need to find out what commands are going in. Sure, they may only get the low level version of the code instead of the Cg version, but it's not like that has ever stopped people/companies from reverse engineering before. And as someone else said, you can encrypt the code too. If someone is clever enough to decrypt your source, they are probably clever enough to reverse engineer the compiled shader anyway.

davepermen
06-14-2002, 02:42 AM
how about some other compilation platform?

currently there are
glvertexprogram
dxvertexshader
dxpixelshader

soon there will be
glpixelprogram or so

then there will be gl1.4vertex and pixelshader

then dx9 vs and ps.

then gl2.0

well.. while we're at it, i would like to have another one:

cpu

you define an in-buffer and an out-buffer, and with this language you can process vertex data. that way cg would turn out to be a vertex-c-language for streaming data processing in a general sense.
it would be too cool..
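something like this, i mean (purely hypothetical, nothing like it is in the toolkit -- just to show the in-buffer/out-buffer idea):

/* hypothetical "cpu profile": run a compiled Cg-style kernel over a vertex stream */
typedef void (*vertex_kernel)(const float *in, float *out, const float *uniforms);

void process_stream(vertex_kernel kernel, const float *in, float *out,
                    int count, int in_stride, int out_stride, const float *uniforms)
{
    int i;
    for (i = 0; i < count; ++i)
        kernel(in + i * in_stride, out + i * out_stride, uniforms);
}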

but i think that's too much for nvidia, no? first get it working on other gpus...

davepermen
06-14-2002, 03:13 AM
Originally posted by LordKronos:
Ever heard of GLTrace? With a little bit of extension, it would be completely trivial to upgrade GLTrace to intercept the commands that feed the shaders into the driver. [...] If someone is clever enough to decrypt your source, they are probably clever enough to reverse engineer the compiled shader anyway.

yeah sure, but i don't see stuff getting reverse engineered _THAT_ often (and you can do it here as well; and if there are .dll's it's even quite simple to find the general structures due to the named functions). it would protect you from the ones that are more newbieish and just want to rip stuff to look cool (newbies always want code from us; if they could get it for free they would take it and not look at any licence..)

davepermen
06-14-2002, 04:01 AM
http://developer.nvidia.com/dev_content/cg/cg_examples/pages/soft_stencil_shadows.htm

VERY sweet. but not in the 80mb file.

tarantula
06-14-2002, 04:04 AM
An API independent shader language is cool, but it would be nice if the shader language were integrated with the API.
I'd hate to see very few people using OpenGL 2.0's Shader Language.

Can someone please tell me if I can run an opengl vertex program or Cg on a TNT? (yeah, there still are people using a TNT http://www.opengl.org/discussion_boards/ubb/smile.gif ...in software mode)

davepermen
06-14-2002, 04:11 AM
if NV_vertex_program is exposed, yes you can (easy to check, no?)

wimmer
06-14-2002, 04:14 AM
Originally posted by davepermen:
how about some other compilationplatform?

currently there are
glvertexprogram
dxvertexshader
dxpixelshader

soon there will be
glpixelprogram or so


Talking about Cg 2.0 (this was asked before in this thread)...

With vertex and pixel shaders (programs) we are not quite there yet. The next big thing will probably be _primitive_ programs, i.e., things like freely programmable NURBS or subdivision tessellators.

If anyone is wondering how they whipped all of this stuff out of the blue, take a look at http://graphics.stanford.edu/projects/shading/. According to Kurt Akeley, Bill Mark is one of the developers of Cg. Guess who's one of the chief developers of the Stanford shaders http://www.opengl.org/discussion_boards/ubb/smile.gif




cpu

you define an in-buffer and an out-buffer, and with this language you can process vertex data. that way cg would turn out to be a vertex-c-language for streaming data processing in a general sense.
it would be too cool..

but i think that's too much for nvidia, no? first get it working on other gpus...


In a way, that's what's going to happen. In his invited talk at GI2002, David Kirk said the future lies in "stream processors". That's a little different from current CPUs, because they waste a lot of space (>70%?) on cache memory. With GPUs, you can use all of the silicon for computation if you adhere to the concept of freely programmable "stream processors" with a certain input and output bandwidth, and load balancing to keep all the processors happy...

Michael

Nutty
06-14-2002, 04:15 AM
http://developer.nvidia.com/dev_content/cg/cg_examples/pages/soft_stencil_shadows.htm
VERY sweet. but not in the 80mb file.


Dunno what 80 meg file you're looking at, but the 80meg toolkit I downloaded did have that demo in. It's in the Cg browser.

Considering most of this stuff is targeted at game developers, they're prolly not bothered about ppl whinging that it takes a long time to download on a 56k modem. I don't know of any games company that doesn't have a fat pipe. Buy broadband dave! In England you can get broadband for cheaper than flat-rate dial-up!!

Nutty

davepermen
06-14-2002, 04:19 AM
stop bitching personally rich okay? not in the forums. come online then http://www.opengl.org/discussion_boards/ubb/smile.gif

i will have broadband soon.
anyways, it's stupid to download such files if you could just run the setup in realtime over broadband instead. even on broadband it takes time to download that. that's not why i want broadband. not just to get 10 times bigger files than before broadband was standard. really not.

EDIT: thanks anyways, i've found it now

EDIT 2: and now? where is the SOURCE of the whole thing? i don't need the shadow-volume expansion, i want to see how they do the soft shadow part.. THAT is not in, or is it, richy?

[This message has been edited by davepermen (edited 06-14-2002).]

[This message has been edited by davepermen (edited 06-14-2002).]

Carmacksutra
06-14-2002, 05:32 AM
CgToolkit\Direct3D\DX8\src\demos_CG\SoftShadows1\

BTW: did you notice these?
CgToolkit\OpenGL\lib\Debug\RegComParser.lib
CgToolkit\OpenGL\lib\Debug\TextureShaderParser.lib

davepermen
06-14-2002, 05:47 AM
Originally posted by Carmacksutra:

CgToolkit\Direct3D\DX8\src\demos_CG\SoftShadows1\

thanks http://www.opengl.org/discussion_boards/ubb/wink.gif



BTW: did you noticed these?
CgToolkit\OpenGL\lib\Debug\RegComParser.lib
CgToolkit\OpenGL\lib\Debug\TextureShaderParser.lib
i think those are the nvparse thingies, not?

Gorg
06-14-2002, 08:18 AM
I was reading the page where they list all the companies that support Cg, and I was surprised not to find Epic or Id Software.

Interesting.

zed
06-14-2002, 12:59 PM
http://www.theregister.co.uk/content/54/25732.html
commentary taken from 'the register',
written by someone who works on competing technology, so it should be taken with a grain of salt

>>No break, continue, goto, switch, case, default<<

huh! what a failing
cg hasn't even planned for next year's (or even this year's) hardware
i have a feeling cg will be updated every 6 months

to use it u will be forced to write
if ( cg_version == 2 )
....
else if ( cg_version == 3 )
...
else // version 1
...

haust
06-14-2002, 01:59 PM
an article titled: Why Nvidia's Cg won't work (http://www.theregister.co.uk/content/54/25732.html)
for me, cg's main problem is ogl2.0, since both will fight on the same ogl field, but andrew richards has other serious arguments to put in the balance....

[This message has been edited by haust (edited 06-14-2002).]

davepermen
06-14-2002, 02:13 PM
Originally posted by zed:
cg hasn't even planned for next year's (or even this year's) hardware
i have a feeling cg will be updated every 6 months

to use it u will be forced to write
if ( cg_version == 2 )
....
else if ( cg_version == 3 )
...
else // version 1
...

what do you expect? as long as the hardware vendors don't find a general solution for how to implement shaders in HARDWARE, there will be no solution in SOFTWARE to support it. there has to be a final x86-like spec for gpu's; before that, there will be no real shader language. that's why i prefer gl2.

it defines WHAT WE NEED, full stop. it doesn't matter what is out now and what isn't. they SET a standard which is not here yet, but now everyone works towards getting to this standard. cg on the other hand wants to build a standard around currently existing hardware, which is a fine thing. but it's not at all a holy grail, else no one would care about gl anymore and we would all stick with dx. because dx in fact does the same: every version they set the standards and everyone tries to support them. the result is caps and versions for each gpu. the fixed function pipeline is standardised very well. that's why it works the same everywhere (sure, there ARE exts, but not really that many)

i just want this for shaders.
we'll see about the future. at least we'll soon have a general vertexshader in gl. took the arb quite long http://www.opengl.org/discussion_boards/ubb/smile.gif

oh, and i don't like the idea at all of having shaders/scripts/strings to set up the gpu. why? because in the end i plan to use the gpu as a general streaming processor for my own stuff.. for this i want some asm or function interface. it's just much more handy than doing such runtime compilations. a high level language around it? no problem. but i want a VERY BASIC base interface, meaning functions to set it up (sort of what ati did). the base has to stay low level. that's my thought.

(and if you have a functional setup, you can generate a nice interface DIRECTLY in c++ with generic/metaprogramming, in the form of

VSvertex vpos = vsGetInput(GL_VERTEX_ARRAY);
VSvertex nrml = vsGetInput(GL_NORMAL_ARRAY);
VSmatrix to_screen = vsGetInput(GL_MVP_EXT);
VSvertex opos = vsGenerate(GL_VERTEX_VARIABLE);
opos = to_screen * vpos;

in such a way..

now _THAT_ is a highlevel interface. and the general c++ compiler gets this down to the simple functioncalls..

am i only dreaming or is this burning an eternal flame? close your eyes.. give me your hand darling do you see my heart bleeding, do you understand? do you feel the same

hm.. this just went through my head now. i really need to go to bed http://www.opengl.org/discussion_boards/ubb/wink.gif

bye http://www.opengl.org/discussion_boards/ubb/wink.gif

jwatte
06-14-2002, 02:14 PM
zed,

From that article: "No pointers"

This tells me one of two things. Either the article author is really grasping at straws in trying to diffuse the issue, or he doesn't understand (or want to accept) the difference between vertex generation/issue and vertex shading/processing.

Cg claims to address fragment/pixel shading and vertex shading. It explicitly does not claim to address vertex issue/processing, which is inherently a different kind of process than vertex shading. Meanwhile, the article author is blaming Cg for not letting him move the scene graph onto the GPU, using pointers. Well, what point is there in designing language features that won't be in the majority of hardware for the next two years, or longer?

Now, it's true that there are some interesting tricks you can pull with the VUs on the Playstation 2. But if you're going there, you're putting yourself on a very rickety ladder with a power saw right next to it, given the large number of common, "standard" pixel features the PS2 is missing.

IT
06-14-2002, 03:37 PM
If the Cg compiler doesn't break up operations into multiple passes/operations automatically on hardware/apis that only support 128 instructions in the vertex shader, then what's the point?

One still has to program for the lowest common denominator graphics card then. (Yah, I know multiple variations of a shader could be used to support all 469 different GPUs out there.) So essentially one is left with a prettier version of the current assembly-like vertex program code.

I guess what I'd rather see is a compiler that doesn't throw an error message if the 128 instruction limit is reached (of current apis), but rather generates code to render it correctly anyway. (However, this may not be possible.)

Why not just implement Renderman in hardware and be done with this? Some graphics card companies like to compare their current technology to Monsters, Inc. or Toy Story anyways.

Or better yet, like Dave said, a general x86-like instruction set so any type of api can be written.


[This message has been edited by IT (edited 06-14-2002).]

JD
06-14-2002, 06:06 PM
It would be nice if Nvidia, Ati and 3DLabs told us when they expect to have OpenGL2 finished. Then everyone here would be more inclined to use Cg if they found out that gl2 will be ready a year or more from now. As it is right now, people don't want to switch, despite the Cg hype, if something better will come out in a few weeks. Likewise MS should announce when dx9 will be out. According to rumors dx9 will be out late this year, possibly early next year. Fine, I'll stick to asm shaders if that is the case. However, I don't know when gl2 will be out. I thought it was going to be out soon because of Ati's lack of cooperation on Cg and because 3DLabs released shader specs for gl2 months ago. But now Cg came out, which means Nvidia doesn't believe gl2 will be out soon; either that, or they don't care about gl2, hence 3DLabs' concern about the future of gl.

Won
06-14-2002, 06:59 PM
Yeah, I don't know what was up with that Codeplay guy on The Register. I don't think he understands that Cg is designed for dataflow, SIMD GPUs, not your Athlons and Pentiums.

Not that I'm a fan of Cg -- I'm not. I think that Cg is a pretty bald-faced attempt to indoctrinate people into the NVIDIA way, but that article wasn't much better. That guy was obviously pimping VectorC because he perceives Cg as a threat. Whatever.

One thing to consider about Cg is that it wasn't designed in a vacuum. GPU hardware is optimized for a particular form of computation -- that's why a 300MHz GPU part can outperform a 2GHz CPU in 3D graphics computation. GPUs make an important tradeoff between computational density and flexibility.

However, it's not like we shouldn't be getting the best of both worlds in the near future. I'm quite excited about OGL2 and the accompanying hardware.

-Won

velco
06-15-2002, 12:06 AM
Originally posted by JD:
But now Cg came out which means Nvidia doesn't believe gl2 will be out soon either that or they don't care about gl2 thus 3DLab's concern about future of gl.

Hmm, am I the only one to notice the pattern of using market dominance in one area to obtain market dominance in another, where the company's technology is clearly inferior?

Cg was created in collaboration with M$, right?

Nutty
06-15-2002, 11:11 AM
My 1st program in Cg..
http://www.nutty.org/ScreenShots/refract.jpg

In actual fact it turned out to be more fiddly for me to get stuff correct than with straight VP assembler.. Lots of times it simply wouldn't load due to lots of silly mistakes. Getting the hang of it now tho.

But being able to do cubemapping in 1 line of code is well nice.

OUT.TEX0 = reflect(-eye, normal); http://www.opengl.org/discussion_boards/ubb/smile.gif
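For context, the surrounding Cg looks roughly like this (a sketch; "modelView", "modelViewIT" and the input names are just what I happened to call them, nothing official from the toolkit):

float3 P   = mul(modelView, IN.position).xyz;                      // vertex position in eye space
float3 N   = normalize(mul(modelViewIT, float4(IN.normal, 0)).xyz); // normal via the inverse-transpose
float3 eye = normalize(-P);                                         // unit vector from the vertex to the eye
OUT.TEX0   = reflect(-eye, N);                                      // reflection vector for the cube map lookup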

Hurry up and release GL fragment profile!! http://www.opengl.org/discussion_boards/ubb/smile.gif

Nutty

robert
06-15-2002, 01:14 PM
Nutty, if you don't mind, would i be able to look at your code.. I am learning cg as well, and need lots of examples [ http://www.opengl.org/discussion_boards/ubb/smile.gif]

Nutty
06-15-2002, 01:32 PM
Sure,
http://www.nutty.org/OtherZips/CgTest1.zip

There is a little library I wrote in a zip inside it, which is basically just for setting the window up n stuff. The Cg code is in the .cg file. It's in the dev studio project.

Any probs mail me.

Nutty

robert
06-15-2002, 06:39 PM
thanks, i just wanted some more examples and yours proved to be very nice [ http://www.opengl.org/discussion_boards/ubb/smile.gif].

Thanks!

R.vanWijnen
06-16-2002, 09:50 AM
Nutty,

Are you going to make some nice tutorials for Cg? I always liked your tutorials ;-)

You don't seem to update your site very often, busy I guess.

R.

Nutty
06-16-2002, 02:36 PM
Thanks. Unfortunately I never seem to find the time to do much coding at home these days. Hopefully this will change soon, as well as new site stuff. http://www.opengl.org/discussion_boards/ubb/smile.gif

Nutty