Programmable pipeline...

If anyone can help I’d really appreciate it.
P.S. I have googled a lot of these but to little success!

I’ve numbered them so you can just stick the number at the start of the answer!

  1. Does anyone know who invented the idea of vertex programs and bypassing the T&L section of the pipeline?

  2. When did this idea come about?

  3. Was it something that nvidia decided to do or did ATI do it first or did someone else come up with it?

  4. Does anyone know of a ‘history’ of recent (i.e. since programmable pipelines existed) developments in these technologies?

  5. Did the programmability develop from DirectX or OpenGL?

  6. What was the first game to use the programmable pipeline?

  7. Who came up with Matrix Palette skinning?

  8. Was it suggested at SIGGRAPH? If so, what year and by whom?

  9. What game first used Matrix Palette Skinning for its characters through hardware?

  10. Is Nvidia’s Cg better than ATI’s RenderMonkey? Why?

thanks for any answers or comments in advance.

I’ll continue googling as well!

thanks again
tony

Here are my two cents…

1/2- FP/VP programs are not new to the professional graphics market. I heard that SGI many years ago had a video card (or maybe it was a whole rendering system, I don’t know) called PixelFlow. It looks like that hardware had enough programmability to run FPs, but I am not sure. The idea itself is really old; I don’t know who invented it, but I actually think it was only re-discovered. By reading the GL specs about FPs/VPs you get a different idea of what a FP/VP is… The idea you get is that VPs/FPs always existed… It looks strange, but in fact the spec draws a figure in which the standard OpenGL pipeline is a “hardcoded” sequence of VPs/FPs… I don’t know if this is clear enough for you; I suggest reading the spec for NV_vertex_program, maybe you can get the idea from there.

3- As far as I know, the first video card exposing vertex programmability was the NV20 (GeForce3). Fragment programmability is a more complex subject; I don’t know who did it first. Nvidia has a lot of extensions that can do it, but their cards are bad at FPs; ATI has an extension whose name I don’t remember and which, BTW, I really hate because I don’t like the interface at all. My ideas are blurry here… I would put my money on ATI, but I am not sure.

4- Vertex programs began with version 1.0; then a 1.1 version was made. That version adds a couple of new instructions and an option which tells the card to transform vertices using the standard OpenGL pipe (so you can mix standard GL xform and VPs). Maximum program length is about 128 instructions if I am not wrong (a little less for position-invariant VPs). With the NV30 a new version (2.0) is out, but I have not read it yet. The most relevant new features are branches and subroutines. Program length is also really extended, to 1024 lines of code and up to 64k executed instructions (counting loops), but I am not sure of those values. A lot of new instructions are introduced. There’s also an ARB vertex program spec which sits between NV_vertex_program1_1 and NV_vertex_program2, but I cannot remember much of it. The syntax is quite different; it introduces temporary registers and other things… There are a lot of other vertex-program-related extensions, like EXT_vertex_shader and stuff like that, but I never read them… Again, my ideas about FPs are quite blurry, so I won’t say anything here, except that the first NV extension for them is an NV30 extension…
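For a concrete sense of what those instruction counts are counting: the core of a basic vertex program is just four DP4 (4-component dot product) instructions against the rows of the concatenated modelview-projection matrix. A rough sketch of the equivalent computation in plain Python (no GL involved; all names here are mine, not from any extension spec):

```python
# Sketch of the classic minimal vertex program: clip-space position
# computed as four DP4s of the input position against the rows of the
# modelview-projection (MVP) matrix.

def dp4(a, b):
    """One DP4 instruction: 4-component dot product."""
    return sum(x * y for x, y in zip(a, b))

def transform(mvp_rows, position):
    """The four DP4s a position-invariant VP must reproduce exactly."""
    return [dp4(row, position) for row in mvp_rows]

# With an identity MVP the vertex passes through unchanged:
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(transform(identity, [2.0, 3.0, 4.0, 1.0]))  # [2.0, 3.0, 4.0, 1.0]
```

Everything else a VP does (lighting, texgen, skinning) is layered on the same handful of vector instructions.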

5- I don’t know. If I am not wrong, the first DirectX with VPs was 8.0, which was designed in consultation with Nvidia, which in turn had an NV_vertex_program extension ready to go in GL, but I don’t know which shipped first. Maybe I have not got the question…

6- Not sure; maybe the first running in hardware is AquaNox…

7- Don’t know.

8- I have (had?) a paper presented at SIGGRAPH which is called “A User-Programmable Vertex Engine” or something like that… I don’t know the year; maybe 2001, by Mark Kilgard and others… You can find it in the NV developer area if you search.

9- Don’t know.

10- Cg is a whole shading language, while RenderMonkey is a tool for previewing VPs/FPs written in various languages. A comparison cannot really be made.

What a long post!
Bye!

First of all, you have to realize that a lot of this stuff has been done before in software, and accelerating it in hardware is a natural progression.

1/2) Been done for years in software. Check out RenderMan.

3/4) Check the dates of NV_vertex_program, EXT_vertex_shader and others. http://oss.sgi.com/projects/ogl-sample/registry/

  1. Compare the dates of the above with the release of DirectX 8.

  2. Don’t know

Real-Time Rendering, p. 53:

While the exact origin of the algorithm presented here is unclear, defining bones and having a skin react to changes is an old concept in computer animation.

Refers to:
Magnenat-Thalmann et al., “Joint-Dependent Local Deformation for Hand Animation and Object Grasping,” Graphics Interface '88, pp. 26-33, June 1988.

  1. Erik Lindholm, Mark J. Kilgard, Henry Moreton. “A User-Programmable Vertex Engine”. SIGGRAPH 2001.

  2. I believe Half-Life did, but not through hardware.

  3. Apples and Oranges.
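For what it’s worth, the matrix palette skinning asked about above boils down to blending a vertex through several bone (“palette”) matrices, with weights that sum to one. A minimal sketch in plain Python (no GL involved; all names here are mine):

```python
# Matrix-palette skinning sketch: v' = sum_i w_i * (M_i * v), where the
# M_i are the palette matrices of the bones influencing this vertex.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def skin(vertex, palette, indices, weights):
    """Blend the vertex through the indexed palette matrices."""
    out = [0.0, 0.0, 0.0, 0.0]
    for idx, w in zip(indices, weights):
        transformed = mat_vec(palette[idx], vertex)
        out = [o + w * t for o, t in zip(out, transformed)]
    return out

# Two bones: identity, and a translation of +2 along x. With equal
# weights the vertex lands halfway between the two transformed positions.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
shift_x  = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(skin([1.0, 0.0, 0.0, 1.0], [identity, shift_x], [0, 1], [0.5, 0.5]))
# [2.0, 0.0, 0.0, 1.0]
```

The hardware versions mentioned in this thread do exactly this inside a vertex program, with the palette uploaded as program parameters and the indices and weights supplied per vertex.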

Check out Computer Graphics: Principles and Practice, by Foley, van Dam, Feiner, Hughes. If it is not in there, it’ll have a reference.

And let me reiterate: stuff now being done in hardware was done long ago in software.

PixelFlow was a collaboration between HP, UNC-Chapel Hill, and a small company in Bristol called Division. HP were late to the party (I used to work for Division), but invested most of the cash to make it work in the end. It had nothing to do with SGI, except that it was intended to compete with their graphics products.

PixelFlow was just the latest in a series of scalable programmable graphics architectures called Pixel-Planes, developed at UNC-Chapel Hill. Fundamentally, the Pixel-Planes chips were massively parallel, simple 8-bit SIMD ALUs; graphics algorithms were implemented in the instruction set of these processors.

After making a couple of demo PixelFlow systems, HP cancelled the project and never mass-produced it.

Part of the little company in Bristol, England became a company called Pixel Fusion and continued to invest in derivative graphics chip designs for PCs. Eventually they bailed on that idea and focused on parallel networking chips (I think). Dunno if they’re even around anymore.

Marc Olano’s PhD dissertation, “A Programmable Pipeline for Graphics Hardware”, is just one example of the work on programmable hardware from that project:
http://www.cs.unc.edu/~olano/papers/dissertation/

  1. Somewhat inverted question.

Programmable graphics was well established early on. See Myer and Sutherland’s “On the Design of Display Processors”, Communications of the ACM, Vol. 11, No. 6, June 1968.

So well established that Segal and Akeley had to explain why OpenGL was NOT programmable in “The Design of the OpenGL Graphics Interface.” Fixed function was the new thing.

For programmable shading, Rob Cook’s “Shade Trees”, Porter and Duff’s “Compositing Digital Images”, and Ken Perlin’s “An Image Synthesizer” all laid the foundation in 1984 and 1985.

  1. SPACEWAR, at MIT in 1961, on a DEC PDP-1. Or MOONLANDER, by Jack Burness on a DEC GT40, 1980-ish.

Am I correct in saying that the early PowerVR chips were very highly programmable?

Probably you are not.

Anyway, I also heard about interesting functionalities in the early PowerVR chips… well, it looks like those functionalities were so interesting they have been forgotten…

I am really interested in Dorbie’s reply. I am also taking a look at the link you provided (not too seriously, however).

It looks like the so-called “PixelFlow” was really suited for procedural rendering, so can I say it was (functionality-wise) comparable to today’s high-end consumer graphics chips?
