carmack plan + arb2



zed
01-29-2003, 08:00 PM
>>The R300 can run Doom in three different modes: ARB (minimum extensions, no specular highlights, no vertex programs), R200 (full featured, almost always single pass interaction rendering), ARB2 (floating point fragment shaders, minor quality improvements, always single pass).

The NV30 can run DOOM in five different modes: ARB, NV10 (full featured, five rendering passes, no vertex programs), NV20 (full featured, two or three rendering passes), NV30 (full featured, single pass), and ARB2.<<
http://www.bluesnews.com/cgi-bin/finger.pl?id=1&time=20030129210315

what is arb2? is it opengl2? i dont think so, but ..

NitroGL
01-29-2003, 08:01 PM
I'm pretty sure he's referring to ARB_vertex_program and ARB_fragment_program.

yakuza
01-29-2003, 08:24 PM
When he says ARB2 he's referring to the third code path for the R300 described in the first paragraph you quoted.

PH
01-29-2003, 08:36 PM
It's always interesting to read Carmack's .plan files. The best news was the mention of ARB_vertex_buffer_object. I dropped support for the vendor-specific vertex program extensions when the ARB spec was released too, and I've been waiting for the array spec ever since.

Another interesting thing he mentions is the issue with floating point buffers. We had a discussion about that not too long ago, and it's something the ARB needs to address as soon as possible ( ARB_render_target sounds like a reasonable name :) ).
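To illustrate what I'm looking forward to, here's a minimal sketch of how the buffer object usage should look (names per the ARB_vertex_buffer_object spec; the data, counts and function wrapper are made up):

#include <GL/gl.h>

/* Hedged sketch: assumes the ARB_vertex_buffer_object tokens and entry
   points have been fetched (glext.h + wglGetProcAddress or similar).
   vertexData/numVerts are illustrative. */
void drawWithVBO(const GLfloat *vertexData, GLsizei numVerts)
{
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);

    /* upload once, draw many times -- the driver owns the placement */
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, numVerts * 3 * sizeof(GLfloat),
                    vertexData, GL_STATIC_DRAW_ARB);

    /* with a buffer bound, the "pointer" is an offset into the buffer */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glDrawArrays(GL_TRIANGLES, 0, numVerts);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);  /* back to client arrays */
}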


[This message has been edited by PH (edited 01-29-2003).]

Coriolis
01-29-2003, 10:06 PM
ARB2 is just the second back-end path that uses only ARB extensions, with "higher" extensions. It isn't some OpenGL 2 thing or a super-secret ARB extension set that only he has access to.

Tom Nuydens
01-30-2003, 12:28 AM
My physics is a little rusty -- can someone explain to me what eddy currents have to do with 3D graphics?

-- Tom

Nutty
01-30-2003, 04:08 AM
I think he means you'll be able to render the differing intensity lines in the fog as the air moves about and stirs the suspended particles.

All simulated of course.. but better than just constant intensity volumetric fog.

Nutty

Ostsol
01-30-2003, 05:16 AM
Cool stuff. . . and it's good to see that the R300 can run really well without a whole slew of ATI-specific extensions.

*looks at NVidia's latest slew of proprietary extensions* Eek. . .

Tom Nuydens
01-30-2003, 05:58 AM
Thanks Nutty. I thought the term "eddy currents" was only used to describe magnetically induced currents (as in "eddy current brakes"). I also didn't know that "eddy" was a word -- I thought it was the name of the guy who discovered these currents. Not being a native English speaker sucks ;)

-- Tom

V-man
01-30-2003, 06:26 AM
Thanks for the link zed. BTW, how do you find this .plan thing?

"Eddy currents" is used to describe small turbulence in fluids I think, so it applies to liquids and gases. That's something I've wanted to do for smoke myself. Of course, to be able to see movement, the air must have varying densities of smoke/fog in it, otherwise you're wasting time.

/edit/ Has anyone run across PC Magazine? They have written that Doom3 uses DX9... blah blah blah. Where did they get that from?

[This message has been edited by V-man (edited 01-30-2003).]

Ostsol
01-30-2003, 07:31 AM
Originally posted by V-man:
Has anyone run across PC Magazine? They have written that Doom3 uses DX9... blah blah blah. Where did they get that from?

They're probably stuck on the fact that graphics technology is often described in terms of DirectX versions.

SirKnight
01-30-2003, 07:32 AM
One of the other things that interested me in this .plan update was when he talked about using the floating point buffers to do HDR. He said something about "post-blending." The exact quote was "High dynamic color ranges are supported internally, rather than with post-blending." OK, I see that with the 64- and 128-bit floating point modes, but what is this "post-blending" stuff? A hacky way to get some form of HDR on a card that does not have floating point formats? Could someone explain this a bit? I would like to try it out in my own programs and see what I can get on my GeForce 4 Ti. This .plan was a pretty interesting read. I just wish he would make technical .plan updates more often. :)

-SirKnight

Korval
01-30-2003, 01:03 PM
but what is this "post-blending" stuff? A hacky way to get some form of HDR on a card that does not have floating point formats? Could someone explain this a bit?

Presumably, though no one can be certain, he's talking about the post-processing step of making HDR work.

The floating-point framebuffer is just a start. To really make HDR work, you need to go through and find the brightest and darkest pixels. Then you have to scale all the pixels appropriately from 0 to 1, such that the brightest pixel is 1, the darkest is 0, and the ones in between are physically correct as well.

Unfortunately, this is really hard without direct CPU access to the frame buffer. What Carmack wants (he always wants somebody else to do something. He never wants to do any work himself) is for the hardware to handle this, probably in the RAMDAC itself. I don't know why he expected this to be implemented in this revision of hardware, but it was a rather silly expectation.
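To make the scaling step concrete, here's a minimal CPU-side sketch (names are made up; a straight linear remap is just the simplest possible operator):

#include <float.h>

/* Hedged sketch of the rescale described above: find the darkest and
   brightest values in a floating point image, then remap into [0,1]. */
void toneMapLinear(float *rgb, int numPixels)
{
    float lo = FLT_MAX, hi = -FLT_MAX;
    int i, n = numPixels * 3;

    for (i = 0; i < n; i++) {            /* scan for min and max */
        if (rgb[i] < lo) lo = rgb[i];
        if (rgb[i] > hi) hi = rgb[i];
    }
    if (hi <= lo)
        return;                          /* flat image, nothing to do */

    for (i = 0; i < n; i++)              /* darkest -> 0, brightest -> 1 */
        rgb[i] = (rgb[i] - lo) / (hi - lo);
}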

dorbie
01-30-2003, 04:50 PM
Silly? So how do you send a high precision framebuffer to a 10-bit RAMDAC? Just taking the MSBs seems naive; now is exactly the time for this, considering that video LUTs are already commonplace. In any case it can be done after a fashion, see the ATI & NVIDIA eye candy demos. BTW, it's not just about scaling min/max to 0-1; that would be underexposed for a lot of scenes and plain wrong w.r.t. your black level most of the time. Detecting what's in the scene using the CPU is not the issue here, that's an easy part of the problem if you have control of your scene, and you may want some physiological adaptation response in the loop too rather than just a constant ideal exposure, again possible.

Carmack describes some kind of workaround where he uses the framebuffer as a texture and uses fragment arithmetic rather than framebuffer blends to accumulate (over multiple passes, I assume), to allow the use of an fp framebuffer. It seems like it was an experimental path. Any disappointment with fp framebuffers seems to stem from the lack of fp fragment blend operations.
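Presumably something along these lines (my sketch of the copy-to-texture idea, not Carmack's actual code):

#include <GL/gl.h>

/* Hedged sketch: accumulate light passes through a texture instead of
   fixed point framebuffer blends. accumTex is assumed to be an existing
   texture at least winW x winH texels in size. */
void accumulateNextPass(GLuint accumTex, GLsizei winW, GLsizei winH)
{
    /* snapshot the running total out of the framebuffer */
    glBindTexture(GL_TEXTURE_2D, accumTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winW, winH);

    /* ...then draw the next light with a fragment program that samples
       accumTex and ADDs the new contribution at full fragment precision,
       so the "blend" itself never happens in the framebuffer. */
}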


[This message has been edited by dorbie (edited 01-30-2003).]

Korval
01-30-2003, 05:13 PM
Silly? So how do you send a high precision framebuffer to a 10 bit ramdac?

It's silly to expect it in this revision of hardware. The concept itself is fine, and ultimately will be implemented (either in ATi's R350 or the next real generation of GPUs). Carmack doesn't need to tell us what we and the hardware developers already know, nor does he need to push hardware vendors to make features that are already on the drawing board.

It was a silly expectation because there are other uses for a floating point buffer as well. The lack of HDR-capable RAMDACs isn't sufficient to hold back a feature like fp buffers. The fact that fp buffers are the only thing preventing RenderMan shaders from running fully in hardware (via multipass) is alone reason enough to have them, even without some of the hardware that would make them even more useful.

[This message has been edited by Korval (edited 01-30-2003).]

dorbie
01-30-2003, 05:17 PM
P.S. FWIW I think the internal vs. post-blending HDR comment refers to internal arithmetic that can scale > 1.0 prior to the final destination blend, rather than blending in the framebuffer and then boosting or using a LUT. I'd have thought this was a non-issue since you could always spend a bit with the left-shift approach for individual contributions. Maybe he's talking about the result of successive accumulations of relatively dark contributions, which would start to suffer. Only a high precision framebuffer can solve that.

This could be closely related to the fp texture fragment blending he writes about, although that one is experimental. Moving the multipass accumulation blends into the fragment path makes for some interesting possibilities, if you have enough instructions & textures for that plus your fragment shading.

[This message has been edited by dorbie (edited 01-30-2003).]

dorbie
01-30-2003, 05:19 PM
Korval, I don't think anyone would disagree with that, not even Carmack. It's only his .plan, dude.

rgpc
01-30-2003, 06:26 PM
Originally posted by V-man:
Thanks for the link zed. BTW, how do you find this .plan thing?

http://www.gamefinger.com

Originally posted by V-man:
/edit/ Has anyone run across PC Magazine? They have written that Doom3 uses DX9... blah blah blah. Where did they get that from?

Maybe it does - for sound, input and perhaps even networking. Or maybe it's been said that it runs (best) on DX9-class hardware?

I'm just happy that there is finally going to be a game that will benefit from my GF3 (and from Carmack's .plan it looks like it might even run reasonably... - now that I've finally upgraded from a PIII 733 ;) )

If they have real support for 5.1 sound (unlike all the "EAX" games out there) then I'll be as happy as a pig in ...

Ostsol
01-30-2003, 06:32 PM
Originally posted by rgpc:
Maybe it does - for sound, input and perhaps even networking. Or maybe it's been said that it runs (best) on DX9-class hardware?

I think that's a given -- for the Windows version, of course. Versions for other platforms will use their own platform-specific non-rendering features as well.

Coriolis
01-30-2003, 08:16 PM
I doubt they will use DirectX for networking, since Q3's networking works fine without it. They will probably continue to use DirectX for sound and mouse input, though you only need something like DX3 or maybe DX5-level support for that.

pkaler
01-30-2003, 09:07 PM
I wouldn't be surprised if they used OpenAL for sound, despite the fact that hardware drivers are somewhat non-existent. That would require only one codebase for Windows, Linux, and Mac. Plus, UT2K3 uses it, so it's proven.

zed
01-30-2003, 09:32 PM
the reason for my original post was because carmack mentioned in a previous .plan that he was gonna push opengl2 more (+ was gonna code up a path for doom3 with it)

>>NV10 (full featured, five rendering passes, no vertex programs)<<
also by "pass" i assume he means passes per light, eg 20 passes for 4 lights

also >>full featured<< thus its all possible with register combiners

Nutty
01-31-2003, 01:02 AM
There is, AFAIK. But do you see OpenGL 2.0 drivers on ATI and nVidia hardware to run it on? I don't think OpenGL 2 will surface until at least the next generation of video cards.

Humus
01-31-2003, 03:18 AM
Full hardware support, well, maybe not. But OpenGL 2.0 drivers are being developed already for current cards even though there's still a long way to go.

dorbie
01-31-2003, 03:24 AM
I heard from a reliable/visionary source that OpenGL 2.0 will not pass as the monolithic upgrade it was originally intended to be. It's seen as too much of a giant pill to swallow and inherently against OpenGL philosophy thus far. What will probably happen is OpenGL 1.5 with significant parts of the OpenGL 2.0 spec in as extensions, and compatibility with all earlier releases.

As for shader languages things may look different on the other side of ubiquitous ARB_fragment_program availability.

[This message has been edited by dorbie (edited 01-31-2003).]

tsuraan
01-31-2003, 04:44 AM
Originally posted by V-man:
Thanks for the link zed. BTW, how do you find this .plan thing?


If you're on a *nix machine (or have finger for Windows) you can just do 'finger johnc@idsoftware.com' and the idsoftware server will give you his .plan file directly.

davepermen
01-31-2003, 05:10 AM
gl2 is already here, partially.. at least its ideas are.

there is now
ARB_vertex_program
ARB_fragment_program
soon
ARB_vertex_buffer_object
and
ARB_render_texture

ogl2.0 wants to make a modern interface, unified for the future. as everyone developed their own interfaces for the same tasks, they now start to move together again, to build one strong opengl. thats how the whole discussion about gl2 started, and this is what we have till now.

and more will come. stuff directly from the gl2 specs will get implemented, etc..

PH
01-31-2003, 05:26 AM
Originally posted by dorbie:
I heard from a reliable/visionary source that OpenGL 2.0 will not pass as the monolithic upgrade it was originally intended to be. It's seen as too much of a giant pill to swallow and inherently against OpenGL philosophy thus far. What will probably happen is OpenGL 1.5 with significant parts of OpenGL 2.0 spec in as extensions, and compatibility with all earlier releases.


This is good news. With the recent talk of a new render-to-texture API, I was worried that things would start to get messy. While there are some good things in the proposed GL2, I think incremental changes to the current GL model are a good thing. I do hope they keep most of the high level shading language. In particular, I hope the driver/compiler will automatically partition complex shaders into multiple passes.

V-man
01-31-2003, 06:09 AM
>>>I'm just happy that there is finally going to be a game that will benefit from my GF3<<<

It's interesting how many code paths he has created. I think he has 7 paths. How many companies create that many?

>>>If they have real support for 5.1 sound (unlike all the "EAX" games out there) then I'll be as happy as a pig in ...<<<

That's the Dolby Digital thing? I know next to nothing about sound technology, I have to admit.

Ostsol
01-31-2003, 06:55 AM
Originally posted by V-man:
It's interesting how many code paths he has created. I think he has 7 paths. How many companies create that many?

Yeah. . . even disregarding the great 3D engines he's created, the fact that he doesn't just code to the lowest common denominator is quite deserving of respect.

Won
01-31-2003, 08:27 AM
The best part is that his "respect" translates to redefining the "lowest common denominator" for those of us with less exposure. I remember when I was working on an OpenGL project, one of the most effective optimizations was to do rendering the way Quake 3 did it in order to hit that nicely optimized, nicely validated driver path. He definitely cuts big paths for us to follow.

-Won

Korval
01-31-2003, 09:54 AM
I remember when I was working on an OpenGL project, one of the most effective optimizations was to do rendering the way Quake 3 did it in order to hit that nicely optimized, nicely validated driver path.

That's not a good thing. It is, instead, a result of driver makers optimizing the Quake III path solely so they can get higher benchmarks. If you want to do something more than Q3 (programs, VAR/VAO, etc.), then you have to take the less optimized path. This is not the way drivers should be written.

knackered
01-31-2003, 11:48 AM
Tell that to ATI...

Ostsol
01-31-2003, 11:58 AM
Originally posted by knackered:
Tell that to ATI...

You'd have to go back in time to more than a year ago. No need to tell them anymore. . .

Korval
01-31-2003, 12:20 PM
Tell that to ATI...

Don't just blame ATi for this. nVidia did it long before them.

Once upon a time, the only vertex format that nVidia's CVA's were useful for was the Q3 vertex format.

rgpc
01-31-2003, 12:47 PM
Originally posted by V-man:
It's interesting how many code paths he has created. I think he has 7 paths. How many companies create that many?

Or rather, name one other that does?

Originally posted by V-man:
That's the Dolby Digital thing? I know next to nothing about sound technology, I have to admit.

Yeah. There are a few different flavours. For example, there's plain 5.1 (5 speakers, 1 subwoofer). Plus there's EAX (EAX2 or whatever Creative is up to), which is environmental effects (such as echoes etc.). Of course now there's 6.1 (Audigy 2).

I just upgraded to an A7N8X, which has nForce, and I've noticed that it has these effects built into it (although I actually use my SB Live! Platinum - it's interesting to see the similarities between the SB product and the nForce).

Oddly enough I finally got to look at Doom3 last night and didn't pay much attention to the sound at all. What I did notice was that I haven't been that scared playing a game in a dark room since playing AvP quite some time ago.

If you can produce a product that effective with the optimized Quake 3 pipeline then why wouldn't you use it? (Incidentally, I was quite happy with the performance of the alpha on the default settings on my GF3.)

Won
01-31-2003, 01:43 PM
Korval --

At a time when most OpenGL implementations were highly suspect, having a reasonable "guarantee" that there even existed a reliable driver path was a very big deal. You certainly remember the days of GLSetup, and why they were necessary. Quake3 provided a well-enforced minimum standard for OpenGL compliance that other developers could take advantage of. That's what I was trying to say.

Quake3 isn't as relevant now as it was when I was working on that OpenGL project (no, we didn't name the binary "quake3.exe" as an ATI "optimization" ;) ). It is not really state of the art, and OpenGL compliance isn't as big of an issue anymore (at least for the basic, non-extension stuff). Now our standards are higher, as they should be. Example: accelerated vertex array types. Initially, CVAs were fastest only for particular formats, but eventually they were all accelerated.

-Won

roffe
01-31-2003, 03:43 PM
Originally posted by V-man:
That's the Dolby Digital thing? I know next to nothing about sound technology, I have to admit.

Be aware: OT.

Ahh, that is a shame :). There are far too few digital signal processing discussions on this board :). Anyone with an interest in this field, be sure to check out Dolby Laboratories' and MPEG's web sites: http://www.dolby.com/ http://mpeg.telecomitalialab.com/

[This message has been edited by roffe (edited 01-31-2003).]

dorbie
01-31-2003, 04:27 PM
Come on, the 'minidriver' issue (as I'll label it) goes all the way back to 3Dfx and Voodoo cards. Drivers will ALWAYS be written to be best on popular code paths. This isn't limited to PC cards; it's true even of high end systems like InfiniteReality. Driver engineering effort is a limited resource that SHOULD be spent wisely. Only a very foolish company or perhaps an academic research group would do otherwise.

Carmack has done more to aid OpenGL on PC hardware than anyone. He also does more raw work in terms of algorithmic development & R&D in his game engines to push the envelope than most, and has been more consistently open with his code and advice than almost anyone else in *real business* in his industry. It's just stunning that someone would suggest he doesn't like doing work himself. It's just funny.

Methinks someone has a chip on their shoulder.


[This message has been edited by dorbie (edited 01-31-2003).]

JustHanging
02-03-2003, 01:34 AM
Originally posted by dorbie:
Maybe he's talking about the result of successive accumulations of relatively dark contributions, which would start to suffer. Only a high precision framebuffer can solve that.

No. Haven't you read this? http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/008447.html
That technique solves exactly that, and it works with a normal low precision framebuffer. If you have sufficient internal precision to apply exposure in a fragment program, it should look perfect. Accumulating light doesn't introduce any precision problems here.

-Ilkka

Korval
02-03-2003, 01:29 PM
At a time when most OpenGL implementations were highly suspect, having a reasonable "guarantee" that there even existed a reliable driver path was a very big deal. You certainly remember the days of GLSetup, and why they were necessary. Quake3 provided a well enforced minimum standard for OpenGL compliance that other developmers could take advantage of. That's what I was trying to say.

The problem with that is that it is, inherently, limiting to those who don't want to follow in the path of Quake 3. By forcing the driver developers to spend precious time optimizing the Q3 path, those who aren't using that path (like Unreal, or most games that aren't using the Q3 engine) are penalized. Indeed, anyone not thinking in terms of the Q3 path is penalized.

As such, progress towards improvements over Q3 is limited. Developers have to balance doing the new and interesting thing with doing the Q3 thing, lest they drop in performance significantly.


Driver engineering effort is a limited resource that SHOULD be spent wisely.

So, you prefer that people who buy games decide what the optimized API codepath should be? Even though these people aren't making a purchasing decision based on an API codepath at all? That's insane, and it limits development for the reasons I stated above.


He also does more raw work in terms of algorithmic development & R&D in his game engines to push the envelope than most and has been more consistently open with his code and advice than almost anyone else in *real business* in his industry.

First of all, how do you know that Carmack personally is that deep into the "algorithmic development & R&D?" For all we know, his programmers could be doing all of the real work. As such, heaping all the credit upon just Carmack is insulting to those who work under him.

Secondly, I don't see what you're talking about. What, in particular, is he doing to "push the envelope" in Doom3? Stencil shadows? These have been possible for quite some time, and the algorithms have been around for several years. Bump mapping? Dot3 has been around since the GeForce 1. Outside of these two effects, Doom 3 isn't that much better than Quake 3.

Granted, these two effects are not much in evidence with other modern games, but that stems more from the fact that most PC games are built for the lowest-common-denominator (which is somewhere around the GeForce1/2 level), not any supposed superiority of ID or Carmack. Any competent developer could do what Carmack is doing; they, however, just don't have the freedom to do so.

fritzlang
02-03-2003, 03:02 PM
Originally posted by Korval:
What, in particular, is he doing to "push the envelope" in Doom3? Stencil shadows? These have been possible for quite some time, and the algorithms have been around for several years. Bump mapping? Dot3 has been around since the GeForce 1. Outside of these two effects, Doom 3 isn't that much better than Quake 3.

Any competent developer could do what Carmack is doing; they, however, just don't have the freedom to do so.

You are joking, aren't you?
I agree the technology's been there for a while (GF3 == two years now) but the tragic fact is: where else on the horizon do you see complete, robust, final apps that use all this? 'Stalker' looks promising, but that's more atmospheric talent than technology.

Having said this, beware and prepare for MY engine. ;-)

dorbie
02-03-2003, 04:36 PM
Korval, there is unquestionably a chicken & egg driver issue w.r.t. optimization. Pioneers like Carmack with high visibility actually help broaden the fast paths and raise the bar in terms of functionality, not the reverse.

Of the game developers with high-visibility apps and the clout to force manufacturers to put something in hardware or make it fast, he has been an important figure. Like it or not.

As for features like robust shadows etc, YES, he's first. He was doing it first, and the only reason these topics are so hot is because he announced in advance what he's working on and people are copying him, using the same algorithms. Even if they release first, he was first to show this early technology at MacWorld. Not only that, but he was showing it to many developers & artists in the industry at his offices, all the while raising the bar.

MZ
02-03-2003, 05:21 PM
'Stalker' looks promising, but that's more atmospheric talent than technology

very OT:
have you read the novel the Stalker game story is based on? ("Roadside Picnic") I found its text fully published here: http://lib.ru/STRUGACKIE/engl_picnic.txt - great SF IMO.
The engine looks good too; Stalker is now my most anticipated game of 2003.

[This message has been edited by MZ (edited 02-03-2003).]

zed
02-03-2003, 07:56 PM
>>Secondly, I don't see what you're talking about. What, in particular, is he doing to "push the envelope" in Doom3? Stencil shadows? These have been possible for quite some time, and the algorithms have been around for several years. Bump mapping? Dot3 has been around since the GeForce 1. Outside of these two effects, Doom 3 isn't that much better than Quake 3.
Granted, these two effects are not much in evidence with other modern games, but that stems more from the fact that most PC games are built for the lowest-common-denominator (which is somewhere around the GeForce1/2 level), not any supposed superiority of ID or Carmack.<<

the lighting equation is a bit more than simple dot3.
but anyways doom3 aint out yet so we cant really discuss what it will be, but what we can discuss is quake3 (cause it exists)
it came out in 1999, what other games from that time period look as good?
NONE
what games today support multiple processors?
NONE, ONLY QUAKE3
the code was 99% tight as, look how well the game scales here in 2003 compared to other games from then, eg UT.
+ 100 other points

>> Any competent developer could do what Carmack is doing; they, however, just don't have the freedom to do so.<<

haha! so why dont they? are u saying if the developer said "i can make it look like doom3 (on higher cards) but look less on lower cards (using a different rendering path)" that the publisher will say "NO, dont do that, we only want to take the low quality path"?

i sniff the foul wind of jealousy

cass
02-03-2003, 10:53 PM
It's pretty disappointing that there are OpenGL developers out there that don't have the sense to realize that John Carmack is the main reason we have quality PC OpenGL implementations today.

Perhaps you feel his influence was unfairly acquired? It wasn't. He makes his opinions relevant by having well-reasoned opinions and developing relevant games and benchmarks.

Let's please check the sour grapes at the door and keep the discussions technical.

Thanks-
Cass

Mark Kilgard
02-03-2003, 11:27 PM
Originally posted by Korval:
Don't just blame ATi for this. nVidia did it long before them.

Once upon a time, the only vertex format that nVidia's CVA's were useful for was the Q3 vertex format.

Today's NVIDIA drivers do a great job of optimizing pretty much every CVA configuration you can configure.

To be fair, even the earliest NVIDIA drivers that optimized vertex formats for CVA (compiled vertex arrays) supported more than merely the Quake3 vertex formats. The early NVIDIA drivers optimized about 8 or so common paths beyond what Quake3 relied on.

Saying that "the only vertex format that nVidia's CVA's were useful for was the Q3 vertex format" was never correct for the NVIDIA OpenGL driver's CVA support.

- Mark

davepermen
02-03-2003, 11:28 PM
doom3 is not about dot3, its about perpixellighting. a full perpixellighting equation is much more than dot3.

heres an idea of a base implementation: http://www.ronfrazier.net/apparition/index.asp?appmain=research/advanced_per_pixel_lighting.html
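for illustration, heres the kind of per-pixel term involved, written as plain c (just a sketch, all names made up - the real thing runs in texture stages / fragment programs, with N fetched from a normalmap):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* hedged sketch: blinn-phong style lighting for one light at one pixel.
   N = normal (from normal map), L = unit vector to light, H = unit
   half-angle vector, atten = distance attenuation in [0,1]. */
vec3 lightPixel(vec3 N, vec3 L, vec3 H, float atten,
                vec3 diffuseMap, vec3 specularMap, vec3 lightColor)
{
    float NdotL = dot3(N, L);
    float NdotH = dot3(N, H);
    float spec;
    vec3 out;

    if (NdotL < 0.0f) NdotL = 0.0f;     /* clamp, as the hardware would */
    if (NdotH < 0.0f) NdotH = 0.0f;
    spec = (float)pow(NdotH, 32.0f);    /* specular exponent */

    out.x = atten * lightColor.x * (diffuseMap.x * NdotL + specularMap.x * spec);
    out.y = atten * lightColor.y * (diffuseMap.y * NdotL + specularMap.y * spec);
    out.z = atten * lightColor.z * (diffuseMap.z * NdotL + specularMap.z * spec);
    return out;
}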

JustHanging
02-04-2003, 01:07 AM
Funny, I thought it was about strong atmosphere, good storytelling (measured by current standards) and intense action. Silly me.

-Ilkka

Tom Nuydens
02-04-2003, 01:27 AM
Originally posted by JustHanging:
Funny, I thought it was about strong atmosphere, good storytelling (measured by current standards) and intense action. Silly me.

Touché! :)

-- Tom

PH
02-04-2003, 04:54 AM
Originally posted by Korval:
What, in particular, is he doing to "push the envelope" in Doom3? Stencil shadows? These have been possible for quite some time, and the algorithms have been around for several years.

Well, robust stencil-based shadow volumes are easy to implement today, thanks to people like John Carmack. Personally, I doubt I would have thought of using the reverse to solve the near plane capping problem ( I used to use a modified version of Diefenbach's approach, but Carmack's was twice as fast ). And without the work done on infinite shadow volumes ( thanks to Cass & Kilgard, and Blinn for the "infinite" projection matrix ), things would not have been so easy.
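For reference, the "infinite" matrix is easy to build; a sketch with glFrustum-style parameters (it's the limit of the standard frustum matrix as the far distance goes to infinity, column-major as OpenGL expects):

#include <GL/gl.h>

/* Hedged sketch: load a perspective projection whose far plane is at
   infinity, as used for uncapped/infinite shadow volumes. */
void loadInfiniteFrustum(GLdouble l, GLdouble r, GLdouble b, GLdouble t,
                         GLdouble n)
{
    GLdouble m[16] = { 0.0 };        /* column-major */
    m[0]  = 2.0 * n / (r - l);
    m[5]  = 2.0 * n / (t - b);
    m[8]  = (r + l) / (r - l);
    m[9]  = (t + b) / (t - b);
    m[10] = -1.0;                    /* limit of -(f+n)/(f-n) as f->inf */
    m[11] = -1.0;
    m[14] = -2.0 * n;                /* limit of -2fn/(f-n) as f->inf */
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixd(m);
}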

Humus
02-04-2003, 06:23 AM
Originally posted by Mark Kilgard:
Today's NVIDIA drivers do a great job of optimizing pretty much every CVA configuration you can configure.

To be fair, even the earliest NVIDIA drivers that optimized vertex formats for CVA (compiled vertex arrays) supported more than merely the Quake3 vertex formats. The early NVIDIA drivers optimized about 8 or so common paths beyond what Quake3 relied on.

Saying that "the only vertex format that nVidia's CVA's were useful for was the Q3 vertex format" was never correct for the NVIDIA OpenGL driver's CVA support.

- Mark

I am pretty sure Matt once said that only the Q3 vertex format was accelerated with CVAs. I did a quick search and found this thread (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/001626.html). LordKronos' and Matt's comments there are quite interesting.

This is a quote from an nVidia paper (which doesn't seem to be online at that URL anymore, though):


7. What do compiled vertex arrays (CVAs) buy me in terms of performance?
Although your mileage may vary, compiled vertex arrays can yield a large increase in performance over other modes of transport; specifically, if you frequently reuse vertices within a vertex array, have the appropriate arrays enabled and use glDrawElements. Only one data format is specifically optimized for use within CVAs:

Vertex Size/Type - 3/GLfloat
Normal Type - NONE
Color Size/Type - 4/GLubyte
Texture Unit 0 Size/Type - 2/GLfloat
Texture Unit 1 Size/Type - 2/GLfloat

zed
02-04-2003, 08:44 AM
from an early nvidia pdf

>>6. What does compiled vertex array buy me in terms of performance? Although your mileage may vary, compiled vertex arrays can yield a large increase in performance over other modes of transport specifically, if you frequently reuse vertices within a vertex array, have the appropriate arrays enabled and use glDrawElements. Currently, the only vertex array format accelerated by compiled vertex arrays is t2f/t2f/c4ub/v3f (meaning, two sets of s/t floating point texture coordinates, colors specified by 4 unsigned bytes and x/y/z floating point vertices). In the future, more data formats may be accelerated depending upon need.<<

which does tie in with what i experienced in either 1999 or 2000 on my tnt, ie only the quake3 format was greatly improved with CVA.
personally i dont think theres anything wrong with this (obviously not all formats can run optimally to start off with) as long as everyone knows that doing such + such will have maximum benefit
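for the record, hitting that fast path looked roughly like this (a sketch only, the struct + function names are made up; needs the EXT_compiled_vertex_array + ARB_multitexture entry points):

#include <GL/gl.h>

/* hedged sketch: feeding the CVA-optimized t2f/t2f/c4ub/v3f layout */
typedef struct {
    GLfloat st0[2];      /* texture unit 0 coords */
    GLfloat st1[2];      /* texture unit 1 coords (lightmap) */
    GLubyte color[4];
    GLfloat xyz[3];
} Q3StyleVertex;

void drawLocked(const Q3StyleVertex *v, GLsizei numVerts,
                const GLuint *indices, GLsizei numIndices)
{
    glClientActiveTextureARB(GL_TEXTURE0_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, sizeof(*v), v->st0);

    glClientActiveTextureARB(GL_TEXTURE1_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, sizeof(*v), v->st1);

    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(*v), v->color);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(*v), v->xyz);

    glLockArraysEXT(0, numVerts);    /* promise the data won't change */
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, indices);
    glUnlockArraysEXT();
}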

Korval
02-04-2003, 10:19 AM
haha! so why dont they, are u saying if the developer said i can make it look like doom3 (on higher cards) but look less on lower cards (using a different rendering path) that the publisher will say NO dont do that we only want to take the low quality path.

The publisher will say, "I don't really care how you get it done, just get a game that runs on <insert target platform> done by <insert target date that's way too soon to implement lots of paths>."

ID games are published by Activision exclusively. In order to buy that quite lucrative exclusivity, Activision is almost certainly willing to give Id whatever release schedule they want. How many other game developers have that kind of clout with their publishers? Maybe Blizzard, but Warcraft 3 was their first 3D game.

Nobody else has the freedom of a development schedule that the actual developers set. That is why Carmack gets time to experiment and come up with ideas and have robust code. It's not that he, personally, is so much better than everyone else, not to say that he doesn't have skills. It's simply that he and his people are in a unique position to actually fully utilize those skills. Other developers have deadlines to worry about.


Perhaps you feel his influence was unfairly acquired? It wasn't. He makes his opinions relevant by having well-reasoned opinions and developing relevant games and benchmarks.

"Well-reasoned opinions"? Certainly Carmack has good opinions, but so do plenty of us too. Hardware vendors, however, don't afford us the same clout that Carmack gets. Indeed, hardware vendors don't afford that clout to actual university graphics researchers who tower above Carmack in their knowledge of the field. He gets that clout for only one reason: his, "developing relevant games and benchmarks." How fair this is depends on whether or not you consider that game and benchmark performance is a relevant factor towards driver development. I contend that, ideally, it is not and should not be.


personally i dont think theres anything wrong with this (obviously not all formats can run optimally to start off with ) as long as everyone knows that doing such + such will have maximum benifit

Let's say you want to develop a game. The choice of whether or not to use vertex lighting should be strictly your own personal choice, yes? Not in this case.

By wanting vertex lighting, and therefore normals, you had to use a non-Q3 format. As such, if you wanted to go beyond Q3 in terms of lighting, you had to take a much less optimized path, in addition to the fact that vertex lighting slows down vertex processing. As such, for a not-insignificant amount of time, developers were forced to take the optimized path and forgo the use of vertex lighting.

Indeed, only relatively recently have these restrictions been truly relaxed (nVidia's performance PDFs for the GeForce 3 were the first to publicly remove the CVA and VAR format restrictions). And, of course, it takes the industry a while to adapt to the relaxed restrictions.

V-man
02-04-2003, 10:43 AM
>>>Indeed, only relatively recently have these restrictions been truly relaxed (nVidia's performance PDFs for the GeForce 3 were the first to publicly remove the CVA and VAR format restrictions). And, of course, it takes the industry a while to adapt to the relaxed restrictions.<<<

So what are you complaining about?

Carmack made some nice games, stepped up the popularity of GL, and asks for some requirements. And what's more, he is just a game engine developer, not some big pushy company like MS.

IMO, he has done very well.

You don't like it? Too bad.

Tom Nuydens
02-04-2003, 11:15 AM
Originally posted by Korval:
Indeed, only relatively recently have these restrictions been truly relaxed

More than two years, three chipsets and a few dozen driver revisions ago is "recent" to you?

-- Tom

pkaler
02-04-2003, 11:49 AM
Write an engine. Write benchmarks. Send them to devrel (or specific people at your favorite vendor). That is probably the most constructive thing that most posters in this forum can do. You may now return to your regularly scheduled forum.

DIE THREAD DIE


[This message has been edited by PK (edited 02-04-2003).]

MichaelNewman
02-04-2003, 04:20 PM
Korval>>>>Indeed, hardware vendors don't afford that clout to actual university graphics researchers who tower above Carmack in their knowledge of the field. He gets that clout for only one reason: his, "developing relevant games and benchmarks."<<<<

First of all, I doubt many so-called "actual university graphics researchers" would claim that their knowledge of the field towers above Carmack's. Secondly, Carmack is probably more of a researcher than a lot of claimed university researchers. Finally, doesn't someone in the games industry (who is actually implementing stuff) deserve more "clout" with graphics card manufacturers than a researcher (who is inventing new stuff)?
After all, what good would a graphics card be if it supported a whole bunch of *stuff* not implemented in games?