carmack plan + arb2

>>The R300 can run Doom in three different modes: ARB (minimum extensions, no
specular highlights, no vertex programs), R200 (full featured, almost always
single pass interaction rendering), ARB2 (floating point fragment shaders,
minor quality improvements, always single pass).

The NV30 can run DOOM in five different modes: ARB, NV10 (full featured, five
rendering passes, no vertex programs), NV20 (full featured, two or three
rendering passes), NV30 (full featured, single pass), and ARB2.<<
http://www.bluesnews.com/cgi-bin/finger.pl?id=1&time=20030129210315

what is arb2? is it opengl2? i dont think so, but …

I’m pretty sure he’s referring to ARB_vertex_program and ARB_fragment_program.

When he says ARB2, he’s referring to the third code path for the R300 described in the first paragraph you quoted.

It’s always interesting to read Carmack’s .plan files. The best news was the mention of ARB_vertex_buffer_object. I dropped support for the vendor-specific vertex program extensions when the ARB spec was released too, and I’ve been waiting for the array spec ever since.
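Just to illustrate what I’m waiting for – uploading a static vertex array once instead of re-sending it every frame. A rough sketch, assuming the glGenBuffersARB etc. entry points have already been fetched via wglGetProcAddress/glXGetProcAddress:

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* upload a static vertex array into an ARB_vertex_buffer_object */
    static GLuint upload_static_verts(const GLfloat *verts, int num_verts)
    {
        GLuint vbo;
        glGenBuffersARB(1, &vbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                        num_verts * 3 * sizeof(GLfloat),
                        verts, GL_STATIC_DRAW_ARB);
        /* with a buffer bound, the pointer argument becomes an offset */
        glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
        return vbo;
    }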

Another interesting thing he mentions is the issues with floating point buffers. We had a discussion about that not too long ago, and it’s something the ARB needs to address as soon as possible (ARB_render_target sounds like a reasonable name).

[This message has been edited by PH (edited 01-29-2003).]

ARB2 is just the second back-end path that uses only ARB extensions, this time the newer, “higher” ones. It isn’t some OpenGL 2 thing or a super-secret ARB extension set that only he has access to.
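The selection presumably boils down to a simple extension check at startup, something like this (just a sketch; the names are mine, not id’s):

    #include <string.h>
    #include <GL/gl.h>

    typedef enum { PATH_ARB, PATH_ARB2 } render_path_t;

    static int has_ext(const char *name)
    {
        return strstr((const char *)glGetString(GL_EXTENSIONS), name) != NULL;
    }

    /* pick the ARB2-style path only when both ARB programs are exposed */
    static render_path_t choose_path(void)
    {
        if (has_ext("GL_ARB_vertex_program") &&
            has_ext("GL_ARB_fragment_program"))
            return PATH_ARB2;  /* single-pass interactions, fp fragment math */
        return PATH_ARB;       /* minimal fallback path */
    }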

My physics is a little rusty – can someone explain to me what eddy currents have to do with 3D graphics?

– Tom

I think he means you’ll be able to render the differing intensity lines in the fog, as the air moves about and carries the suspended particles with it.

All simulated of course… but better than just constant intensity volumetric fog.

Nutty

Cool stuff... and it’s good to see that the R300 can run really well without a whole slew of ATI-specific extensions.

*looks at NVIDIA’s latest slew of proprietary extensions* Eek...

Thanks Nutty. I thought the term “eddy currents” was only used to describe magnetically induced currents (as in “eddy current brakes”). I also didn’t know that “eddy” was a word – I thought it was the name of the guy who discovered these currents. Not being a native English speaker sucks.

– Tom

Thanks for the link zed. BTW, how do you find this .plan thing?

“Eddy currents” is used to describe small turbulence in fluids, I think, so it applies to liquids and gases.
That’s something I’ve wanted to do for smoke myself. Of course, to be able to see movements, the air must have varying densities of smoke/fog in it, otherwise you’re wasting your time.

/edit/ Has anyone run across PC Magazine? They have written that Doom3 uses DX9, … blah blah blah. Where did they get that from?

[This message has been edited by V-man (edited 01-30-2003).]

Originally posted by V-man:
Has anyone run across PC Magazine? They have written that Doom3 uses DX9, … blah blah blah. Where did they get that from?

They’re probably stuck on the fact that graphics technology is often described in terms of DirectX versions.

One of the other things that interested me in this .plan update was when he talked about using the floating point buffers to do HDR. He said something about “post-blending.” The exact quote was “High dynamic color ranges are supported internally, rather than with post-blending.” OK, I see that with the floating point 64 and 128 bit modes, but what is this “post-blending” stuff? A hacky way to get some form of HDR on a card that does not have floating point formats? Could someone explain this a bit? I would like to try it out in my own programs and see what I can get on my GeForce 4 Ti. This .plan was a pretty interesting read. I just wish he would make technical .plan updates more often.

-SirKnight

but what is this “post-blending” stuff? A hacky way to get some form of HDR on a card that does not have floating point formats? Could someone explain this a bit?

Presumably, though no one can be certain, he’s talking about the post-processing step of making HDR work.

The floating-point framebuffer is just a start. To really make HDR work, you need to go through and find the brightest and darkest pixels. Then, you have to scale all the pixels appropriately from 0 to 1, such that the brightest pixel is 1, the darkest is 0, and the ones in between are physically correct as well.
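The remap itself is trivial if you could get at the pixels – a sketch, assuming the float image has been read back with glReadPixels(..., GL_RGB, GL_FLOAT, pixels):

    #include <float.h>

    /* stretch the image so the darkest value maps to 0 and the
       brightest to 1; everything in between scales linearly */
    static void linear_remap(float *pixels, int count /* w * h * 3 */)
    {
        float lo = FLT_MAX, hi = -FLT_MAX, range;
        int i;
        for (i = 0; i < count; i++) {
            if (pixels[i] < lo) lo = pixels[i];
            if (pixels[i] > hi) hi = pixels[i];
        }
        range = (hi > lo) ? (hi - lo) : 1.0f;
        for (i = 0; i < count; i++)
            pixels[i] = (pixels[i] - lo) / range;
    }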

Unfortunately, this is really hard without direct CPU access to the frame buffer. What Carmack wants (he always wants somebody else to do something; he never wants to do any work himself) is for the hardware to handle this, probably in the RAMDAC itself. I don’t know why he expected this to be implemented in this revision of hardware, but it was a rather silly expectation.

Silly? So how do you send a high precision framebuffer to a 10 bit RAMDAC? Just taking the MSBs seems naive; now is exactly the time for this, considering that video LUTs are already commonplace. In any case it can be done after a fashion, see the ATI & NVIDIA eye candy demos. BTW, it’s not just about scaling min/max to 0-1; that would be underexposed for a lot of scenes and plain wrong w.r.t. your black level most of the time. Detecting what’s in the scene using the CPU is not the issue here; that’s an easy part of the problem if you have control of your scene. And you may want some physiological adaptation response in the loop too, rather than just a constant ideal exposure – again possible.
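An exposure-style response, for example, instead of a straight min/max stretch – a tiny sketch, where k would be driven by your adaptation model rather than the frame’s min/max:

    #include <math.h>

    /* film-like response: black stays at 0, output approaches 1
       asymptotically as luminance grows */
    static float expose(float luminance, float k /* exposure level */)
    {
        return 1.0f - expf(-k * luminance);
    }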

Carmack describes some kind of workaround where he uses the framebuffer as a texture and fragment arithmetic rather than framebuffer blends to accumulate (over multiple passes, I assume), to allow the use of an fp framebuffer. It seems like it was an experimental path. Any disappointment with fp framebuffers seems to stem from the lack of fp fragment blend operations.
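My guess is the experimental path looks roughly like this (not id’s actual code): snapshot the framebuffer into a texture after each pass, then do the “blend” as fragment math in the next pass:

    #include <GL/gl.h>

    /* copy the current framebuffer into an already-allocated texture
       of matching size */
    static void snapshot_framebuffer(GLuint fb_copy_tex, int win_w, int win_h)
    {
        glBindTexture(GL_TEXTURE_2D, fb_copy_tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, win_w, win_h);
    }

    /* the next pass's ARB_fragment_program would then do something like:
         TEX prev, fragment.texcoord[0], texture[0], 2D;
         ADD result.color, prev, lightTerm;
       i.e. the accumulation happens as fragment arithmetic instead of
       a destination blend, which fp buffers lack */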

[This message has been edited by dorbie (edited 01-30-2003).]

Silly? So how do you send a high precision framebuffer to a 10 bit RAMDAC?

It’s silly to expect it in this revision of hardware. The concept itself is fine, and ultimately it will be implemented (either in ATi’s R350 or the next real generation of GPUs). Carmack doesn’t need to tell us what both we and the hardware developers already know, nor does he need to push hardware vendors to make features that are already on the drawing board.

It was a silly expectation because there are other uses for a floating point buffer as well. The lack of HDR-capable RAMDACs isn’t sufficient to hold back a feature like fp buffers. The fact that fp buffers are the only thing preventing RenderMan shaders from running fully in hardware (via multipass) is alone reason enough to have them, even without some of the hardware that would make them even more useful.

[This message has been edited by Korval (edited 01-30-2003).]

P.S. FWIW, I think the internal vs. post-blending HDR comment refers to internal arithmetic that can scale > 1.0 prior to the final destination blend, rather than blending in the framebuffer and then boosting or using a LUT. I’d have thought this was a non-issue, since you could always spend a bit with the left-shift approach for individual contributions. Maybe he’s talking about the result of successive accumulations of relatively dark contributions; that would start to suffer. Only a high precision framebuffer can solve it.
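E.g. the left-shift trick with the combine extension – illustrative only: draw the light passes at quarter intensity, then restore the range in a final fullscreen pass over a copy of the framebuffer:

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* 4x combiner scale for the restore pass; costs two bits of
       framebuffer precision per channel */
    static void setup_restore_pass(void)
    {
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
        glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE, 4.0f); /* 1, 2 or 4 only */
    }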

This could be closely related to the fp texture fragment blend he writes about, although one is experimental. Moving the multipass accumulation blends into a fragment path makes for some interesting possibilities, if you have enough instructions and textures for that on top of your fragment shading.

[This message has been edited by dorbie (edited 01-30-2003).]

Korval, I don’t think anyone would disagree with that, not even Carmack. It’s only his .plan, dude.

Originally posted by V-man:
Thanks for the link zed. BTW, how do you find this .plan thing?

http://www.gamefinger.com


/edit/ Has anyone run across PC Magazine? They have written that Doom3 uses DX9, … blah blah blah. Where did they get that from?

Maybe it does - for sound, input, and perhaps even networking. Or maybe it’s been said that it runs (best) on DX9-class hardware?

I’m just happy that there is finally going to be a game that will benefit from my GF3 (and from Carmack’s .plan it looks like it might even run reasonably well… now that I’ve finally upgraded from a PIII 733).

If they have real support for 5.1 sound (unlike all the “EAX” games out there) then I’ll be as happy as a pig in …

Originally posted by rgpc:
Maybe it does - for sound, input and perhaps even network. Or maybe it’s been said it runs (best) on DX9 class hardware?

I think that’s a given – for the Windows version, of course. Versions for other platforms will use their own platform-specific APIs for the non-rendering features.

I doubt they will use DirectX for networking, since Q3’s networking works fine without it. They will probably continue to use DirectX for sound and mouse input, though you only need like DX3 or maybe DX5-level support for that.