
View Full Version : Official OpenGL mechanism for vertex/pixel "shaders"



timfoleysama
12-01-2000, 11:39 AM
nVidia and MS worked together on creating DX8 with its vertex and pixel "shader" architecture. This is new functionality that GL lacks. A GL extension for vertex programs was added by nVidia, and I expect that when NV20 comes out they will add an extension to access its per-pixel abilities (dependent texture reads and the like). This is, of course, good. We wouldn't want new hardware to come out without us being able to access its features through GL.

In the long term, though, this kind of programmable pipeline will become more and more prevalent. I believe that eventually this kind of functionality will have to be folded into the official OpenGL specification. Either that or this is the time for an official break of a games-specific flavor of GL from the existing system. At this point, taking advantage of vertex arrays, multitexture, complex blend modes and shading operations, and programmable per-vertex math leaves one writing code consisting only of extensions, it seems. This functionality should either be brought into GL proper, or should be spun off into a separately evolving spec that consumer games cards would implement (a "gaming" subset, similar to the "imaging" subset).

Some questions:
Is the DX8 API for a programmable pipeline (and the corresponding shader language they have chosen) the "right" choice? What should happen in the next release(s) of OpenGL to adapt it to the reality of T&L hardware and programmable GPUs?
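
For concreteness, this is roughly what the existing vertex-program extension path looks like from C today. It is only a sketch: it assumes the NV_vertex_program entry points and tokens are already declared and fetched (e.g. via an extension header and wglGetProcAddress), and the program text is just a pass-through transform.

    #include <string.h>
    #include <GL/gl.h>
    #include "glext.h"   /* assumed to declare the NV_vertex_program tokens/entry points */

    /* Minimal NV_vertex_program setup: transform position by the tracked
       modelview-projection matrix and pass the vertex color through. */
    static const GLubyte vp[] =
        "!!VP1.0\n"
        "DP4 o[HPOS].x, c[0], v[OPOS];\n"   /* c[0]..c[3] track MVP, see below */
        "DP4 o[HPOS].y, c[1], v[OPOS];\n"
        "DP4 o[HPOS].z, c[2], v[OPOS];\n"
        "DP4 o[HPOS].w, c[3], v[OPOS];\n"
        "MOV o[COL0], v[COL0];\n"
        "END";

    void setup_vertex_program(void)
    {
        GLuint id;
        glGenProgramsNV(1, &id);
        glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
        glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                        (GLsizei)strlen((const char *)vp), vp);

        /* ask the driver to keep c[0]..c[3] loaded with the concatenated
           modelview-projection matrix */
        glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0,
                        GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);

        glEnable(GL_VERTEX_PROGRAM_NV);
    }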

mcraighead
12-01-2000, 12:05 PM
You can be absolutely confident that not only will all the DX8 features be exposed in OpenGL, but that we will in fact provide more and better features in OpenGL.

If you look at what DX8 provides today, almost everything that is truly "new" in it is still being emulated in SW. So, at this point, it really _doesn't_ offer anything fundamentally new. Many of the features in it have been around in OpenGL for a long time. Vertex streams look a lot like vertex arrays. 3D textures are in OpenGL 1.2. Pixel shaders are actually less powerful than register combiners. And so on.

Specifically, on the topic of vertex programs, we do feel that the API chosen by DX8 and in NV_vertex_program is the right API for this functionality.

The ARB is currently looking at programmable geometry. I can't comment further on the activities of the ARB, for a variety of reasons.

- Matt

MikeC
12-01-2000, 12:46 PM
Matt, a few related questions if I may:

1) NV were first off the block with OpenGL vertex programs, and were very influential in the design of DX8 - certainly far more so than any other vendor. Is this relatively vendor-specific design choice likely to hamper ARB standardization in this area?

2) Most of the recent GL extensions have focused on pipeline programmability, which is very low-level compared to the rest of OpenGL. Are there any efforts underway to balance this trend by providing simplified access to common applications of this programmability? Requiring programmers to reimplement the entire T&L pipeline (using a whole new pseudo-assembler language) to use new effects makes for great demos and cutting-edge games, but probably isn't going to win many converts among more mainstream users.

3) If and when the ARB does standardize on a programmable-pipeline scheme, can we assume that the pseudo-ASM language will also be standardized?

4) If you can't comment on ARB progress, can you comment on why you can't comment on ARB progress? No meeting minutes have been published for quite a while now. Is it a case of IP worries (shadow of the Rambus fiasco), or what?

5) We've heard nothing for AGES about official MS support for 1.2, which implies that we can expect official support for a putative 1.3 sometime after the heat death of the Universe. Is there anything that can be done to bypass this MS bottleneck and get working (statically-linked) support for 1.2 and future versions?


I know some (all?) of these are fairly political, and I really don't want to put you on the spot in any way - if you can't or would prefer not to answer, that's fine. I think everyone on this board appreciates the effort you put in to keep us up to date. I'm just curious, and figured it couldn't hurt to ask.

j
12-01-2000, 04:07 PM
Pixel shaders are actually less powerful than register combiners.

Really?

Why would anybody want to make a "new" feature that is less powerful than something that has already been around for a while?

In what ways can register combiners outdo the DX8 pixel shaders?

j

mcraighead
12-01-2000, 07:05 PM
Pixel shaders vs. register combiners: pixel shaders are missing the signed range, the register combiners' range remappings, an equivalent to the programmable final combiner, an AB+CD operation, and the mux operation. The "full" pixel shader spec has some extra features, but they are not supported by any hardware available today.
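
To make the "programmable final combiner" point concrete, here is a small sketch in C of driving it directly. It assumes the NV_register_combiners entry points are available and that the general combiners have left the shaded fragment color in spare0; the register choices are just one example. The final combiner computes A*B + (1-A)*C + D.

    /* Final combiner sketch: out = fog.a * spare0 + (1 - fog.a) * fogColor + specular */
    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_FOG,
                           GL_UNSIGNED_IDENTITY_NV, GL_ALPHA);   /* fog factor    */
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);     /* combined color */
    glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_FOG,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);     /* fog color     */
    glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_SECONDARY_COLOR_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);     /* specular add  */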

I don't see the lack of MS 1.2 support as being much of an issue. It's accessible as an extension and static linking would create compatibility issues (what happens if an OGL 1.2 app runs on a system with a 1.1 driver? you'll get an obscure error message, most likely). There is talk at the ARB of a WGL "replacement", but I think this would cause far more problems than it would fix.
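
As a concrete illustration of "accessible as an extension": on Windows the 1.2 entry points are fetched at runtime rather than linked statically, so a missing function becomes an explicit fallback path instead of a load-time failure. A minimal sketch (the typedef name and fallback policy here are just one way to do it):

    #include <windows.h>
    #include <GL/gl.h>

    /* typedef matches the glDrawRangeElements prototype from GL 1.2 */
    typedef void (APIENTRY *DRAWRANGEELEMENTSPROC)(GLenum mode, GLuint start,
                                                   GLuint end, GLsizei count,
                                                   GLenum type,
                                                   const void *indices);

    static DRAWRANGEELEMENTSPROC myDrawRangeElements = NULL;

    void init_gl12_entry_points(void)   /* call with a current GL context */
    {
        myDrawRangeElements =
            (DRAWRANGEELEMENTSPROC)wglGetProcAddress("glDrawRangeElements");
        if (!myDrawRangeElements)       /* some drivers only export the EXT name */
            myDrawRangeElements =
                (DRAWRANGEELEMENTSPROC)wglGetProcAddress("glDrawRangeElementsEXT");
        /* if still NULL, the driver is 1.1-only: fall back to glDrawElements */
    }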

I can't discuss anything related to the ARB discussion of programmable geometry. Far too many IP issues.

It's disappointing that the ARB hasn't posted meeting notes publicly lately, yes. I don't know what the deal is with this.

On the topic of low-level vs. high-level APIs, I firmly believe that we have been designing specs at "about the right level" of abstraction. On one hand we have people crying out for direct access to every single thing in the HW. On the other, we don't want to create legacy-nightmare extensions, and we need to make the features reasonably usable. Some specs are lower-level than others, but only in the places where we believe it's necessary -- and even then, we are often still providing significant abstraction from the underlying HW.

It's true that extensions will always add more API complexity. This is unavoidable. 3D will get more complicated, no matter what we do. The solution, I think, is that there will need to be more layers of API abstraction. You'll probably see more 3rd-party 3D libraries or engines where someone who specializes in OpenGL or D3D has already done this work. Clearly, the solution is not to add a glLoad3DStudioModelAndDisplayItWithBumpmaps command... but someone else _can_ provide that, if that's what people want.

- Matt

timfoleysama
12-02-2000, 12:28 AM
Matt -

Just one thing. Register combiners may be more powerful than pixel shaders, but they still don't expose the dependent texture read functionality - and I assume upcoming nVidia HW will support that too. Is there an existing extension that exposes dependent texture reads at the right level of abstraction?

Oh, and just one more "just one more thing." This is off the topic of the original post here but will the next-gen cards from NV that support 3D textures allow those textures to be paletted? I bought a Radeon for a research project I'm doing that required that functionality and then found out that ATI doesn't believe in paletted textures.
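
For context, this is roughly the EXT_paletted_texture setup such a project depends on; a sketch with placeholder sizes and formats, and whether it composes with 3D textures on a given driver is exactly the open question.

    /* 8-bit index texture plus a 256-entry RGBA palette (EXT_paletted_texture).
       The texture stays 8 bits per texel in video memory; the palette lookup
       happens at sampling time. */
    void upload_paletted_texture(GLuint tex,
                                 const GLubyte *palette,  /* 256 RGBA entries */
                                 const GLubyte *indices,  /* w*h index bytes  */
                                 GLsizei w, GLsizei h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glColorTableEXT(GL_TEXTURE_2D, GL_RGBA8, 256,
                        GL_RGBA, GL_UNSIGNED_BYTE, palette);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_COLOR_INDEX8_EXT, w, h, 0,
                     GL_COLOR_INDEX, GL_UNSIGNED_BYTE, indices);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }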

mcraighead
12-02-2000, 01:32 PM
Dependent texture reads are in DX8, and so we'll have them in OpenGL too.

3D textures: When we support 3D textures, we'll definitely also support paletted 3D textures. I don't know if the Radeon HW actually supports paletted textures at all.

Now, on a slightly related subject, when ATI put up their DX8 devrel material, I noticed something interesting about 3D textures on Radeon...
http://www.ati.com/na/pages/resource_centre/dev_rel/sdk/RadeonSDK/Html/Info/DirectX8.html

And I quote:

The RADEON™ does not support multiresolution 3D textures (i.e. volume mip maps) or quadrilinear filtering.

Interesting that they haven't really felt much need to mention this up until recently. :)

- Matt

timfoleysama
12-03-2000, 11:42 AM
Yes, the Radeon seems to have paid a price in flexibility for being the first out of the gates with volume texturing.

I think the reason they can't do volume mipmaps probably has to do with the amount of filtering that would be involved in implementing MIPMAP_LINEAR for volumes. If I remember correctly the Radeon 3-texture pipeline is limited by the number of linear filters it can do. It can handle bilinear filtering on three textures, but if you turn on trilinear filtering for even one, then you can only do two simultaneous textures (albeit with trilinear on both). Since a volume texture already uses trilinear for regular sampling (whereas a 2D texture uses it only for MIPMAP_LINEAR), I think that mipmap interpolation for even a single volume texture would go over their limit of six directions of linear interpolation. It seems that it would be best to have the texture units be truly orthogonal, so that the filtering in each may be chosen freely.

Fortunately, for my project, volume mipmapping is not required. Unfortunately paletted textures are absolutely critical...

mcraighead
12-03-2000, 06:34 PM
Yes, true LINEAR_MIPMAP_LINEAR support requires quadrilinear filtering. But plain old LINEAR filtering is trilinear with 3D textures, so I think the lack of mipmap support may not be related to filtering concerns.
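
For anyone keeping count, the arithmetic behind those terms: LINEAR filtering in d dimensions reads 2^d texels, and LINEAR mipmap interpolation doubles that. A trivial sketch of the tap counts:

    /* 2D LINEAR (bilinear)                   =  4 taps
       2D LINEAR_MIPMAP_LINEAR (trilinear)    =  8 taps
       3D LINEAR ("trilinear")                =  8 taps
       3D LINEAR_MIPMAP_LINEAR (quadrilinear) = 16 taps */
    int filter_taps(int dimensions, int mipmap_linear)
    {
        return 1 << (dimensions + (mipmap_linear ? 1 : 0));
    }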

ATI also doesn't advertise their trilinear restriction very openly... I believe they use two texture units to do trilinear, so you can do 1 trilinear and 1 bilinear but not 2 trilinear, and if this is correct, it actually constitutes a small cheat on their part -- similar to how the V5-5500 has been bashed in some circles for supporting only bilinear in combination w/ multitexture, causing its benchmarks to be slightly overstated.

Bringing you your daily dose of FUD,

- Matt

Humus
12-05-2000, 03:59 AM
Speaking of card weaknesses, when will we see an nVidia card with near-Matrox image quality? The Radeon is almost there, the V5 not too far behind.

mcraighead
12-05-2000, 04:37 AM
I don't know what the deal is with all the various claims of 2D image quality I've seen.

I could rant for hours on how people on the web seem to know absolutely _nothing_ about image quality... (not that the average person, or even the above average person, on the average HW web site seems to know anything about 3D either, no matter how much 31337 insider information they claim to have)

For one, 2D and 3D image quality need to be _clearly_ separated. 3D image quality is the mapping from OpenGL/D3D commands to pixels in the framebuffer and can be measured objectively, while 2D image quality is the mapping from the pixels in the framebuffer to the image on the screen, and is generally very subjective.

In terms of 3D image quality, ever since the TNT, I think it's safe to say that we've been right up there at the top and that we've only improved since then.

For 2D image quality, I've heard so many contradictory stories that I don't know who to trust any more. I've heard everything from "my old Mystique II has better 2D image quality than my GeForce2" to "G400 is underrated, TNT2 was better and GF kills it." Since 2D image quality is a function of so many factors (video card, monitor, cable to monitor, resolution, color depth, refresh rate), and it's analog, I have a feeling that the confusion stems from people failing to do reasonable comparisons. Comparing video card X on monitor A with video card Y on monitor B tells you absolutely nothing about X, Y, A, or B whatsoever!

I've also seen all sorts of claims about how one card has brighter colors or somesuch, and so it's better. These are equally ridiculous. It's called "gamma", folks. If gamma isn't sufficient, well, with the number of controls on the average monitor and in our control panel put together, you have absolutely _no_ excuse to be complaining about the colors.

In the long run, I would imagine that the 2D image quality issue for video cards will go away entirely. Digital connectors will probably do the trick.

- Matt

Humus
12-05-2000, 10:55 AM
What I was mainly thinking about was the generally blurry output at high resolutions on most cards based on nVidia chipsets. The difference between a G400 and a GeForce is rather big. The Radeon is only slightly behind, while the GeForces are in their own class below. Sure, many people don't have a problem with this, but this is one reason I have still never bought an nVidia product. The other reason is that nVidia's webpage plainly sux. It only tells you how fantastic their cards are, and all the marketing BS gets boring rather quickly. When you start to wonder where the technical details about the cards can be found, you'll only find that they aren't there. At least I've never found any kind of feature list for the cards. Amazing that you have to read reviews on the net to find those things out instead of being able to find them on their own homepage.

mcraighead
12-06-2000, 04:44 AM
I'm not saying you're wrong about 2D image quality, but what I'm saying is that I've heard so many contradictory claims on the subject that I don't know who to believe any more.

I personally have never had a problem with our 2D quality. It's also not my department; the OpenGL driver has absolutely no impact on 2D image quality.

And I've talked to several people in person who say that the whole issue is very exaggerated and that they have tried G400 and Radeon and saw no difference whatsoever at any resolution.

I've also seen claims that many of the problems come from board vendors using a very cheap filter (i.e. components are not up to spec) on the VGA signal. I don't know if this is true.

- Matt

mcraighead
12-06-2000, 04:56 AM
Oh, and as for our web page, I was able to find a list of features in a few minutes:
http://www.nvidia.com/Products/GeForce2Go.nsf/features.html

And if you're expecting "the dirt" on our web page rather than info from marketing, well, I think it's safe to say that that isn't going to happen -- the whole _point_ is to present a positive outlook. If you want more technical info, that's what the developer pages are for. If that's not good enough, then either we (1) only have so much time or (2) decided that it is in our best interest to not publish that information. This is how all companies work, not just us...

I think it's also safe to say, from looking at our competitors' web pages, that they follow a fairly similar policy. Marketing info on the main pages, technical info on the developer pages, and only the info that they feel is fit to post.

- Matt

Humus
12-06-2000, 03:05 PM
Sure, most people's eyes aren't that sensitive. But I've seen various nVidia cards in action, and none have been satisfying at higher resolutions. It may perhaps vary between vendors, but I've heard from many sources that the reference design (which most vendors build their cards from) is the problem.

The web page thing: that link wasn't exactly what I was looking for. I mean, that feature list isn't even half the length of the feature list for a V5. If I wonder whether the GTS supports, say, anisotropic filtering and to what degree, where should I find that information?
It only tells you that the cards are so damn great, but never tells you in what respects. It's just the latest and greatest. 95% of the page is marketing BS, the other 5% is useful information.

Look at this exemplary ATi page: http://www.ati.com/na/pages/technology/hardware/radeon/techspecs.html
It's easily found on their site.
3dfx has a good and complete feature list, and so does Matrox.

timfoleysama
12-07-2000, 05:01 PM
Okay, the vendor-bashing here is far more off-topic than even my OT post. If you don't like nVidia cards, that's your own issue and nobody will make you buy one. The fact that the GeForce is pretty much the de facto standard at the moment means that a few people out there think nVidia cards are okay. Besides, issues like that don't belong on a developer forum; they belong on a gaming site.

j
12-07-2000, 07:34 PM
Back to the original topic (sort of).

It seems to me that with programmers demanding more and more flexibility with vertex and pixel shaders, the industry is going to end up having pixel and vertex shaders with an almost unlimited number of instructions, texture fetches whenever the programmer wants from any texture unit, dozens of different machine instructions, their own set of registers to store data, and so on. Pixel shaders will end up doing almost any sort of calculation.

Sort of like a CPU does.

It seems that in a couple years, "all" a graphics card will be is a completely programmable hardware T&L unit, and a couple dozen general purpose pixel pipelines.

Would this simplify graphics chip designs? Instead of having to fit many different types of functionality on the chip, the chip could do it using the pixel shading programming language.

I find it sort of strange that graphics chips, which were diverging from CPU's in terms of special functions and such, might be evolving to become _more_ like a CPU.

What do you think?

j

kaber0111
12-07-2000, 09:45 PM
>web seem to know absolutely _nothing_ about
>image quality

*laughs*
it's funny cause it's true ;)

yeah, a lot of ppl think pixel shaders are the
holy grail. like i was talking on the phone in this game dev interview, and the guy was telling me pixel shaders are the bomb, etc..
haha,

dude didn't know much about them.
and the fact that there's still a bunch of time till they are supported in hardware makes them pretty much useless to use right now... software is horribly slow.

yes, combiners have a buttload of functionality and for demo making they are the best way to go.
but many people can't really use OpenGL in production.

the reason?
cause some companies spit out **** drivers.
and that's the bottom line.

imho Nvidia's version of OpenGL wipes the floor with *D3D on the current hardware, but the sad part is,
it only works on nvidia cards.
heh

*hoppe added some really good code into the DX release build/d3dx and i'm pretty sure a lot of ppl like the progressive meshing..

and then there's the other fact that directX is backed by a company with lots of money,
and with D3D it's really a "Standard".

like, i blasted mark an email cause i was really ticked off about all the ip/nvidia crap in the spec, and check out the reply i got...

problems/urls.. http://www.angelfire.com/ab3/nobody/email_rebut.html

mark's reply http://www.angelfire.com/ab3/nobody/kilgard_rebut.html


and we are trying to standardize OpenGL, right?
food for thought.

laterz,
akbar A.

timfoleysama
12-08-2000, 08:21 AM
I think that in the long term OpenGL is going to be on a slow road to obsolescence. The simple reason is that purely raster APIs like OpenGL existed in order to be a lowest common denominator that almost all hardware could support. The GL API is huge, and often unnecessarily so, it seems. MS had the right idea with DX8: they introduced a greatly simplified model of the API. All vertex data is stored in Vertex Buffers, all transformations are encoded as Vertex Shaders, and all shading is encoded as Pixel Shaders. In the best case all the data remains resident on the card, and the app only signals when to switch vertex data, textures or shaders. Not using calls that directly rasterize primitives gives a huge speedup.

A truly modern API must embrace this style of programming, but OpenGL, I think, ends up bringing too much of its legacy along. It will be interesting to see how GL weathers the introduction and wide acceptance of vertex and pixel shaders, and how it will evolve as rasterization APIs eventually give way to scene-graph APIs.

Humus
12-08-2000, 08:27 AM
Originally posted by timfoleysama:
Okay, the vendor-bashing here is far more off-topic than even my OT post. If you don't like nVidia cards, that's your own issue and nobody will make you buy one. The fact that the GeForce is pretty much the de facto standard at the moment means that a few people out there think nVidia cards are okay. Besides, issues like that don't belong on a developer forum; they belong on a gaming site.

So, it's OK that mcraighead out of nowhere brings the weaknesses of both the Radeon and the V5 into the discussion, but I'd better not talk about the GeForce's weaknesses?
Sure, it's OT, but so were the filtering limitations of the V5 and Radeon.

I was not bashing nVidia and I haven't said the GF is a bad card. I said that the image quality didn't satisfy me and I asked when we would see this problem solved.
Also, image quality and features belong, IMO, more on a developer forum than on a gaming site. The average developer cares much more about those two issues than the average gamer.

timfoleysama
12-08-2000, 08:33 AM
Further, on the subject of programmability in GPUs, I think it's obvious that someday the benefits of having a fully programmable general-purpose GPU will be clear, and the hardware manufacturers will start producing them. So far, the cost in speed this would incur has not been outweighed by the benefits in flexibility. I assume that someday we will just see something like the removal of the limitation on the number of operations in vertex programs and pixel shaders.

In the long run, though, it seems to me that once we have fully programmable GPUs, limiting them to just graphics will seem ludicrous. What if you could encode your physics algorithms as a "vertex program" and allow the card to process your physics for you? If we ever see floating-point framebuffers and a fully generalized pipeline, then I think that GPUs will become applicable to far more diverse problems than just graphics. At that point, though, it makes sense to move towards a new computer architecture in which a scalar CPU and a massively SIMD GPU work together as multiprocessors, with shared and exclusive memory for each. Programs would consist of machine code for both instruction sets designed to work in tandem. Or maybe this kind of massively parallel computation will become part of standard CPUs, so that we move back to the days when the CPU was responsible for all graphics work.

My logic for this is that as GPUs become more flexible we will want to apply them to more problems, and the CPU<=>GPU bus will continue to be the limiting factor, until the only suitable solution is to bring the two closer together...

j
12-08-2000, 08:46 AM
I agree.

It does seem that pretty soon GPU's will be very similar to a massively SIMD CPU with a couple special instructions added in.

Sort of sets the stage for a return to assembler programming, until compilers catch up with this.

If it happens, of course.

j

Cab
12-09-2000, 04:35 AM
Originally posted by j:
I agree.

It does seem that pretty soon GPU's will be very similar to a massively SIMD CPU with a couple special instructions added in.

Sort of sets the stage for a return to assembler programming, until compilers catch up with this.

If it happens, of course.

j

This is not something new. Don't you remember the TIGA graphics boards with the Texas Instruments 34010 chip (10 years ago)? It was a 2D graphics chip with an assembler and a C compiler. I made a special-purpose CAD program for map engineering using it, and it was fun and very powerful. I remember an example that came with the compiler that let you draw triangles, and another example using it that drew a non-textured 3D environment in real time at 1024x768 (this was 10 years ago, so it was very impressive).

:)

mcraighead
12-09-2000, 09:28 AM
Omnibus reply follows...

I continue to insist, as before, that the whole "2D image quality" thing is something that I know very little about. I will repeat my previous statement:

*** begin quote from previous message ***
I'm not saying you're wrong about 2D image quality, but what I'm saying is that I've heard so many contradictory claims on the subject that I don't know who to believe any more.
*** end quote from previous message ***

For lists of features, our extensions document does list the set of extensions supported on each chip.

I'd also question the link to the ATI page you posted as being little more than dressed-up marketing material. The images in that document are all straight out of Radeon marketing slides/whitepapers. The document glosses over all sorts of inconvenient details and even makes misleading or false statements in a number of places. I could provide a large list of examples from just that document alone. If you want the real information (or an approximation thereof), again, you have to go to their developer pages.

On the IP issues: Yes, perhaps we're being overly cautious, but in this incredibly cutthroat industry, we have no choice. And as Mark said, it's standard practice, and we also have offered to license the extension on what we think are reasonable terms.

It's massively oversimplifying DX8 to claim that it reduces everything into vertex buffers, vertex shaders, and pixel shaders. First of all, I could just as well claim that in the future, OpenGL is going to be all about vertex arrays, vertex programs, and register combiners. The two statements are, in fact, almost equivalent! Secondly, both of them ignore the fact that there is still a lot of API functionality _outside_ those areas. Pixel shaders/register combiners totally ignore the backend fragment operations. The viewport transform, primitive assembly, clipping, and rasterization and interpolation are still quite alive. And so on.

In terms of future system architectures, I think the XBox is a good example of where things are headed. The XBox has three main chips: CPU, GPU, and MCP, with a unified memory architecture -- all the memory hangs off the GPU, so there's no dedicated system memory, video memory, or audio memory. The GPU and MCP are optimized for specific functions, while the CPU handles "everything else".

If you make the GPU too programmable, it becomes nothing more than a CPU. So programmability must be limited.

- Matt

j
12-09-2000, 06:37 PM
Don't you remember the TIGA graphics boards with the Texas Instruments 34010 chip (10 years ago)? It was a 2D graphics chip with an assembler and a C compiler.

No, I don't remember that. Maybe that's because I didn't even own a computer then.


If you make the GPU too programmable, it becomes nothing more than a CPU. So programmability must be limited.

I wasn't saying that I think a GPU should do what a CPU does.
What I am saying is that with seemingly everybody asking for more and more flexibility, the assembly language used in GPU pixel shaders might end up being similar to a CPU assembly language in the types of instructions it supports.

Correct me if I'm wrong, but register combiners seem to be like the way people programmed a long time ago, on some of the first computers. Take the inputs, choose one of a couple available operations on them, and then output them; repeat for however many combiners you have. It's true that you have all the input scaling and biasing options, but you only have about 5 or 6 basic operations.

I'm not saying that I think register combiners are worthless, or that you can't do anything with them. And I don't think that the pixel pipeline should be completely user programmed.

But I do think that substituting a user writeable script in for the portion of the pipeline where the combiners are now could give an amazing amount of flexibility. Something like vertex programs.

j

mcraighead
12-10-2000, 07:04 AM
The combiners are essentially a VLIW instruction set, where you have to do scheduling yourself in your app. For example, you can schedule two RGB multiplies or dot products to occur in parallel by putting one in AB and one in CD. You can also schedule scalar and vector ops by putting the vectors in RGB and scalars in alpha. So in total, the engine can perform 2 vector and 2 scalar ops "per cycle" (for a _very_ loose definition of per cycle; don't try to read too far into my use of this term). The final combiner adds some extra power and complexity to the mix.
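
A sketch of what that scheduling looks like at the API level: one general combiner stage computing two dot products in parallel, N.L in the AB slot and N.H in the CD slot. The register assignments are assumptions for illustration (texture0 holding a normal map, texture1 and the primary color carrying the L and H vectors, all remapped from [0,1] to [-1,1] with EXPAND_NORMAL).

    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    /* AB = expand(tex0) . expand(tex1)           -> spare0 */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    /* CD = expand(tex0) . expand(primary color)  -> spare1 */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV,
                      GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);

    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV,    /* AB result                  */
                       GL_SPARE1_NV,    /* CD result                  */
                       GL_DISCARD_NV,   /* no AB+CD sum when dotting  */
                       GL_NONE, GL_NONE,
                       GL_TRUE,         /* AB is a dot product        */
                       GL_TRUE,         /* CD is a dot product        */
                       GL_FALSE);       /* no mux                     */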

We could have used an interface similar to LoadProgramNV and written an instruction scheduler inside our driver, but when there are only 2 general combiner stages and the single final combiner, you could write a 3-instruction or 4-instruction program that we could have failed to schedule, yet on the other hand you could write a >10-instruction program that _would_ schedule into those combiners.

At some point in the future, I _would_ like to abstract away the combiners a bit. But at the time, it was the right interface, and I think it will still be the right interface for quite a while longer.

In the meantime, you could write an app-level library that would do this kind of scheduling.

- Matt

kaber0111
12-12-2000, 02:41 AM
yeah, combiners are pretty neat stuff.
i think it's time for me to actually write a demo where the combiners are doing a light pass, a la no lightmap, on actual level geometry instead of standard/extended primitives..
see how that works out.

i wonder if i could pull off some neat toon rendering.
probably be quicker for high-poly mesh stuff.
right now my current renderer just uses the simple intel technique.. http://www.angelfire.com/ab3/nobody/toonmodel1.jpg

but my light sources are messed ;/

dunno, probably will try them both out (level geometry and cartoon) with combiner stages.

laterz,
akbar A.

kaber0111
12-12-2000, 11:42 PM
>At this point taking advantage of vertex arrays, multitexture, complex blend modes and shading operations, and programmable per-vertex math leaves one writing code >consisting only of extensions, it seems

another thing to note:
these extensions and the extra code passes are very well worth it.

there is a big enough jump on the nvidia and ati cards that if you don't support them you're really missing out.

honestly, if you're just in it to make or ship a game you don't have to worry, cause there's still a few years till games will 'start' using some of the cooler features..
BUT, if you want to make cutting edge stuff, this is the only way..

example;
jason mitchell of ati was telling me that a lot of developers really shy away when it comes to supporting some of the more complicated/non-trivial code passes..

laterz,
akbar A.

kaber0111
12-12-2000, 11:46 PM
>we do feel that the API chosen by DX8 and
>in NV_vertex_program is the right API for
>this functionality.

i remember cass was planning to organize a chat about the feature right after it came out, on the opengladvanced list (on egroups).
but i don't think anyone got around to it...

are there any more papers/demos available besides the 75-page spec and the ppt?

laterz,
akbar A.

JasonM
12-13-2000, 05:20 PM
Matt wrote:

ATI also doesn't advertise their [volume texture] trilinear restriction very openly... I believe they use two texture units to do trilinear, so you can do 1 trilinear and 1 bilinear but not 2 trilinear, and if this is correct, it actually constitutes a small cheat on their part.

False. This is not correct for Radeon, nor was it correct on the Rage128, back when TNT was doing such a cheat.

Bringing you your daily dose of FUD

Please don't. There's enough to sort through.

-JasonM at ATI

gaby
12-14-2000, 06:09 AM
Don't be put off by my bad English, I'm French!

I think that most programmers are waiting for tools that provide an API for the advanced features. Why use OpenGL if we have to redo the engineering work from the ground up? We'd be better off using DX8. That's the question.

I think that for an experienced programmer it should be easy to write an all-in-one extension, on top of OpenGL, which provides features similar to the DX8 ones. The goal of such a tool is not to offer a function like open_3ds_and_display_this, but bring_me_the_mathematics_and_knowledge_that_i_havent_time_to_spend_on.
Such a function cannot expose the most specific features of each piece of HW, but it would be the right way to avoid the big difference between a 200-line DX8 demo that provides per-pixel shading with bump mapping and advanced vertex shading, and a 5000-line OpenGL one which can only display basic rendering and contains huge amounts of math.

In other words, when will nVidia provide an OpenGL SDK?

We are a very small company, so having a programmer spend 6 months learning the right way to get the best effects out of nVidia cards is too expensive. In a few years we may have the money to spend time learning the core architecture of the HW, but for now we prefer the simplest route: jumping to DX8. And what about independent programmers and students?

So the priority for a vendor, it seems, should be to provide simplification layers that let you start using the most useful features (bump mapping, per-pixel shading, simple vertex shading) in a very short time.

No?

My point is not to tell nVidia and the others to build a 3D engine, but to encapsulate their advanced knowledge in a library of specific functions and extensions.

Gabriel RABHI / Z-OXYDE / France

mcraighead
12-14-2000, 08:26 PM
Originally posted by gaby:
a 200-line DX8 demo that provides per-pixel shading with bump mapping and advanced vertex shading, and a 5000-line OpenGL one which can only display basic rendering and contains huge amounts of math.

I think it's pretty clear, if you look at MS's DirectX samples, that it's usually the exact opposite. MS often does 5000-line samples to do what could be done in 200 lines in OGL (and probably 500 lines with D3D by a reasonably efficient coder).

- Matt

Olive
12-14-2000, 11:34 PM
Just a little contribution to this overheated thread (can someone douse that fire on the icon? :) ).

It is true that developing extension-specific code in OpenGL is a bit of a drag but, let's be honest, even if it seems like DirectX 8.0 has all the pixel and vertex shading capabilities as standard, they're only hardware accelerated on a very small number of video chips on the market. Anyone seriously developing an app with DirectX 8.0 will first check to see if the hardware supports the feature and, if not, will use his own custom software alternative. Said differently, in OpenGL you check if the extension exists before using it, whereas with DirectX you do it the other way around: is the feature accelerated? In the end it turns out to be the same... except that with DirectX you can expect that in the long run every video card on the market will fully support its features. But who knows whether the NV-specific (or any card vendor's, for that matter) extensions will be supported by everyone, or even turn into ARB extensions (let's not even speak of becoming standard OpenGL)?
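
For completeness, the "check if the extension exists before using it" step is only a few lines of C. A minimal sketch (the helper name is ours; note the whole-token comparison, since a plain strstr can match a prefix of a longer extension name):

    #include <string.h>
    #include <GL/gl.h>

    int has_extension(const char *name)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        size_t len = strlen(name);
        while (ext && *ext) {
            const char *hit = strstr(ext, name);
            if (!hit)
                return 0;
            if ((hit == ext || hit[-1] == ' ') &&
                (hit[len] == ' ' || hit[len] == '\0'))
                return 1;       /* whole-token match */
            ext = hit + len;    /* partial match: keep looking */
        }
        return 0;
    }

    /* e.g.  if (has_extension("GL_NV_vertex_program")) { ...use it... }
             else { ...fixed-function fallback... }                      */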

kaber0111
12-15-2000, 03:32 PM
>honest, even if it seems like DirectX 8.0
>has all the pixel and vertex shading
>capabilities as standard, they're only
>hardware accelerated on a very small number
>of video chips on the market

exactly.
that is why we should all use opengl extensions.
see this for more detail: http://www.angelfire.com/ab3/nobody/geforce.txt

with d3d we have to wait for the release cycle, and that sucks.

laterz,
akbar A.