OpenGL.org

Results 1 to 10 of 34

Thread: Official OpenGL mechanism for vertex/pixel "shaders"

  1. #1
    Intern Newbie
    Join Date
    Sep 2000
    Posts
    41

    Official OpenGL mechanism for vertex/pixel "shaders"

    nVidia and MS worked together on creating DX8 with its vertex and pixel "shader" architecture. This is new functionality that GL lacks. A GL extension for vertex programs was added by nVidia, and I expect that when NV20 comes out they will add an extension to access its per-pixel abilities (dependent texture reads and the like). This is, of course, good. We wouldn't want new hardware to come out without us being able to access its features through GL.

    In the long term, though, this kind of programmable pipeline will become more and more prevalent. I believe that eventually this kind of functionality will have to be folded into the official OpenGL specification. Either that, or this is the time for an official break of a games-specific flavor of GL from the existing system. At this point, taking advantage of vertex arrays, multitexture, complex blend modes and shading operations, and programmable per-vertex math seems to leave one writing code that consists almost entirely of extensions. This functionality should either be brought into GL proper, or should be spun off into a separately evolving spec that consumer games cards would implement (a "gaming" subset, similar to the "imaging" subset).

    Some questions:
    Is the DX8 API for a programmable pipeline (and the corresponding shader language they have chosen) the "right" choice? What should happen in the next release(s) of OpenGL to adapt it to the reality of T&L hardware and programmable GPUs?

  2. #2
    Senior Member OpenGL Pro
    Join Date
    Sep 2000
    Location
    Santa Clara, CA
    Posts
    1,096

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    You can be absolutely confident that not only will all the DX8 features be exposed in OpenGL, but that we will in fact provide more and better features in OpenGL.

    If you look at what DX8 provides today, almost everything that is truly "new" in it is still being emulated in SW. So, at this point, it really _doesn't_ offer anything fundamentally new. Many of the features in it have been around in OpenGL for a long time. Vertex streams look a lot like vertex arrays. 3D textures are in OpenGL 1.2. Pixel shaders are actually less powerful than register combiners. And so on.

    Specifically, on the topic of vertex programs, we do feel that the API chosen by DX8 and in NV_vertex_program is the right API for this functionality.

    The ARB is currently looking at programmable geometry. I can't comment further on the activities of the ARB, for a variety of reasons.

    - Matt

  3. #3
    Advanced Member Frequent Contributor
    Join Date
    Feb 2000
    Location
    London
    Posts
    503

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    Matt, a few related questions if I may:

    1) NV were first off the block with OpenGL vertex programs, and were very influential in the design of DX8 - certainly far more so than any other vendor. Is this relatively vendor-specific design choice likely to hamper ARB standardization in this area?

    2) Most of the recent GL extensions have focused on pipeline programmability, which is very low-level compared to the rest of OpenGL. Are there any efforts underway to balance this trend by providing simplified access to common applications of this programmability? Requiring programmers to reimplement the entire T&L pipeline (using a whole new pseudo-assembler language) to use new effects makes for great demos and cutting-edge games, but probably isn't going to win many converts among more mainstream users.

    3) If and when the ARB does standardize on a programmable-pipeline scheme, can we assume that the pseudo-ASM language will also be standardized?

    4) If you can't comment on ARB progress, can you comment on why you can't comment on ARB progress? No meeting minutes have been published for quite a while now. Is it a case of IP worries (shadow of the Rambus fiasco), or what?

    5) We've heard nothing for AGES about official MS support for 1.2, which implies that we can expect official support for a putative 1.3 sometime after the heat death of the Universe. Is there anything that can be done to bypass this MS bottleneck and get working (statically-linked) support for 1.2 and future versions?


    I know some (all?) of these are fairly political, and I really don't want to put you on the spot in any way - if you can't or would prefer not to answer, that's fine. I think everyone on this board appreciates the effort you put in to keep us up to date. I'm just curious, and figured it couldn't hurt to ask.

  4. #4
    Member Regular Contributor
    Join Date
    Jun 2000
    Location
    B.C., Canada
    Posts
    367

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    Pixel shaders are actually less powerful than register combiners.
    Really?

    Why would anybody want to make a "new" feature that is less powerful than something that has already been around for a while?

    In what ways can register combiners outdo the DX8 pixel shaders?

    j

  5. #5
    Senior Member OpenGL Pro
    Join Date
    Sep 2000
    Location
    Santa Clara, CA
    Posts
    1,096

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    Pixel shaders vs. register combiners: pixel shaders are missing the signed range, the register combiners range remappings, an equivalent to the programmable final combiner, an AB+CD operation, and the mux operation. The "full" pixel shader spec has some extra features, but they are not supported by any hardware available today.

    I don't see the lack of MS 1.2 support as being much of an issue. It's accessible as an extension and static linking would create compatibility issues (what happens if an OGL 1.2 app runs on a system with a 1.1 driver? you'll get an obscure error message, most likely). There is talk at the ARB of a WGL "replacement", but I think this would cause far more problems than it would fix.

    I can't discuss anything related to the ARB discussion of programmable geometry. Far too many IP issues.

    It's disappointing that the ARB hasn't posted meeting notes publicly lately, yes. I don't know what the deal is with this.

    On the topic of low-level vs. high-level APIs, I firmly believe that we have been designing specs at "about the right level" of abstraction. On one hand we have people crying out for direct access to every single thing in the HW. On the other, we don't want to create legacy-nightmare extensions, and we need to make the features reasonably usable. Some specs are lower-level than others, but only in the places where we believe it's necessary -- and even then, we are often still providing significant abstraction from the underlying HW.

    It's true that extensions will always add more API complexity. This is unavoidable. 3D will get more complicated, no matter what we do. The solution, I think, is that there will need to be more layers of API abstraction. You'll probably see more 3rd-party 3D libraries or engines where someone who specializes in OpenGL or D3D has already done this work. Clearly, the solution is not to add a glLoad3DStudioModelAndDisplayItWithBumpmaps command... but someone else _can_ provide that, if that's what people want.

    - Matt

  6. #6
    Intern Newbie
    Join Date
    Sep 2000
    Posts
    41

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    Matt -

    Just one thing. Register combiners may be more powerful than pixel shaders, but they still don't expose the dependent texture read functionality - and I assume upcoming nVidia HW will support that too. Is there an existing extension that exposes dependent texture reads at the right level of abstraction?

    Oh, and just one more "just one more thing." This is off the topic of the original post here but will the next-gen cards from NV that support 3D textures allow those textures to be paletted? I bought a Radeon for a research project I'm doing that required that functionality and then found out that ATI doesn't believe in paletted textures.

  7. #7
    Senior Member OpenGL Pro
    Join Date
    Sep 2000
    Location
    Santa Clara, CA
    Posts
    1,096

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    Dependent texture reads are in DX8, and so we'll have them in OpenGL too.

    3D textures: When we support 3D textures, we'll definitely also support paletted 3D textures. I don't know if the Radeon HW actually supports paletted textures at all.

    Now, on a slightly related subject, when ATI put up their DX8 devrel material, I noticed something interesting about 3D textures on Radeon...
    http://www.ati.com/na/pages/resource.../DirectX8.html

    And I quote:

    The RADEON™ does not support multiresolution 3D textures (i.e. volume mip maps) or quadrilinear filtering.

    Interesting that they haven't really felt much need to mention this up until recently.

    - Matt

  8. #8
    Intern Newbie
    Join Date
    Sep 2000
    Posts
    41

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    Yes, the Radeon seems to have paid a price in flexibility for being the first out of the gates with volume texturing.

    I think the reason they can't do volume mipmaps probably has to do with the amount of filtering that would be involved in implementing MIPMAP_LINEAR for volumes. If I remember correctly, the Radeon's 3-texture pipeline is limited by the number of linear filters it can do. It can handle bilinear filtering on three textures, but if you turn on trilinear filtering for even one, then you can only do two simultaneous textures (albeit with trilinear on both). Since a volume texture already uses trilinear for regular sampling (whereas a 2D texture uses it only for MIPMAP_LINEAR), I think that mipmap interpolation for even a single volume texture would go over their limit of six directions of linear interpolation. It seems that it would be best to have the texture units be truly orthogonal, so that the filtering in each may be chosen freely.

    Fortunately, for my project, volume mipmapping is not required. Unfortunately paletted textures are absolutely critical...

  9. #9
    Senior Member OpenGL Pro
    Join Date
    Sep 2000
    Location
    Santa Clara, CA
    Posts
    1,096

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    Yes, true LINEAR_MIPMAP_LINEAR support requires quadrilinear filtering. But plain old LINEAR filtering is trilinear with 3D textures, so I think the lack of mipmap support may not be related to filtering concerns.

    ATI also doesn't advertise their trilinear restriction very openly... I believe they use two texture units to do trilinear, so you can do 1 trilinear and 1 bilinear but not 2 trilinear, and if this is correct, it actually constitutes a small cheat on their part -- similar to how the V5-5500 has been bashed in some circles for supporting only bilinear in combination w/ multitexture, causing its benchmarks to be slightly overstated.

    Bringing you your daily dose of FUD,

    - Matt

  10. #10
    Senior Member OpenGL Guru Humus's Avatar
    Join Date
    Mar 2000
    Location
    Stockholm, Sweden
    Posts
    2,345

    Re: Official OpenGL mechanism for vertex/pixel "shaders"

    Speaking of card weaknesses: when will we see an nVidia card with near-Matrox image quality? The Radeon is almost there, and the V5 isn't too far behind.
