Part of the Khronos Group
OpenGL.org


Thread: Official feedback on OpenGL 3.2 thread

  1. #81
    Junior Member Newbie
    Join Date
    Dec 2003
    Posts
    5

    Re: Official feedback on OpenGL 3.2 thread

Why have it as a PDF ... pleeease - put it in HTML on the opengl.org website.

    As HTML I can read it in the browser, cross-navigate links, use +/- to simply zoom in and out, bookmark pages, and have several pages open in the same window (at least in the Opera browser) ... and did I mention easily following links? I know I can follow links from Acrobat Reader, but it is such a pain, and I can't have two pages open at the same time (unless they immediately follow each other). I really can't understand why people keep monkeying about with PDFs when HTML is so much more user friendly. As for paper: it is such a waste for the environment - realize that. In a few months you will wish for a new spec again ... blah. Not to mention that such literature is best read when you need it - and that is when you code. You need something to do in bed - get yourself a wife *cough cough* /* just a joke - hope you don't mind :-) */

  2. #82
    Advanced Member Frequent Contributor plasmonster's Avatar
    Join Date
    Mar 2004
    Posts
    739

    Re: Official feedback on OpenGL 3.2 thread

    Thanks Khronos for a really nice update!

    Agree with Eric and Jan - quads will be sorely missed.

  3. #83
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948

    Re: Official feedback on OpenGL 3.2 thread

    Why have it as a PDF ... pleeease - put it in HTML on the opengl.org website.
    You can't style HTML like that PDF. It also wouldn't be anywhere near as printable.

  4. #84
    Junior Member Newbie
    Join Date
    Nov 2007
    Posts
    22

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    I understand why most of the deprecated features in OpenGL 3.x were deprecated, but there are a few that don't make sense to me. Could someone on the inside please explain why the following features were ripped out? The only plausible explanation seems to be that these features aren't in DX10, so they were dropped from OGL3 just for parity. I know that nothing has actually been removed from OGL3 -- I'm just looking for the reasons that came up when the ARB decided to put the following items on the deprecated list.

    1) Alpha test. Yes, we can put kill/discard statements in our fragment shaders, but alpha test has been in hardware forever and is faster than adding shader instructions. This feature also presents a negligible burden on driver developers.

    2) Quads. These are extremely useful for all kinds of things, and they're supported directly by the setup hardware. Yes, I know the hardware splits quads into two triangles internally, but being able to specify the GL_QUADS primitive saves us from either (a) having to add two more vertices to the four needed for each quad, or (b) adding an unnecessary index array to the vertex data for a list of disjoint quads. This feature is also trivial for driver writers to implement.

    3) Alpha, luminance, luminance/alpha, and intensity formats. These are very useful for specifying 1-2 channel textures! But the most important reason to keep these is that the hardware has remapping circuitry in the texture units that's independent of the shader units, so a shader doesn't have to be modified in order to work with an RGBA texture or a LA texture. If these formats can't be used, then two separate shaders would be necessary: one that operates on an RGBA sample, and another that reads an R or RG sample and then swizzles/smears to get the proper result.
    I'll attempt to answer some of this. Basically, OpenGL 3.x is geared towards Shader Model 4.0 and above hardware; it is not here to cater to old hardware with non-programmable components. As you suggest, this keeps up with the performance of DirectX and allows the API to better model itself around a programmable pipeline. For old hardware you'll just have to continue using OpenGL 2.1 with extensions, which makes sense really; you're not exactly losing out.

    Taking the above into account, this explains the removal of alpha blending, many people will use their own custom techniques in shaders that requires data to be passed differently etc. Alpha blending is easy to emulate in shaders and fully programmable hardware has no fixed support for this so there's no performance loss. Given this knowledge, it's a pain for developers of new hardware/OSes to have to implement something like alpha blending that simply makes life easier for 10% of their coders in order to achieve OpenGL certification. If OS developers see this as a pain, they won't adopt OpenGL, and that's bad for everyone.

    In regards to quads, again it's a pain for OS developers when, from your point of view, you can simply use GL_TRIANGLE_STRIP instead: you keep the same number of vertices and only have to adjust the vertex ordering slightly for the strip to be drawn correctly.

    In regards to texture formats: yes, they are quite simple to implement, and maybe dropping them was jumping the gun, but as others have said there are still plenty of extensions or alternatives that can be used in place of those dropped.

    Hopefully this explains some of the considerations that may have been involved in dropping these features.

  5. #85
    Junior Member Regular Contributor
    Join Date
    Jul 2000
    Location
    Roseville, CA
    Posts
    159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Scribe
    this explains the removal of alpha blending, many people will use their own custom techniques in shaders that requires data to be passed differently etc. Alpha blending is easy to emulate in shaders and fully programmable hardware has no fixed support for this so there's no performance loss.
    You don't know what you're talking about, and you're speaking to a long-time OpenGL expert as if he's some ignorant newbie. (And you seem to have some confusion between alpha testing and alpha blending.) All modern hardware still has explicit support for alpha testing that's independent of shaders. For example, in the G80+ architecture, the alpha test is accessed through hardware command registers 0x12EC (enable), 0x1310 (reference value, floating-point), and 0x1314 (alpha function, OpenGL enumerant). There is a small decrease in performance if you use discard in simple shaders instead of using the alpha test. (Although for long shaders, there is sometimes an advantage to using discard instead of the alpha test because subsequent texture fetches can be suppressed for the fragment if there aren't any further texture fetches that depend on them.) I think it's a mistake to remove access to a hardware feature that actually exists and is useful.
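    For readers following along, the fixed-function stage Eric describes boils down to a single per-fragment comparison against a reference value. A minimal sketch in plain C (the `AlphaFunc` enum and function names are illustrative stand-ins for OpenGL's alpha-function enumerants, not driver code):

    ```c
    #include <assert.h>

    /* Comparison functions, mirroring OpenGL's alpha-test enumerants. */
    typedef enum {
        AF_NEVER, AF_LESS, AF_EQUAL, AF_LEQUAL,
        AF_GREATER, AF_NOTEQUAL, AF_GEQUAL, AF_ALWAYS
    } AlphaFunc;

    /* Returns 1 if the fragment survives the alpha test, 0 if it is killed.
     * This one compare is the entire fixed-function stage. */
    static int alpha_test(AlphaFunc func, float ref, float alpha)
    {
        switch (func) {
        case AF_NEVER:    return 0;
        case AF_LESS:     return alpha <  ref;
        case AF_EQUAL:    return alpha == ref;
        case AF_LEQUAL:   return alpha <= ref;
        case AF_GREATER:  return alpha >  ref;
        case AF_NOTEQUAL: return alpha != ref;
        case AF_GEQUAL:   return alpha >= ref;
        default:          return 1; /* AF_ALWAYS */
        }
    }

    int main(void)
    {
        /* Typical foliage-style cutout: keep fragments with alpha > 0.5. */
        assert(alpha_test(AF_GREATER, 0.5f, 0.8f) == 1);
        assert(alpha_test(AF_GREATER, 0.5f, 0.2f) == 0);
        return 0;
    }
    ```

    Emulating this in GLSL means appending the equivalent of `if (!pass) discard;` to the fragment shader, which is exactly the extra shader-instruction cost under discussion.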

    Quote Originally Posted by Scribe
    In regards to quads, again it's a pain for OS developers when from your point of view, you simply need to use GL_TRIANGLE_STRIP, maintaining the same number of vertices and will simply have to adjust slightly the ordering of these vertices for the strip to be drawn correctly.
    Again, you don't know what you're talking about. Triangle strips cannot be used to replace quads that aren't connected to each other.
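    To make the index-array workaround from Eric's original point (b) concrete: disjoint quads can be drawn with GL_TRIANGLES by generating six indices per quad. A small sketch (the helper name is made up for illustration):

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Expand N disjoint quads (4 vertices each, wound 0-1-2-3) into triangle
     * indices (0-1-2 and 0-2-3 per quad) for a GL_TRIANGLES draw call.
     * 'out' must have room for quad_count * 6 indices.
     * Returns the number of indices written. */
    static size_t quads_to_triangles(size_t quad_count, unsigned *out)
    {
        size_t n = 0;
        for (size_t q = 0; q < quad_count; ++q) {
            unsigned base = (unsigned)(q * 4);
            out[n++] = base + 0; out[n++] = base + 1; out[n++] = base + 2;
            out[n++] = base + 0; out[n++] = base + 2; out[n++] = base + 3;
        }
        return n;
    }

    int main(void)
    {
        unsigned idx[12];
        size_t n = quads_to_triangles(2, idx);
        assert(n == 12);
        /* The second quad's triangles start at vertex 4. */
        assert(idx[6] == 4 && idx[7] == 5 && idx[8] == 6);
        return 0;
    }
    ```

    This is precisely the overhead being objected to: the vertex data stays at four vertices per quad, but an element buffer that GL_QUADS made unnecessary now has to be generated and uploaded.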

  6. #86
    Junior Member Regular Contributor
    Join Date
    Jul 2000
    Location
    Roseville, CA
    Posts
    159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by mfort
    Quote Originally Posted by Eric Lengyel
    3) Alpha, luminance, luminance/alpha, and intensity formats. These are very useful for specifying 1-2 channel textures! But the most important reason to keep these is that the hardware has remapping circuitry in the texture units that's independent of the shader units, so a shader doesn't have to be modified in order to work with an RGBA texture or a LA texture. If these formats can't be used, then two separate shaders would be necessary: one that operates on an RGBA sample, and another that reads an R or RG sample and then swizzles/smears to get the proper result.
    Solution to this is using R or RG textures with GL_EXT_texture_swizzle.
    IMO, once this ext. is in core we don't need I,IA,A textures.
    Would be nice to promote this extension to core one day.
    I agree. With the GL_EXT_texture_swizzle extension, the I, LA, and A formats are no longer necessary. But as you pointed out, this extension is not a core feature, so the problem I described still exists in OGL3. The proper solution would be to deprecate those texture formats *and* put the GL_EXT_texture_swizzle functionality in the core to avoid losing useful functionality.
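    For reference, what GL_EXT_texture_swizzle provides is a per-texture component remap applied before the shader ever sees the sample (configured via glTexParameteri in the real extension). This sketch just emulates that remap on a single RGBA sample in plain C; the type and function names are illustrative:

    ```c
    #include <assert.h>

    /* Swizzle sources, mirroring the extension's RED/GREEN/BLUE/ALPHA/ZERO/ONE. */
    typedef enum { SW_R, SW_G, SW_B, SW_A, SW_ZERO, SW_ONE } Swz;

    static float fetch(const float rgba[4], Swz s)
    {
        switch (s) {
        case SW_R:    return rgba[0];
        case SW_G:    return rgba[1];
        case SW_B:    return rgba[2];
        case SW_A:    return rgba[3];
        case SW_ZERO: return 0.0f;
        default:      return 1.0f; /* SW_ONE */
        }
    }

    /* Apply a 4-component swizzle: out[i] = select(in, pattern[i]). */
    static void swizzle(const float in[4], const Swz pattern[4], float out[4])
    {
        for (int i = 0; i < 4; ++i)
            out[i] = fetch(in, pattern[i]);
    }

    int main(void)
    {
        /* Emulate a legacy LUMINANCE_ALPHA texture from an RG texture:
         * L comes from R (smeared across RGB), A comes from G. */
        const Swz la_from_rg[4] = { SW_R, SW_R, SW_R, SW_G };
        float rg_sample[4] = { 0.25f, 0.75f, 0.0f, 1.0f };
        float out[4];
        swizzle(rg_sample, la_from_rg, out);
        assert(out[0] == 0.25f && out[1] == 0.25f && out[2] == 0.25f);
        assert(out[3] == 0.75f);
        return 0;
    }
    ```

    With that remap sitting in the texture unit, the same shader works unchanged whether it samples an RGBA texture or an RG one, which is exactly the single-shader property being argued for.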

  7. #87
    Junior Member Regular Contributor
    Join Date
    Aug 2007
    Location
    Adelaide, South Australia
    Posts
    206

    Re: Official feedback on OpenGL 3.2 thread


    AMD_vertex_shader_tessellator and the DX11 tessellator both support the tessellation of quad patches.
    But the core profile does not allow quads.
    So does that mean that vendors are not allowed to provide a tessellation extension for the core profile?

  8. #88
    Senior Member OpenGL Guru
    Join Date
    Dec 2000
    Location
    Reutlingen, Germany
    Posts
    2,042

    Re: Official feedback on OpenGL 3.2 thread

    Good point!

    I hope to see something like AMD_vertex_shader_tessellator in core (or as an ARB extension) in the near future. In that case quads are the foundation for quad patches, and thus will certainly be included again anyway.

    Jan.
    GLIM - Immediate Mode Emulation for GL3

  9. #89
    Intern Contributor
    Join Date
    Aug 2009
    Posts
    66

    Re: Official feedback on OpenGL 3.2 thread

    The new version is good news for OpenGL.
    And I love the new features.

    But there is still stuff that OpenGL is missing or could do better, in my opinion.
    What follows are suggestions for improving OpenGL.
    They are ideas and thoughts, nothing more.

    Please add tessellation in the next version of OpenGL.
    And quads and quad patches.

    Remove the binding system; it's a horrible thing.
    Add/enable atomic operations.



    Use the version numbers in an API-oriented way!
    What do I mean? Let me explain:

    Normally the first number, the major version, signals a big change: a rewrite, an incompatibility.
    The second signals a batch of new features and bug fixes.
    The third is a bug-fix, minor increment.
    The fourth is a build number.

    x.y.z.b

    When there are API additions, increment y.
    Only when incompatibility must arise, collect as much such stuff as possible and make one big leap: increment x.
    Remove as much deprecated functionality as possible at that point.


    Following this logic, OpenGL would be:
    OpenGL 3.2 => OpenGL 2.4
    which is quite different from the current version scheme.

    The ARB is currently using profiles for this, which is not a good idea, because everything stays in the specification,
    thereby growing the specification without streamlining it.
    The deprecation mechanism is a good improvement over having nothing, but there needs to be a clear cut-off.

    Dump the compatibility profiles; get rid of them.
    I have nothing against profiles in general;
    they just don't work for this problem.
    We should simply work with versions.

    What about the legacy stuff and compatibility?
    The specification should use major versions for this.
    The major versions could be implemented side by side by drivers to provide compatibility versions.
    Which versions are available is the driver creator's responsibility.

    Every application asks for a certain OpenGL version,
    and thereby asks for a certain context to run in.
    The driver can check its versions and execute the program with the right one.
    Support then depends on the driver, not on specification profiles any more.
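    The driver-side lookup this scheme implies could be modeled as "pick the highest shipped version with the requested major" (a hypothetical sketch; real context creation goes through platform entry points such as glXCreateContextAttribsARB, which this does not attempt to reproduce):

    ```c
    #include <assert.h>
    #include <stddef.h>

    typedef struct { int major, minor; } GLVersion;

    /* Given the versions a (hypothetical) driver ships side by side, pick the
     * best match for the requested context: same major version, highest minor
     * that is >= the requested minor. Returns an index into 'avail', or -1. */
    static int pick_context(const GLVersion *avail, size_t n, GLVersion want)
    {
        int best = -1;
        for (size_t i = 0; i < n; ++i) {
            if (avail[i].major != want.major) continue;
            if (avail[i].minor < want.minor) continue;
            if (best < 0 || avail[i].minor > avail[best].minor)
                best = (int)i;
        }
        return best;
    }

    int main(void)
    {
        const GLVersion shipped[] = { {2,1}, {3,0}, {3,2} };
        GLVersion want21 = {2,1}, want31 = {3,1}, want40 = {4,0};
        assert(pick_context(shipped, 3, want21) == 0);  /* legacy app gets 2.1 */
        assert(pick_context(shipped, 3, want31) == 2);  /* a 3.1 app runs on 3.2 */
        assert(pick_context(shipped, 3, want40) == -1); /* 4.x not shipped */
        return 0;
    }
    ```

    Under the proposal, same-major versions stay compatible, so serving a 3.1 application with a 3.2 context is safe, while a major the driver never shipped simply fails.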

    Deprecated features don't have to be in a compatibility profile of the newest version, because the driver can implement an older version on which those applications run without complaining. Users and applications won't notice a thing about removed deprecated features.

    This allows the specification to remove deprecated functionality completely in newer major versions while keeping backwards compatibility!

    For backwards compatibility, the specification could state that the earlier OpenGL versions can be provided by the driver, and point to those earlier specifications.

    This does a much better job than a compatibility profile, doesn't it?


    For the current situation, a good version roadmap would be to continue the 3.y line,
    add this versioning scheme to the 2.x specification,
    and start drafting a 4.y line with a new, revamped, clean and lean API.
    (Let 4 spend a long, very long time in the drafting process, to iron everything out properly.)
    The 4.y line can get:

    - A cleaner API:
    In the 4.y line, remove the deprecated features; no compatibility profile there.
    Shipping the 3.y context plus its compatibility profile in the drivers would serve as the compatibility mode for 4.y.


    - A leaner API:

    For example (there are without a doubt other, and bigger, such improvements possible): currently there is a command for cube maps and a separate command for seamless cube maps.
    In OpenGL 4.y there would only be one command, for seamless cube maps, under the shortest name (right now the seamless variant has the longer name). Only if really necessary should cube maps with seams be added back, under a longer name,
    thus discouraging developers from using them.


    What does everybody think of this idea: using major versions to provide backwards compatibility AND API cleanup?

  10. #90
    Member Regular Contributor
    Join Date
    May 2001
    Posts
    348

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    All modern hardware still has explicit support for alpha testing that's independent of shaders.
    Not true. But even if you had explicit support for the alpha test in specific hardware, you could extract the necessary information from the shader code during compilation and convert a discard into a fixed-function alpha test where it makes sense.

    As hardware becomes more and more programmable, it is a good idea to get rid of state that might require the driver to modify shaders on the fly. Ideally the driver would know ahead of time all state that may affect shader execution, so that it can all be compiled into shader code.
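    The other direction, baking alpha-test state into the shader at compile time, could look roughly like this toy source-level patch. It is purely illustrative: real drivers work on compiled IR, not source text, and the GLSL names (`fragColor`, `texture`, `uv`) are placeholders:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Inject a GL_GREATER-style alpha test into GLSL fragment shader source
     * by inserting a discard just before the source's final closing brace.
     * Assumes the last '}' closes main(). Returns 0 on success. */
    static int bake_alpha_test(const char *src, float ref, char *out, size_t cap)
    {
        const char *end = strrchr(src, '}');
        if (!end) return -1;
        size_t head = (size_t)(end - src);
        int n = snprintf(out, cap, "%.*s    if (fragColor.a <= %f) discard;\n}",
                         (int)head, src, ref);
        return (n > 0 && (size_t)n < cap) ? 0 : -1;
    }

    int main(void)
    {
        const char *src =
            "void main() {\n"
            "    fragColor = texture(tex, uv);\n"
            "}";
        char patched[512];
        assert(bake_alpha_test(src, 0.5f, patched, sizeof patched) == 0);
        assert(strstr(patched, "discard") != NULL);
        return 0;
    }
    ```

    The cost being pointed out is that the driver must recompile (or keep variants of) the shader whenever the alpha-test state changes, which is why state that silently rewrites shaders is unpopular with driver writers.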
