Thread: Official feedback on OpenGL 3.2 thread

  1. #91
    Junior Member Regular Contributor
    Join Date
    Jan 2004
    Location
    Czech Republic, EU
    Posts
    190

    Re: Official feedback on OpenGL 3.2 thread

    It's not easy to get rid of alpha test. Hardware must support all the applications written using DX9 or GL1, where this feature is present and used extensively. Therefore, some hardware support is expected to be there for some time...

    BTW, ATI R6xx-R7xx cards (i.e. all recent ones) support alpha test too, see:
    Radeon R6xx/R7xx 3D Register Reference Guide, page 126
    Radeon R6xx/R7xx Acceleration, page 7, section 2.1.4
    (usually just hobbyist) OpenGL driver developer

  2. #92
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948

    Re: Official feedback on OpenGL 3.2 thread

    Larrabee won't.

    I get your point with alpha test. And personally, I would have kept both alpha test and quads. But the reason for removing them is to make it that much easier for future hardware that won't have this kind of fixed-function thing.

    OpenGL has always been an odd compromise between what happened before, what is now, and what things are trying to be in the future. That's why GLSL 1.30 uses the in/out terminology rather than attribute, varying, and other stage-dependent terms.

    If some future hardware that doesn't support alpha test has to modify your shader every time you turn it on/off, you'd rather it were embedded in the shader to begin with.

    Now personally, I would have done it by actually embedding it in the shader. That is, at link time, you can specify alpha test parameters. That way, it will always work like that, and the implementation can implement it in the most efficient way possible.
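
    (For illustration only, a minimal sketch of what that emulation looks like today under the core profile: the test is folded into the fragment shader with discard, and the reference value arrives through a uniform. The alphaRef and texCoord names are made up for this example.)

        /* Sketch only: core-profile stand-in for glAlphaFunc(GL_GREATER, ref). */
        static const char *frag_src =
            "#version 150 core\n"
            "uniform sampler2D tex;\n"
            "uniform float alphaRef;\n"      /* hypothetical uniform holding the reference value */
            "in vec2 texCoord;\n"
            "out vec4 fragColor;\n"
            "void main() {\n"
            "    vec4 color = texture(tex, texCoord);\n"
            "    if (color.a <= alphaRef)\n" /* fails the 'GL_GREATER' test... */
            "        discard;\n"             /* ...so the fragment is killed   */
            "    fragColor = color;\n"
            "}\n";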

  3. #93
    Junior Member Regular Contributor
    Join Date
    Jul 2000
    Location
    Roseville, CA
    Posts
    159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Xmas
    Quote Originally Posted by Eric Lengyel
    All modern hardware still has explicit support for alpha testing that's independent of shaders.
    Not true.
    Why do you feel you are qualified to tell me that my statement is not true? Do you write drivers for Nvidia or AMD? We have now given you the actual hardware register numbers where alpha test is explicitly supported in the latest chips from both Nvidia and AMD, so you are obviously wrong. I know what I'm talking about, but you're just making claims that you can't back up.

  4. #94
    Senior Member OpenGL Guru
    Join Date
    Dec 2000
    Location
    Reutlingen, Germany
    Posts
    2,042

    Re: Official feedback on OpenGL 3.2 thread

    Eric, calm down. The people who know you know that you are right. But this is still a forum, so people with different levels of knowledge come together, and not everybody knows who you are and what you do, and thus doesn't know how seriously to take your claims.

    Scribe really just wanted to help out, and maybe Xmas had contradictory information from some source, too. I'm pretty sure they were quite surprised by your harsh reply.

    Now back to business.

    Jan.
    GLIM - Immediate Mode Emulation for GL3

  5. #95
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    590

    Re: Official feedback on OpenGL 3.2 thread

    I have one comment about the alpha testing deal: considering that nvidia (and I imagine ATI too) will support the compatibility profile, you do get alpha test back in hardware, though ironically one has to use a compatibility profile to access a hardware feature.... I feel bad for the new hardware vendors when they do a desktop GL driver: they almost have two standards to deal with.
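
    (For what it's worth, a minimal sketch of asking for that compatibility context on Windows, assuming the WGL_ARB_create_context_profile extension is available and the function pointer was already fetched with wglGetProcAddress; the create_compat_context helper name is made up here.)

        #include <windows.h>
        #include <GL/gl.h>

        #define WGL_CONTEXT_MAJOR_VERSION_ARB             0x2091
        #define WGL_CONTEXT_MINOR_VERSION_ARB             0x2092
        #define WGL_CONTEXT_PROFILE_MASK_ARB              0x9126
        #define WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB 0x00000002

        typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int *);

        /* Sketch: request a 3.2 compatibility context (keeps alpha test, quads, etc.). */
        HGLRC create_compat_context(HDC dc, PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB)
        {
            const int attribs[] = {
                WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
                WGL_CONTEXT_MINOR_VERSION_ARB, 2,
                WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
                0
            };
            return wglCreateContextAttribsARB(dc, NULL, attribs);  /* NULL = no shared context */
        }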

    as for killing off alpha test in GL3.x... my bet is that it's because it is not in GLES2. That might seem like an odd reason, but perhaps there is some long-term dream of unifying all the GLs we have (the breakdown below is oversimplified too):
    1. Desktop GL: 1.x/2.x, 3.x-core, 3.x-compatibility
    2. GLES: GLES 1.x, GLES 2.x
    3. Safety Critical GL

    though I cannot imagine how we could ever get GL 3.x to play with GL SC.

  6. #96
    Intern Contributor
    Join Date
    Aug 2009
    Posts
    66

    Re: Official feedback on OpenGL 3.2 thread

    This post adds to my previous post:
    Feedback on OpenGL 3.2


    The same numbering problems apply to the OpenGL ES family.
    (And other OpenGL stuff in general.)

    (Best practice also counts for standards in general.)

    Don't use version numbers to express differences in functionality.
    What is going to happen if OpenGL 1.y needs a major overhaul? Continuing the numbering won't be possible. Using profiles would be the way to go here!

    OpenGL ES 2.y can stay the same.

    A new version of OpenGL ES, presumably 3.y, could in a major revision offer reduced functionality compared to OpenGL but with exactly the same syntax, and would become a subset of OpenGL.

    Backwards-compatibility concerns?
    Drivers could provide OpenGL ES 2.y and 3.y side by side,
    allowing legacy and new programs to run without problems.

    A programmer writing OpenGL code could check whether his code is also completely covered by OpenGL ES,
    making porting very easy if ES has enough scope.

    OpenGL ES 1.y would be better off with an extra tag:
    e.g. OpenGL ES FF (Fixed Function), or something like that.


    Hardware vendors can be happy with this. Why?
    Because they can slap more stickers on their product and give the impression it works with more stuff and does more.

    e.g. a typical OpenGL graphics card would also get a sticker for OpenGL ES.

    e.g. a graphics card could be compliant with:
    OpenGL 3.y, OpenGL 4.y, OpenGL ES 3.y, WebGL and OpenCL

    That's a lot more than just OpenGL, WebGL and OpenCL.


    ----------------

    The binding system and the (bad/non/mis-use of the) numbering scheme are currently the two biggest problems of OpenGL.

  7. #97
    Advanced Member Frequent Contributor arekkusu
    Join Date
    Nov 2003
    Posts
    781

    Re: Official feedback on OpenGL 3.2 thread

    Eric, Xmas's point is valid. Nvidia and AMD are not the only companies making "modern" hardware. Consider embedded hardware and ES 2.0.

  8. #98
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by arekkusu
    Eric, Xmas's point is valid. Nvidia and AMD are not the only companies making "modern" hardware. Consider embedded hardware and ES 2.0.
    True, but OpenGL does not run on the hardware that OpenGL ES is implemented on, and vice versa. The whole point of having two separate specifications is to allow each to best serve the needs of its clients.

    Alpha test is available on all desktop graphics hardware. Regular OpenGL is meant to serve desktop graphics hardware. Therefore, it should expose it. Alpha test may not be available on certain embedded systems hardware. OpenGL ES is meant to serve the needs of embedded systems. Thus, it makes sense for OpenGL ES to not support it.

    Joining OpenGL and OpenGL ES is a bad idea so long as the hardware they support has substantive differences. It's one thing to have API similarity, such that similar functionality on one works in the same way on the other; sharing GLSL is an example. But limiting functionality on one because of the other is just a terrible idea.

  9. #99
    Member Regular Contributor
    Join Date
    Apr 2004
    Location
    UK
    Posts
    420

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Jan
    Good point!
    I hope to see something like AMD_vertex_shader_tessellator in core (or ARB) in the near future. In that case quads are the foundation for quadpatches and thus will certainly be included again, anyway.
    If this happens then it would have to be part of some greater Tessellation Shader type setup anyway; the AMD tessellator and what is coming in D3D11 hardware are not the same thing, and the currently exposed AMD extension is 1/3 of the functionality in terms of pipeline stages.

    As a side note, can anyone confirm GL3.1 support from AMD/ATI in the recent Cat drivers, and if so, on what OS?
    I'm currently unable to confirm this using the Cat9.8 'beta' drivers, nor Cat9.7 or Cat9.6, on Win7 x64 (GL Extension Viewer reports 2.1 and 3.1 forward-compatible, GL Caps Viewer gives 2.1), yet others are saying that they have support for 3.1.
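
    (For anyone else checking, a small sketch of querying what the driver actually handed back; the GL_MAJOR_VERSION/GL_MINOR_VERSION queries exist from GL 3.0 onward, and the tokens are defined here in case the local headers predate them. The print_gl_version name is just for the example.)

        #include <stdio.h>
        #include <GL/gl.h>

        #ifndef GL_MAJOR_VERSION
        #define GL_MAJOR_VERSION 0x821B
        #define GL_MINOR_VERSION 0x821C
        #endif

        /* Call with a current context; prints both the version string and the numeric query. */
        void print_gl_version(void)
        {
            GLint major = 0, minor = 0;
            glGetIntegerv(GL_MAJOR_VERSION, &major);
            glGetIntegerv(GL_MINOR_VERSION, &minor);
            printf("GL_VERSION: %s\n", (const char *)glGetString(GL_VERSION));
            printf("Context:    %d.%d\n", (int)major, (int)minor);
        }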

    (Also, the text input box is HORRIBLY screwed when using IE8 on Win7, so much so that once I got past "what is coming in D3D11 hardware are not the same thing, the" I had to resort to finishing my post in Notepad because the text box kept jumping up and down as I typed.)

  10. #100
    Junior Member Newbie
    Join Date
    Nov 2007
    Posts
    22

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    Quote Originally Posted by Scribe
    this explains the removal of alpha blending; many people will use their own custom techniques in shaders that require data to be passed differently, etc. Alpha blending is easy to emulate in shaders, and fully programmable hardware has no fixed support for this, so there's no performance loss.
    You don't know what you're talking about, and you're speaking to a long-time OpenGL expert as if he's some ignorant newbie. (And you seem to have some confusion between alpha testing and alpha blending.) All modern hardware still has explicit support for alpha testing that's independent of shaders. For example, in the G80+ architecture, the alpha test is accessed through hardware command registers 0x12EC (enable), 0x1310 (reference value, floating-point), and 0x1314 (alpha function, OpenGL enumerant). There is a small decrease in performance if you use discard in simple shaders instead of using the alpha test. (Although for long shaders, there is sometimes an advantage to using discard instead of the alpha test because subsequent texture fetches can be suppressed for the fragment if there aren't any further texture fetches that depend on them.) I think it's a mistake to remove access to a hardware feature that actually exists and is useful.

    Quote Originally Posted by Scribe
    In regards to quads, again it's a pain for OS developers when from your point of view, you simply need to use GL_TRIANGLE_STRIP, maintaining the same number of vertices and will simply have to adjust slightly the ordering of these vertices for the strip to be drawn correctly.
    Again, you don't know what you're talking about. Triangle strips cannot be used to replace quads that aren't connected to each other.
    My apologies, I misread alpha testing as blending. Though on that note, there are many advantages in a likely production environment to using shader-based alpha tests, such as avoiding aliasing by allowing the fragment to be sampled, etc. As you say, any potential performance impact ranges from a minimal loss in the worst case to a gain in the best case, and there are quality gains to be had in certain situations. It's worth reminding everyone that fixed-function alpha testing was also removed in DirectX 10 almost 3 years ago and is no longer required for hardware to be compliant. As such it is likely that future hardware will choose to drop fixed support in favour of room for an extra shader core (or something along those lines). Given this I think dropping support in OpenGL was the right move; it allows new developers to future-proof and standardise the way they handle alpha testing. On the other hand, if you really want the extra performance from supporting hardware, this is exactly what the compatibility profile is for, or the use of extensions on a 2.x context.
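
    (For reference, a minimal sketch of the fixed-function path being discussed, the old two-call setup still reachable through a compatibility or 2.x context:)

        #include <GL/gl.h>

        /* Legacy fixed-function alpha test: keep fragments whose alpha exceeds 0.5. */
        void enable_legacy_alpha_test(void)
        {
            glAlphaFunc(GL_GREATER, 0.5f);
            glEnable(GL_ALPHA_TEST);
        }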

    In regards to QUADs, again I apologise; I did not realise that you were concerned from a tessellation/geometry perspective. Obviously for simple geometry and texturing they are equivalent. Perhaps when tessellation is further standardised and moved to core we'll see a new primitive like QUAD_PATCH or something. I can only suspect that keeping QUADs would have caused a little confusion with regard to implementation, as the way drivers handle them can vary, and an extension that takes in QUADs as a set of connected vertices would require the data to be fed in in a specific manner. So again I would suggest this was perhaps done just for the sake of semantics and to limit implementation confusion where standardisation could have been difficult.
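
    (Just to make the disconnected-quads point concrete, a sketch of the mechanical fallback: independent quads can't be merged into one strip, but they convert to indexed GL_TRIANGLES, two triangles per quad. Shown with a client-side index array for brevity; a core context would want those indices in an element array buffer, and the draw_quads_as_triangles helper is made up for the example.)

        #include <GL/gl.h>

        /* Sketch: each quad is 4 consecutive vertices; emit 6 triangle indices per quad. */
        void draw_quads_as_triangles(GLuint *indices, GLsizei quad_count)
        {
            GLsizei q;
            for (q = 0; q < quad_count; ++q) {
                GLuint base = (GLuint)(q * 4);
                GLuint *out = indices + q * 6;
                out[0] = base + 0;  out[1] = base + 1;  out[2] = base + 2;
                out[3] = base + 0;  out[4] = base + 2;  out[5] = base + 3;
            }
            glDrawElements(GL_TRIANGLES, quad_count * 6, GL_UNSIGNED_INT, indices);
        }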

    In relation to Xmas' comment on hardware alpha testing, there are other cards by companies such as VIA/S3, Intel and SiS. Whilst it is possible that these companies' latest cards also implement alpha testing in hardware, it could also be implemented via software emulation. Perhaps this is what he was getting at?
