Part of the Khronos Group
OpenGL.org


Page 4 of 173
Results 31 to 40 of 1724

Thread: OpenGL 3 Updates

  1. #31
    Junior Member Regular Contributor
    Join Date
    Jan 2005
    Posts
    182

    Re: OpenGL 3 Updates

    Quote Originally Posted by JoeDoe
    As for me, I would like a simple drawing mechanism via glBegin/glVertex2(3)/glEnd. It's a very convenient way of drawing, for example, a fullscreen quad with a texture on it. I don't like the idea that I must create a vertex buffer, fill it with the appropriate vertex data, and store this object as a member in my classes. Or, for example, if I need to change the size of a fullscreen quad, I must re-fill the buffer, and so on...

    Don't kill the glBegin/glEnd approach; keep it, if only for convenience!
    There's no place for that in OpenGL any more. Convenience stuff can go into GLU (and I hope it will instead of just disappearing).

    Philipp
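    For reference, the buffer-object path JoeDoe objects to is not much code. Below is a minimal sketch (the helper name and vertex layout are illustrative, not from any GL 3 proposal) of what replaces the glBegin/glEnd quad; resizing the quad means re-running the fill and re-uploading, rather than re-issuing immediate-mode calls:

    ```c
    /* Fill an 8-float array with the corners of a screen-aligned quad of
     * half-extent `size`, in triangle-strip order. Re-running this and
     * re-uploading with glBufferSubData replaces the old
     * glBegin(GL_QUADS)/glVertex2f/glEnd sequence. */
    static void fill_quad(float *v, float size)
    {
        v[0] = -size; v[1] = -size;   /* bottom-left  */
        v[2] =  size; v[3] = -size;   /* bottom-right */
        v[4] = -size; v[5] =  size;   /* top-left     */
        v[6] =  size; v[7] =  size;   /* top-right    */
    }

    /* Upload and draw side (requires a live GL context, shown for shape only):
     *   glGenBuffers(1, &vbo);
     *   glBindBuffer(GL_ARRAY_BUFFER, vbo);
     *   glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(float), v, GL_DYNAMIC_DRAW);
     *   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
     */
    ```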

  2. #32
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: OpenGL 3 Updates

    I thought the legacy stuff like modelview and projection matrices, light parameters etc. would go away!
    They are. But glslang compilers can still recognize these bits of state, should the user of legacy glslang code wish to use that legacy state.

    What there won't be is actual context state for these. You have to provide the data as uniforms if you want to use them.
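    In practice that means you build the matrices yourself and upload them as ordinary uniforms. A minimal sketch (the function name is my own; the column-major layout is what glUniformMatrix4fv expects with transpose set to GL_FALSE):

    ```c
    #include <string.h>

    /* Replacement for glTranslatef feeding the built-in gl_ModelViewMatrix:
     * build the column-major 4x4 matrix on the CPU and hand it to the
     * shader as a plain uniform. */
    static void translation_matrix(float m[16], float x, float y, float z)
    {
        static const float identity[16] = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 1
        };
        memcpy(m, identity, sizeof identity);
        m[12] = x;  /* column-major: translation lives in the last column */
        m[13] = y;
        m[14] = z;
    }

    /* With a GL context:
     *   GLint loc = glGetUniformLocation(prog, "u_modelview");
     *   glUniformMatrix4fv(loc, 1, GL_FALSE, m);
     */
    ```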

  3. #33
    Senior Member OpenGL Guru Humus's Avatar
    Join Date
    Mar 2000
    Location
    Stockholm, Sweden
    Posts
    2,345

    Re: OpenGL 3 Updates

    Quote Originally Posted by Korval
    No. It's much faster to have real alpha test than to do it in shaders.
    No, it's faster to kill fragments in the shader. I'm not aware of any reason why alpha test should be faster. Alpha test forces the shader to be executed in full, while discard can be optimized to various degrees. If you issue the discard at the top of the shader, the hardware can potentially early-out; I believe the GeForce 8000 series does this, for instance. ATI cards do run the entire shader; however, at least all texture lookups are killed for dead fragments, which saves bandwidth. That's supported in the X1000 series and up, IIRC (it might be supported in the X800 generation too, I don't remember exactly).

    It's worth noting that DX10 has removed alpha test. I think it's a good candidate for removal from OpenGL as well.
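    For readers unfamiliar with the fixed-function stage being debated, its behavior is easy to model. The sketch below (a local enum stands in for the glAlphaFunc comparison constants so no GL headers are needed) shows what the hardware decides per fragment; in a shader the GL_GEQUAL case becomes `if (color.a < ref) discard;`:

    ```c
    /* Model of the fixed-function alpha test: the fragment survives only
     * if `alpha` compares true against the reference value `ref` under the
     * selected function. A subset of the glAlphaFunc modes is shown. */
    enum cmp { CMP_NEVER, CMP_LESS, CMP_GEQUAL, CMP_ALWAYS };

    static int alpha_test_passes(enum cmp func, float ref, float alpha)
    {
        switch (func) {
        case CMP_NEVER:  return 0;             /* kill everything */
        case CMP_LESS:   return alpha <  ref;
        case CMP_GEQUAL: return alpha >= ref;  /* the common cutout case */
        case CMP_ALWAYS: return 1;             /* test disabled */
        }
        return 0;
    }
    ```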

  4. #34
    Senior Member OpenGL Guru Humus's Avatar
    Join Date
    Mar 2000
    Location
    Stockholm, Sweden
    Posts
    2,345

    Re: OpenGL 3 Updates

    Quote Originally Posted by Korval
    Because it's redundant in the face of alpha test?
    Alpha test is the redundant feature. It's less flexible and offers no advantage over shader-controlled fragment kills. With alpha test you burn an output channel that could be used for something useful. Alpha test is a legacy from the fixed-function days.

    Quote Originally Posted by Korval
    You may as well say that you should do depth testing in shaders simply because you can.
    Except that doing so kills z-optimizations and is incompatible with multisampling. There are no particular "alpha-test optimizations" that the hardware does; rather, alpha test is more of a headache for the hardware.

    Quote Originally Posted by Korval
    That alpha test hardware is there, it's fast, and it doesn't require a shader conditional followed by a discard. There's no reason not to expose it.
    The conditional could be removed if OpenGL exposed a clip() equivalent that's available in DX, if the compiler isn't smart enough to convert your conditional to a clip() under the hood anyway. Btw, I disagree with the "fast" part. It's slow and provided for compatibility with legacy applications. It's a good candidate for removal from hardware as well.
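    The clip() semantics Humus refers to are simple: the fragment is killed whenever the argument is negative, so a "pass if alpha >= ref" test maps to clip(alpha - ref) with no visible branch in the shader source. A CPU-side model of that rule (the function name here is mine, not a GL API):

    ```c
    /* D3D's clip(x) discards the fragment when x < 0; this models the
     * survival condition. The alpha-test pattern "pass if alpha >= ref"
     * becomes clip(alpha - ref). */
    static int clip_survives(float x)
    {
        return x >= 0.0f;  /* fragment is killed when the argument is negative */
    }
    ```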

  5. #35
    Super Moderator OpenGL Lord
    Join Date
    Dec 2003
    Location
    Grenoble - France
    Posts
    5,580

    Re: OpenGL 3 Updates

    Quote Originally Posted by PkK
    Quote Originally Posted by Khronos_webmaster
    [*]- S3TC is a required texture compression format
    I think that's a really bad idea if you want to release OpenGL 3 before 2017.
    S3TC is patented. It would be impossible to implement OpenGL without a patent licence.
    For example: Mesa can't implement OpenGL 3, and thus all the Linux drivers based on it won't support OpenGL 3.

    If OpenGL 3 requires S3TC, the free software community will have to create its own 3D API instead of using OpenGL.

    Philipp
    Indeed, S3TC is a strange requirement. Who is actually using it, by the way? I'm asking sincerely.

  6. #36
    Junior Member Newbie
    Join Date
    Oct 2006
    Location
    Sweden
    Posts
    23

    Re: OpenGL 3 Updates

    Quote Originally Posted by ZbuffeR
    Indeed, S3TC is a strange requirement. Who is actually using it, by the way? I'm asking sincerely.
    Every major game out there for the last 5 years. The performance increase it offers over uncompressed textures is pretty dramatic, and as a bonus it consumes less memory. The slight quality loss is very often worth it.
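    The scale of those savings follows directly from the format definitions: DXT1 packs each 4x4 texel block into 8 bytes (0.5 byte/texel) and DXT3/DXT5 into 16 bytes (1 byte/texel), versus 4 bytes/texel for uncompressed RGBA8. A quick footprint calculation (helper names are mine):

    ```c
    /* Bytes for an uncompressed RGBA8 texture level: 4 bytes per texel. */
    static unsigned long rgba8_bytes(unsigned w, unsigned h)
    {
        return (unsigned long)w * h * 4;
    }

    /* Bytes for an S3TC/DXT level: the image is stored as 4x4 texel
     * blocks, each occupying `block_bytes` (8 for DXT1, 16 for DXT3/5). */
    static unsigned long dxt_bytes(unsigned w, unsigned h, unsigned block_bytes)
    {
        unsigned bw = (w + 3) / 4, bh = (h + 3) / 4;  /* round up to blocks */
        return (unsigned long)bw * bh * block_bytes;
    }
    ```

    A 2048x2048 RGBA8 texture is 16 MB; as DXT1 it drops to 2 MB, an 8:1 saving in both memory and sampling bandwidth.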

  7. #37
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: OpenGL 3 Updates

    Indeed, S3TC is a strange requirement.
    Not really.

    Take a good look at this forum. How many developers mistakenly look upon extensions as something to be avoided where possible? They treat extensions as conditionally supported, even those that are supported by virtually all implementations.

    S3TC is basic functionality, whether patented or not. It is expected of any hardware, and not having it be a core feature suggests to people that it is not widely available. This is not the image that GL 3.0 needs to project.

    Every major game out there for the last 5 years.
    Oh, I think longer than that.

    S3TC started showing up around the transition to the GeForce 2. And it was such a popular idea that Quake 3 was patched to allow for it. Even today, most games use S3TC where they can. Plus, S3TC artifacts aren't nearly as noticeable on higher-resolution textures.

  8. #38
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: OpenGL 3 Updates

    A format object has to be specified per texture attachment when a Program Environment Object is created. This helps minimize the shader re-compiles the driver might have to do when it discovers that the combination of shader and texture formats isn't natively supported by the hardware.
    Hmmm.

    Now, I understand perfectly well why this is done. Instances of programs (I really wish you would call them instances rather than "environment objects") may provoke recompilation based on what format of texture you use. And you don't want to allow for the possibility of provoking recompilation at times that aren't conceptually setup time (i.e., while using a program instance rather than building one).

    That being said... there's got to be a better way. The problem here is simple: I have to create an entirely new instance if I want to switch a texture from RGBA32 to, say, S3TC. Things that don't make a difference with regard to program recompilation are being swept up with things that do.

    Maybe you could have it so that attaching a texture object/sampler pair to a program instance slot can fail. That way, you can mix and match formats so long as it isn't "too different", which is implementation defined.

  9. #39
    Super Moderator OpenGL Guru
    Join Date
    Feb 2000
    Location
    Montreal, Canada
    Posts
    4,264

    Re: OpenGL 3 Updates

    Quote Originally Posted by Humus
    Quote Originally Posted by Korval
    Because it's redundant in the face of alpha test?
    Alpha test is the redundant feature. It's less flexible and offers no advantage over shader-controlled fragment kills. With alpha test you burn an output channel that could be used for something useful. Alpha test is a legacy from the fixed-function days.

    Quote Originally Posted by Korval
    You may as well say that you should do depth testing in shaders simply because you can.
    Except that doing so kills z-optimizations and is incompatible with multisampling. There are no particular "alpha-test optimizations" that the hardware does; rather, alpha test is more of a headache for the hardware.

    Quote Originally Posted by Korval
    That alpha test hardware is there, it's fast, and it doesn't require a shader conditional followed by a discard. There's no reason not to expose it.
    The conditional could be removed if OpenGL exposed a clip() equivalent that's available in DX, if the compiler isn't smart enough to convert your conditional to a clip() under the hood anyway. Btw, I disagree with the "fast" part. It's slow and provided for compatibility with legacy applications. It's a good candidate for removal from hardware as well.
    Alpha test and fragment kill are not the same.
    With alpha test, you have control over the comparison function (glAlphaFunc) and can provide a reference alpha value.

    With discard, it depends on the GPU, but in the first spec that came out (ARB_fragment_shader), a fragment was killed if a component was negative.

    Is alpha func actually done in hardware, or is it simulated with shaders?
    ------------------------------
    Sig: http://glhlib.sourceforge.net
    an open source GLU replacement library. Much more modern than GLU.
    float matrix[16], inverse_matrix[16];
    glhLoadIdentityf2(matrix);
    glhTranslatef2(matrix, 0.0, 0.0, 5.0);
    glhRotateAboutXf2(matrix, angleInRadians);
    glhScalef2(matrix, 1.0, 1.0, -1.0);
    glhQuickInvertMatrixf2(matrix, inverse_matrix);
    glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
    glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

  10. #40
    Junior Member Newbie
    Join Date
    Jul 2005
    Posts
    3

    Re: OpenGL 3 Updates

    Why is DOUBLE being dropped? We know that future hardware will be able to support it, regardless. Isn't OpenGL supposed to be a forward looking standard?

    Will EXT framebuffer object be core?

    Nevertheless, I am looking forward to GL3, and to seeing how much it parallels OpenGL ES 2.
