Part of the Khronos Group
OpenGL.org

The Industry's Foundation for High Performance Graphics

from games to virtual reality, mobile phones to supercomputers


Thread: Official feedback on OpenGL 3.2 thread

  1. #111
    Junior Member Regular Contributor
    Join Date
    Jul 2000
    Location
    Roseville, CA
    Posts
    159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Scribe
    As you say, any potential performance losses range from minimal in the worst case to performance gains in the best case.
    I should mention that there is at least one case in which alpha test hardware is very important: when rendering a shadow map with alpha-tested geometry. Since the shader used for this is almost always very short (one cycle on a lot of hardware), adding a KIL instruction to the shader will cut performance by as much as 50%. The equivalent alpha test is free.

    Also, since the alpha test must be supported by the driver for the foreseeable future, I don't think IHVs are going to drop hardware support for it any time soon. It's not difficult to implement in hardware, and it would be silly to burden the driver with recompiling a shader just because the alpha test was enabled/disabled or the alpha function was changed.

  2. #112
    Super Moderator Frequent Contributor Groovounet's Avatar
    Join Date
    Jul 2004
    Posts
    934

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Alfonse Reinheart
    ARB_instanced_arrays
    This extension is already provided by ATI for a long time and it's a quite useful feature from point of view of performance optimization and also as a room for new functionality in vertex shaders. Unfortunately I don't know if NVIDIA has hardware support for such mechanism.
    This is an awful extension that should never be made core. ARB_draw_instanced is fundamentally superior and is already in the core.
    I can't remember my source, so I'm going to say "I think": ARB_instanced_arrays is exposed in hardware on GeForce 6 and 7, but the hardware feature was removed in GeForce 8 in favour of GL_ARB_draw_instanced.

  3. #113
    Super Moderator Frequent Contributor Groovounet's Avatar
    Join Date
    Jul 2004
    Posts
    934

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Alfonse Reinheart
    Eric, Xmas's point is valid. Nvidia and AMD are not the only companies making "modern" hardware. Consider embedded hardware and ES 2.0.
    True, but OpenGL does not run on the hardware that OpenGL ES is implemented on and vice-versa. The whole point of having two separate specifications is to allow each to best serve the needs of their clients.
    Actually, it does run on hardware that OpenGL ES is implemented on, and vice-versa. For PowerVR SGX we have drivers for OpenGL ES, OpenGL, Direct3D 9 and 10. I don't know exactly how public those drivers are; I guess it depends on the platform where the chip is used. I'm not saying that drivers other than OpenGL ES are as feature complete ...

  4. #114
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Alfonse Reinheart
    ARB_instanced_arrays
    This extension is already provided by ATI for a long time and it's a quite useful feature from point of view of performance optimization and also as a room for new functionality in vertex shaders. Unfortunately I don't know if NVIDIA has hardware support for such mechanism.
    This is an awful extension that should never be made core. ARB_draw_instanced is fundamentally superior and is already in the core.
    I agree that ARB_draw_instanced is far more useful for instanced drawing, but imagine the many other ways you can use instanced arrays. For example, an attribute for every second triangle, and so on...

    Quote Originally Posted by Alfonse Reinheart
    AMD_texture_texture4/GL_ARB_texture_gather
    This is actually an extension that provides custom filtering possibilities, especially 4xPCF. As far as I know, this is a feature introduced to DX with version 10.1. If OpenGL wants to keep up with DX, then this extension is also a MUST for the next release. As far as I see, it is planned by Khronos as well.
    NVIDIA hardware does not support this, so making it core would prevent them from providing a core implementation for that version.

    Thus far, all OpenGL 3.x core features work on the same kind of hardware. It is very important that the ARB maintains this.
    NVIDIA hardware supports it, or will support it in the future, because it's a DX 10.1 feature. I'm sure NVIDIA's DX11 hardware will support this feature as well.

    Quote Originally Posted by Alfonse Reinheart
    EXT_texture_swizzle
    As many of you already mentioned, to replace some needed stuff as a result of the deprecation model, we would have to have something like this extension. It would also reduce the number of shaders needed to accomplish a certain series of operations. It's supported by both NVIDIA and ATI so I think there shouldn't be any reason why not to include in OpenGL 3.3.
    This is not a good extension. Or, let me put it another way. The idea behind the extension is good; the particulars are not.

    Implementing this extension without dedicated hardware for it requires modifying the shader based on what textures are bound to it. There is already enough modifying of shaders based on uniforms and other such going on in drivers. We do not need to have extensions sanction this practice.
    I don't agree that it's a bad extension. Maybe if some new texture object mechanism is introduced, then another extension should be introduced instead, but this one does its job, and it's already in drivers.

    Quote Originally Posted by Alfonse Reinheart
    ARB_texture_cube_map_array
    For cube map textures associated with meshes to fit nicely into a texture-array-based renderer, this is a MUST. I don't think I need any further explanation.
    Same problem as ARB_texture_gather.
    Again, if NVIDIA would like to keep up with ATI in new features, they should support this as well. Anyway, it's not likely that anything will prevent NVIDIA from adopting it in the future.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  5. #115
    Member Regular Contributor
    Join Date
    May 2001
    Posts
    348

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    Sorry about being harsh. I'm just very frustrated with some of the design decisions being made in OpenGL 3, and that has put me on kind of a short fuse when it comes to people telling me that things I know to be true about graphics hardware aren't true. I realize that the people here were trying to be helpful, but they also need to have a realistic understanding of their knowledge level and refrain from stating unsubstantiated information in an authoritative manner. I asked for reasoning from the "insiders" who've posted in this thread, not for arbitrary speculation by people who aren't actually familiar with the silicon.
    That should be fine then, as I am familiar with silicon and with the ARB as well.

    I'm not taking offense. However, if you want to only talk about hardware from AMD and NVidia I'd suggest you say so instead of making sweeping statements about "all modern hardware".

  6. #116
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948

    Re: Official feedback on OpenGL 3.2 thread

    Fortunately, hardware support for this extension goes back a very long way on both NV and ATI chips. I have register layouts for this functionality for NV40+ and R400+, and earlier chips may support it as well.
    Oh. Well, never mind then.

    Though it is still a concern for hardware that doesn't explicitly have such functionality.

    I agree that ARB_draw_instanced is far more useful for instanced drawing, but imagine the many other ways you can use instanced arrays. For example, an attribute for every second triangle, and so on...
    NVIDIA already removed hardware support for it from the G80 line. So there is really no reason to bring it into the core.

    NVIDIA hardware supports it, or will support it in the future, because it's a DX 10.1 feature.
    I think you misunderstand something.

    The OpenGL 3.x core is all supported by a certain set of hardware. That hardware being G80 and above, and R600 and above. Core features for the 3.x line should not be added unless they are in this hardware range.

    The two extensions, texture_gather and cube_map_array, are not available in NVIDIA's current hardware line. They will be some day, but not in this current line of hardware. Therefore, it is not proper for these to be core 3.x features; they should be core 4.x features.

    Extensions are not evil. They serve a purpose.

    Oh, and NVIDIA is not going to support DX10.1 unless it is in DX11-class hardware.

  7. #117
    Administrator Regular Contributor
    Join Date
    Aug 2001
    Location
    NVIDIA, Fort Collins, CO, USA
    Posts
    184

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Mars_999
    Quote Originally Posted by Khronos_webmaster
    Would folks be interested in purchasing a pre-printed and plasticized Quick Reference Card? If so how much would be considered the right price.

    PDF would remain free to download, of course.
    $10 I would be game. If they are like these
    http://www.barcharts.com/Products/La...rence?CID=1224
    The reference cards we handed out at Siggraph last week were like the ones you linked to. Four letter sized pages, laminated, printed on both sides.


    but I would like to see another page added that lists the deprecated functions or states and what one should use instead to get the same results on GL3+. I don't want to look up every old-school way of doing something.
    Thanks
    The reference cards (including the PDF http://www.khronos.org/files/opengl-...rence-card.pdf) mark all functionality not in the core profile as blue. Thus you can already get this info from the reference card.

    Regards,
    Barthold
    (with my ARB hat on)

  8. #118
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    595

    Re: Official feedback on OpenGL 3.2 thread

    I just had one more odd thought on why killing alpha test in the core profile can be a good thing: portability. Admittedly, one might think it utter stupidity to want one's 3D code to be portable between GLES2 and desktop GL 3.x (limited), i.e. to only use what they have in common; but, ick, that is not quite true either: FBO setup with regards to depth and stencil buffers is very different in practice. BUT, open up Qt. I don't like Qt, but it has a GLES2 drawing backend and a desktop fixed-function-pipeline drawing backend (both suck in my eyes, and the entire QPainter architecture needs some serious love). Continuing this thought: one writes a simple 3D app and wishes for it to run on an embedded device or on the desktop. If one does not push the hardware at all on the desktop, writes shader-only code carefully limited to GL 2.1 (i.e. shader version 120, avoiding most extensions unless they are provided by Qt wrappers), and accepts a couple more icky ifs, then in theory the code ports between desktop and embedded. Lots of ifs, but *cough* many of the Qt GL examples run on both GLES2 and desktop GL.

    The above might seem like WTF, no one would do that, but that is where I have seen Qt going... right now it has some bonked-in-the-head framebuffer wrappers that map to the GLES2 or desktop GL API and expose only what one can do in both (actually the stencil part needs a little tweaking to work correctly under GLES2). Sighs, and it is written using GL_EXT_framebuffer_object. It also has some bonked-in-the-head shader wrappers, etc.

  9. #119
    Senior Member OpenGL Guru
    Join Date
    Dec 2000
    Location
    Reutlingen, Germany
    Posts
    2,042

    Re: Official feedback on OpenGL 3.2 thread

    While we are at Qt: does anyone know whether / how it is possible to create a GL 3.x context using Qt? I'm not a Qt master, and all my tries and searches have failed.

    I'm pretty sure it is possible SOMEHOW (writing one's own QGLWidget replacement or such), but I would really need some pointers on how to do it.

    That's the real point holding me back from switching to GL 3.x: some of my tools use Qt, but all my applications share shaders, so it's not possible to mix and match.

    Jan.
    GLIM - Immediate Mode Emulation for GL3

  10. #120
    Advanced Member Frequent Contributor Mars_999's Avatar
    Join Date
    Mar 2001
    Location
    Sioux Falls, SD, USA
    Posts
    519

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by barthold
    Quote Originally Posted by Mars_999
    Quote Originally Posted by Khronos_webmaster
    Would folks be interested in purchasing a pre-printed and plasticized Quick Reference Card? If so how much would be considered the right price.

    PDF would remain free to download, of course.
    $10 I would be game. If they are like these
    http://www.barcharts.com/Products/La...rence?CID=1224
    The reference cards we handed out at Siggraph last week were like the ones you linked to. Four letter sized pages, laminated, printed on both sides.


    but I would like to see another page added that lists the deprecated functions or states and what one should use instead to get the same results on GL3+. I don't want to look up every old-school way of doing something.
    Thanks
    The reference cards (including the PDF http://www.khronos.org/files/opengl-...rence-card.pdf) mark all functionality not in the core profile as blue. Thus you can already get this info from the reference card.

    Regards,
    Barthold
    (with my ARB hat on)
    Yeah, I like the format so far, but I would really like to see a reference page mapping all the deprecated functions to what one needs to use in GL3.x. I have looked at the new "Red Book" ver. 7, and it doesn't cover this kind of heads-up or discuss what one will have to use in their place. IMO this is going to be very handy to someone like myself moving from GL2.x to GL3.x. E.g. matrix math will now be handled by YOU... so state this in the card, and the user will know they need a good math lib or to make their own... Stuff like this needs to be explained, so one doesn't go "where are glScale, glRotate, and glTranslate?" And for the countless other functions that are more shader-specific, just state that the work is now done in a shader... I am just throwing out ideas here.

    Thanks
