
Thread: Official feedback on OpenGL 3.2 thread

  1. #101
    Advanced Member Frequent Contributor Mars_999
    Join Date
    Mar 2001
    Location
    Sioux Falls, SD, USA
    Posts
    519

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    Quote Originally Posted by Xmas
    Quote Originally Posted by Eric Lengyel
    All modern hardware still has explicit support for alpha testing that's independent of shaders.
    Not true.
    Why do you feel you are qualified to tell me that my statement is not true? Do you write drivers for Nvidia or AMD? We have now given you the actual hardware register numbers where alpha test is explicitly supported in the latest chips from both Nvidia and AMD, so you are obviously wrong. I know what I'm talking about, but you're just making claims that you can't back up.
    Ouch! Xmas, Eric isn't a noob; he knows his stuff. Maybe a bit more clarification on your part is needed...

    I agree with Eric. IMO, until we have an alpha shader or something like it, I would like to see alpha testing/blending kept around...

  2. #102
    Advanced Member Frequent Contributor Mars_999
    Join Date
    Mar 2001
    Location
    Sioux Falls, SD, USA
    Posts
    519

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by bobvodka
    Quote Originally Posted by Jan
    Good point!
    I hope to see something like AMD_vertex_shader_tessellator in core (or ARB) in the near future. In that case quads are the foundation for quadpatches and thus will certainly be included again, anyway.
    If this happens then it would have to be part of some greater Tessellation Shader type setup anyway; the AMD tessellator and what is coming in D3D11 hardware are not the same thing, as the currently exposed AMD extension covers only a third of the functionality in terms of pipeline stages.

    As a side note; can anyone confirm GL3.1 support from AMD/ATI in the recent Cat drivers and if so on what OS?
    I'm currently unable to confirm that this is the case using the Cat 9.8 'beta' drivers, nor with the Cat 9.7 or Cat 9.6, on Win7 x64 (GL Extension Viewer reports 2.1 and 3.1 forward-compatible, GL Caps Viewer gives 2.1), while others are saying that they have support for 3.1.

    (Also, the text input box is HORRIBLY screwed up when using IE8 on Win7, so much so that once I got past "what is coming in D3D11 hardware are not the same thing, the" I had to resort to finishing my post in Notepad because the text box kept jumping up and down as I typed.)
    Yes it's coming on Aug 12th as stated by AMD... Figured you wouldn't care as you are in DX land now. Phantom!

  3. #103
    Advanced Member Frequent Contributor Mars_999
    Join Date
    Mar 2001
    Location
    Sioux Falls, SD, USA
    Posts
    519

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Khronos_webmaster
    Would folks be interested in purchasing a pre-printed and plasticized Quick Reference Card? If so, how much would be considered the right price?

    The PDF would remain free to download, of course.
    At $10 I would be game, if they are like these:
    http://www.barcharts.com/Products/La...rence?CID=1224

    I would be too, but I would like to see another page added that lists the deprecated functions and states, and what one should use instead to get the same results in GL3+. I don't want to look up every old-school way of doing something. It would be nice to glance at this card and say, OK, I need to upload my matrices to the vertex shader now instead (see the sketch below). This is more for newbies and us GL2 coders who haven't kept up with all the GL3 features... hence the price you pay. So I will pay if need be.
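
    For illustration, here is a minimal sketch (not anything from the card) of the kind of "new way" such a page could point at: instead of the deprecated matrix stack, the modelview-projection matrix is uploaded as a plain uniform. The program object prog, the array mvp and the uniform name "u_mvp" are placeholders.

        #include <GL/glew.h>   /* or any loader exposing GL 2.0+ entry points */

        /* Sketch: upload the modelview-projection matrix as a uniform instead
         * of using the deprecated glMatrixMode/glLoadMatrixf matrix stack.
         * 'prog' is a linked program; 'mvp' is a column-major 4x4 matrix. */
        static void set_mvp(GLuint prog, const GLfloat mvp[16])
        {
            GLint loc = glGetUniformLocation(prog, "u_mvp");
            glUseProgram(prog);
            glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);
        }

    In the vertex shader the matrix then simply appears as "uniform mat4 u_mvp;" and is applied with gl_Position = u_mvp * vec4(position, 1.0).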

    Thanks

  4. #104
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985

    Re: Official feedback on OpenGL 3.2 thread

    Hi,

    I had to check out all the specs that came out lately (not just the GL spec but also extensions and hardware-related stuff), so I'm coming up with my future OpenGL release wish-list a bit late, but I would like you folks to comment on it if you think there's something to be added, or if you just don't feel these are the most important things.

    So, enough of the chit-chat, here's the list:

    ARB_atomic_operation (fictive)
    I know everybody is talking about EXT_direct_state_access, but that extension is a bit weird, and it's also written against OpenGL 2.1, so it wouldn't be a good idea to accept it in this form. This should be something similar, but a forward-looking extension with a cleaner design.

    ARB_tessellation_shader (fictive)
    This should be something similar to AMD_vertex_shader_tessellator, but that extension is also a bit crappy. One of the main reasons for a new extension here is ARB_geometry_shader4, which is (IMO unfortunately) in core with OpenGL 3.2. I don't see any benefit of geometry shaders over tessellation, but I think it was introduced to expose the early tessellation capabilities of NVIDIA's G80+ hardware. Anyway, if it's already in core we'll need a tessellation extension that can interact with geometry shaders; otherwise the API would be a bit confusing, not allowing tessellation and geometry shaders at the same time. As it appears in the AMD_vertex_shader_tessellator extension, there was an issue about whether to introduce a new shader in the pipeline to replace vertex unpack or to modify vertex shader functionality. I think introducing a new, so-called tessellator shader before the vertex shader would be a design that fits much better into the already existing API. Anyway, I think this should be a MUST for OpenGL 3.3, because ATI's hardware has supported it since the HD2000 series, and it's sad that until now no graphics API has provided a mechanism to expose this hardware capability (not even DX).

    ARB_instanced_arrays
    This extension has already been provided by ATI for a long time, and it's quite a useful feature from a performance-optimization point of view and also as room for new functionality in vertex shaders. Unfortunately I don't know whether NVIDIA has hardware support for such a mechanism.
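
    For context, a rough sketch of how ARB_instanced_arrays is typically used (the attribute index 3, the buffer name instance_vbo and the instance count are placeholders, not anything taken from the extension text):

        /* Sketch: make attribute 3 a per-instance vec3 via ARB_instanced_arrays,
         * then draw many instances in one call (ARB_draw_instanced entry point).
         * Assumes a loader that exposes the ARB function pointers. */
        glBindBuffer(GL_ARRAY_BUFFER, instance_vbo);       /* one vec3 per instance */
        glEnableVertexAttribArray(3);
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
        glVertexAttribDivisorARB(3, 1);                     /* advance once per instance */

        glDrawElementsInstancedARB(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0, 100);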

    EXT_timer_query
    Another nice extension, this time provided by NVIDIA for years. It not only fits nicely into the already existing query API, but also gives application developers a lot of room to optimize their rendering and to easily identify bottlenecks. As far as I know there is no hardware limitation on ATI's side that would prevent them from implementing this if it were core.
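
    To illustrate how it fits the existing query API, a minimal sketch (draw_scene() and the variable names are placeholders; a loader exposing the EXT entry points is assumed):

        /* Sketch: measure GPU time spent in a block of commands with EXT_timer_query. */
        GLuint query;
        GLuint64EXT elapsed_ns = 0;

        glGenQueries(1, &query);
        glBeginQuery(GL_TIME_ELAPSED_EXT, query);
        draw_scene();                                  /* the work being timed */
        glEndQuery(GL_TIME_ELAPSED_EXT);

        /* Reading the result waits until the GPU has finished the timed commands. */
        glGetQueryObjectui64vEXT(query, GL_QUERY_RESULT, &elapsed_ns);
        double elapsed_ms = elapsed_ns / 1.0e6;        /* GPU time in milliseconds */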

    EXT_texture_swizzle
    As many of you already mentioned, to replace some functionality lost to the deprecation model we would need something like this extension. It would also reduce the number of shaders needed to accomplish certain series of operations. It's supported by both NVIDIA and ATI, so I don't think there is any reason not to include it in OpenGL 3.3.
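
    As a concrete example of what it replaces, here is a minimal sketch (assuming a single-channel GL_RED texture named tex) that emulates a deprecated GL_LUMINANCE texture purely with swizzle state, so the shader still sees (L, L, L, 1):

        /* Sketch: EXT_texture_swizzle standing in for the luminance formats. */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R_EXT, GL_RED);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G_EXT, GL_RED);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B_EXT, GL_RED);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A_EXT, GL_ONE);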

    ARB_texture_cube_map_array
    For cube map textures associated with meshes to fit nicely into a texture-array-based renderer, this is a MUST. I don't think I need any further explanation.
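
    For illustration only, a small GLSL fragment (embedded here as a C string; the sampler, input and output names are placeholders) showing how a cube map array is sampled, with the cube map's layer passed as the fourth coordinate:

        /* Sketch: sampling a cube map array (ARB_texture_cube_map_array). */
        static const char *frag_src =
            "#version 150\n"
            "#extension GL_ARB_texture_cube_map_array : require\n"
            "uniform samplerCubeArray env_maps;\n"
            "in vec3 reflect_dir;\n"
            "flat in float layer;     // which cube map in the array\n"
            "out vec4 frag_color;\n"
            "void main() {\n"
            "    frag_color = texture(env_maps, vec4(reflect_dir, layer));\n"
            "}\n";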

    AMD_texture_texture4/GL_ARB_texture_gather
    This is actually an extension that provides custom filtering possibilities, especially 4xPCF. As far as I know this is a feature introduced to DX with version 10.1. If OpenGL wants to keep up with DX then this extension is also a MUST for the next release. As far as I can see it is in Khronos' plans as well.
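
    To make the 4xPCF use case concrete, a rough GLSL sketch (as a C string; shadow_map and the function name are placeholders): the gather instruction fetches the four depth texels bilinear filtering would touch, and the shadow comparisons are then done manually:

        /* Sketch: 4x percentage-closer filtering built on ARB_texture_gather.
         * shadow_map is a depth texture whose depth is read as the red channel. */
        static const char *pcf_snippet =
            "#extension GL_ARB_texture_gather : require\n"
            "uniform sampler2D shadow_map;\n"
            "float pcf4(vec2 uv, float frag_depth) {\n"
            "    vec4 depths = textureGather(shadow_map, uv);  // 4 neighbouring depths\n"
            "    vec4 lit    = step(vec4(frag_depth), depths); // 1.0 where not in shadow\n"
            "    return dot(lit, vec4(0.25));                  // average the 4 tests\n"
            "}\n";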

    ARB_blend_shader (fictive)
    This would be a new shader that would replace the alpha blending mechanism. I think it would be a much better idea to have a separate shader for this purpose than to extend ARB_draw_buffers with EXT_draw_buffers2 and ARB_draw_buffers_blend. Of course this is just my opinion, and it also doesn't really expose any new hardware functionality, but hey, this is my wish-list, not an order-list.

    ARB_gpu_association (fictive)
    Based on AMD_gpu_association and NV_gpu_affinity, there should be an OpenGL-provided mechanism to specify which GPU we would like to address with a specific command. Maybe the best way to put it into the API would be to have GPU objects plus different command queues for them. Anyway, there is a lot of work to do for such an extension, so it cannot be expected in the near future.

    There should also be a new and clean API for handling texture objects, because the old one stinks: it was designed for a different purpose than what it is actually used for nowadays in GL3+. I know this would be a very big change, so I don't expect it until, say, OpenGL 4 or something like that; I just want to emphasize that there is a big need for such a thing in the future (maybe it can be introduced with the fictive ARB_atomic_operation extension that I presented above).
    For the new design I would expect texture filtering and wrap modes to no longer be part of the texture object; instead they would move into the scope of shaders, so the texture fetching functions in GLSL would accept filtering- and wrap-mode-related parameters. I think this would fit much better with the design of the API and also with how hardware is/should evolve.
    I think the guys at Khronos are working on something like this as well, and that, for example, is why they haven't put EXT_texture_filter_anisotropic into core: it uses the old way of doing things (and is also crappy, IMO).

    Even if I was a bit harsh in my post, as a final conclusion I would like to emphasize that I'm strongly committed to OpenGL, and I strongly appreciate the way the guys at Khronos are doing their job nowadays, because I really see that with such a schedule and pace OpenGL will not just keep up with DX but may also expose new hardware features even sooner than its rival.

    Thanks to all of you, and keep up the good work!
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion; none of my statements or speculations are in any way related to my employer, should not be treated as accurate or valid, or be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  5. #105
    Member Regular Contributor
    Join Date
    Apr 2004
    Location
    UK
    Posts
    420

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Mars_999
    Yes it's coming on Aug 12th as stated by AMD... Figured you wouldn't care as you are in DX land now. Phantom!
    I care about putting out information/opinions based on the facts at hand; I claimed AMD doesn't have a working GL3.1 driver right now, others claimed they do, and I couldn't confirm their claim with my own testing.

    Thus far the claims are:
    - AMD has a working 3.1 context, which I can't confirm.
    - AMD will have a working 3.1 context by Aug 12th (Wednesday).

    I'll be interested to see what happens Wednesday, as right now I'm jumping on any new releases in the hope they will fix some other issues. (I suspect that, regardless of cost or power requirements, my next card will be an NV one; after years of no ATI driver problems, their drivers have started to go south, so I might as well see what the other side is like when DX11 hardware appears.)

  6. #106
    Advanced Member Frequent Contributor Mars_999
    Join Date
    Mar 2001
    Location
    Sioux Falls, SD, USA
    Posts
    519

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by bobvodka
    Quote Originally Posted by Mars_999
    Yes it's coming on Aug 12th as stated by AMD... Figured you wouldn't care as you are in DX land now. Phantom!
    I care about putting out information/opinions based on the facts at hand; I claimed AMD doesn't have a working GL3.1 driver right now, others claimed they do, and I couldn't confirm their claim with my own testing.

    Thus far the claims are:
    - AMD has a working 3.1 context, which I can't confirm.
    - AMD will have a working 3.1 context by Aug 12th (Wednesday).

    I'll be interested to see what happens Wednesday, as right now I'm jumping on any new releases in the hope they will fix some other issues. (I suspect that, regardless of cost or power requirements, my next card will be an NV one; after years of no ATI driver problems, their drivers have started to go south, so I might as well see what the other side is like when DX11 hardware appears.)
    Well Aug 12th came from the horse's mouth... So yeah we'll see.

    I would wait for the new GF300-series cards out this fall/winter. DX11, and from the leaked specs, OMG, 6x better performance than a GTX 280 card!!! Can't wait to see if that is the case. BTW the new card is rumored to cost $500 or more for the top of the line, and they are supposed to have various other tiers of cards to accommodate the cheaper crowd.

    BTW I heard you got a Core i7 920... How's that treating you? What about compile times? Are they any faster? I am in the market for a new machine and may wait until Lynnfield is out on Sep 6th... or build a Core Duo Quad on the cheap if Core i7 isn't worth it.


  7. #107
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948

    Re: Official feedback on OpenGL 3.2 thread

    ARB_instanced_arrays
    This extension has already been provided by ATI for a long time, and it's quite a useful feature from a performance-optimization point of view and also as room for new functionality in vertex shaders. Unfortunately I don't know whether NVIDIA has hardware support for such a mechanism.
    This is an awful extension that should never be made core. ARB_draw_instanced is fundamentally superior and is already in the core.
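
    For comparison, a rough sketch of the ARB_draw_instanced / GL 3.1 approach being referred to: per-instance data is indexed with gl_InstanceID in the vertex shader instead of coming from an instanced attribute array (the uniform array, its size and the other names are placeholders):

        /* Sketch: instancing with gl_InstanceID (core since GL 3.1 / GLSL 1.40). */
        static const char *vs_src =
            "#version 140\n"
            "in vec3 in_position;\n"
            "uniform mat4 u_mvp;\n"
            "uniform vec3 instance_offsets[128];   // per-instance data\n"
            "void main() {\n"
            "    vec3 p = in_position + instance_offsets[gl_InstanceID];\n"
            "    gl_Position = u_mvp * vec4(p, 1.0);\n"
            "}\n";

        /* One call draws num_instances copies; no per-instance attribute state. */
        glDrawElementsInstanced(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0, num_instances);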

    AMD_texture_texture4/GL_ARB_texture_gather
    This is actually an extension that provides custom filtering possibilities, especially 4xPCF. As far as I know this is a feature introduced to DX with version 10.1. If OpenGL wants to keep up with DX then this extension is also a MUST for the next release. As far as I can see it is in Khronos' plans as well.
    NVIDIA hardware does not support this, so making it core would prevent them from providing a core implementation for that version.

    Thus far, all OpenGL 3.x core features work on the same kind of hardware. It is very important that the ARB maintains this.

    EXT_texture_swizzle
    As many of you already mentioned, to replace some functionality lost to the deprecation model we would need something like this extension. It would also reduce the number of shaders needed to accomplish certain series of operations. It's supported by both NVIDIA and ATI, so I don't think there is any reason not to include it in OpenGL 3.3.
    This is not a good extension. Or, let me put it another way. The idea behind the extension is good; the particulars are not.

    Implementing this extension without dedicated hardware for it requires modifying the shader based on what textures are bound to it. There is already enough modifying of shaders based on uniforms and other such things going on in drivers. We do not need extensions sanctioning this practice.

    ARB_texture_cube_map_array
    For cube map textures associated with meshes to fit nicely into a texture-array-based renderer, this is a MUST. I don't think I need any further explanation.
    Same problem as ARB_texture_gather.

  8. #108
    Junior Member Regular Contributor
    Join Date
    Jul 2000
    Location
    Roseville, CA
    Posts
    159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Jan
    Eric, calm down. The people who know you know that you are right. But it's still a forum, so people with different levels of knowledge come together, and not everybody knows who you are and what you do, and thus doesn't know how seriously to take your claims.

    Scribe really just wanted to help out, and maybe Xmas had some contradicting information from some source, too. I'm pretty sure they were quite surprised by your harsh reply.
    Sorry about being harsh. I'm just very frustrated with some of the design decisions being made in OpenGL 3, and that has put me on kind of a short fuse when it comes to people telling me that things I know to be true about graphics hardware aren't true. I realize that the people here were trying to be helpful, but they also need to have a realistic understanding of their knowledge level and refrain from stating unsubstantiated information in an authoritative manner. I asked for reasoning from the "insiders" who've posted in this thread, not for arbitrary speculation by people who aren't actually familiar with the silicon.

  9. #109
    Junior Member Regular Contributor
    Join Date
    Jul 2000
    Location
    Roseville, CA
    Posts
    159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Alfonse Reinheart
    EXT_texture_swizzle
    As many of you already mentioned, to replace some functionality lost to the deprecation model we would need something like this extension. It would also reduce the number of shaders needed to accomplish certain series of operations. It's supported by both NVIDIA and ATI, so I don't think there is any reason not to include it in OpenGL 3.3.
    This is not a good extension. Or, let me put it another way. The idea behind the extension is good; the particulars are not.

    Implementing this extension without dedicated hardware for it requires modifying the shader based on what textures are bound to it. There is already enough modifying of shaders based on uniforms and other such things going on in drivers. We do not need extensions sanctioning this practice.
    Fortunately, hardware support for this extension goes back a very long way on both NV and ATI chips. I have register layouts for this functionality for NV40+ and R400+, and earlier chips may support it as well.

  10. #110
    Senior Member OpenGL Guru
    Join Date
    Dec 2000
    Location
    Reutlingen, Germany
    Posts
    2,042

    Re: Official feedback on OpenGL 3.2 thread

    I think EXT_texture_swizzle is a good idea, and I don't know how that extension is "not good". How else could you do such a thing?

    I am definitely happy to see luminance / alpha / intensity / whatever textures go away and be replaced by one very clear extension.

    Jan.
    GLIM - Immediate Mode Emulation for GL3
