
Thread: Official feedback on OpenGL 3.2 thread

  1. #121
    aqnuep (Advanced Member, Frequent Contributor)
    Join Date: Dec 2007 | Location: Hungary | Posts: 985

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Alfonse Reinheart
    I agree that ARB_draw_instanced is far more useful for instanced drawing, but imagine the many other ways you can use it: for example, an attribute for every second triangle, and so on...
    NVIDIA already removed hardware support for it from the G80 line. So there is really no reason to bring it into the core.
    OK! Maybe you're right, but I think it is still useful as an optimization. Sometimes you need per-triangle data or the like, and in such cases you either have to store redundant data or use something like a buffer texture indexed by the primitive ID, which I think would hurt performance a bit. Anyway, if NVIDIA had support for it before, maybe it's not a big deal to put it back, or someone should come up with something similar, because my rendering engine could really take advantage of it.

    Quote Originally Posted by Alfonse Reinheart
    NVIDIA hardware supports or will support it in the future because it's a DX 10.1 feature.
    I think you misunderstand something.

    The OpenGL 3.x core is all supported by a certain set of hardware: G80 and above, and R600 and above. Core features for the 3.x line should not be added unless they exist across this hardware range.

    The two extensions, texture_gather and cube_map_array, are not available in NVIDIA's current hardware line. They will be some day, but not in this current line of hardware. Therefore, it is not proper for these to be core 3.x features; they should be core 4.x features.
    Hmm, I don't think new OpenGL versions should include only functionality that already exists in hardware; they should also look forward, otherwise OpenGL will always be behind DX, releasing core functionality only after the hardware for it is already out.

    Quote Originally Posted by Alfonse Reinheart
    Oh, and NVIDIA is not going to support DX10.1 unless it is in DX11-class hardware.
    Yes, but DX11-class hardware will support DX10.1 as well, and it will be out soon, so it's time for OpenGL to support such features (like tessellation, for example, even though ATI has supported it for a long time).

    My vision for OpenGL's future is that the specification should already be out by the time hardware supporting it appears. This is achievable, because the ARB is a strong cooperation between vendors. Microsoft has already achieved this; why shouldn't OpenGL?

    I have concerns about the attitude of many people working with OpenGL, because they are NVIDIA supporters. Of course, I know why this is so: NVIDIA has always had the best OpenGL support. Maybe I'm the only one who believes that ATI/AMD can also be an excellent choice for OpenGL. Anyway, if we only care about what NVIDIA supports and ignore at least the second big player in the desktop 3D world, then OpenGL will just become NVIDIA's "proprietary" API.

    Off-topic, but two more points for ATI: they have quite good drivers nowadays, and they really have more raw horsepower, which I appreciate when using heavyweight shaders.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  2. #122
    aqnuep (Advanced Member, Frequent Contributor)
    Join Date: Dec 2007 | Location: Hungary | Posts: 985

    Re: Official feedback on OpenGL 3.2 thread

    Oh, I forgot to mention a very simple but handy use case for ARB_instanced_arrays:

    Think about using texture arrays for materials. How can you assign a particular material to each and every triangle/mesh? There are at least three choices:

    1. use the primitive ID or instance ID and map it somehow to a layer index for addressing the texture array (the mapping is at least somewhat expensive)

    2. store the material ID in the VBO for each and every vertex (at least triples the data; redundant and not optimal)

    3. use an attribute divisor of e.g. 3 with ARB_instanced_arrays, and voila

    I think the third option is obviously the best. If you have better ideas, please share them; I'm really interested because I need something like this.
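
    A minimal sketch of option 1, for comparison: one layer index per triangle in a buffer texture, fetched by gl_PrimitiveID (GLSL 1.50 and GL 3.1 buffer textures assumed; all names here are illustrative, not from the posts above):

    #include <GL/glew.h>  /* or any other extension loader */

    /* Fragment shader: fetch this triangle's texture-array layer by
       gl_PrimitiveID, then sample the material array texture. */
    static const char *fragSrc =
        "#version 150\n"
        "uniform isamplerBuffer materialIndex;  // one layer index per triangle\n"
        "uniform sampler2DArray materials;\n"
        "in vec2 uv;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    int layer = texelFetch(materialIndex, gl_PrimitiveID).r;\n"
        "    fragColor = texture(materials, vec3(uv, float(layer)));\n"
        "}\n";

    /* Upload one GL_R32I texel per triangle and expose it as a buffer texture. */
    static GLuint CreateMaterialIndexTexture(const GLint *layerPerTri, GLsizei triCount)
    {
        GLuint buf, tex;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_TEXTURE_BUFFER, buf);
        glBufferData(GL_TEXTURE_BUFFER, triCount * sizeof(GLint), layerPerTri,
                     GL_STATIC_DRAW);
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_BUFFER, tex);
        glTexBuffer(GL_TEXTURE_BUFFER, GL_R32I, buf);
        return tex;
    }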

  3. #123
    Ilian Dinev (Senior Member, OpenGL Pro)
    Join Date: Jan 2008 | Location: Watford, UK | Posts: 1,290

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by aqnuep
    3. use an attribute divisor of e.g. 3 with ARB_instanced_arrays, and voila

    I think the third option is obviously the best. If you have better ideas, please share them; I'm really interested because I need something like this.
    If you're using an IBO, then with a "flat" vertex attribute you could bring the vertex count down to the number of triangles. Still, if triangles don't share vertices, the vertex count again becomes num-tris * 3. I really liked the divisor, and I'm sad to hear it's been dropped from GL3-class silicon; but primitiveID/instanceID are obviously more powerful (you can do more elaborate division and resource fetches inside the shader).
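
    A minimal GLSL 1.50 sketch of that "flat" attribute route, as C string literals; attribute names are illustrative, and the integer attribute must be specified with glVertexAttribIPointer:

    /* Vertex shader: the "flat" output is not interpolated, so the whole
       triangle gets the provoking vertex's material ID. */
    static const char *vertSrc =
        "#version 150\n"
        "uniform mat4 mvp;\n"
        "in vec4 position;\n"
        "in vec2 texcoord;\n"
        "in int material;            // fed via glVertexAttribIPointer\n"
        "flat out int materialID;\n"
        "out vec2 uv;\n"
        "void main() {\n"
        "    materialID = material;\n"
        "    uv = texcoord;\n"
        "    gl_Position = mvp * position;\n"
        "}\n";

    /* Fragment shader: use the per-triangle ID as the array layer. */
    static const char *flatFragSrc =
        "#version 150\n"
        "uniform sampler2DArray materials;\n"
        "flat in int materialID;\n"
        "in vec2 uv;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    fragColor = texture(materials, vec3(uv, float(materialID)));\n"
        "}\n";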

  4. #124
    Alfonse Reinheart (Senior Member, OpenGL Guru)
    Join Date: May 2009 | Posts: 4,948

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by aqnuep
    I don't think new OpenGL versions should include only functionality that already exists in hardware; they should also look forward, otherwise OpenGL will always be behind DX, releasing core functionality only after the hardware for it is already out.
    Looking forward would be releasing an OpenGL 4.0 specification now, similar to how Microsoft has released DX11 API documentation. Notice that Microsoft did not release a DX10.2; it went straight to DX11.

    When making major hardware changes, you bump the major version number. OpenGL 3.x should not promote to core features that do not exist in 3.x-class hardware.

    Quote Originally Posted by aqnuep
    Yes, but DX11-class hardware will support DX10.1 as well, and it will be out soon, so it's time for OpenGL to support such features (like tessellation, for example, even though ATI has supported it for a long time).
    OpenGL core should support those things exactly and only when they make version 4.0. As pointed out previously, 3.x is for a particular level of hardware; 4.0 is for a higher level of hardware.

    Breaking this model screws everything up. Take 3.2, for example. It includes GLSL 1.50, ARB_sync, and ARB_geometry_shader4 as part of the core, as well as the compatibility/core profile system. If you add on top of this a DX11 feature that can only be made available on DX11 hardware, then anyone using these features on standard 3.x hardware has to use them as extensions to 3.1, not as core features of 3.2.

    No. 4.0 should be where DX11 features are supported, not 3.2.

    Quote Originally Posted by aqnuep
    Think about using texture arrays for materials. How can you assign a particular material to each and every triangle/mesh?
    Why would you? Outside of some form of gimmick, I can't think of a reason why you would need this.

  5. #125
    Eric Lengyel (Junior Member, Regular Contributor)
    Join Date: Jul 2000 | Location: Roseville, CA | Posts: 159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Xmas
    I'm not taking offense. However, if you want to only talk about hardware from AMD and NVidia I'd suggest you say so instead of making sweeping statements about "all modern hardware".
    Please give an example of a GPU currently in production that supports OpenGL 3, but does not have alpha test hardware.

  6. #126
    Member, Regular Contributor
    Join Date: Apr 2006 | Location: Irvine CA | Posts: 299

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    Quote Originally Posted by Xmas
    I'm not taking offense. However, if you want to only talk about hardware from AMD and NVidia I'd suggest you say so instead of making sweeping statements about "all modern hardware".
    Please give an example of a GPU currently in production that supports OpenGL 3, but does not have alpha test hardware.
    For extra credit, name one that isn't supported by the ARB_compatibility extension, which would preserve that feature among many others.

  7. #127
    Eric Lengyel (Junior Member, Regular Contributor)
    Join Date: Jul 2000 | Location: Roseville, CA | Posts: 159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Alfonse Reinheart
    Think about using texture arrays for materials. How can you assign a particular material to each and every triangle/mesh?
    Why would you? Outside of some form of gimmick, I can't think of a reason why you would need this.
    We actually allow this kind of material assignment for the voxel terrain system in the C4 Engine. The user is able to paint materials onto the terrain at voxel granularity, so the shader ends up needing to fetch textures based on per-polygon material data. Array textures on SM4 hardware make this somewhat easier, but we also have to maintain a fallback on older hardware that stuffs a bunch of separate textures into a single 2D palette texture. Unfortunately, array textures are currently broken under ATI drivers if you use S3TC, so we only get to use them on Nvidia hardware for now.
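
    For context, a sketch of the upload path that hits that bug, assuming DXT5 data with mipmaps generated off-line ("mipData", one pointer per mip level with all layers packed together, is a placeholder for your own resource format):

    #include <GL/glew.h>  /* or any other extension loader */

    /* Upload off-line-generated DXT5 mipmaps into a 2D array texture. */
    static void UploadCompressedArrayTexture(GLsizei width, GLsizei height,
                                             GLsizei layers, GLint levels,
                                             const void *const *mipData)
    {
        GLuint tex;
        GLint level;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        for (level = 0; level < levels; ++level) {
            GLsizei w = width >> level;  if (w < 1) w = 1;
            GLsizei h = height >> level; if (h < 1) h = 1;
            /* DXT5 stores 16 bytes per 4x4 block, for every layer. */
            GLsizei size = ((w + 3) / 4) * ((h + 3) / 4) * 16 * layers;
            glCompressedTexImage3D(GL_TEXTURE_2D_ARRAY, level,
                                   GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                                   w, h, layers, 0, size, mipData[level]);
        }
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, levels - 1);
    }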

    Quote Originally Posted by aqnuep
    Off-topic, but two more points for ATI: they have quite good drivers nowadays
    I would agree that the ATI drivers are much better than they have been in the past, but I wouldn't go as far as calling them "quite good" yet. The problem with compressed array textures I mention above is only one of several open bug reports that we have on file with ATI right now.

    Another particularly annoying bug is that changing the vertex program without changing the fragment program results in the driver not properly configuring the hardware to fetch the appropriate attribute arrays for the vertex program. This forces us to always re-bind the fragment program on ATI hardware. You can see this bug for yourself by downloading the following test case with source:

    http://www.terathon.com/c4engine/ATIGLTest.zip

    Under correctly working drivers, you'll see a red quad and a green quad. Under broken drivers, you'll see only a red quad.
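
    A sketch of the workaround described above, using the ARB program entry points ("currentFragProg" is a hypothetical piece of application state, not code from the test case):

    #include <GL/glew.h>  /* or any other extension loader */

    /* Hypothetical application state: the fragment program that is
       currently supposed to be bound. */
    static GLuint currentFragProg;

    static void SetVertexProgram(GLuint vertexProg)
    {
        glBindProgramARB(GL_VERTEX_PROGRAM_ARB, vertexProg);
        /* Workaround for the driver bug described above: re-bind the
           (unchanged) fragment program so the driver reconfigures the
           vertex attribute fetch correctly. */
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, currentFragProg);
    }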

    An apparently related bug is that any program environment parameters that you've set for a fragment program are erased whenever you change the vertex program.

    The TXB instruction also does not function correctly in fragment programs.

    ATI still has a way to go before they catch up to Nvidia's stability.

  8. #128
    Mars_999 (Advanced Member, Frequent Contributor)
    Join Date: Mar 2001 | Location: Sioux Falls, SD, USA | Posts: 519

    Re: Official feedback on OpenGL 3.2 thread



    Hey Eric, are you using glGenerateMipmap() or GL_GENERATE_MIPMAP?

    I found this same bug on Nvidia a while back, and it took a while, even after talking with Pat at Nvidia, to get texture arrays and compression working on the current drivers you see now. It was a mess: I would get white textures IIRC, or black... can't remember. Anyway, I think I had to use GL_GENERATE_MIPMAP to get around it....
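
    For reference, the two mipmap-generation paths being compared; the legacy flag must be set before the base level is uploaded, while glGenerateMipmap (core in GL 3.0, glGenerateMipmapEXT before that) is called afterward ("uploadBaseLevel" is a hypothetical helper):

    #include <GL/glew.h>  /* or any other extension loader */

    extern void uploadBaseLevel(GLenum target);  /* hypothetical upload helper */

    /* Legacy path: the driver (re)generates the mip chain when the base
       level is uploaded with GL_GENERATE_MIPMAP enabled. */
    static void BuildMipmapsLegacy(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
        uploadBaseLevel(GL_TEXTURE_2D);
    }

    /* Modern path: generate explicitly after the base level is in place. */
    static void BuildMipmapsModern(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        uploadBaseLevel(GL_TEXTURE_2D_ARRAY);
        glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
    }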

  9. #129
    Eric Lengyel (Junior Member, Regular Contributor)
    Join Date: Jul 2000 | Location: Roseville, CA | Posts: 159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Mars_999
    Hey Eric, are you using glGenerateMipmap() or GL_GENERATE_MIPMAP?
    Neither. We generate the mipmaps off-line and store them in resources.

    Here's a test app (with source) for the array texture bug:

    http://www.terathon.com/c4engine/TextureArrayTest.zip

    Under working drivers, you'll see a quad on the screen that is half red and half green. With broken drivers, you'll just see black.

  10. #130
    Advanced Member, Frequent Contributor
    Join Date: Apr 2009 | Posts: 600

    Re: Official feedback on OpenGL 3.2 thread

    In reply to Jan's question about Qt and GL 3.x context creation:

    You won't like the answer: creating a GL context is hidden under Qt's covers (as the implementation is different for MS-Windows vs. X-Windows), and, drum roll please: it does not use the new entry points to create a GL context. You cannot even request that it do so. If you are into this kind of thing, open up Qt's source code, src/opengl/qgl_x11.cpp, and you can find the context creation code; look, nothing about the new context creation method.

    But it gets richer: bits of Qt's OpenGL source use the ARB interface for shaders (at least Qt's shader API uses the 2.0+ interface).

    And the FBO stuff is a laughing matter too; sigh. Qt for desktop only checks for GL_EXT_framebuffer_object and uses those entry points for its QGLFramebufferObject API. Epic fail.


    To get a GL 3.x context under Qt: use an nVidia driver, since it will return a 3.x compatibility context/profile with the old entry point; ATI, from what I last _heard_, is more strict about generating a 3.x context and insists on going through the new context creation method to return a GL 3.x context.
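
    For reference, a sketch of that new context creation method on X11, via GLX_ARB_create_context (the display/fbconfig plumbing and error handling are left out; names are illustrative):

    #include <GL/glx.h>
    #include <GL/glxext.h>

    static GLXContext CreateGL3Context(Display *dpy, GLXFBConfig config)
    {
        PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
            (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddressARB(
                (const GLubyte *)"glXCreateContextAttribsARB");
        static const int attribs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
            GLX_CONTEXT_MINOR_VERSION_ARB, 2,
            GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
            None
        };
        if (!glXCreateContextAttribsARB)
            return NULL;  /* driver does not expose the new entry point */
        return glXCreateContextAttribsARB(dpy, config, NULL, True, attribs);
    }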

    That was prolly not a nice answer though.

