
Thread: Talk about your applications.

  1. #1
    Member Regular Contributor (Irvine CA; joined Apr 2006; 299 posts)

    Talk about your applications.

    Hello, frequent OpenGL.org posters...

    I'd be interested to learn details about the kinds of OpenGL applications you work on. The motivation here is to get more insight into the features you need from OpenGL, which can have a big effect on scheduling and priorities for upcoming releases.

    Kicking things off, here are the applications whose development I'm involved with at Blizzard - followed by some comments about features we need in OpenGL to further improve them.

    World of Warcraft (supports OpenGL on Windows and Mac OS X)

    StarCraft II (OpenGL on OS X)

    Diablo III (OpenGL on OS X)

    Probably one of our highest-priority needs for improvement in OpenGL is to streamline the interface for communicating with GLSL shaders: specifically, the rate at which uniform values can be set en masse in situations involving many individually named uniforms that aren't suitable for packing into an array (of the sort the bindable-uniform extension could drive).

    On the latter two titles, the engine can run in either ARB assembly mode or GLSL mode. The ARB path has a high-performance API for updating uniforms in batches; this isn't yet the case for GLSL. Since we'd like to use the richer semantics of GLSL *and* run fast, this issue is a high priority for our games and not currently addressed by OpenGL 3.0.
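
    To make the contrast concrete, here is a rough sketch of the two update paths. It assumes EXT_gpu_program_parameters is available for the ARB-assembly case; the uniform count and names are purely illustrative:

        /* ARB assembly path: one call uploads a whole block of env parameters. */
        GLfloat params[64][4];  /* 64 float4 values computed for this draw */
        glProgramEnvParameters4fvEXT(GL_VERTEX_PROGRAM_ARB, 0, 64, &params[0][0]);

        /* GLSL path (GL 2.x): every named uniform needs its own call, using
           locations cached from glGetUniformLocation at link time. */
        for (int i = 0; i < 64; ++i)
            glUniform4fv(locations[i], 1, params[i]);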

    Another one not currently covered by GL3 is a compiled shader caching capability to help improve load times.

    GL3 and the new ARB extensions for 2.x do bring an improved feature set for games: MapBufferRange, the enhanced ARB_framebuffer_object spec, instanced rendering, explicit support for one- and two-channel textures, and a few other things we are looking forward to using as the implementations come along.

    (more items and details as I think of them)

    Looking forward to learning more about the apps the rest of you are involved with, and what we could do in post GL3 releases to help you improve them.

  2. #2
    Advanced Member Frequent Contributor cass (Austin, TX, USA; joined Feb 2000; 913 posts)

    Re: Talk about your applications.


    Rob, if you're content to use float4s for all input parameters, why wouldn't bindable-uniform be sufficient?

    Your program could just include a #define to convert the name you want to use in your shader to the element of the bindable array.

    This is what I've been planning to do, and I'm much happier with that approach than trying to negotiate tons of named variables with the GL driver.

    The app will know the offset into the buffer object for a named variable, and updates to the named variable from app code just turn into glBufferSubData() calls. Thanks to the define, the shader just references a hardcoded location in the bindable array, but the code itself still looks like it's using a well-named variable.
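
    A minimal sketch of what I mean (EXT_bindable_uniform is assumed, and the names and offsets are just illustrative):

        // GLSL side: one bindable array, with friendly names #defined onto elements.
        #extension GL_EXT_bindable_uniform : enable
        bindable uniform vec4 params[128];
        #define fogColor params[4]
        #define lightDir params[5]

        /* App side: updating "fogColor" is just a write at its known offset
           (queried once via glGetUniformOffsetEXT after linking). */
        GLfloat fog[4] = { 0.5f, 0.6f, 0.7f, 1.0f };
        glBindBuffer(GL_UNIFORM_BUFFER_EXT, paramsBuffer);
        glBufferSubData(GL_UNIFORM_BUFFER_EXT, fogColorOffset, sizeof(fog), fog);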

    I don't see the hand-off getting much more efficient than that between the app and the driver/hardware.
    Cass Everitt -- cass@xyzw.us

  3. #3
    Advanced Member Frequent Contributor Mars_999 (Sioux Falls, SD, USA; joined Mar 2001; 519 posts)

    Re: Talk about your applications.

    I will post here later, when I get a free day. Busy working...

  4. #4
    Senior Member OpenGL Pro (Prombaatu; joined Sep 2004; 1,386 posts)

    Re: Talk about your applications.

    Speaking of bindables...

    Is there any hope for a user-specified packing when the extension is made core?

    Failing that, I like your solution, Cass.

  5. #5
    Member Regular Contributor (joined Aug 2003; 261 posts)

    Re: Talk about your applications.

    Dammit, I need to finish grad school so I can get an actual job programming in OpenGL. It's not fair to listen to all you guys talk about the bad@ss work you get to do :-(

  6. #6
    Member Regular Contributor (Irvine CA; joined Apr 2006; 299 posts)

    Re: Talk about your applications.

    Don't forget, when you post in the thread, you should mention what apps you are working on (or maybe would like to work on).

  7. #7
    Senior Member OpenGL Pro Ilian Dinev (Watford, UK; joined Jan 2008; 1,290 posts)

    Re: Talk about your applications.

    I'm working on a 3D modeler aimed at games/realtime visualization (unlike how gamedevs always have to make conversion/preview tools for their artists). Drawing is all shader-based, using precompiled Cg or GLSL 1.2 (the cgc.exe way), shader model 4. The artist creates/edits the shaders inside the modeler, and there is application-side C++ code for each shader (it takes care of setting uniforms, e.g. a GetTickCount() & 1023 value controlling a UV-animated surface). The C++ code has a call-table of entries into the rendering engine, and is recompiled and linked with GCC (or whatever) into a DLL for use in the modeler, then later simply included in a game project.
    - Semantics (defining which resource a uniform/attrib/varying uses) are a must.
    - Uploading a whole range of uniforms at once is a must.
    - Bindable uniforms are increasingly useful.
    - Texture arrays are used.
    - gl_ModelView* and other predefined uniforms are avoided.
    - Instancing via an instance-specific vertex attribute value is used (see the sketch below).
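
    A rough sketch of how that instance-attribute setup can look, assuming ARB_instanced_arrays and ARB_draw_instanced (attribute index 4 and the buffer names are made up):

        /* One vec4 of per-instance data, advanced once per instance. */
        glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
        glEnableVertexAttribArray(4);
        glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, 0, 0);
        glVertexAttribDivisorARB(4, 1);

        glDrawArraysInstancedARB(GL_TRIANGLES, 0, vertexCount, instanceCount);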

    I don't like GLSL 1.3's in/out stuff; "varying" wasn't broken. (Well, I can always make preprocessor macros to define how the declarations look.) I hope ATI lets us use ASM shaders for their upcoming GS support; I don't like GLSL 1.3 in its current state (and I hate letting compilers/linkers decide where I should be sending data).
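
    For example, a macro shim along these lines (macro names made up here) keeps the same shader source compiling under both versions:

        #if __VERSION__ >= 130
            #define VARY_VS_OUT out
            #define VARY_FS_IN  in
        #else
            #define VARY_VS_OUT varying
            #define VARY_FS_IN  varying
        #endif

        VARY_VS_OUT vec2 texCoord;   // vertex shader output; "varying" pre-1.30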

    VAO would be really nice to have.


    P.S. That modeler will be free or open source. Blender sucks, and all worthy modelers have too much useless-for-RT fluff and cost an arm and a leg for it.

  8. #8
    Junior Member Regular Contributor (Adelaide, South Australia; joined Aug 2007; 206 posts)

    Re: Talk about your applications.

    I'm currently working on a game which makes extensive use of PhysX for animation, with lots of soft bodies and cloth, in a very large outdoor world.
    The MAP_UNSYNCHRONIZED_BIT has been very handy for streaming the results of the PhysX simulation into a VBO circular buffer; however, our implementation using GL_NV_fence is more efficient and requires a smaller VBO than the one without, so:

    #1. Promote GL_NV_fence to core.
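
    For reference, a rough sketch of that fenced ring-buffer pattern (region bookkeeping and sizes are illustrative; assumes GL_NV_fence plus ARB_map_buffer_range, with fences[] created via glGenFencesNV at init):

        /* Before reusing a region, make sure the GPU has finished reading it. */
        if (!glTestFenceNV(fences[region]))
            glFinishFenceNV(fences[region]);   /* only blocks if we wrapped too fast */

        glBindBuffer(GL_ARRAY_BUFFER, ringVBO);
        void* dst = glMapBufferRange(GL_ARRAY_BUFFER,
                                     region * REGION_SIZE, REGION_SIZE,
                                     GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
        memcpy(dst, physxResults, REGION_SIZE);   /* this frame's simulation output */
        glUnmapBuffer(GL_ARRAY_BUFFER);

        /* ... draw calls sourcing this region go here ... */
        glSetFenceNV(fences[region], GL_ALL_COMPLETED_NV);
        region = (region + 1) % NUM_REGIONS;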

    But what would be even better (when NVIDIA adds GPU processing to PhysX) would be to avoid the GPU-CPU-GPU copy and have PhysX store its output in an OpenCL-style buffer on the GPU that can then be used directly as input to the OpenGL tessellator.

    #2. Shareable PhysX/OpenCL/OpenGL buffers.
    #3. Tessellation!!! (with distance-related silhouette edge LOD)

    I agree with you on:
    #4. Compiled shader caching

    The biggest single problem I have at the moment is the lack of control over the rendering of secondary tasks like shadow maps or environment cube maps.
    I can always complete the rendering to the frame buffer before the VSync; however, when I add the shadow map and cube map rendering it occasionally skips a frame.
    This is visually jarring, whereas updating the shadows or reflections every 2nd or 3rd frame is hardly noticeable.
    So what I need is a second rendering stream (preferably in a separate thread/context) which acts like a lower-priority background task and renders to an FBO.
    The GPU would render my main stream until it reaches the SwapBuffers, at which point it switches to rendering the shadow and cube maps; then at the next VSync it swaps the buffers and continues with the main command stream.
    Hence the main rendering occurs every frame, and the shadows & reflections are updated as often as possible.

    #5. Prioritised render contexts.
    #6. A query of whether the last swapbuffers had to skip a frame.

  9. #9
    Member Regular Contributor CrazyButcher (Germany; joined Jan 2004; 401 posts)

    Re: Talk about your applications.

    Primarily I work on a versatile 3D engine for games & applications, focused on rapid tooling/prototyping for games & serious work; hence luxinia mostly uses Lua (the core engine is written in C). So far it's Windows-only, although Linux & Mac are intended as well.

    For the game use of the engine the focus is mostly on "mass use", where OpenGL clearly isn't a winner, considering how problematic drivers are and how certain features (draw_instanced, half-float vertex formats, ...) arrive in GL only on new hardware, while DX has exposed them on older hardware for longer.

    I'm mostly missing the parameter uploads & precompilation as well. I wish ATI had at least gone to SM3 for the ARB programs like NV did, and that Intel had PBO support... I don't like GLSL, hence I use Cg. Another wish would be IHVs submitting to "glview" and similar open databases, so we can see what limits/extensions are supported on which hardware.
    I second the NV_fence-to-core suggestion. Something that allows non-busy waiting would be nice, although I am just beginning to explore the streaming stuff.

    For research I work on virtual endoscopy (see you at IEEE Vis) and other medical rendering stuff (PhD). I want to look further into CUDA for the raycasting, but so far I am fine with Cg & FBO. For rendering of vessel trees I will look into new ways as well (thanks for the Reyes pipeline hint, Mr. Farrar). The goal is that rendering a single effect doesn't saturate the high-end cards, but leaves room for implementation within a greater app (daily clinic routine vs. "academic visionary"). I can pretty much ignore non-NVIDIA hardware here, so I am glad that up-to-date solutions for most features exist.

    In short:
    # "env" parameters for GLSL (in general GLSL seemed like a step back from certain low-level features of the ARB programs)
    # precompilation
    # drivers also exposing appropriate functions for "older" generations, as they do under DX9 (wishful thinking)
    # NV_fence
    # ecosystem wish: some sort of "official" glview / former-delphi3d.net database, which IHVs commit the latest driver limits to, so that it's much more complete than the user-based submissions.

  10. #10
    Junior Member Regular Contributor Heiko (the Netherlands; joined Aug 2008; 170 posts)

    Re: Talk about your applications.

    First of all, I am still a beginner when it comes to programming OpenGL, so you might ignore this post if you think it is rubbish.

    Applications I work on:
    - scientific: fluid simulations and other physics/AI calculations that can be implemented on the GPU.

    - games: for the moment this is purely a hobby, but depending on how development goes it might become a job.

    Currently I am starting a new project for a game, basically from scratch. I have been looking into depth peeling, and I think it would be great to be able to check fragments against multiple depth buffers, so that the second-closest fragment can easily be found. I am not sure whether this is a hardware limitation or an API limitation, but it would be great if one were able to write/check multiple (not necessarily limited to two) depth buffers. As I said, I'm still a beginner in this field, so there might already be very reasonable alternatives; so far I haven't found them.
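
    The closest I've gotten with a single extra buffer is the classic depth-peeling pass: bind the previous pass's depth as a texture and discard anything at or in front of it, so pass N keeps exactly the Nth-closest fragments. A rough fragment-shader sketch (uniform names are made up):

        uniform sampler2D prevDepth;    // depth texture captured in the previous pass
        uniform vec2 viewportSize;

        void main()
        {
            vec2 uv = gl_FragCoord.xy / viewportSize;
            if (gl_FragCoord.z <= texture2D(prevDepth, uv).r + 1e-6)
                discard;                 // already captured by an earlier layer
            gl_FragColor = vec4(1.0);    // real shading would go here
        }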

    A second wish, which probably doesn't even have to be mentioned: put geometry shaders in the core, so we can rely on them being implemented in vendors' drivers.
