Thread: OpenGL Next via OpenCL

  1. #1
    Advanced Member Frequent Contributor cass's Avatar
    Join Date
    Feb 2000
    Location
    Austin, TX, USA
    Posts
    913

    OpenGL Next via OpenCL

    One of the reasons I was ok with what GL3 eventually became was that OpenGL did not need the distraction of a major refactoring of the existing graphics abstraction on the cusp of that abstraction becoming obsolete.

    I'll go out on a limb and say that we're well past the point of diminishing returns in trying to make the existing OpenGL hardware abstraction support first-class tessellation, order-independent transparency, global illumination, micropolygon rendering, and virtualized texturing. Even adding seemingly simple things like texture arrays and geometry shaders takes years.

    CUDA, OpenCL, and DirectX Compute all illustrate that the GPU is really coming into its own as a general purpose computing device. What's being mostly ignored is that graphics should be the killer app for the compute mode of these devices.

    CUDA and OpenCL essentially ignore graphics, OpenGL ignores compute, and DirectX Compute tries to bury a "general purpose" mode into the existing graphics abstraction. None of these seem to be a good fit for taking graphics forward in new and interesting ways on modern GPUs.

    The only effort right now that may be sniffing in the right direction is the Larrabee Native Interface, which changes the focus to a general compute device that can function efficiently as a GPU. But obviously this interface will not be an open standard, making it a non-starter for most developers.

    I propose that the most forward-looking and interesting direction for OpenGL Next is to define its function and implementation fully in terms of OpenCL.

    Said another way, if OpenGL Next cannot be efficiently implemented atop OpenCL, then I think Khronos will have missed a golden opportunity to set the right direction for open, portable graphics in the age of the GPGPU.
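
    To make the proposal concrete, here is a rough sketch - entirely hypothetical, with every kernel and buffer name mine rather than a proposed API - of what one stage of a pipeline expressed directly in OpenCL C might look like: the vertex transform as an ordinary kernel over shared buffers.

    Code:
    // Hypothetical sketch: a "vertex stage" as a plain OpenCL C kernel.
    // Names (xform_vertices, mvp) are illustrative, not a proposed API.
    __kernel void xform_vertices(__global const float4 *pos_in,
                                 __global float4 *pos_out,
                                 __constant float4 *mvp) // 4 rows of a 4x4 matrix
    {
        size_t i = get_global_id(0);
        float4 v = pos_in[i];
        // One dot product per component gives the clip-space position.
        pos_out[i] = (float4)(dot(mvp[0], v), dot(mvp[1], v),
                              dot(mvp[2], v), dot(mvp[3], v));
    }

    Rasterization, blending, and the rest would be further kernels (vendor-tuned where it matters) over the same buffers - the point being that the whole pipeline lives in one compute abstraction rather than in interop glue between two.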



    Cass Everitt -- cass@xyzw.us

  2. #2
    Intern Newbie
    Join Date
    Oct 2007
    Location
    Toronto, Canada
    Posts
    40

    Re: OpenGL Next via OpenCL

    I'd like to see this too; it's only a matter of time before the "wheel of reincarnation" comes full circle and the work done by specialized graphics hardware is once more folded back into the CPU.

    I imagine it's probably a number of years before this approach would be competitive enough to warrant writing commercial applications with such an API, but I'd certainly try using it for a personal project or two.
    The details are trivial and useless;
    The reasons, as always, purely human ones.

  3. #3
    Senior Member OpenGL Pro
    Join Date
    Sep 2004
    Location
    Prombaatu
    Posts
    1,386

    Re: OpenGL Next via OpenCL

    From the scant preliminary outline given in a SIGGRAPH PDF, I gather that GL and CL actually share resources, as the design is aimed at making interoperation between GL and CL efficient. It looks as though we end up with two different languages (GLSL and CL's C99+extensions), but the upshot is we get an offline compiler. The feeling here is definitely more GPGPU than graphics.

    Browsing the DX11 presentations from Gamefest, I was struck by the new features in HLSL, interfaces and classes in particular - they look a lot like Cg ("subroutines", the dynamic linking function thing). And aside from the now-familiar song and dance on tessellation, I was wowed by the new read-write buffers/textures, the so-called "unordered resource views" (then I had a good chuckle over the fact that Effects are back in D3DX). It looks like the compute shader in DX is an offshoot of the pixel shader, initially lending itself to graphics post-processing tasks.

    With that, I admit I was fit to be tied with the next wave of technology around the corner, but you're absolutely right in the long view - that's exactly where it seems to be going. Seems like a great opportunity to get a jump start on an appealing inevitability.

  4. #4
    Junior Member Regular Contributor
    Join Date
    Aug 2007
    Location
    USA
    Posts
    243

    Re: OpenGL Next via OpenCL

    I didn't realize the presentations were up already - nice!

    It looks like one can use the compute shader separately from the graphics pipeline (Dispatch()), but the pipeline has been updated to emit more general structures for use with the compute shader.

    EDIT: RWTexture2D !!! I am excited.
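
    For contrast, the OpenCL 1.0 story here is weaker: a kernel can declare an image read-only or write-only, but not both, so there is no direct RWTexture2D equivalent. A minimal write-only sketch (the kernel name and the gradient fill are just mine for illustration):

    Code:
    // Minimal OpenCL C sketch of writing a texture from a kernel.
    // OpenCL 1.0 images are read-only OR write-only per kernel -
    // there is no read-write analogue of DX11's RWTexture2D.
    __kernel void fill_gradient(__write_only image2d_t dst)
    {
        int2 xy = (int2)(get_global_id(0), get_global_id(1));
        float2 dim = (float2)(get_image_width(dst), get_image_height(dst));
        write_imagef(dst, xy, (float4)(xy.x / dim.x, xy.y / dim.y, 0.0f, 1.0f));
    }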

    Anywho, back on topic now...

  5. #5
    Advanced Member Frequent Contributor cass's Avatar
    Join Date
    Feb 2000
    Location
    Austin, TX, USA
    Posts
    913

    Re: OpenGL Next via OpenCL


    And in case it's not clear, I think this direction for OpenGL Next shouldn't make any effort to be backward compatible. People have certain expectations in naming though, so perhaps a name like OpenCL-G (the graphics library for OpenCL) would avoid compatibility battles from the get-go.

    Ultimately OpenCL-G should be the very modern, clean interface that LP proponents were craving, but the basis for that clean interface should come from a GPGPU abstraction, not a DX9-class-GPU one.

    The reason I think this direction is important is that graphics experts seem to be ignoring this new upstart "compute mode" when their primary focus should be on making it the Right Way to do graphics.

    Today, Compute is not a superset of Graphics; Compute is what needs to change to address that. Graphics experts, with the goal of making OpenCL-G, need to be a major driving force in OpenCL.

    It worries me that Intel seems to be the only company openly advocating this direction. Where are the other Khronos members?
    Do they have a different vision, and if so, what is it?
    Cass Everitt -- cass@xyzw.us

  6. #6
    Junior Member Regular Contributor
    Join Date
    Jul 2007
    Location
    Alexandria, VA
    Posts
    211

    Re: OpenGL Next via OpenCL

    There seems to be much going on at id concerning this direction. For others that haven't seen it: Possibly relevant link.

    This discussion, in my mind, emphasizes how the ARB missed the boat when the GL 2.0 rewrite was being considered many years ago. Now, given the lack of LP, the ideal API will have to wait until a theoretical OpenCL-G comes about. However, the potential advantages of that approach are many.

  7. #7
    Member Regular Contributor
    Join Date
    Apr 2006
    Location
    Irvine CA
    Posts
    299

    Re: OpenGL Next via OpenCL

    You should understand that there is pretty significant overlap in company and individual staff participation between the CL and GL working groups. There's no "Chinese wall" between the two that I can perceive.

    It will probably be more interesting to talk about how CL could stand alone as a rendering facility once there are some 1.0 implementations actually out there to play with, so people can see what it can or can't do at that point. I would expect to continue to see GL evolve and track hardware (and interoperate with CL, very important).

  8. #8
    Advanced Member Frequent Contributor cass's Avatar
    Join Date
    Feb 2000
    Location
    Austin, TX, USA
    Posts
    913

    Re: OpenGL Next via OpenCL


    Hi Rob,

    I do understand that. However, observe that CUDA comes from the company with the best OpenGL implementation available, and it's clearly not a viable replacement for OpenGL. So overlap alone isn't sufficient for expecting good things to "just happen". It also must be a goal, and it's not a trivial goal that will just fall out of the process by accident.

    Deferring a major "killer app" for CL until post-1.0 would be a tragic missed opportunity. Wait until 1.0 comes out (when's that?), with solid implementations (when's that?), and people have experience and recommendations? Sounds like a recipe for years of delay. It may just mean that Khronos cannot innovate; that they are really best at standardizing innovation from NVIDIA, Intel, etc. That's not exactly a criticism - I understand it's hard for innovators to give up their first-mover advantage or to telegraph their strategies.

    CL/GL interop is not a bad idea if you want two completely separate things that will remain separate (like CL and GL3 and below). CL and CL-G interop is built in by definition.


    Thanks -
    Cass
    Cass Everitt -- cass@xyzw.us

  9. #9
    Junior Member Regular Contributor
    Join Date
    Jul 2007
    Location
    Alexandria, VA
    Posts
    211

    Re: OpenGL Next via OpenCL

    An OpenCL-G would be an analog to what a Larrabee API would provide, right? I couldn't find the information about the OpenCL working group, but is Intel even on it? It seems we might be in for a whole bunch of API choices in the future for compute/graphics.

  10. #10
    Junior Member Regular Contributor
    Join Date
    Oct 2007
    Location
    Madison, WI
    Posts
    163

    Re: OpenGL Next via OpenCL

    Quote Originally Posted by cass
    CUDA and OpenCL essentially ignore graphics, OpenGL ignores compute, and DirectX Compute tries to bury a "general purpose" mode into the existing graphics abstraction. None of these seem to be a good fit for taking graphics forward in new and interesting ways on modern GPUs.
    Arguably, the current hardware has some limitations which make a fully general-purpose GPU API a complex problem and which require the current graphics-specific interfaces for efficiency.

    (1.) The lack of the ability to start new tasks on the GPU without CPU involvement. Sure, it might be possible to use a shader program to modify/write GPU-side command buffers, but one would have to abstract the common functionality of all GPUs into an API which can be called from within a shader. Some sick person (like myself) might actually enjoy doing this.

    (2.) IMO a primary limitation of CUDA is the lack of an efficient way to do bandwidth-efficient general scatter of small values (with cacheable locality), and atomic operations on small scatter/gather values. This is basically what the PTX .surf (surface cache) seems to have been designed for - functionality which either hasn't been exposed in CUDA or perhaps isn't yet present in the hardware.

    Of course the GPU has this type of functionality exposed via the ROP/OM. To my knowledge there is no way to emulate Z-buffer-like functionality efficiently in CUDA. However, one can currently make general scatter efficient as long as you scatter in multiples of half-warp-sized objects (min 64 bytes/object).

    Seems like most current CUDA parallel solutions involve bandwidth-bound scans/sorts of the entire data set, scatter via gather, or just bandwidth-wasting scatter. This isn't a good solution for the tremendous size of graphics data sets. Take the simple case of trying to write out 2M z-buffered points per frame into a frame buffer - something which doesn't need any special raster hardware and is trivial and not bandwidth bound in DX/GL, but is a nightmare to do in CUDA (one conceivable compute-side workaround is sketched below).
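
    That workaround - only a sketch; it assumes the int32 extended-atomics extension, and every point still costs a global atomic - is to pack depth and color into a single 32-bit word so that an atomic min performs the depth test:

    Code:
    // Hypothetical z-buffered point scatter in OpenCL C: 24-bit depth in
    // the high bits, 8-bit color index in the low bits, so atom_min()
    // keeps the nearest fragment. Requires extended int32 atomics.
    #pragma OPENCL EXTENSION cl_khr_global_int32_extended_atomics : enable

    __kernel void scatter_points(__global const float4 *pts, // x, y, depth, color
                                 __global uint *fb,          // packed framebuffer
                                 uint width, uint height)
    {
        float4 p = pts[get_global_id(0)];
        uint x = (uint)p.x, y = (uint)p.y;
        if (x >= width || y >= height) return;
        uint depth24 = (uint)(clamp(p.z, 0.0f, 1.0f) * 16777215.0f);
        uint packed  = (depth24 << 8) | ((uint)p.w & 0xFFu);
        atom_min(&fb[y * width + x], packed); // one global atomic per point
    }

    Even then the framebuffer must be cleared to 0xFFFFFFFF each frame, and the atomic traffic is exactly the kind of bandwidth waste I'm complaining about.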

    I still find the graphics APIs better for GPGPU, even with the triangle-setup-bound issues: often I use the vertex shader for all computation (which bypasses the lack of ALU efficiency for pixel primitives in the fragment shader), followed by the fragment shader just to scatter the results, with the depth test used to gather the maximum or minimum result in case of a scatter collision.

    So while I would really like a fully general-purpose GPU API for current hardware, how exactly would one currently do that without losing the efficiencies of both graphics and compute?
