Originally Posted by k_szczech:
Isn't discussing framebuffer objects here a little off topic?

That depends on the context. The question of the place of OpenCL and CUDA in OpenGL applications seems on topic to me. The question being probed can be stated from several angles, but one way is this: should OpenGL gain a few minor capabilities to enable GLSL shaders to perform even more "general processing" than they do already? The GPU hardware clearly supports these extra capabilities, since OpenCL/CUDA take advantage of them.
I may be mistaken about the exact limits of what OpenGL/GLSL can do today for general processing (writing to buffers), but whether OpenGL *should* provide those capabilities is a separate question. My opinion is YES: OpenGL/GLSL *should* provide direct access to memory objects (IBOs, VBOs, textures, and framebuffers/renderbuffers: color buffer, depth buffer, back buffer, stencil buffer). Currently OpenGL/GLSL provides access to those entities, but not always generally addressable access, and not always writable access.
Essentially, the question is this: if the next release of OpenGL/GLSL provided object-relative addressing and write access to the memory objects listed above, could shader programmers write a wide variety of useful, common, valuable algorithms in GLSL that currently require adopting an entirely new set of tools (CUDA, OpenCL, or others)? My answer is YES, definitely.
Now, I do see the possibility (probability, even certainty) that certain kinds of algorithms and processes would be impractical to implement this way unless GLSL provided some kind of mechanism to coordinate "threads" / GPU processors (to limit which work gets performed by which GPU processors). While I'm not certain where the sweet spot is, I'm confident one exists. What I mean is: we don't want to add features to GLSL that are required for (and therefore complicate) simple, conventional vertex processing and fragment processing, but why not provide mechanisms to support general processing within the context of GLSL shaders?
Or to ask it another way: why not provide the most basic capabilities of OpenCL/CUDA to GLSL shaders? Why require whole extra sets of APIs, tools, languages, compilers, and debuggers when the current ones will do (in many cases)? I know I prefer to keep my life simple when possible. I'd rather write everything in GLSL if I could. After all, both are multiprocessing on the exact same GPUs, so why not?
In reply to Korval: yes, sloppy language. If we can read and write data from/to a texture buffer, color buffer, depth buffer, or stencil buffer (attached to an FBO or otherwise), who cares which?
Well, I guess the answer is: sometimes we might want to write a special GPU shader program that procedurally generates a texture pattern (or modifies one, say with noise), procedurally generates an object (indices and vertices into an IBO/VBO), procedurally generates or modifies depth information, or simply generates and modifies a whole batch of matrices and vectors to perform some intermediate computation (say, physics computations that then lead to VBO vertices being altered).
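To make the first of those cases concrete, here is a minimal sketch of a fragment shader that procedurally generates a pattern into whatever color buffer the current FBO has bound. The checker pattern and the u_scale uniform are purely illustrative; any procedural function would do:

```glsl
// Fragment shader sketch (GLSL 1.20-era syntax): render a fullscreen quad
// with this shader while an FBO with a color texture attachment is bound,
// and the pattern lands in that texture. "u_scale" is an illustrative uniform.
uniform float u_scale;

void main()
{
    // gl_FragCoord gives the window-space pixel position;
    // derive a checkerboard cell index from it
    float cx = floor(gl_FragCoord.x / u_scale);
    float cy = floor(gl_FragCoord.y / u_scale);
    float checker = mod(cx + cy, 2.0);
    gl_FragColor = vec4(checker, checker, checker, 1.0);
}
```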
I've been working with the OpenGL API for about a year and a half while simultaneously working with the DX SDK trying to make up my mind about some future (currently present) decisions.
Quite honestly, I was looking forward to an object model for OpenGL 3.0 since I find most of the tasks in OpenGL quite tedious. I have had a love/hate relationship with OpenGL's extension mechanisms and we've now broken up completely.
Any future development of anything graphics-related will be in Direct3D 10, since the release of OpenGL 3.0 has simply proven to me that I can't trust Khronos: its decisions ultimately turn into broken promises.
Also, a current SDK that I can access from my hard drive is really important to me, since OpenGL issues, tutorials, and documentation seem to come only from scattered online sources.
Another thing would be the prospect of contracting future OpenGL programmers, who, as far as I can tell, are a limited bunch compared to the Direct3D user base.
In addition, getting capital for a new project might become a bit screwy when you declare: "We want this app to be cross-platform compatible, but only one IHV out of a dozen IHVs and ISVs supports the complete implementation of our chosen graphics API. This is going to set back development for at least six months, gimme your money now." I'd rather go with a (dare I say) currently conventional and strictly implemented API instead, thank you.
Yes, D3D 10's interface is not very clean but I can live with that. Hell, the WINAPI is a nasty bundle of garbage strung together with lint from a hobo's pants and navel, but I manage. At least I'll get somewhere without having to wait forever until a nice API comes along. Until then, D3D it is.
In reply to Eddy Luten: that is the most wonderful expression of my feelings for Win32 I have ever read.
Of course, low-level X programming is just as bad as Win32 programming, if not worse. But at least the examples for X don't use Hungarian notation.
The unofficial community-led OpenGL SDK is in development! http://glsdk.sourceforge.net
For the FBO idea, I am talking about the "Ping Pong Method" versus just writing to the FBO to update its data. Am I thinking about this correctly?

Originally Posted by Korval:
And as I keep pointing out, FBOs have no data. They are a collection of references to textures and renderbuffers. So if you want to write to a texture or renderbuffer that an FBO may use, fine. But you can't write anything to an FBO, because there's nothing to write to.
In reply to Korval: yes, that is what I meant! Thank you for clearing that up.
I've decided to learn D3D10 and use it for future personal applications. It's never a bad idea to know both. So far, I have had a pleasant experience with the API. I will continue using OpenGL for a couple of projects, and I will continue to check these great forums.
I'm sticking to GL for personal commercial projects, while at work I use DX on PC (not my choice) and the console specific APIs.
With GL3 I see no reason to use DX unless the ATI drivers don't show up. I also prefer GL3's API, especially with MapBufferRange()...
Timothy Farrar :: http://farrarfocus.blogspot.com