Hi all,
For months I’ve been looking for a way to combine the speed of GPU processing with the ability to run arbitrary user code at runtime (otherwise known as scripting, dynamic code generation, or reflection, which is the term I’ll use from now on). CUDA and OpenCL are great languages, but unfortunately they don’t allow the kind of reflection I need. On the other hand, languages like Lua or C# do allow reflection, but alas, they don’t support the GPU yet.
Then I came across a program called ‘Fragmentarium’, which I later found out does use the GPU and allows reflection. It also happens to use this thing I’d barely heard of called GLSL (yes, I think that might have some relevance here!) - and a quick peek at Wikipedia confirms the situation:
“GLSL shaders themselves are simply a set of strings that are passed to the hardware vendor’s driver for compilation from within an application using the OpenGL API’s entry points. Shaders can be created on the fly from within an application, or read-in as text files, but must be sent to the driver in the form of a string.”
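To make that concrete, here is a rough sketch of what one of those strings might contain - a minimal fragment shader with a marked spot where user-supplied code could be spliced in before the string is handed to the driver (the uniform name is just my placeholder):

```glsl
// A complete (if not very useful) fragment shader, held by the host
// program as an ordinary string and compiled at runtime through the
// OpenGL API (glShaderSource / glCompileShader).
uniform vec2 resolution;   // viewport size, supplied by the host

// ---- user-supplied code could be spliced in here as text ----

void main() {
    // Normalised pixel coordinate; the shader runs once per pixel.
    vec2 uv = gl_FragCoord.xy / resolution;
    gl_FragColor = vec4(uv, 0.0, 1.0);
}
```

Since the shader only ever exists as text until the driver compiles it, rebuilding it with different user code is just string concatenation plus a recompile call - which sounds like exactly the kind of reflection I’m after.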
I want to program a raytracer that accepts custom user 3D functions (and even custom user renderers/raytracers) at runtime. The functions could be as simple as “x^2+y^2+z^2 < 1” to create a sphere, or something more sophisticated such as the Mandelbulb function (still only 10 lines of code or so). I would also like the larger renderer code (under 1000 lines) to use the GPU for fast rendering and for building the scene. It’s all software-based rendering - I don’t want to use the GPU’s built-in 3D rasterization capabilities.
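For instance, the sphere test above translates almost verbatim into a GLSL function, and even the Mandelbulb fits in roughly ten lines - here is a sketch of a common power-8 distance-estimate formulation (untested on my end, just to show the flavour of code I mean):

```glsl
// "x^2 + y^2 + z^2 < 1": dot(p, p) computes x*x + y*y + z*z.
bool insideSphere(vec3 p) {
    return dot(p, p) < 1.0;
}

// Sketch of a power-8 Mandelbulb distance estimate, showing that
// loops, branches and local variables are all available.
float mandelbulbDE(vec3 c) {
    vec3 z = c;
    float dr = 1.0;
    float r  = 0.0;
    for (int i = 0; i < 8; i++) {
        r = length(z);
        if (r > 2.0) break;
        float theta = acos(z.z / r) * 8.0;
        float phi   = atan(z.y, z.x) * 8.0;
        dr = pow(r, 7.0) * 8.0 * dr + 1.0;
        z  = pow(r, 8.0) * vec3(sin(theta) * cos(phi),
                                sin(theta) * sin(phi),
                                cos(theta)) + c;
    }
    return 0.5 * log(r) * r / dr;
}
```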
Well, after asking around, no one had ever suggested GLSL as a solution before, but it seems very interesting. What I am most interested in is hearing the disadvantages compared to coding in, say, CUDA, as I hear GLSL allows fairly general programming constructs (while loops, for loops, local variables etc.). For example, can I create a raytracer using GLSL and have it run as fast as CUDA would allow? Specifically, I make use of CUDA’s 2D spatial memory locality for my semi-random accesses (so that areas near a given pixel of a 2D picture are cached), and also its so-called “shared memory”, which lets the programmer pick a lucky set of data that gets especially fast cached access (limited to somewhere in the 16-64 KB range). Using those, I gain a 15x speedup over a pure CPU implementation. As I reckon some of you may know, both of these CUDA features exist partly to remedy the cost of reaching out to the GPU’s large device/global memory, which can be prohibitive.
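To illustrate the access pattern I mean in GLSL terms: as far as I can tell, the closest thing to CUDA’s cached 2D reads would be texture fetches, something like the sketch below (the sampler name and offsets are made up) - whether the texture cache actually gives comparable 2D locality is part of what I’m asking:

```glsl
uniform sampler2D srcImage;   // a 2D data set bound by the host
uniform vec2 resolution;

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;
    // Semi-random reads clustered around the current pixel - the
    // pattern that CUDA's 2D-cached memory makes cheap for me.
    vec4 a = texture2D(srcImage, uv);
    vec4 b = texture2D(srcImage, uv + vec2( 0.01, -0.02));
    vec4 c = texture2D(srcImage, uv + vec2(-0.03,  0.01));
    gl_FragColor = (a + b + c) / 3.0;
}
```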
So does GLSL support these (2D spatial locality and shared memory)?
And what are the other advantages and disadvantages of GLSL compared to coding in CUDA?