Computing Pixel Color via GLSL

Hi All,

I’ve got a program that takes a (screen sized) 2-D float array and uses a color map function to generate an RGBA pixel array. I am drawing the array via glDrawPixels, and it works fine, but I was wondering if I could move my color map function into some sort of shader program.

Essentially my goal is to move the serial computation of these colors into a shader program in the hopes of parallelizing it. To do this I suppose I’d have to pass my float array to the GPU somehow, but since I’m already pushing the same number of RGBA ints every frame, I figured I’d still see some performance increase. It seems logical to compute the color within a shader, but having looked at fragment and vertex shaders I can’t really tell whether either is what I’m looking for. I’m at a bit of a loss here, but if this post makes any sense I’d love some help.
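For reference, the serial step being described might look something like this. The actual colormap isn’t shown in the post, so this simple blue-to-red ramp (and the RGBA byte packing) is purely a stand-in:

```c
#include <stdint.h>

/* Hypothetical CPU-side colormap: maps a value assumed normalized to
 * [0, 1] to a packed 32-bit RGBA pixel. The real mapping in the
 * original program may differ; this blue-to-red ramp is illustrative. */
uint32_t colormap(float v) {
    if (v < 0.0f) v = 0.0f;          /* clamp out-of-range values */
    if (v > 1.0f) v = 1.0f;
    uint8_t r = (uint8_t)(v * 255.0f);
    uint8_t g = 0;
    uint8_t b = (uint8_t)((1.0f - v) * 255.0f);
    uint8_t a = 255;                 /* fully opaque */
    return ((uint32_t)r << 24) | ((uint32_t)g << 16)
         | ((uint32_t)b << 8)  | (uint32_t)a;
}
```

Running this once per pixel of a screen-sized array every frame is exactly the kind of loop that a fragment shader can absorb.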

Thanks,

John

I don’t know if I understood your question, but… I’ll try it :smiley:

You could generate a texture on the client side (that is, on the CPU) and pass it to the shader. The texture simply replaces your 2D float array.

void glTexImage2D(GLenum target, GLint level, GLint internalFormat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const GLvoid * data);

Watch out: glTexImage2D takes the pixel data as one flat (1D) array, so your 2D array has to be laid out contiguously, row by row. Converting between 2D and 1D shouldn’t be a problem.
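The row-major layout glTexImage2D expects can be sketched like this (names are illustrative; if your 2D array is already a contiguous `float[HEIGHT][WIDTH]`, you can pass `&array[0][0]` directly and skip the copy):

```c
#include <stddef.h>

/* Index of texel (x, y) of a width-by-height image in the flat array:
 * rows are stored one after another, so (x, y) lives at y * width + x. */
size_t texel_index(size_t x, size_t y, size_t width) {
    return y * width + x;
}

/* Copy a 2D array (given as an array of row pointers) into the flat
 * layout that glTexImage2D expects. */
void flatten_rows(float *const *rows, float *flat,
                  size_t width, size_t height) {
    for (size_t y = 0; y < height; ++y)
        for (size_t x = 0; x < width; ++x)
            flat[texel_index(x, y, width)] = rows[y][x];
}
```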

The most important step now is to set up your shader program. It must contain both a vertex shader and a fragment shader. (At minimum a shader program needs a vertex shader, but in your case you have to supply a fragment shader too, since that is where the per-pixel work happens!)
Then render a quad in orthographic projection over your whole screen (just a fullscreen quad).

Do not forget to activate your shader program before rendering your quad.

Now you can do something like this for drawing your texture to the screen:
(Vertex Shader)


#version 120

void main(void) {
    gl_TexCoord[0] = gl_MultiTexCoord0; // pass the quad's texture coordinate through
    gl_Position    = ftransform();
}

(Fragment Shader)


#version 120

uniform sampler2D m_Texture;

void main(void) {
    // gl_TexCoord is an array, so it must be indexed: gl_TexCoord[0].st
    gl_FragData[0] = texture2D(m_Texture, gl_TexCoord[0].st);
}

The texture2D function reads the texel at the given location in your texture, and that color is written to the output variable, which sets the color of your fullscreen quad at that fragment.

But if you just want to calculate pixel colors on the GPU, take a look at:

  • Compute Shaders
  • OpenCL

Both use the GPU purely for computation and run with many, many threads in parallel. They only start to pay off once there is enough work to fill the GPU — roughly a few hundred threads at minimum, depending on the hardware. I hope this is enough parallelizing for you :wink:

Hope this will help you a little bit!

Daniel

Thanks for your response, but I should have been clearer. I’m trying to optimize a physics simulation my professor showed me, in which we calculate the voltages in a 2-D field through time, getting a progression like this:

imgur.com/3z4d0jW

At each time step we calculate the voltages, convert each value (a float) into an RGBA int, and draw those ints via glDrawPixels. I’d like to move the float-to-RGBA conversion into a shader, if that’s possible, so I don’t have to spend CPU time doing it. Your edit was helpful, and I have looked into OpenCL (and will look into compute shaders). However, it seemed intuitive (even conventional) to me to do this computation within a shader program rather than on the CPU, and if my intuition is wrong I’d like to correct it early on.
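To answer the question directly: yes, the float-to-RGBA conversion is exactly the kind of per-pixel work a fragment shader is for. Upload the voltage field each timestep as a single-channel float texture and do the mapping in the shader while drawing the fullscreen quad. A sketch, assuming the values are pre-normalized to [0, 1] and using a simple blue-to-red ramp as a stand-in for the real colormap:

```glsl
#version 120

uniform sampler2D m_Voltage;  // single-channel float texture holding the field

void main(void) {
    // .r holds the voltage at this pixel (assumed normalized to [0, 1])
    float v = texture2D(m_Voltage, gl_TexCoord[0].st).r;

    // Hypothetical blue-to-red ramp; substitute your real colormap here.
    gl_FragColor = vec4(v, 0.0, 1.0 - v, 1.0);
}
```

The CPU then uploads only the raw floats per frame, and the GPU evaluates the colormap for every pixel in parallel instead of in a serial loop.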