How to combine a point fragment shader with a blur fragment shader?

Hi all,

I’m trying to display thousands of points on a 3D canvas (I’m using Processing) with a depth-of-field effect.
More specifically, I would like to use a z-buffer (depth buffering) to adjust the level of blur of a point based on its distance from the camera.

So far, I have come up with the following point shader:

pointfrag.glsl

    #ifdef GL_ES
    precision mediump float;
    precision mediump int;
    #endif

    varying vec4 vertColor;
    uniform float maxDepth;

    void main() {
      float depth = gl_FragCoord.z / gl_FragCoord.w;
      gl_FragColor = vec4(vec3(vertColor - depth/maxDepth), 1);
    }

pointvert.glsl

    uniform mat4 projection;
    uniform mat4 modelview;

    attribute vec4 position;
    attribute vec4 color;
    attribute vec2 offset;

    varying vec4 vertColor;
    varying vec4 vertTexCoord;

    void main() {
      vec4 pos = modelview * position;
      vec4 clip = projection * pos;

      gl_Position = clip + projection * vec4(offset, 0, 0);

      vertColor = color;
    }

I also have a blur shader (originally from the PostFX library):

blurfrag.glsl

    #ifdef GL_ES
    precision mediump float;
    precision mediump int;
    #endif
     
     
    #define PROCESSING_TEXTURE_SHADER
     
    uniform sampler2D texture;
     
    // The inverse of the texture dimensions along X and Y
    uniform vec2 texOffset;
     
    varying vec4 vertColor;
    varying vec4 vertTexCoord;
     
    uniform int blurSize;       
    uniform int horizontalPass; // 0 or 1 to indicate vertical or horizontal pass
    uniform float sigma;        // The sigma value for the gaussian function: higher value means more blur
                                // A good value for 9x9 is around 3 to 5
                                // A good value for 7x7 is around 2.5 to 4
                                // A good value for 5x5 is around 2 to 3.5
                                // ... play around with this based on what you need :)
     
    const float pi = 3.14159265;
     
    void main() {  
      float numBlurPixelsPerSide = float(blurSize / 2); 
     
      vec2 blurMultiplyVec = 0 < horizontalPass ? vec2(1.0, 0.0) : vec2(0.0, 1.0);
     
      // Incremental Gaussian Coefficent Calculation (See GPU Gems 3 pp. 877 - 889)
      vec3 incrementalGaussian;
      incrementalGaussian.x = 1.0 / (sqrt(2.0 * pi) * sigma);
      incrementalGaussian.y = exp(-0.5 / (sigma * sigma));
      incrementalGaussian.z = incrementalGaussian.y * incrementalGaussian.y;
     
      vec4 avgValue = vec4(0.0, 0.0, 0.0, 0.0);
      float coefficientSum = 0.0;
     
      // Take the central sample first...
      avgValue += texture2D(texture, vertTexCoord.st) * incrementalGaussian.x;
      coefficientSum += incrementalGaussian.x;
      incrementalGaussian.xy *= incrementalGaussian.yz;
     
      // Go through the remaining 8 vertical samples (4 on each side of the center)
      for (float i = 1.0; i <= numBlurPixelsPerSide; i++) { 
        avgValue += texture2D(texture, vertTexCoord.st - i * texOffset * 
                              blurMultiplyVec) * incrementalGaussian.x;         
        avgValue += texture2D(texture, vertTexCoord.st + i * texOffset * 
                              blurMultiplyVec) * incrementalGaussian.x;         
        coefficientSum += 2.0 * incrementalGaussian.x;
        incrementalGaussian.xy *= incrementalGaussian.yz;
      }
     
      gl_FragColor = (avgValue / coefficientSum);
    }

Question:

  • How can I combine the blur fragment shader with the point fragment shader?

Ideally I’d like to have a single fragment shader that computes the level of blur based on the z-coordinate of a point. Is that even possible?

Any help would be greatly appreciated.


An example sketch displaying points using the pointfrag.glsl and pointvert.glsl shaders above:

sketch.pde (PeasyCam library needed)

import peasy.*;
import peasy.org.apache.commons.math.*;
import peasy.org.apache.commons.math.geometry.*;

PeasyCam cam;
PShader pointShader;
PShape shp;
ArrayList<PVector> vectors = new ArrayList<PVector>();

void setup() {
  size(900, 900, P3D);
  frameRate(1000);
  smooth(8);
  
  cam = new PeasyCam(this, 500);
  cam.setMaximumDistance(width);
  perspective(60 * DEG_TO_RAD, width/float(height), 2, 6000);
  
  double d = cam.getDistance()*3;
  
  pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
  pointShader.set("maxDepth", (float) d);

  
  for (int i = 0; i < 5000; i++) {
    vectors.add(new PVector(random(width), random(width), random(width)));
  }
  
  shader(pointShader, POINTS);
  strokeWeight(2);
  stroke(255);
    
  shp = createShape();
  shp.beginShape(POINTS);
  shp.translate(-width/2, -width/2, -width/2);  
  for (PVector v: vectors) {
    shp.vertex(v.x, v.y, v.z);
  }
  shp.endShape();
  
}

void draw(){
  background(0);
  shape(shp, 0, 0);

  cam.rotateY(.0001);
  cam.rotateX(.00005);
  
  println(frameRate);  
} 

(see the sketch here: https://imgur.com/mf5dVRn)

Yes. Blurring points is somewhat simpler than applying a post-process depth-of-field effect to a scene.

Enable GL_PROGRAM_POINT_SIZE. In a legacy or compatibility profile context, enable GL_POINT_SPRITE (in 3+ core profile, this is automatic). Enable GL_BLEND and use glBlendFunc(GL_SRC_ALPHA,GL_ONE). Disable GL_DEPTH_TEST.

The vertex shader needs to set gl_PointSize according to the depth, so that the whole of the blur is within the bounds of the rendered point.
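
As a minimal sketch of such a point vertex shader (basePointSize and blurScale are illustrative uniforms, not part of the original sketch):

uniform mat4 projection;
uniform mat4 modelview;

attribute vec4 position;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;

uniform float basePointSize; // illustrative: on-screen size of an in-focus point, in pixels
uniform float blurScale;     // illustrative: how fast the blur radius grows with depth

void main() {
  vec4 pos = modelview * position;
  gl_Position = projection * pos + projection * vec4(offset, 0, 0);

  // Eye-space distance in front of the camera; make the sprite large enough
  // to contain the blur you intend to draw at this depth.
  float depth = -pos.z;
  gl_PointSize = basePointSize + blurScale * depth;

  vertColor = color;
}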

The fragment shader just needs to calculate the opacity based upon gl_PointCoord. E.g.


float alpha = 1.0-smoothstep(0.0, 0.5, length(gl_PointCoord-0.5));
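
Putting that into a complete (sketch-level, untested) point fragment shader:

#ifdef GL_ES
precision mediump float;
#endif

varying vec4 vertColor;

void main() {
  // gl_PointCoord runs from 0.0 to 1.0 across the point sprite; fade out
  // towards the edge so each point reads as a soft, blurred disc.
  float alpha = 1.0 - smoothstep(0.0, 0.5, length(gl_PointCoord - 0.5));
  gl_FragColor = vec4(vertColor.rgb, vertColor.a * alpha);
}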

@GClements Thank you for your reply and suggestions. Unfortunately, I’m not sure this answers my question, or maybe I’m just not understanding your answer.

The fragment shader just needs to calculate the opacity based upon gl_PointCoord. E.g.

The vertex shader needs to set gl_PointSize according to the depth

In the sketch example I posted (GIF) the points already get darker as they get farther from the camera. They get smaller as well.

I was asking for help to make a “mix” between the point fragment shader and the blur fragment shader. Here is my attempt:

pointvert.glsl (just added ‘vertTexCoord = vertTexCoord;’ in ‘main()’)

uniform mat4 projection;
uniform mat4 modelview;

attribute vec4 position;
attribute vec4 color;
attribute vec2 offset;


varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
  vec4 pos = modelview * position;
  vec4 clip = projection * pos;

  gl_Position = clip + projection * vec4(offset, 0, 0);

  vertTexCoord = vertTexCoord;

  vertColor = color;
}

pointfrag.glsl (a mix between the original pointfrag.glsl and blurfrag.glsl)


#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;
uniform float maxDepth;

uniform sampler2D texture;
uniform vec2 texOffset; 
varying vec4 vertTexCoord;
 
uniform int blurSize;       
uniform int horizontalPass; 
uniform float sigma;                           
const float pi = 3.14159265;
 
void main() {

  float numBlurPixelsPerSide = float(blurSize / 2);
 
  vec2 blurMultiplyVec = 0 < horizontalPass ? vec2(1.0, 0.0) : vec2(0.0, 1.0);
 
  vec3 incrementalGaussian;
  incrementalGaussian.x = 1.0 / (sqrt(2.0 * pi) * sigma);
  incrementalGaussian.y = exp(-0.5 / (sigma * sigma));
  incrementalGaussian.z = incrementalGaussian.y * incrementalGaussian.y;
 
  vec4 avgValue = vec4(0.0, 0.0, 0.0, 0.0);
  float coefficientSum = 0.0;

  avgValue += texture2D(texture, vertTexCoord.st) * incrementalGaussian.x;
  coefficientSum += incrementalGaussian.x;
  incrementalGaussian.xy *= incrementalGaussian.yz;

  for (float i = 1.0; i <= numBlurPixelsPerSide; i++) { 
    avgValue += texture2D(texture, vertTexCoord.st - i * texOffset * 
                          blurMultiplyVec) * incrementalGaussian.x;         
    avgValue += texture2D(texture, vertTexCoord.st + i * texOffset * 
                          blurMultiplyVec) * incrementalGaussian.x;         
    coefficientSum += 2.0 * incrementalGaussian.x;
    incrementalGaussian.xy *= incrementalGaussian.yz;
  }
 

  float depth = gl_FragCoord.z / gl_FragCoord.w;
  gl_FragColor = vec4(vec3(vertColor - depth/maxDepth), 1); // gl_FragColor from the point shader
  gl_FragColor =  (avgValue  / coefficientSum); // gl_FragColor from the blur shader


}

As you can see, I don’t know how to mix the depth information with the blur computation at the end of main():

  float depth = gl_FragCoord.z / gl_FragCoord.w;
  gl_FragColor = vec4(vec3(vertColor - depth/maxDepth), 1); // gl_FragColor from the point shader
  gl_FragColor =  (avgValue  / coefficientSum); // gl_FragColor from the blur shader

What I’d love to know:

  • Is this combination of the two fragment shaders correct?
  • How do I mix the depth information with the blur computation in main()?

I hope I’ve made myself clear. Apologies if it still doesn’t make sense.

If the only thing you’ll be blurring is a point, then, rather than actually doing the blurs on the fly (proper Gaussian blurs are quite expensive!), I’d pre-compute all the needed blur levels, save them as little textures, and simply display the appropriate texture instead of the point.
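
As a rough illustration of that idea (untested; blurredSprite is just a placeholder name), the point fragment shader would then reduce to a single texture lookup, with the application binding the pre-blurred texture that matches the point’s depth:

#ifdef GL_ES
precision mediump float;
#endif

// Placeholder sampler: one of several pre-blurred point textures,
// chosen and bound by the application according to the point's depth.
uniform sampler2D blurredSprite;

varying vec4 vertColor;

void main() {
  // Paste the pre-computed blur; no Gaussian maths per fragment.
  gl_FragColor = vertColor * texture2D(blurredSprite, gl_PointCoord);
}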

@Utumno Thanks for your suggestion. For now I really want to try to mix the 2 fragment shaders and see for myself. I’ll consider alternatives later.

I would really appreciate your help debugging the point fragment shader I wrote above.
For example, I don’t understand why no points are displayed at all when I combine the blur fragment shader with the point fragment shader. Shouldn’t it at least display blurred points?

pointvert.glsl

uniform mat4 projection;
uniform mat4 modelview;
uniform mat4 texMatrix;


attribute vec4 position;
attribute vec4 color;
attribute vec2 offset;
attribute vec2 texCoord;



varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
  vec4 pos = modelview * position;
  vec4 clip = projection * pos;

  gl_Position = clip + projection * vec4(offset, 0, 0);

  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);

  vertColor = color;
}


pointfrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;
uniform float maxDepth;

uniform sampler2D texture;
uniform vec2 texOffset; 
varying vec4 vertTexCoord;
 
uniform int blurSize;       
uniform int horizontalPass; 
uniform float sigma;                           
const float pi = 3.14159265;
 
void main() {

  float numBlurPixelsPerSide = float(blurSize / 2);
 
  vec2 blurMultiplyVec = 0 < horizontalPass ? vec2(1.0, 0.0) : vec2(0.0, 1.0);
 
  vec3 incrementalGaussian;
  incrementalGaussian.x = 1.0 / (sqrt(2.0 * pi) * sigma);
  incrementalGaussian.y = exp(-0.5 / (sigma * sigma));
  incrementalGaussian.z = incrementalGaussian.y * incrementalGaussian.y;
 
  vec4 avgValue = vec4(0.0, 0.0, 0.0, 0.0);
  float coefficientSum = 0.0;

  avgValue += texture2D(texture, vertTexCoord.st) * incrementalGaussian.x;
  coefficientSum += incrementalGaussian.x;
  incrementalGaussian.xy *= incrementalGaussian.yz;

  for (float i = 1.0; i <= numBlurPixelsPerSide; i++) { 
    avgValue += texture2D(texture, vertTexCoord.st - i * texOffset * 
                          blurMultiplyVec) * incrementalGaussian.x;         
    avgValue += texture2D(texture, vertTexCoord.st + i * texOffset * 
                          blurMultiplyVec) * incrementalGaussian.x;         
    coefficientSum += 2.0 * incrementalGaussian.x;
    incrementalGaussian.xy *= incrementalGaussian.yz;
  }


  gl_FragColor =  (avgValue  / coefficientSum)  ;


}

So when you run the above, all you see is an empty screen?

I’d try to ‘divide and conquer’ the bug by removing code. First, try commenting out everything in the fragment shader’s main() function and setting every pixel to GREEN:

void main() {
  gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}

Can you see a GREEN screen now? If yes, the bug is inside the fragment shader’s main(); if not, it’s somewhere in the vertex shader or the app…

The code is quite inefficient. For example, this

  vec3 incrementalGaussian;
  incrementalGaussian.x = 1.0 / (sqrt(2.0 * pi) * sigma);
  incrementalGaussian.y = exp(-0.5 / (sigma * sigma));
  incrementalGaussian.z = incrementalGaussian.y * incrementalGaussian.y;

can be moved to the CPU, computed only once and sent as a uniform (rather than recomputing it millions of times, once per fragment).
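
A sketch of the blur fragment shader after that change (untested; the uniform would be filled once from the sketch, e.g. with pointShader.set("incrementalGaussian", x, y, z)):

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D texture;
uniform vec2 texOffset;
uniform int blurSize;
uniform int horizontalPass;

// Computed once on the CPU:
//   x = 1.0 / (sqrt(2.0 * pi) * sigma)
//   y = exp(-0.5 / (sigma * sigma))
//   z = y * y
uniform vec3 incrementalGaussian;

varying vec4 vertTexCoord;

void main() {
  vec3 g = incrementalGaussian;
  vec2 dir = 0 < horizontalPass ? vec2(1.0, 0.0) : vec2(0.0, 1.0);

  vec4 avgValue = texture2D(texture, vertTexCoord.st) * g.x;
  float coefficientSum = g.x;
  g.xy *= g.yz;

  for (float i = 1.0; i <= float(blurSize / 2); i++) {
    avgValue += texture2D(texture, vertTexCoord.st - i * texOffset * dir) * g.x;
    avgValue += texture2D(texture, vertTexCoord.st + i * texOffset * dir) * g.x;
    coefficientSum += 2.0 * g.x;
    g.xy *= g.yz;
  }

  gl_FragColor = avgValue / coefficientSum;
}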

Actually, what I recommend is pre-computing an array of blur offsets on the CPU, in the way described here:

https://software.intel.com/en-us/blogs/2014/07/15/an-investigation-of-fast-real-time-gpu-based-image-blur-algorithms
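
For illustration only (a rough sketch of that idea, not the code from the article; KERNEL_RADIUS, weights and offsets are placeholder names), the shader side could then reduce to a plain weighted sum:

#ifdef GL_ES
precision mediump float;
#endif

// Hypothetical uniform arrays, filled once on the CPU with normalized
// Gaussian weights and the matching texel offsets.
#define KERNEL_RADIUS 4

uniform sampler2D texture;
uniform vec2 direction;                    // texOffset * (1,0) or texOffset * (0,1)
uniform float weights[KERNEL_RADIUS + 1];
uniform float offsets[KERNEL_RADIUS + 1];

varying vec4 vertTexCoord;

void main() {
  vec4 sum = texture2D(texture, vertTexCoord.st) * weights[0];
  for (int i = 1; i <= KERNEL_RADIUS; i++) {
    sum += texture2D(texture, vertTexCoord.st + offsets[i] * direction) * weights[i];
    sum += texture2D(texture, vertTexCoord.st - offsets[i] * direction) * weights[i];
  }
  gl_FragColor = sum;
}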