GLSL PointSprites different sizes

Hi,

I would like to make a particle system using GLSL, point sprites and VBOs. I know that in ordinary OpenGL there is generally no way to set each point's size differently. I thought about doing it in the vertex shader by passing the size of each particle in the normal array of the VBO (since I don't need the normals), or maybe in the color array's alpha value. Does anybody know if that's possible?

Furthermore, if I do it like that, does anybody know how to compute the attenuation of the point sprites in the vertex shader?

I currently don't have any code; I am just asking what you think of my idea. If it won't work that way I will think of something new. Any hints welcome!

glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);

In the shader you just use gl_PointSize = psize;

It’s up to you where you get psize from. From an array, or programmatically.

A lot of cool ‘fountain’ style particle shaders use one channel of the colour array for the size, for example. Others calculate it based on distance etc.

:slight_smile:
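For example, a minimal vertex shader that takes the per-point size from the colour array's alpha channel could look like this (the 64.0 scale factor is just an arbitrary illustration):

    void main()
    {
        gl_FrontColor = vec4(gl_Color.rgb, 1.0); // alpha is repurposed as the size
        gl_PointSize  = gl_Color.a * 64.0;       // map alpha (0..1) to pixels
        gl_Position   = ftransform();
    }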

Hi, well I just made a first test and it works.

@scratt: well, I want different point sizes from my VBO, so I can't simply pass the size as a uniform. I passed it along with the normal array though, and that works fine.
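For reference, the client-side part looks roughly like this (buffer names and counts are just illustrative):

    // Each particle's size is packed into the x component of its "normal";
    // y and z are unused padding.
    glBindBuffer(GL_ARRAY_BUFFER, normalVBO);
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, (void*)0);

    glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void*)0);

    glDrawArrays(GL_POINTS, 0, numParticles);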

The remaining question is: how can I compute the attenuation of the points in my vertex shader, so that points farther away from the camera become smaller?

Right now my code looks like this:

vertex shader:


void main()
{
    gl_FrontColor = gl_Color;
    gl_PointSize  = gl_Normal.x; // per-particle size passed in via the normal array
    gl_Position   = ftransform();
}


fragment:


uniform sampler2D tex;

void main(void)
{
    gl_FragColor = texture2D(tex, gl_TexCoord[0].xy);
}
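For gl_TexCoord[0] to vary across each sprite, coordinate replacement has to be enabled on the application side, e.g.:

    glEnable(GL_POINT_SPRITE);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); // generate texcoords per sprite
    glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);                // let the shader drive gl_PointSize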

vec4 camera_pos = gl_ModelViewMatrixInverse[3]; // camera position in object space

May work for you…

Other than that you could simply ape the OpenGL equation for attenuation in your shader.
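For reference, the spec's formula is (d = eye-space distance to the vertex, a/b/c = the GL_POINT_DISTANCE_ATTENUATION coefficients):

    derivedSize = pointSize * sqrt(1.0 / (a + b * d + c * d * d));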

If you don't mind using fixed function, you can just let that handle the size, ignore the gl_PointSize stuff in your shader, and do everything else in the shader as before.

Hehe, but then I can't use different point sizes for each point in my VBO. That's the only reason I did this :slight_smile:

Where do I find the fixed-function equation? And how would I mix it with the point size I pass to the shader?

Thank you very much!

The remaining question is: how can I compute the attenuation of the points in my vertex shader, so that points farther away from the camera become smaller?

In standard OpenGL that's done with the GL_EXT_point_parameters extension, like this:

float att[3] = {0.0f, 1.0f, 0.0f};
glPointParameterfEXT(GL_POINT_SIZE_MIN, 1.0f);
glPointParameterfEXT(GL_POINT_SIZE_MAX, 256.0f); // NVIDIA supports up to 8192 here.
glPointParameterfvEXT(GL_POINT_DISTANCE_ATTENUATION, att);
glEnable(GL_POINT_SPRITE);

In GLSL it looks like this (disclaimer: this is old GLSL 1.10 code), with a clamped minimum size and a fade threshold:


void main(void)
{
    vec4 eyeCoord = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * eyeCoord;

    // Distance from the eye (the origin in eye space) to the vertex.
    float dist = distance(eyeCoord, vec4(0.0, 0.0, 0.0, 1.0));

    // Fixed-function point attenuation: sqrt(1 / (a + b*d + c*d^2)).
    float att = sqrt(1.0 / (gl_Point.distanceConstantAttenuation +
                           (gl_Point.distanceLinearAttenuation +
                            gl_Point.distanceQuadraticAttenuation * dist) * dist));

    float size = clamp(gl_Point.size * att, gl_Point.sizeMin, gl_Point.sizeMax);
    gl_PointSize = max(size, gl_Point.fadeThresholdSize);

    // Below the fade threshold the point keeps its size and fades out via alpha instead.
    float fade = min(size, gl_Point.fadeThresholdSize) / gl_Point.fadeThresholdSize;
    fade = fade * fade * gl_Color.a;
    gl_FrontColor = vec4(gl_Color.rgb, fade);
}

You confused me on that bit… :slight_smile:
You said you were sending the size as a uniform.
From that I assumed you were setting it up once for a whole batch of points.
Uniforms, AFAIK, are not the “recommended” way to send dynamic per-vertex data to a shader…

Hmm, weird. Neither version does anything for me, neither the fixed-function one nor the shader one. If I use the shader version, do I still have to set these:
float att[3] = {0.0f, 1.0f, 0.0f};
glPointParameterfEXT(GL_POINT_SIZE_MIN, 1.0f);
glPointParameterfEXT(GL_POINT_SIZE_MAX, 256.0f); // NVIDIA supports up to 8192 here.
glPointParameterfvEXT(GL_POINT_DISTANCE_ATTENUATION, att);
glEnable(GL_POINT_SPRITE);

If not, what are the default values for gl_Point.distanceQuadraticAttenuation etc.? I'd rather define them directly, since I've had a lot of problems in the past accessing built-in state like that. Thank you!

Here’s a real quick and dirty piece of code from a sandbox shader I have…

    vec4 camera_pos = gl_ModelViewMatrixInverse[3];       // camera position in object space
    float dist = distance(gl_Vertex.xyz, camera_pos.xyz); // object-space distance to the camera
    float psize = (radius * 10.0) / dist;                 // 'radius' is a uniform in my sandbox shader
    gl_PointSize = psize;

Not really attenuation, but it should give you a starting point.
Otherwise take the example above, make your own uniform variables for the GL ones, and send that data in via uniforms. :slight_smile:
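A sketch of what that could look like (the uniform names here are made up; feed the values in via glUniform* from the app):

    uniform vec3  pointAtt;     // (constant, linear, quadratic)
    uniform float pointSizeMin;
    uniform float pointSizeMax;

    void main()
    {
        vec4 eyeCoord = gl_ModelViewMatrix * gl_Vertex;
        float d   = length(eyeCoord.xyz);
        float att = sqrt(1.0 / (pointAtt.x + (pointAtt.y + pointAtt.z * d) * d));
        gl_PointSize  = clamp(gl_Normal.x * att, pointSizeMin, pointSizeMax);
        gl_FrontColor = gl_Color;
        gl_Position   = gl_ProjectionMatrix * eyeCoord;
    }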

That does not really work for me, but thank you anyway!

Any other ideas for the version Relic posted?

Okay, I fixed it by using

float att[3] = {1.0f, -0.01f, -0.00001f};

as the attenuation factors, even though I don't really fully understand why it works.

These attenuation coefficients are for constant, linear, and quadratic attenuation with respect to the distance to the viewer, which is at the origin (0,0,0) in eye space in that formula. (The attenuation for positional lights works just like that, only without the sqrt().)

The default is {1.0, 0.0, 0.0}, which means no attenuation because the result is always 1.
Negative values don't make much sense there, because the point size should shrink as the distance to the viewer grows.
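For example, with {1.0, 0.0, 0.01} a point at distance 10 gets scaled by sqrt(1 / (1 + 0.01 * 100)) = sqrt(0.5) ≈ 0.71, and at distance 30 by sqrt(1 / (1 + 0.01 * 900)) = sqrt(0.1) ≈ 0.32; farther away means smaller, as intended.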

Try this setup with the shader above


    GLfloat psr[2];
    GLfloat pda[3] = { 1.0f, 0.0f, 1.5f }; // defaults are (1.0, 0.0, 0.0)

    glGetFloatv(GL_SMOOTH_POINT_SIZE_RANGE, psr);
    glPointSize(psr[1] < 32.0f ? psr[1] : 32.0f);
    // Requires OpenGL 1.4
    glPointParameterf(GL_POINT_SIZE_MIN, psr[0]);
    glPointParameterf(GL_POINT_SIZE_MAX, psr[1]);
    glPointParameterf(GL_POINT_FADE_THRESHOLD_SIZE, 3.0f);
    glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, pda);

    glEnable(GL_VERTEX_PROGRAM_POINT_SIZE_ARB);
    glEnable(GL_POINT_SPRITE);

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);

and draw some points spread across your frustum's zNear-to-zFar range.
They should get smaller from front to back and start fading (alpha blending) once their size would drop below 3.0.

If you want to vary incoming point sizes on top of that, just modulate the attenuation with that per-point size of yours.
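That is, in the shader above, replace gl_Point.size with your per-point value, e.g. (still assuming the size comes in through the normal array):

    float size = clamp(gl_Normal.x * att, gl_Point.sizeMin, gl_Point.sizeMax);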

If I use your attenuation setup, the points are all the same size and way too small. I set their size to 40, yet they all appear at the same, really small size.

How do I know what attenuation values I need for my setup?

Try fiddling with the coefficients a bit. Experimentation is key.
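For example, if you want a point to shrink to half its size at distance D with a purely quadratic setup {1.0, 0.0, c}, solve sqrt(1 / (1 + c * D * D)) = 0.5, i.e. 1 + c * D * D = 4, so c = 3 / (D * D). For D = 50 that gives c = 3 / 2500 = 0.0012.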

P.S. Nice axe, Relic :wink:
