Mipmapping on Point Sprites

Hi,

My GPU particle simulator makes extensive use of point sprites. Each particle is mapped with an 83x83 mipmapped texture. Mipmapping performs correctly for my other textures, but in this application the mipmaps blur the texture so heavily that it completely loses its original shape.

The texture is solid white, with a faded circular alpha channel. The minification filter is set to GL_LINEAR_MIPMAP_LINEAR, and the wrap mode is set to GL_CLAMP.

The following screenshot demonstrates the problem. Point sizes are determined by distance. Notice that the closer points appear mostly okay (some have a weird edge border thing which I can’t account for), but the farther ones appear completely rectangular.
From the draw shader’s fragment program (GLSL):

//object_vertex means object-space vertex; its zw components encode the sprite's rotation.
vec2 v_rot = normalize(object_vertex.zw);         //(cos, sin) of the rotation angle
vec2 l_uv = gl_PointCoord.xy - vec2(0.5);         //center the point coordinate
l_uv = vec2(l_uv.x*v_rot.x - l_uv.y*v_rot.y,      //standard 2D rotation
            l_uv.x*v_rot.y + l_uv.y*v_rot.x);
color = texture2D(tex2D_1, l_uv + vec2(0.5));

When drawn, GL_POINT_SPRITE and GL_VERTEX_PROGRAM_POINT_SIZE are enabled.

Before I post more code, is there anything anyone knows of that can cause this?

Thanks,
-G

Do things improve if you change the texture size to 64x64 or 128x128?

Marginally, if at all, but not enough. Even at 1024x1024 the farther points are boxy.

In my application the mipmaps blur the texture so heavily that it completely loses its original shape.

What happens if you turn off mipmapping? I don’t see the problem in the given picture. More distant circles will naturally look more and more like squares as they are rasterized, until the rasterized size is down to a single pixel.
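To try that, it should be enough to switch the minification filter to a non-mipmapped mode; no re-upload is needed. A minimal sketch (the texture id `tex` is an assumption standing in for your own):

```c
/* Disable mipmapped minification for the bound texture;
   GL_LINEAR samples only the base level, bypassing the mip chain. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```

If the distant sprites stay boxy with this set, the mip levels aren’t the culprit.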

Maybe if you showed a larger picture.

some have a weird edge border thing which I can’t account for

Probably your use of GL_CLAMP instead of GL_CLAMP_TO_EDGE.

Even the nearest sprites seem to have whitish corners.
How do you build your mipmaps?

Anyway, this might be a good use case for a texture border color!


glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
GLfloat color[4]={1,1,1,0};
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, color);

Probably your use of GL_CLAMP instead of GL_CLAMP_TO_EDGE.
Oops, that’s what I meant. I did have GL_CLAMP_TO_EDGE instead of GL_CLAMP.

Like so:

//After glTexImage2D(...)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_GENERATE_MIPMAP,GL_TRUE);

Thanks,

I’ve seen problems when using GL_GENERATE_MIPMAP on certain hardware, and I suspect this may be your root cause. The best filter for generating sublevels is commonly a simple box filter, and as far as I’m aware hardware mipmap generation doesn’t support one: it’s either point or linear. I suspect that even something as crude as using a power-of-two texture with the old gluBuild2DMipmaps function (which uses a box filter) might give you higher quality than what you currently have. You can also implement your own mip level generation in software.
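Generating the sublevels in software is only a few lines per level. A minimal sketch in plain C (the function name and RGBA8 layout are my own assumptions) that averages each 2x2 block, assuming even dimensions at every level — an 83x83 base would first need resizing or padding to a power of two:

```c
#include <stdint.h>

/* Downsample one RGBA8 mip level to the next with a 2x2 box filter.
   src is w x h pixels (w and h even); dst must hold (w/2)*(h/2) pixels. */
static void box_downsample_rgba8(const uint8_t *src, int w, int h, uint8_t *dst)
{
    int dw = w / 2, dh = h / 2;
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            for (int c = 0; c < 4; ++c) {
                int sum = src[((2*y    )*w + (2*x    ))*4 + c]
                        + src[((2*y    )*w + (2*x + 1))*4 + c]
                        + src[((2*y + 1)*w + (2*x    ))*4 + c]
                        + src[((2*y + 1)*w + (2*x + 1))*4 + c];
                dst[(y*dw + x)*4 + c] = (uint8_t)((sum + 2) / 4); /* rounded average */
            }
        }
    }
}
```

Each result would then be uploaded as the next level with glTexImage2D(GL_TEXTURE_2D, level, ...).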

GL_GENERATE_MIPMAP is also a very old extension, one that predates the common availability of non-power-of-two textures in hardware, and support for it may not be completely robust. If you still want hardware mipmap generation, glGenerateMipmap should be the preferred option.
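For reference, the glGenerateMipmap path (GL 3.0+, or the ARB_framebuffer_object extension) replaces the GL_GENERATE_MIPMAP texture parameter entirely. A sketch, where `tex` and `pixels` are assumed placeholders for your texture id and 83x83 RGBA image data:

```c
/* Upload the base level, then let the driver build the full chain.
   Requires a GL 3.0+ context (or ARB_framebuffer_object). */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 83, 83, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D); /* builds all levels down to 1x1 */
```

Unlike GL_GENERATE_MIPMAP, which regenerates levels as a side effect of texture uploads, this generates the chain explicitly at the point of the call.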