Geometry shader, point sprite to sphere

I’m working on transforming a set of point sprites into spheres using a geometry shader (I read that this could be a good approach).
Right now the vertex and fragment shaders are working: they make the point sprites look like spheres, but only in 2D (the texture of the sphere is always facing you). When I try to use the geometry shader below, nothing is rendered. Help!
Here’s the code:


// vertex shader		 
uniform float pointRadius;  // point size in world space
uniform float pointScale;   // scale to calculate size in pixels
uniform vec4 eyePos;
void main()
{
    vec4 wpos = vec4(gl_Vertex.xyz, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * wpos;

    // calculate window-space point size
    vec4 eyeSpacePos = gl_ModelViewMatrix * wpos;
    float dist = length(eyeSpacePos.xyz);
    gl_PointSize = pointRadius * (pointScale / dist);

    gl_TexCoord[0] = gl_MultiTexCoord0; // sprite texcoord
    gl_TexCoord[1] = eyeSpacePos;
    gl_FrontColor = gl_Color;
}


// pixel shader for rendering points as shaded spheres 
uniform float pointRadius;
uniform vec3 lightDir = vec3(0.577, 0.577, 0.577);
void main()
{
    // calculate eye-space sphere normal from texture coordinates
    vec3 N;
    N.xy = gl_TexCoord[0].xy*vec2(2.0, -2.0) + vec2(-1.0, 1.0);
    float r2 = dot(N.xy, N.xy);
    if (r2 > 1.0) discard;   // kill pixels outside circle
    N.z = sqrt(1.0-r2);

    // calculate depth
    vec4 eyeSpacePos = vec4(gl_TexCoord[1].xyz + N*pointRadius, 1.0);   // position of this pixel on sphere in eye space
    vec4 clipSpacePos = gl_ProjectionMatrix * eyeSpacePos;
    gl_FragDepth = (clipSpacePos.z / clipSpacePos.w)*0.5+0.5;

    float diffuse = max(0.0, dot(N, lightDir));

    gl_FragColor = diffuse*gl_Color;
}


// geometry shader
//#version 120 
//#extension GL_EXT_geometry_shader4 : enable
const float radius = 0.5;
varying out vec2 coord; 
void main() 
{ 
	for (int i = 0 ; i < gl_VerticesIn ; ++i ) 
	{
		gl_FrontColor = gl_FrontColorIn[i];
		gl_Position = gl_PositionIn[i];
		EmitVertex ( );
	}
	gl_FrontColor = gl_FrontColorIn[0];

	coord = vec2( -1,-1 ); 
	gl_Position = (gl_PositionIn[0] + gl_ProjectionMatrix * vec4(-radius,-radius,0,0) ); 
	EmitVertex(); 
	coord = vec2( -1,1 ); 
	gl_Position = (gl_PositionIn[0] + gl_ProjectionMatrix * vec4(-radius,radius, 0,0) ); 
	EmitVertex(); 
	coord = vec2( 1,-1 ); 
	gl_Position = (gl_PositionIn[0] + gl_ProjectionMatrix * vec4( radius,-radius, 0,0) ); 
	EmitVertex(); 
	coord = vec2( 1,1 ); 
	gl_Position = (gl_PositionIn[0] + gl_ProjectionMatrix * vec4( radius,radius, 0,0) ); 
	EmitVertex();  
	EndPrimitive();
}

I haven’t checked what the problem with the geometry shader is, but I would not suggest replacing your point-sprite-based sphere renderer with a geometry-shader-based one, because performance will suffer badly. Here are some reasons why:

  1. You’ll have a huge number of primitives to rasterize compared to the point sprite version. This, combined with the fact that you want to draw millions of these spheres, means your card can easily hit its rasterization throughput limit.

  2. Even if rasterization throughput is not what caps your performance, processing the additional triangles will degrade it anyway, since more data has to go through the pipeline.

  3. Geometry shaders do not perform well when emitting lots of vertices. This is because those vertices have to be stored first in a temporary buffer, which limits the number of parallel cores that can execute your geometry shader. This causes a severe performance drop, especially on early geometry-shader-capable NVIDIA cards, but any card will suffer heavily from it.

  4. You need quite a few cycles in the geometry shader to construct the sphere geometry, and that work is not really parallel. Instanced geometry shaders, introduced in GL 4.0, mitigate this somewhat (see the sketch below), but the main principle still applies: geometry shaders are not meant for heavy geometry amplification.
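For illustration, here is a rough sketch of what the instanced flavor looks like (all names are illustrative; it assumes a core-profile setup where the vertex shader writes the eye-space position to gl_Position, and where proj and radius are user-supplied uniforms):

// instanced geometry shader (GL 4.0) – sketch only
#version 400
layout(points, invocations = 8) in;             // 8 GS instances per point
layout(triangle_strip, max_vertices = 18) out;  // one latitude band each

uniform mat4 proj;       // projection matrix
uniform float radius;    // sphere radius
const float PI = 3.14159265;

void main()
{
    // Each invocation emits one latitude band of a coarse UV sphere,
    // so no single invocation has to buffer the whole sphere.
    float v0 = float(gl_InvocationID)     / 8.0 * PI;
    float v1 = float(gl_InvocationID + 1) / 8.0 * PI;
    for (int i = 0; i <= 8; ++i)   // 9 columns -> 18 strip vertices
    {
        float u = float(i) / 8.0 * 2.0 * PI;
        vec3 p0 = radius * vec3(sin(v0)*cos(u), cos(v0), sin(v0)*sin(u));
        vec3 p1 = radius * vec3(sin(v1)*cos(u), cos(v1), sin(v1)*sin(u));
        gl_Position = proj * (gl_in[0].gl_Position + vec4(p0, 0.0));
        EmitVertex();
        gl_Position = proj * (gl_in[0].gl_Position + vec4(p1, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}

Even spread across 8 invocations, that is still 144 vertices per sphere, which is exactly the kind of amplification geometry shaders are bad at.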

My main question is why you want to go with a geometry shader based solution when you already have your point sprite based solution up and running? The latter should be orders of magnitude faster.

(Btw, looking at your geometry shader, it barely looks like something that creates sphere geometry. Rather, it looks like a shader meant to emit point sprites or billboards, whatever you call them, except for the strange for loop at the beginning that re-emits the received vertices. It all looks odd.)
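For comparison, the billboard expansion alone would look something like this (just a sketch, assuming the program object is set up with points input, triangle-strip output and GL_GEOMETRY_VERTICES_OUT_EXT = 4). Two things to note: the #version/#extension directives must not be commented out, otherwise the shader will not even compile (which alone would explain why nothing is rendered), and your fragment shader would have to read the coord varying instead of gl_TexCoord[0], since point-sprite texcoord replacement does not happen for quads emitted by a geometry shader:

// geometry shader – billboard-only sketch
#version 120
#extension GL_EXT_geometry_shader4 : enable
const float radius = 0.5;
varying out vec2 coord;
void main()
{
    // Expand each incoming point into one screen-aligned quad.
    // The eye-space offsets are pushed through the projection matrix
    // so they can be added to the clip-space input position.
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x)
        {
            gl_FrontColor = gl_FrontColorIn[0];
            coord = vec2(x, y) * 2.0 - 1.0;
            gl_Position = gl_PositionIn[0]
                        + gl_ProjectionMatrix * vec4(coord * radius, 0.0, 0.0);
            EmitVertex();
        }
    EndPrimitive();
}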

“the texture of the sphere is always facing you”

Then you’re not mapping the texture correctly.

Pulling off a textured sphere impostor requires selecting texture coordinates based on the view direction for that fragment. This is really just a direction/sphere intersection test, which is simple math.
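A minimal sketch of that math as a fragment shader (illustrative only: it assumes the earlier stages pass down the eye-space position on the quad and the eye-space sphere center in the varyings named below, and that there is no non-uniform scale in the modelview matrix):

// fragment shader – textured sphere impostor sketch
#version 120
varying vec3 quadEyePos;    // eye-space position of this fragment on the quad
varying vec3 centerEyePos;  // eye-space center of the sphere
uniform float pointRadius;
uniform sampler2D sphereTex;
const float PI = 3.14159265;

void main()
{
    // View ray from the eye (the eye-space origin) through this fragment.
    vec3 dir = normalize(quadEyePos);

    // Ray/sphere intersection: solve |t*dir - center|^2 = radius^2 for t.
    float b = dot(dir, centerEyePos);
    float disc = b*b - dot(centerEyePos, centerEyePos)
               + pointRadius*pointRadius;
    if (disc < 0.0) discard;        // the ray misses the sphere
    float t = b - sqrt(disc);       // nearest of the two hits

    // Eye-space surface normal at the hit point.
    vec3 N = (t*dir - centerEyePos) / pointRadius;

    // Rotate the normal back out of eye space so the texture sticks to
    // the sphere instead of always facing the camera.
    vec3 objN = normalize(N * gl_NormalMatrix);

    // Standard lat-long lookup from the rotated normal; lighting and
    // gl_FragDepth can be added exactly as in your pixel shader above.
    vec2 uv = vec2(atan(objN.x, objN.z) / (2.0*PI) + 0.5,
                   acos(-objN.y) / PI);
    gl_FragColor = texture2D(sphereTex, uv);
}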

I would be more than happy to apply it correctly! :) Do you have some example code for doing this? Here’s a picture of my actual project.

M//Hax: funny how you don’t apply the advice that has been given to you multiple times.
As the camera rotates, rotate the uniform holding the light direction vector along with it!

Well, I’m not always able to translate text into code, so I generally need an example to be able to make it work!
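Something like this: either update the lightDir uniform on the CPU each frame, or equivalently keep it in world space and convert it in the shader (a sketch; worldLightDir is an illustrative name, and it assumes the modelview matrix contains only the camera transform):

// replaces the constant eye-space lightDir in the pixel shader above
uniform vec3 worldLightDir;   // light direction specified in world space

// inside main():
vec3 L = normalize(gl_NormalMatrix * worldLightDir);  // world -> eye space
float diffuse = max(0.0, dot(N, L));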

Here are two textures I found. Someone told me I could use them to turn the point sprites into real spheres using shaders. I’m able to use the textures in the shader and to see the point sprites as spheres; I’m just unable to turn the point sprites into real spheres using the sphere1.png and depth.png files. Can anyone help me with this? Thanks.
