
1. Hmm, I think you want a billboard constrained to rotate around y, correct? How do you account for the rotation? Do you compute it in the vertex shader and simply expand the resulting quad in the geometry shader? Currently I don't see why you would need a geometry shader to do that.

2. I don't want the rotation constrained in any axis in this version of the shader.
I'm using the geometry shader so that I only need to send one vertex for each billboard. It simplifies the rest of the system. It's also nice later when you want to create a particle system, as you could simulate the particle updates in the vertex shader (perhaps with transform feedback), so the vertex shader is only doing 1/4 of the work it would be if you were sending over a whole quad for each particle.

3. At second glance I can see that the coordinate basis is aligned with the eye-space vector of your point, i.e. the eye-space vector is the -z axis of your coordinate system. As I picture it, you effectively do a rotation with that, and it seems pretty clever to me.

Although I assume you're aware of it, note that in the following code

Code :
```glsl
vec3 x = normalize( cross( viewPos, vec3(0,1,0) ) );
vec3 y = normalize( cross( x, viewPos ) );
```

x may become the zero vector if viewPos and vec3(0, 1, 0) are parallel and thus y subsequently becomes the zero vector. If you're only in a first-person sort of view that's fine. In a third-person view, however, this might cause problems.
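A minimal way to guard against that degenerate case might be to test how close `viewPos` is to the world up vector and fall back to a different reference axis; this is just a sketch (the threshold and fallback axis are arbitrary choices, not from the original posts):

```glsl
// Hypothetical guard: if viewPos is (nearly) parallel to the up vector,
// switch to a fallback reference axis before building the basis.
vec3 up  = vec3(0.0, 1.0, 0.0);
vec3 dir = normalize(viewPos);
// |dot| close to 1 means viewPos and up are (anti)parallel.
if (abs(dot(dir, up)) > 0.999)
    up = vec3(1.0, 0.0, 0.0);   // arbitrary fallback axis
vec3 x = normalize( cross( viewPos, up ) );
vec3 y = normalize( cross( x, viewPos ) );
```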

> so the vertex shader is only doing 1/4 of the work

One might think that will always be an improvement. However, especially with a lot of particles and thus a lot of generated primitives, you should profile to check that using the GS in this way doesn't degrade performance substantially. Do you intend to use a single particle origin and emit particles from there using the GS?

4. The camera should always be at the origin in view space, unless you are doing something funky with your view and projection matrices (and "camera position" doesn't make much sense with an orthographic projection).

I'll create one vertex for each particle, so the geometry shader will only ever expand one point into a quad. It won't be producing a variable-sized output. I will need to profile it once I've got a proper system set up with some real-world examples. I would imagine a particle system would likely be fill-rate limited long before any overhead from the geometry shader becomes important, though, so I'll probably keep it for its simplicity.

5. > I would imagine a particle system would likely be fill rate limited long before any overhead from the geometry shader becomes important[..]

If particles cover only a small part of the screen, i.e. a negligible number of fragments is rasterized and your fragment shader isn't ridiculously complex to begin with, the application won't become fill-rate limited. If the number of GS writes is sufficiently high, though, and isn't reduced with respect to the distance of the viewer to the particle system, then it will easily limit your app. Also, in my experience, geometry shader performance varies dramatically between hardware generations.

Someone should do some profiling on GS performance on newer cards.

6. It seems like for some reason gl_Position and gsTexCoord are aliased, and they both use the same "physical" interpolant in this case.
Have you tried using gsTexCoord in your fragment shader? I'm pretty sure that even though gl_Position is now working (because you wrote it last), gsTexCoord will now contain the values written to gl_Position too. Can you check this?

Btw, are you using separate shader objects? There, gl_Position has to be explicitly redeclared. Even if not, I would try explicitly redeclaring gl_Position as specified in the GLSL spec.
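For reference, the explicit redeclaration in a geometry shader looks like this per the GLSL spec (trimmed here to gl_Position only; the full built-in block also lists gl_PointSize and the clip-distance arrays):

```glsl
// Explicit redeclaration of the built-in gl_PerVertex blocks,
// required when using separate shader objects.
in gl_PerVertex
{
    vec4 gl_Position;
} gl_in[];

out gl_PerVertex
{
    vec4 gl_Position;
};
```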

7. > Catalyst 12.4 with a 6870 on Win7 x64.

Your driver is newer than mine. I will update and see if my code still works.

8. The texture coordinates appear to be working fine in the fragment shader, and redeclaring the gl_PerVertex block did not allow me to assign gl_Position earlier (I wasn't using separate shader objects).

9. In that case you should file a bug with AMD.

10. I just updated my driver on a 5870 and it works fine. This draws a cross facing the camera at each point:

Code :
```glsl
uniform mat4 u_CameraModelView;
in vec3 Position;
in vec4 Colour;
out vData_Struct { vec4 vColour; } vData;

void main()
{
    gl_Position = u_CameraModelView * vec4(Position, 1);
    vData.vColour = Colour;
}
```

Code :
```glsl
layout( points ) in;
layout( line_strip, max_vertices = 4 ) out;

uniform mat4 u_CameraProjection;
uniform float u_Radius;

in vData_Struct
{
    vec4 vColour;
} vData[];

out vec2 gTexCoord;
out vec4 gColour;

void main()
{
    // Horizontal line of the cross.
    gl_Position = u_CameraProjection * (vec4(-u_Radius, 0.0, 0.0, 0.0) + gl_in[0].gl_Position);
    gColour = vData[0].vColour;
    gTexCoord = vec2(-1.0, -1.0);
    EmitVertex();

    gl_Position = u_CameraProjection * (vec4(u_Radius, 0.0, 0.0, 0.0) + gl_in[0].gl_Position);
    gColour = vData[0].vColour;
    gTexCoord = vec2(1.0, -1.0);
    EmitVertex();

    EndPrimitive();

    // Vertical line of the cross.
    gl_Position = u_CameraProjection * (vec4(0.0, -u_Radius, 0.0, 0.0) + gl_in[0].gl_Position);
    gColour = vData[0].vColour;
    gTexCoord = vec2(-1.0, 1.0);
    EmitVertex();

    gl_Position = u_CameraProjection * (vec4(0.0, u_Radius, 0.0, 0.0) + gl_in[0].gl_Position);
    gColour = vData[0].vColour;
    gTexCoord = vec2(1.0, 1.0);
    EmitVertex();

    EndPrimitive();
}
```
