View Full Version : Geometry Shader Confusion



ViolentHamster
09-02-2009, 12:10 PM
I apologize if this has been asked before. Can someone please provide some enlightenment on geometry shaders? The Real-Time Rendering book seems to be unsure about practical uses. Will a simple pass-through geometry shader hurt performance over not having a geometry shader? Does the upcoming tessellator make geometry shaders obsolete?

Thanks.

ZbuffeR
09-02-2009, 01:02 PM
Sorry, no hands-on experience on the subject.

But a search for 'geometry shader performance' turned up this:
http://hacksoflife.blogspot.com/2008/09/geometry-shader-performance-on-8800.html

Apparently a pass-through geometry shader does not change performance compared with having no geometry shader at all.
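For reference, a minimal pass-through geometry shader looks something like this (GLSL 1.50 syntax; just a sketch, untested):

```glsl
#version 150
// Pass-through geometry shader: emits each input triangle unchanged.
// Per the benchmark linked above, this should cost about the same as
// having no geometry shader stage at all.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```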

Alfonse Reinheart
09-02-2009, 03:50 PM
Does the upcoming tessellator make geometry shaders obsolete?

The one feature that is not covered by tessellators is the ability geometry shaders have to select which render target layer each primitive gets rendered to.

Of course, tessellators require more hardware than geometry shaders.

Dark Photon
09-03-2009, 02:11 PM
I apologize if this has been asked before. Can someone please provide some enlightenment on geometry shaders? The realtime rendering book seems to be unsure about practical uses.
From here (http://en.wikipedia.org/wiki/Geometry_shader): "point sprite generation, geometry tessellation, shadow volume extrusion, and single pass rendering to a cube map. ... automatic mesh complexity modification". And allegedly DOF/motion blur, generating silhouettes and crease edges, etc. are some of the uses folks have put them to.

However, for NVidia GPUs, check NVidia's advice in the latest GPU Programming Guide: don't use them for tessellation, and the more output a geometry shader generates (verts * floats/vert), the slower you'll go. Reading between the lines, they basically say avoid geometry shaders like the plague except for really simple cases where not much data is output:

"in general, the potential for wasted work and performance penalties for using a GS makes it an often unused feature of Shader model 4.", "make sure that you really need them, and that there is no better alterantive", "A Decent Use of Geometry Shader: Point Sprites".

However, the latest NVGPUPG doesn't cover GTX2xx+, which is allegedly when perf improved a bit. As with all things perf, try them and see.

Some benches (http://ixbtlabs.com/articles3/video/gt200-part1-p10.html) using point sprites.


Will a simple pass through geometry shader hurt performance over not having a geometry shader?
Why would you? At any rate, it's easy enough to just try.
The operative question is whether the hardware/driver configures the GPU differently with no geometry shader versus with a no-op geometry shader you provide, and if so, whether you can ever tell the difference. It's vendor-specific, so you have to just try it.

knackered
09-03-2009, 02:46 PM
They were slow as f*ck last year, when I tried them. I think I'd rather have the plague than use them again.
No need to thank me for my thorough scientific analysis.


"A Decent Use of Geometry Shader: Point Sprites".
lol. Yep, point sprites is just about it as far as geometry shaders go. In the real world, anyway.
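For the curious, the point-sprite case boils down to a GS that expands each input point into a camera-facing quad. Something like this sketch (GLSL 1.50; `spriteSize` and the clip-space expansion are assumptions for illustration, not a drop-in implementation):

```glsl
#version 150
// Expand each point into a screen-aligned quad (a triangle strip of
// four vertices), the classic "decent use" of a geometry shader.
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float spriteSize; // assumed: half-width of the sprite in clip space
out vec2 texCoord;

void main()
{
    vec4 center = gl_in[0].gl_Position;
    const vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2( 1.0, -1.0),
                                   vec2(-1.0,  1.0), vec2( 1.0,  1.0));
    for (int i = 0; i < 4; ++i) {
        gl_Position = center + vec4(corners[i] * spriteSize, 0.0, 0.0);
        texCoord    = corners[i] * 0.5 + 0.5; // map corner to [0,1] UVs
        EmitVertex();
    }
    EndPrimitive();
}
```

Output is small here (4 verts, few floats per vert), which is exactly the regime NVidia's guide says is OK.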

I think the guy writing Atom uses them a lot - but he procedurally generates everything, and is therefore used to low frame rates.

Dark Photon
09-03-2009, 03:02 PM
They were slow as f*ck last year, when I tried them. I think I'd rather have the plague than use them again.
That's interesting. So what did you end up using instead (or were you just giving them a spin)?

Heiko
09-04-2009, 03:00 AM
How does z-culling work when rendering to multiple slices in a 3d texture with the geometry shader? Does each slice have its own z-buffer?

Say for example you want to render n different views of the same geometry in a single pass. To implement this you apply n different modelviewprojection matrices to the geometry and render them to n slices in a 3d texture (or texture array?). How does z-culling work in this case?

Alfonse Reinheart
09-04-2009, 10:24 AM
How does z-culling work when rendering to multiple slices in a 3d texture with the geometry shader? Does each slice have its own z-buffer?

In order to use layered rendering (which is what this is), each attachment in the FBO must have the same number of layers. So if you're rendering to a 3D texture with 30 slices, you must also have a 3D depth texture with 30 slices.

Brolingstanz
09-05-2009, 11:55 AM
Say for example you want to render n different views of the same geometry in a single pass. To implement this you apply n different modelviewprojection matrices to the geometry and render them to n slices in a 3d texture (or texture array?).

Yep. For, say, RTT to a cubemap you could bind depth and color cubemap targets (via FramebufferTexture), then set gl_Layer in your GS: good to go.
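The GS side of that looks roughly like this sketch (GLSL 1.50, untested; the `faceMatrix` uniform array, one view-projection per cube face, is an assumption for illustration):

```glsl
#version 150
// Single-pass render-to-cubemap: replicate each input triangle six
// times, routing each copy to one cubemap face via gl_Layer.
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;

uniform mat4 faceMatrix[6]; // assumed: one view-projection matrix per face

void main()
{
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face; // select the cubemap face (render target layer)
        for (int i = 0; i < 3; ++i) {
            gl_Position = faceMatrix[face] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

Note this is also the worst case for GS output volume (18 verts per input triangle), so per the NVidia advice above, benchmark it against six separate passes.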

Heiko
09-05-2009, 12:21 PM
Great, thanks for clearing that up. I'm definitely gonna play a bit with that soon (as my Radeon HD4870's drivers finally support the geometry shader).