Hello,
Before asking anything, please let me explain the context.
I’m currently working on a video game project in which I need to draw a lot of squares to render a Minecraft-like terrain made of cubes, somewhere between 100 000 and 500 000 squares per frame.
Of course, I’m culling a big part of it, so I can’t say exactly how many are actually drawn, but it’s safe to assume it’s still a lot.
As the game should be cross-platform, I’m working most of the time on an old Dell computer running Ubuntu, with an NVIDIA chipset and the NVIDIA drivers installed. We are using OpenGL 3.2.
So my solution is to build multiple VBOs, each containing a part of the terrain geometry. Until recently, my arrays contained triangles, 2 for each square, so 6 vertices per square.
I was drawing them as GL_TRIANGLES and passing them through a vertex and fragment shader.
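To be concrete, the per-face layout looked roughly like this (a simplified sketch, not my actual code; the struct and function names are placeholders):

```cpp
// Simplified sketch of the current layout: 6 vertices per square
// (two independent triangles) appended to a CPU-side array before upload.
// "Vertex" and "appendSquare" are placeholder names.
#include <vector>

struct Vertex { float x, y, z; };

void appendSquare(std::vector<Vertex>& verts,
                  const Vertex& v0, const Vertex& v1,
                  const Vertex& v2, const Vertex& v3)
{
    // First triangle: v0, v1, v2
    verts.push_back(v0); verts.push_back(v1); verts.push_back(v2);
    // Second triangle: v0, v2, v3 (v0 and v2 end up duplicated)
    verts.push_back(v0); verts.push_back(v2); verts.push_back(v3);
}

// After uploading "verts" to a VBO bound to a VAO:
// glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
```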
Recently, I decided that 6 vertices was way too much for a square. So even though many people told me not to, I tried drawing GL_QUADS, and I noticed a significant performance increase, going from 58-65 FPS to 70-80.
Obviously, as GL_QUADS has been deprecated since GL 3, my OSX friend told me: don’t use it.
Thus, I decided to try writing a very basic geometry shader that takes GL_LINES_ADJACENCY in (4 vertices per square) and outputs a triangle strip.
It’s a simple pass-through shader; no additional computation is done.
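To make this concrete, here is a minimal sketch of the kind of pass-through shader I mean (not my exact code; the GLSL is embedded as a C++ string and the variable name is a placeholder):

```cpp
// Minimal pass-through geometry shader (GLSL 1.50 / GL 3.2), stored as a C++ raw string.
// It receives 4 vertices per primitive via lines_adjacency and re-emits them
// as a 4-vertex triangle strip (two triangles per square).
const char* kQuadGeometryShaderSrc = R"(
#version 150 core

layout(lines_adjacency) in;
layout(triangle_strip, max_vertices = 4) out;

void main()
{
    // Assumes the application feeds the 4 corners already in strip order
    // (e.g. v0, v1, v3, v2 around the quad); no extra math is done here.
    for (int i = 0; i < 4; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
)";

// Host side, after compiling and linking the program:
// glDrawArrays(GL_LINES_ADJACENCY, 0, vertexCount); // 4 vertices per square
```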
Now, my FPS caps at 60 when I don’t move.
So my questions are:
- Do geometry shaders always drag performance down?
- Is it possible that this performance issue is related to my old hardware?
- Does some hardware not really support geometry shaders, even with 3.2 drivers?
- If some hardware lacks good geometry shader support, is it possible to detect that at runtime?
- Is the conversion from GL_LINES_ADJACENCY to a triangle strip costly, or is it irrelevant in terms of performance?
- Should I use indices (see the sketch below) or another solution? Or stick to triangles?
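For that last question, here is a sketch of what I mean by indices: 4 unique vertices per square plus 6 indices, drawn with glDrawElements (again, placeholder names, not my actual code):

```cpp
// Sketch of the indexed alternative: each square contributes 4 unique vertices
// to the VBO and 6 indices to an index buffer, so no geometry shader is needed.
#include <cstdint>
#include <vector>

void appendSquareIndices(std::uint32_t squareIndex,
                         std::vector<std::uint32_t>& indices)
{
    std::uint32_t base = squareIndex * 4; // 4 unique vertices per square
    const std::uint32_t quad[6] = {
        base, base + 1, base + 2, // first triangle
        base, base + 2, base + 3  // second triangle
    };
    indices.insert(indices.end(), quad, quad + 6);
}

// After uploading the index buffer (GL_ELEMENT_ARRAY_BUFFER) with the VAO bound:
// glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);
```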
Thanks!