# Thread: Is there a limit of total output from geometry shaders?

1. ## Is there a limit of total output from geometry shaders?

Hi, I am a beginner at developing games.

I needed to amplify the number of primitives for the game, so the geometry shader seemed perfect for this purpose. When I run my program, I can see that the geometry shader is doing a good job amplifying the primitives, but the result is very glitchy: lines, triangles, etc. keep appearing and disappearing at random. When I draw fewer primitives than a certain limit, everything renders without any glitch; as soon as I add one triangle above the limit, a slight glitch starts.

When I set the geometry shader to take one input primitive (triangle) and divide it into 25 triangles (75 vertices), the maximum number of input triangles it can handle well is 40. When I set it to divide into 100 triangles (300 vertices), the limit is 10 input triangles. Since 75 * 40 = 300 * 10 = 3000, the program fails once it reaches that number of vertices, so I thought it had to do with a total limit on what the geometry shader can output across all the primitives processed. I learned that there is indeed an output limit, but as I understand it, that limit applies per input primitive, and the numbers above are far below it.

The program even fails if I do a simple pass-through of the vertices, without any amplification, when a large number of primitives is being rendered. For that reason I thought there might be a limit on the total number of vertices processed, or something like that. When I run the program on a low-resources laptop, the driver says the shader is using too many registers (125); I don't know if that information helps clarify the problem.
Sorry for my English.

2. > I got to know that there is indeed an output limit, but as I understood, it is the limit that the geometry shader can output per each primitive
The limit is for each GS invocation, or each input primitive. Not on the basis of output primitives. So if you're writing 75 vertices from an invocation, and the maximum number of output components is 1024, each vertex can only contain ~13 output components.

> The program even fails if I do a simple pass through of the vertex, without any amplification, if there is a large amount of primitives being rendered.
That sounds like more of a driver bug.

3. Originally Posted by Alfonse Reinheart
> The limit is for each GS invocation, or each input primitive. Not on the basis of output primitives. So if you're writing 75 vertices from an invocation, and the maximum number of output components is 1024, each vertex can only contain ~13 output components.

My output is a vec4, a float, and a vec3 (8 components), so it should work with 25 triangles. And even with 100 triangles, the triangles are drawn correctly as long as there are 10 or fewer input triangles. It could be a driver problem, but I am using the Mesa drivers, and they are updated all the time, so I don't know.

4. My program is too complex, so I made this small version that I think shows the same problem: a constant, random failure of the triangles. It is just a program that draws a grid of N*M triangles, and the geometry shader is a pass-through. If you set, say, N = 5, M = 5, there is no problem and the triangles render correctly. But when you try 50*50 there is a glitch, and it gets worse as you increase the numbers.
I uploaded a txt file that you have to rename to .c and compile with `gcc 5.c -lGL -lglut -lGLEW`.
Thank you again!
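For reference, a minimal pass-through geometry shader of the kind described here (a sketch of what such a test shader typically looks like; the thread's actual source file is not reproduced):

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    // Emit each input vertex unchanged.
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```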

5. Originally Posted by agustin
> there is a constant random failure with the triangles.
> It is just a program that makes many triangles in a square of N*M, ...and the geometry shader is just a pass through.
> If you set let's say N = 5, M = 5, there is no problem, the triangles are well rendered.
> But when you try 50*50, there is a glitch, and it becomes worse if you increase the numbers.
Which GPU and GL drivers are you using?

I compiled and ran this on a NVidia GPU under Win7, and it appeared to work fine for N and M values of 5, 50, 200, 500, 1000.

However, you should start checking for GL errors in your code. You are triggering a GL error on your first run of RenderSceneCB() with your glUniform1f() call: there is no program bound at that point, so the call trips a GL_INVALID_OPERATION. It's possible this has something to do with the misbehavior you're seeing.

6. I am using an AMD R7 260X, and this is what glxinfo says: "OpenGL version string: 4.4 (Compatibility Profile) Mesa 18.2.2". OK, I thought there were no errors because the program "worked", but I will check for them. So glUniform should always be called after the program is bound?
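For the ordering question: glUniform* calls affect the program currently bound with glUseProgram, so the safe pattern is roughly the following (a fragment, not a runnable program, since it needs a live GL context; "scale" is a hypothetical uniform name):

```c
glUseProgram(program);                               /* bind the program first */
GLint loc = glGetUniformLocation(program, "scale");  /* hypothetical uniform   */
if (loc != -1)
    glUniform1f(loc, 1.0f);                          /* applies to the bound program */
```

On GL 4.1 and later, glProgramUniform1f() can set a uniform on a specific program object without binding it first.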

7. Now there is no problem. I think it was fixed by a driver update: I disabled and re-enabled the glUniform call and it works fine in both cases, but I will try not to make that call anywhere in the program. Thank you very much!
