OpenGL Performance is rather poor

I’m running OpenGL 3.3 on Windows 10 with a GTX 960.
The test program is written in C++.
I’m using VAOs and VBOs.
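For reference, the mesh data is uploaded once at load time, roughly like this (a sketch; `Vertex`, `vertices`, and `indices` are placeholders for my actual data):

```cpp
// One-time mesh upload at load (sketch; Vertex/vertices/indices are placeholders).
GLuint vao, vbo, ibo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), vertices.data(), GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint), indices.data(), GL_STATIC_DRAW);

// Position attribute at location 0; the other attributes follow the same pattern.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);

glBindVertexArray(0);
```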

I’ve noticed that my program drops below 60 FPS once I render more than about 27 million triangles.

I’m not using any FBO effects, my shaders are fairly efficient, my textures are small, and my rendering function is reasonably well optimized.

Why am I getting such a poor framerate?

I’m pretty sure the Nintendo Wii could render more triangles per frame at 60 FPS,

and this computer is, well, not the best around, but it’s certainly no potato: an i7-6700 and a GTX 960.

Triangle count isn’t the only performance metric (it isn’t even a particularly meaningful one on its own), and it’s affected by plenty of other factors. If you want to measure performance, benchmark a real-world application. That said…

How many VAOs?

How many VBOs?

How many draw calls?

That’s just for starters; you should also post your code so that people can determine whether you’re doing anything else that might be causing inefficiencies.

I am using VAOs and VBOs.

I tested by loading a mesh with 30 million triangles and rendering only that (one VAO).

The only other things I do each frame are a glClear and setting some uniform values.
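So the per-frame path is essentially just this (a sketch; `shader`, `mvpLocation`, `meshVAO`, and `indexCount` are placeholders for my actual objects):

```cpp
// Per-frame work, reduced to what is actually being timed (names are placeholders).
while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(shader);
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &mvp[0][0]);  // a few uniforms like this

    glBindVertexArray(meshVAO);  // the single VAO holding the 30-million-triangle mesh
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);

    glfwSwapBuffers(window);
    glfwPollEvents();
}
```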

I have tested without textures: no performance difference.
Without uniforms: no performance difference.
Etc.

Anyway the mesh is drawn at 54 FPS

which is absolutely pitiful for a modern GPU

So what is eating up the time?!?!

Is there some way I can improve per-vertex operations?

Is there some universal trick?

I use GLFW window hints to create an OpenGL 3.3 context, by the way.
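Concretely, just the standard hints before glfwCreateWindow:

```cpp
// Requesting a 3.3 context (I also ask for the core profile, though that
// detail shouldn't matter for raw triangle throughput).
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
```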

I use GLFW’s default color configuration for the window, with a depth buffer.

I don’t have multiple windows

I’m not using the framebuffer objects I’ve created.

Any tips?

I tested by loading a mesh with 30 million triangles and rendering only that (one VAO).

Anyway the mesh is drawn at 54 FPS

which is absolutely pitiful for a modern GPU

Nonsense. 30M triangles at 54 FPS is about 1.6 billion triangles per second. That’s really good, and entirely reasonable for a GTX 960. For comparison’s sake, the GTX 960 has a clock speed of ~1.1 GHz, so you’re rendering more than one triangle per clock cycle.
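Spelling out the arithmetic: 30,000,000 triangles × 54 frames/s ≈ 1.62 billion triangles per second, and 1.62 billion divided by ~1.1 billion clock cycles per second is roughly 1.5 triangles per clock.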

Oh, and FYI: that’s far in excess of what the Wii could do.

There is no performance problem here; only unreasonable expectations on your part.

Now I feel dumb