I’m just messing about with 2D stuff at the moment, but I want to do some point sprites in the geometry shader. I want to use a sprite’s position, size, scale, and rotation in the geometry shader to generate the textured triangle pair needed to display the sprite.
Now, my dilemma is that the ortho-like matrix in my vertex shader transforms my sprite vertex positions into clip-space values (-1, 1) before the geometry shader ever runs. This means that my geometry shader would have to generate everything using those values…
Now, I could just defer the projection matrix multiplication until the geometry shader, but then my vertex shader isn’t really doing anything…
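For what it’s worth, here is a rough sketch of what I mean by deferring the projection (the names like `uProjection` and the in/out layout are just my guesses at how I’d wire it up, not tested code):

```glsl
// --- vertex shader: pure pass-through, projection deferred to the GS ---
#version 330 core
layout(location = 0) in vec2  inPosition;  // sprite center, still in world space
layout(location = 1) in float inSize;
layout(location = 2) in float inRotation;

out VS_OUT {
    float size;
    float rotation;
} vs_out;

void main() {
    gl_Position = vec4(inPosition, 0.0, 1.0); // NOT projected yet
    vs_out.size = inSize;
    vs_out.rotation = inRotation;
}
```

```glsl
// --- geometry shader: expand point -> quad, then project ---
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in VS_OUT {
    float size;
    float rotation;
} gs_in[];

out vec2 texCoord;

uniform mat4 uProjection; // the ortho matrix, applied here instead of the VS

void main() {
    vec2  center   = gl_in[0].gl_Position.xy;
    float s        = sin(gs_in[0].rotation);
    float c        = cos(gs_in[0].rotation);
    float halfSize = gs_in[0].size * 0.5;

    // quad corners in the sprite's local space (triangle-strip order)
    vec2 corners[4] = vec2[](vec2(-halfSize, -halfSize), vec2( halfSize, -halfSize),
                             vec2(-halfSize,  halfSize), vec2( halfSize,  halfSize));
    vec2 uvs[4]     = vec2[](vec2(0.0, 0.0), vec2(1.0, 0.0),
                             vec2(0.0, 1.0), vec2(1.0, 1.0));

    for (int i = 0; i < 4; ++i) {
        // rotate the corner, offset to the sprite center, THEN project
        vec2 r = vec2(c * corners[i].x - s * corners[i].y,
                      s * corners[i].x + c * corners[i].y);
        gl_Position = uProjection * vec4(center + r, 0.0, 1.0);
        texCoord = uvs[i];
        EmitVertex();
    }
    EndPrimitive();
}
```

The appeal is that size and rotation stay in the same (world) units as the position, so the quad math is simple — but the vertex shader is reduced to a copy.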
So, does anyone have any thoughts or insight as to what I can/should do? Are geometry shaders slower than doing the same work in a vertex shader? What is the point of vertex shaders if all the same work can be done in the geometry shader? Why didn’t they just expand or replace vertex shaders with geometry shaders?
It just seems like if I do projection transformations in the vertex shader, I will have to apply matching transformations to my sprite size and rotation data in the geometry shader, or do further transformations on the sprite data in the vertex shader (though the exact details escape me) and pass it to the geometry shader… when I could just apply the projection after everything else in the geometry shader and save the trouble… What am I forgetting?