How often is a fragment shader run?

I was wondering how often you can say a fragment shader is run…
Can you say that only one fragment shader invocation happens for each pixel on the screen? Or is it run for each pixel in each triangle that is drawn? Or is it something in between, and does it depend on the order in which the triangles are drawn and how the Z-buffer is filled?

Or is it none of the above?

Thanks!

It depends on many factors: the order of drawing, whether the fragment shader changes the fragment’s depth or not, the orientation of the primitive, backface culling, clipping, etc.

“Conceptually” it can be called “once per pixel” for every triangle you render (if you’re doing standard rasterization). If you enable per-sample shading (supersampling), it can be called “once per pixel subsample” for every triangle instead.
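
As a minimal GLSL sketch of the per-sample case (assuming GLSL 4.00+; the `sample` interpolation qualifier, or reading `gl_SampleID`, forces the shader to run once per covered sample instead of once per covered pixel):

```glsl
#version 400 core

// The "sample" qualifier (or reading gl_SampleID / gl_SamplePosition) forces
// the fragment shader to be evaluated once per covered sample rather than
// once per covered pixel when multisampling is enabled.
sample in vec2 vTexCoord;     // interpolated at each sample location

uniform sampler2D uTexture;   // assumed texture binding for the example

out vec4 fragColor;

void main()
{
    // With multisampling on, this body now runs once per sample of every
    // pixel the triangle covers.
    fragColor = texture(uTexture, vTexCoord);
}
```

The same effect can also be requested from the API side, without touching the shader, via glEnable(GL_SAMPLE_SHADING) plus glMinSampleShading(1.0) in OpenGL 4.0+.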

“In practice”, there are pipeline features and optimizations that may cause it not to be run for absolutely every single one of those pixels (or samples). For instance, the triangle may be skipped entirely if backface culling is enabled and it is backfacing, or individual fragments may be skipped if they are occluded and Z-buffer testing is enabled, etc.
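
The depth-test case has a shader-side wrinkle worth knowing about. As a rough sketch (assuming GLSL 4.20+): writing gl_FragDepth normally prevents the hardware from rejecting occluded fragments before the shader runs, because the final depth isn’t known until the shader finishes, but you can force the early test anyway:

```glsl
#version 420 core

// Forces the depth/stencil tests to run *before* the fragment shader, so
// occluded fragments are discarded without ever invoking it. The trade-off:
// any value the shader writes to gl_FragDepth is ignored.
layout(early_fragment_tests) in;

out vec4 fragColor;

void main()
{
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```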

So your “each pixel in each triangle that is drawn” is close, but “it depend on the order of drawing the triangles and how the zbuffer is filled” is correct as well.

Thanks! I was thinking that if it was only called for each pixel on the screen, then it wouldn’t depend much on the number of triangles drawn, but I’ll assume it does depend on that now for speed considerations.

Actually, the fragment shader will also be run for some pixels outside the rendered triangle, because at least 2x2 pixel blocks are needed to compute derivatives.
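
If you’re curious, GLSL 4.50 exposes this directly: the extra invocations that only exist to fill out a 2x2 quad are called helper invocations, and a rough sketch of detecting them looks like this (their outputs never reach the framebuffer):

```glsl
#version 450 core

out vec4 fragColor;

void main()
{
    // Invocations spawned only so a full 2x2 quad exists (pixels the triangle
    // doesn't actually cover) are "helper invocations". They run the shader
    // so derivatives can be computed, but their writes are discarded.
    if (gl_HelperInvocation) {
        fragColor = vec4(1.0, 0.0, 1.0, 1.0);  // never visible
        return;
    }

    fragColor = vec4(0.2, 0.6, 0.9, 1.0);
}
```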

What derivatives are you referring to, and what are they used for (just out of curiosity)?

mbentrup is referring to the derivatives the pipeline implicitly computes on the texture coordinates you use to sample textures, in order to determine which mipmap levels to sample from and if/how to apply anisotropic texture filtering. You can also have it explicitly compute derivatives of your own user-defined values with dFdx, dFdy, and fwidth in GLSL.
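
For a concrete picture of what those built-ins give you, here’s a small sketch that reproduces (approximately) the implicit mipmap selection by hand; the uTextureSize uniform is just an assumption for the example, and fwidth(x) is simply abs(dFdx(x)) + abs(dFdy(x)):

```glsl
#version 330 core

in vec2 vTexCoord;            // interpolated texture coordinate
uniform sampler2D uTexture;
uniform vec2 uTextureSize;    // texture size in texels (assumed for the example)

out vec4 fragColor;

void main()
{
    // How fast the texture coordinate changes (in texels) between this pixel
    // and its horizontal/vertical neighbours within the 2x2 quad.
    vec2 dx = dFdx(vTexCoord) * uTextureSize;
    vec2 dy = dFdy(vTexCoord) * uTextureSize;

    // Roughly the mipmap level that texture() would select implicitly.
    float lod = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));

    // Sampling with an explicit LOD mimics the implicit path, minus
    // anisotropic filtering.
    fragColor = textureLod(uTexture, vTexCoord, lod);
}
```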

But these “hidden fragments” outside the boundary of your triangles are a nuance you rarely need to care about. The main take-aways are that there’s some inefficiency in the GPU pipeline when rendering lots of very, very tiny triangles (on the order of a pixel or two wide), and that you need to ensure continuity of the texture coordinate values across neighboring pixels when using texture(), so the driver can “do the right thing” when sampling and filtering the texture.
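
One place where that continuity advice commonly bites is manual tiling or atlasing with fract(): the wrapped coordinate jumps at the seam, the implicit derivatives blow up for that 2x2 quad, and you get a line of wrong-mip texels. A hedged sketch of the usual workaround (assuming a manual tiled lookup where you can’t just rely on the sampler’s repeat mode) is to pass the derivatives of the continuous coordinate to textureGrad:

```glsl
#version 330 core

in vec2 vTexCoord;           // continuous, unwrapped coordinate
uniform sampler2D uTexture;

out vec4 fragColor;

void main()
{
    // fract() makes the coordinate discontinuous at tile boundaries, so the
    // implicit derivatives (and the chosen mip level) are wrong at the seam.
    vec2 wrapped = fract(vTexCoord);

    // Workaround: compute derivatives from the continuous coordinate and pass
    // them explicitly, so mipmap selection ignores the seam.
    vec2 dx = dFdx(vTexCoord);
    vec2 dy = dFdy(vTexCoord);
    fragColor = textureGrad(uTexture, wrapped, dx, dy);
}
```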

Thanks, this is helpful in several ways. I used dFdx and dFdy in the fragment shader to compute the triangle normals, but now I also understand where these functions come from. The advice on using continuous texture coordinates makes sense too, thanks!
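
For reference, that normal reconstruction usually looks roughly like this (a minimal sketch, assuming a view-space position input named vPosition; the sign of the cross product may need flipping depending on your conventions):

```glsl
#version 330 core

in vec3 vPosition;     // interpolated view-space (or world-space) position

out vec4 fragColor;

void main()
{
    // dFdx/dFdy of the interpolated position give two vectors lying in the
    // triangle's plane; their cross product is the flat face normal.
    vec3 n = normalize(cross(dFdx(vPosition), dFdy(vPosition)));

    // Visualize the normal (remap from [-1,1] to [0,1]).
    fragColor = vec4(n * 0.5 + 0.5, 1.0);
}
```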