
Still having trouble understanding the fragment shader



Ehsan Kamrani
03-14-2010, 09:00 PM
I still have problems with the fragment shader :(
The subject is complex and I need to know what stages are involved in the rasterization process.
Assume we have two triangles and one of them is in front of the other. What happens at the raster stage? For example, does it convert the first triangle to 40 pixels and the second one to 60 pixels? (The values are just for the example.) We know the z-buffer test happens after the fragment shader, so I guess fragments from both triangles still exist after the raster stage? Does rasterization process all the primitives at once and then send them to the fragment shader, or does it convert each primitive (for example, one triangle) to pixels, send those pixels to the fragment shader, then process the next primitive and send it to the fragment shader, and so on?
Assuming a triangle covers 40 pixels after the raster stage, does that mean the fragment shader processes all 40 of those pixels?

dorbie
03-14-2010, 10:34 PM
In theory all fragments are processed; in practice, hardware may perform early z rejection to eliminate the work, provided the pipeline's raster operation state meets the optimization criteria (and shader features like discard are now an issue for early z, since the depth write can't happen before the shader decides whether to keep the fragment).

This only helps if whatever is in front is drawn first.

There is also a coarse z-buffer that may further eliminate some of the z-buffer/rasterization/interpolation work before the shader runs.

Some architectures implement deferred shader execution, so even if you draw back to front they will still shade only the visible fragments.

Some rendering engines like Doom3 have filled the z-buffer first for features like stencil shadow testing, and have then benefited from shading only the visible fragments thanks to early hardware z rejection.

So fragment shaders only determine the color of a fragment, but hardware is not constrained to follow the OpenGL raster pipeline literally if it can devise optimizations that are visually equivalent, and every hardware maker has its own bag of tricks for this stuff.

How much is optimized, how aggressively, and how broadly the tricks apply varies by maker, hardware generation, and perhaps even driver. Pretty much all cards today do early z; where you might see variation is in how much of a hit they take when you enable alpha test early in the scene, etc.