Fragments - can't find a clear definition of them

Hi,

I have some idea of what a fragment actually could be, but I can’t seem to find a source that would confirm whether my thoughts are right or wrong.

What I think about them is:

Fragments would be the pixel-sized divisions of each polygon, produced by rasterization. In that case, you could have many fragments at the same pixel-sized point.
Following that: after rasterization, if for example the depth test was enabled, it would test every fragment’s depth field and write only the one nearest to the viewer (lowest Z?) to the real color buffer.
As for (standard) blending, it would scale the colors according to the alpha value.
However, I came to think that, if so, a buffer to contain fragments would have to be somewhat dynamically sized?
Plus, definitions such as Wikipedia’s say a fragment is all the data needed to generate a pixel. But, as I see it, I could have many fragments contributing to a pixel. Actually, the pixel color would be the combination of many fragments (not just one!), managed according to user-defined settings.

Could someone clear this confusing story up a bit?

Thank you! :slight_smile:

Well, what you describe shows a fairly good understanding of fragments.

Although a fragment, as you seem to see it from Wikipedia’s definition, is kind of a theoretical object: it is created from input data, geometry, and algorithms. It’s not a packet of info that you can interrogate or store in that sense, at least from my perspective on this end of the OpenGL Black Box.

The ‘buffer’ that contains the “completed fragments” is the Frame Buffer, or wherever the fragment shader or fixed-function pipeline is writing to. It’s not “dynamically sized”: fragments are processed and drawn serially on each pass through the pipeline, and then written to their screen position in the current Frame Buffer.
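To illustrate the serial processing idea, here is a conceptual sketch (plain Python, not real OpenGL; all names here are made up for illustration). Fragments arrive one at a time, and the depth test compares each against the single depth value already stored for that pixel, so the buffers stay fixed-size:

```python
# Conceptual sketch: one depth value and one color per pixel, no
# dynamically sized fragment storage needed.
depth_buffer = {0: 1.0}            # cleared to "far" (Z = 1.0)
color_buffer = {0: (0, 0, 0)}      # cleared to black

def process_fragment(pixel, depth, color):
    """GL_LESS-style depth test: keep the fragment only if it is nearer."""
    if depth < depth_buffer[pixel]:
        depth_buffer[pixel] = depth
        color_buffer[pixel] = color
    # otherwise the fragment is simply discarded

# Three triangles contest the same pixel; their fragments arrive serially.
process_fragment(0, 0.8, (255, 0, 0))   # red: nearer than "far", kept
process_fragment(0, 0.3, (0, 255, 0))   # green: nearer still, replaces red
process_fragment(0, 0.6, (0, 0, 255))   # blue: farther than green, discarded

print(color_buffer[0])   # (0, 255, 0)
```

The point is that only the current winner per pixel is ever stored; the losing fragments leave no trace.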

This is why a lot of shader effects write to different Frame Buffer Objects, and those results are then fed back into shaders as textures for multi-pass rendering and so forth…

Hey,

thanks for the quick reply! :slight_smile:

So, what you mean is that not every fragment is stored in memory simultaneously? Instead, they pass through and are checked serially, being discarded or not at check time?
Like, only one fragment being stored per pixel in that buffer. So if I had many triangles competing for the same pixel, the fragments would arrive one by one and, according to a check, it would be decided whether the one already there stays or the new one takes its place?

If that’s so, then I think I get it. In that case, if I wanted to store more than one fragment at a time to run some sort of algorithm over them, I would have to write them to several different FBOs and then write code (a shader?) to manage them?

Basically, yes.

Of course you have what is already in the main Frame Buffer, and that is what blending is done with, so for fairly simple linear combinations you can just keep blending into the Frame Buffer over and over again.

A fragment is a candidate to become a real pixel on the screen, if it is not discarded by any test (depth, scissor, …).

Yeah, …OR the fragment is “combined” with the one there on the framebuffer (via GL blend functions).
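To make the "combined" case concrete, here is a conceptual sketch of standard alpha blending, i.e. what `glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)` does per color channel (the Python function here is just an illustration, not an OpenGL call):

```python
# result = src * src_alpha + dst * (1 - src_alpha), per channel
def blend(src, src_alpha, dst):
    """Blend an incoming fragment color over the framebuffer color."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src, dst))

dst = (0.0, 0.0, 1.0)            # blue already in the framebuffer
src = (1.0, 0.0, 0.0)            # incoming red fragment, 50% opaque

print(blend(src, 0.5, dst))      # (0.5, 0.0, 0.5)
```

So the incoming fragment never has to "see" the other fragments: it only combines with whatever color the framebuffer currently holds.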

You’ve got the idea. Re your “per pixel” mentions, don’t fix in your mind that a fragment flying down the pipeline has to be pixel sized. When a supersampling antialiasing mode is enabled (i.e. multiple shading samples per pixel; i.e. multiple fragment shader executions for different pieces of a pixel), fragments can be “subpixel” sized too.

But for the basic case where you don’t have any framebuffer antialiasing enabled (one framebuffer shading sample per pixel), then your fragments are conceptually pixel-sized. This is almost true for multisample and coverage sample antialiasing too (depending on a subtlety you probably don’t care about).
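As a rough sketch of what happens with subpixel samples, here is a conceptual multisample resolve (again plain Python, with made-up names, not the actual hardware algorithm): coverage is decided per sample, and the final pixel color is the average of the samples.

```python
# Conceptual sketch of a multisample resolve: average the per-sample
# colors of one pixel down to a single output color.
def resolve(samples):
    """Average a list of RGB sample colors into one pixel color."""
    n = len(samples)
    return tuple(sum(color[i] for color in samples) / n for i in range(3))

red = (1.0, 0.0, 0.0)
background = (0.0, 0.0, 0.0)

# 4x MSAA: a triangle edge covers 3 of the 4 samples of this pixel.
print(resolve([red, red, red, background]))   # (0.75, 0.0, 0.0)
```

This is why edges look smoothed: partially covered pixels end up with a color somewhere between the triangle and the background.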

Fragments are the per-pixel values that are inputs or intermediate results prior to the final output pixel color; this includes raster-interpolated colors, texture fetch results, and combiner outputs.

W.r.t. supersampling: that’s rare, and multisampling is the norm. These schemes execute a single shader invocation per pixel, even if the color and depth resolves sometimes use multiple samples, so at least at the shader level, fragments are per-pixel values.

Thank you all for your answers.

I shall be moving on to shaders soon, and this information is going to be very valuable. I feel comfortable with it now.

Hope this thread can also clarify it for future people in the same boat, since there didn’t seem to be such a quickly googleable clear definition of it.