Creating a histogram using rasterization?

Currently I’m creating a particle system, and I would like to move most of the work to the GPU using OpenGL, both to gain experience and for performance reasons. At the moment there are multiple particles scattered throughout space (these are currently still created on the CPU). I would more or less like to create a histogram of them. If I understand correctly, for this I would first transform all the particles from world coordinates to screen coordinates in a vertex shader. However, now I want to do the following:
[attached image: a grid of pixels, each labelled with the number of particles that fall inside it, with an example of summed particle properties in the lower-right corner]

So, for each pixel, I want a hit count of how many particles fall inside it. Each particle will also have several properties (e.g. a colour), and I would like to sum those per pixel as well (as shown in the lower-right corner). Since this is, if I understood correctly, almost what happens during the rasterization step, I was wondering whether there is a way to use rasterization to achieve this?
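
For reference, this is roughly the vertex shader I have in mind (just a sketch; u_viewProjection and the attribute names are placeholders I made up):

#version 410 core
// Sketch: transform each particle from world space to clip space so that
// rasterization maps it onto the pixel grid. Particles are drawn as GL_POINTS.
layout(location = 0) in vec3 a_worldPosition; // particle position in world space
layout(location = 1) in vec4 a_color;         // per-particle property to accumulate

uniform mat4 u_viewProjection; // placeholder: projection * view matrix

out vec4 v_color;

void main()
{
    v_color = a_color;
    gl_Position = u_viewProjection * vec4(a_worldPosition, 1.0);
    gl_PointSize = 1.0; // one particle should cover exactly one pixel
}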

PS: I’m currently on macOS, which apparently only supports up to OpenGL 4.1

Additive blending (i.e. glBlendFunc(GL_ONE,GL_ONE)) will sum colours. Depending upon the number of points which may fall within a pixel, you may need to use a format with more than 8 bits per component. Integer formats cannot be used with blending.
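
A minimal sketch of that setup (untested; width, height and particleCount are placeholders):

/* Sum fragment colours into a float framebuffer with additive blending.
   GL_RGBA32F leaves plenty of headroom for large per-pixel sums. */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   /* dst = src + dst: each fragment adds its colour */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, 0, particleCount);   /* one point per particle */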

Stencil operations can be used to count the number of times a pixel is written to, but that will limit you to 8 bits (0-255).
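
The stencil approach would look something like this (sketch; assumes the framebuffer has a stencil attachment):

/* Count fragments per pixel in the stencil buffer.
   GL_INCR saturates, so the count clamps at 255. */
glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 0, 0xFF);       /* every fragment passes the test */
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);  /* increment the count on each write */
glDrawArrays(GL_POINTS, 0, particleCount);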

Another option is atomic image operations (imageAtomicAdd() etc.), but those require OpenGL 4.2 or the ARB_shader_image_load_store extension.
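
For illustration, a counting fragment shader using image atomics would look roughly like this (not an option on a stock macOS 4.1 context):

#version 420 core
// Sketch: per-pixel hit counting with atomic image operations.
// u_hitCount is a placeholder name for an r32ui image bound with glBindImageTexture.
layout(r32ui) coherent uniform uimage2D u_hitCount;

void main()
{
    // Atomically add 1 to this pixel's counter; safe even when many
    // fragments land on the same pixel within one draw call.
    imageAtomicAdd(u_hitCount, ivec2(gl_FragCoord.xy), 1u);
}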

Ah, thank you very much! A small test shows that this could work. So what will happen is basically the following, right?
-For every point, the fragment shader is called, and it returns the color of that point
-This color is then added to the current value of the corresponding pixel (with the chosen blending mode). I could use the RGB channels for the color and, for example, add 1 to the alpha channel to keep count of the total number of points
-I could then use an extra pass to divide the RGB values by the total number of points inside the pixel (and possibly do some other magic with it); see the sketch after this list
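
Something like this for the extra pass, I imagine (a sketch; u_accum is my placeholder for the accumulation texture from the first pass):

#version 410 core
// Sketch of the resolve pass: drawn as a fullscreen quad over the
// accumulation texture, dividing the summed color by the count in alpha.
uniform sampler2D u_accum; // placeholder: the float texture from the first pass

out vec4 fragColor;

void main()
{
    vec4 sum = texelFetch(u_accum, ivec2(gl_FragCoord.xy), 0);
    float count = sum.a;                    // 1 was added per point
    fragColor = (count > 0.0) ? vec4(sum.rgb / count, 1.0)
                              : vec4(0.0);  // empty pixels stay black
}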

A few questions though:
-If I have a really skewed distribution (e.g. 80% of all the points falling within 100 pixels or so), will this cause performance issues, since they all have to write to the same few pixels?
-Where are points outside of the window discarded? Will OpenGL discard them before the fragment shader is called on them, or only later, during the scissor test in per-sample processing?
-Why can’t integer formats be used for blending? Does that mean I can only use 8-bit formats or floats for blending?
-Are there some other performance related considerations I have to take into account?

Maybe.

The pixel ownership test (whether the fragment lies within a part of the window owned by the context) and the scissor test are performed before the fragment shader is invoked. But points which lie outside the window will typically already have been discarded by clipping against the view volume, which happens after the vertex shader and before rasterization.

Blending is only meaningful for real numbers. Floats and normalised values are meant to approximate real numbers; integers aren’t.

You can have normalised values with up to 16 bits per component (e.g. GL_RGBA16). 32-bit components are only available as integers or floats.