Nearest-Depth Upsampling

Hello everyone.

I’m looking for ways to optimize my particle rendering system.
I implemented the low-resolution off-screen particles technique explained in this article:

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch23.html

The basic concept is to render the particles into a downsized framebuffer and then composite the resulting color texture over the main scene color texture.

My pipeline looks like this:

| Main Scene |----depth texture---->| Low Res Particles |----color texture---->| Composition |
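
The composition step itself is conceptually just a per-pixel blend of the upsampled particle color over the scene color. A minimal sketch in C++ (assuming the particle buffer stores premultiplied color with accumulated coverage in alpha; the struct and function names are mine, not from the article):

```cpp
// One pixel of the composition pass: blend the upsampled particle color
// over the scene color ("over" operator with premultiplied alpha).
struct Color { float r, g, b, a; };

Color Compose(const Color& particle, const Color& scene) {
    float k = 1.0f - particle.a;  // how much of the scene shows through
    return { particle.r + scene.r * k,
             particle.g + scene.g * k,
             particle.b + scene.b * k,
             1.0f };
}
```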

The depth test is done manually in the low-res particle rendering pass, by comparing the scene depth sampled from the input texture with the current fragment depth, roughly as in the sketch below.
Because the depth test happens at the lower resolution, this inevitably creates blocky artifacts where opaque geometry is in front of the particles.
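
To be concrete, the test I do per low-res fragment is essentially this (written as plain C++ for clarity; the sampling helper and names are illustrative, and in a real fragment shader the failing case would end in a discard):

```cpp
#include <cstddef>

// Point-sample the full-res scene depth at the low-res fragment's UV.
float SampleSceneDepth(const float* sceneDepth, int sceneW, int sceneH,
                       float u, float v) {
    int x = static_cast<int>(u * sceneW);
    int y = static_cast<int>(v * sceneH);
    if (x > sceneW - 1) x = sceneW - 1;  // clamp to the texture edge
    if (y > sceneH - 1) y = sceneH - 1;
    return sceneDepth[static_cast<std::size_t>(y) * sceneW + x];
}

// Manual depth test for one low-res particle fragment: the fragment
// survives only if it is closer than the opaque scene depth
// (smaller = closer with a standard depth range).
bool ParticleFragmentVisible(float fragmentDepth,
                             const float* sceneDepth, int sceneW, int sceneH,
                             float u, float v) {
    return fragmentDepth < SampleSceneDepth(sceneDepth, sceneW, sceneH, u, v);
}
```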

I am now looking for a way to get rid of these artifacts. I found an NVIDIA whitepaper describing a technique called “Nearest-Depth Upsampling” that seems really efficient:

https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/sdk/11/OpacityMappingSDKWhitePaper.pdf

But I don’t understand this method. English is not my native language, which I guess doesn’t help, but I can’t figure out what these sentences mean:

“The nearest-depth upsampling filter fetches the 2x2 low-resolution depths in the bilinear footprint of the current full-resolution pixel and compares these 4 depths with the full-resolution depth of the current pixel. Then the filter computes which of these four low-resolution depths is nearest to the full-resolution depth and returns the corresponding low-resolution color for that sample. The nearest-depth filter can reconstruct high-quality edges if the resolution of the low-resolution rendering pass is high enough to capture the opaque-geometry features.”
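
Here is my attempt at translating those sentences into plain C++, for one full-resolution pixel (I am not sure the footprint math or the comparison is what the paper intends):

```cpp
#include <cmath>

struct Color { float r, g, b, a; };

// My reading of the filter, for one full-resolution pixel. The low-res
// depth and color buffers are row-major arrays at a lower resolution.
Color NearestDepthUpsample(int fullX, int fullY,
                           const float* fullDepth, int fullW,
                           const float* lowDepth, const Color* lowColor,
                           int lowW, int lowH)
{
    // UV of the current full-resolution pixel center.
    float u = (fullX + 0.5f) / fullW;
    float v = (fullY + 0.5f) / fullH;

    // Top-left texel of the 2x2 bilinear footprint in the low-res textures
    // (the 4 texels a bilinear fetch at (u, v) would read).
    int x0 = static_cast<int>(std::floor(u * lowW - 0.5f));
    int y0 = static_cast<int>(std::floor(v * lowH - 0.5f));

    float refDepth = fullDepth[fullY * fullW + fullX];

    // Find which of the 4 low-res depths is nearest to the full-res depth.
    float bestDiff = 1e30f;
    int   bestIdx  = 0;
    for (int dy = 0; dy < 2; ++dy) {
        for (int dx = 0; dx < 2; ++dx) {
            int x = x0 + dx, y = y0 + dy;
            if (x < 0) x = 0; if (x >= lowW) x = lowW - 1;  // CLAMP_TO_EDGE
            if (y < 0) y = 0; if (y >= lowH) y = lowH - 1;
            int idx = y * lowW + x;
            float diff = std::fabs(lowDepth[idx] - refDepth);
            if (diff < bestDiff) { bestDiff = diff; bestIdx = idx; }
        }
    }

    // Return the low-res color of that nearest-depth sample.
    return lowColor[bestIdx];
}
```

My guess is that along edges this picks the low-res sample belonging to the same surface instead of blending across the depth discontinuity, but I may be wrong.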

Can someone explain it to me in other words or with more details? Is my sketch above on the right track?

Thank you.