Did anybody "seriously" compare compute shaders vs OpenCL?
How can someone "seriously" compare them when:

1: NVIDIA's Compute Shader implementation is barely six months old and has been out of beta for only a few months.

2: AMD doesn't even have a Compute Shader implementation yet.

In short, compute shaders are too immature at the present time to be "seriously" comparing them to anything. It's simply too soon to answer most of your questions.

The other question is this: what exactly are you using them for?

Compute Shaders exist to do exactly one thing: make it more convenient to do GPU computations that directly support rendering operations. Before CS's, if you wanted to do instanced rendering with frustum culling, you had to abuse Geometry Shaders in hacky ways. That meant a pointless pass-through vertex shader, plus working around the semantics of GS output streams and other such quirks.

Now, you can just use a CS (hardware and drivers willing).
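To make the use case concrete, here's a minimal sketch of what such a culling CS might look like. Everything here is illustrative and assumed, not from the question: the buffer layouts, the bounding-sphere encoding (`xyz` = center, `w` = radius), and the uniform names are all hypothetical. The idea is just that each invocation tests one instance against the frustum and appends survivors to a compacted list that a later indirect draw can consume, with no round trip to the CPU.

```glsl
#version 430
// Hypothetical frustum-culling compute shader; layouts and names are
// illustrative assumptions, not an established API.
layout(local_size_x = 64) in;

layout(std430, binding = 0) readonly buffer InstanceData {
    vec4 spheres[];            // xyz = sphere center, w = radius, one per instance
};
layout(std430, binding = 1) writeonly buffer VisibleList {
    uint visibleIndices[];     // compacted indices of instances that survive culling
};
layout(binding = 0) uniform atomic_uint visibleCount;

uniform vec4 frustumPlanes[6]; // plane equations with normals pointing inward
uniform uint instanceCount;

void main() {
    uint id = gl_GlobalInvocationID.x;
    if (id >= instanceCount) return;

    vec4 s = spheres[id];
    for (int i = 0; i < 6; ++i) {
        // Sphere entirely behind any one plane -> culled.
        if (dot(frustumPlanes[i].xyz, s.xyz) + frustumPlanes[i].w < -s.w)
            return;
    }
    // Survived all six planes: append for a subsequent indirect draw.
    visibleIndices[atomicCounterIncrement(visibleCount)] = id;
}
```

The point of the example is that both the input and the output stay on the GPU, in the same GL context that will do the drawing; that's exactly the "computation that directly supports rendering" scenario compute shaders are meant for.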

Compute Shaders are not a replacement for all uses of OpenCL. If all you're doing with your compute tasks is reading the data back on the CPU, stick with OpenCL. Compute Shaders are primarily intended to help rendering operations. They exist so that you don't have to pay for the expensive context switches and synchronization that OpenCL/GL interop requires. If your compute tasks aren't about rendering operations, then you shouldn't be using CS's.

Simply saying that you lost 20% performance "in my case" really isn't enough information to judge whether that's reasonable or not.