Currently, transcoding and dynamic texture compression look like this (assuming the compression is done in a fragment shader):
-create storage stg1 with the compressed format
-create storage stg2, sized to the block grid of stg1 (one texel per compressed block), with an uncompressed format that is copy-compatible with stg1 (table 18.4 in glspec44)
-have your fragment shader assemble correct compressed blocks and write them into stg2
-copy the data from stg2 to stg1
After that you can read from stg1 and enjoy hardware decompression.
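A minimal sketch of that path in C, assuming a DXT1 target of size `w` x `h`, GL 4.3+, a valid context, and with all error checking and the draw call itself omitted:

```c
GLuint stg1, stg2, fbo;
const int bw = (w + 3) / 4, bh = (h + 3) / 4;  /* block-grid size */

/* stg1: the compressed texture we ultimately want to sample */
glGenTextures(1, &stg1);
glBindTexture(GL_TEXTURE_2D, stg1);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RGB_S3TC_DXT1_EXT, w, h);

/* stg2: one RGBA16UI texel (64 bits) per 64-bit DXT1 block */
glGenTextures(1, &stg2);
glBindTexture(GL_TEXTURE_2D, stg2);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16UI, bw, bh);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, stg2, 0);
/* ... fullscreen pass: the fragment shader emits one block per texel ... */

/* the extra copy this post wants to eliminate; per the spec's
 * compressed<->uncompressed copy rules the region here is given in
 * source-image (uncompressed) texels, i.e. in blocks */
glCopyImageSubData(stg2, GL_TEXTURE_2D, 0, 0, 0, 0,
                   stg1, GL_TEXTURE_2D, 0, 0, 0, 0,
                   bw, bh, 1);
```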

But for fast algorithms (like the one in "Real-Time DXT Compression" by J.M.P. van Waveren), the compression time is comparable to the copy time, or can even be less. So it would be awesome for some apps to eliminate that unnecessary copy step.
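To give an idea of how cheap such a compressor is, here is a sketch of a fast DXT1 block encoder in the spirit of van Waveren's article: the endpoints are simply the per-channel min/max of the 4x4 block, and each texel picks the nearest of the 4 palette colors. This is my own simplified illustration, not the article's exact code:

```c
#include <stdint.h>

/* Quantize 8-bit RGB to RGB565. */
static uint16_t to565(const uint8_t *rgb)
{
    return (uint16_t)(((rgb[0] >> 3) << 11) | ((rgb[1] >> 2) << 5) | (rgb[2] >> 3));
}

/* rgba: 16 texels * 4 bytes (row-major 4x4); out: the 8-byte DXT1 block. */
void encode_dxt1_block(const uint8_t rgba[64], uint8_t out[8])
{
    uint8_t lo[3] = {255, 255, 255}, hi[3] = {0, 0, 0};
    for (int i = 0; i < 16; i++)
        for (int c = 0; c < 3; c++) {
            uint8_t v = rgba[i * 4 + c];
            if (v < lo[c]) lo[c] = v;
            if (v > hi[c]) hi[c] = v;
        }
    /* c0 >= c1 keeps the decoder in 4-color mode; a production encoder
     * would also handle the c0 == c1 (3-color mode) edge case. */
    uint16_t c0 = to565(hi), c1 = to565(lo);
    int pal[4][3];
    for (int c = 0; c < 3; c++) {
        pal[0][c] = hi[c];
        pal[1][c] = lo[c];
        pal[2][c] = (2 * hi[c] + lo[c]) / 3;
        pal[3][c] = (hi[c] + 2 * lo[c]) / 3;
    }
    uint32_t indices = 0;
    for (int i = 0; i < 16; i++) {
        int best = 0;
        long bestd = 1L << 30;
        for (int p = 0; p < 4; p++) {  /* nearest palette entry, squared dist */
            long d = 0;
            for (int c = 0; c < 3; c++) {
                long e = (long)rgba[i * 4 + c] - pal[p][c];
                d += e * e;
            }
            if (d < bestd) { bestd = d; best = p; }
        }
        indices |= (uint32_t)best << (2 * i);
    }
    out[0] = (uint8_t)c0;              out[1] = (uint8_t)(c0 >> 8);
    out[2] = (uint8_t)c1;              out[3] = (uint8_t)(c1 >> 8);
    out[4] = (uint8_t)indices;         out[5] = (uint8_t)(indices >> 8);
    out[6] = (uint8_t)(indices >> 16); out[7] = (uint8_t)(indices >> 24);
}
```

A per-block cost of a min/max scan plus 16 nearest-of-4 picks is exactly why the copy can dominate.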

One way to achieve this is to allow creating renderable views of compressed textures, for example an RGBA16UI view of a COMPRESSED_RGB_S3TC_DXT1_EXT texture (64-bit texels matching the 64-bit blocks). That would enable this path:
-create the compressed texture stg1
-create a view vw1 of stg1 with an uncompressed renderable format
-bind vw1 to a framebuffer and run the shader
Then read from stg1 and be happy.
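In code the proposal might look like this. Note that this is hypothetical: glTextureView is a real GL 4.3 entry point, but current GL does not allow an uncompressed view of a compressed texture, which is exactly what is being asked for here:

```c
GLuint stg1, vw1, fbo;
glGenTextures(1, &stg1);
glBindTexture(GL_TEXTURE_2D, stg1);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RGB_S3TC_DXT1_EXT, w, h);

/* PROPOSED, not valid today: a renderable RGBA16UI view of the DXT1
 * storage, one view texel per 64-bit block */
glGenTextures(1, &vw1);
glTextureView(vw1, GL_TEXTURE_2D, stg1, GL_RGBA16UI, 0, 1, 0, 1);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, vw1, 0);
/* ... run the compressing fragment shader ... */

/* then sample stg1 directly -- no glCopyImageSubData needed */
```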

Of course there are some issues, like the different resolutions of stg1 and vw1. But I don't see any insoluble problems.
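The resolution mismatch is at least fully determined by the block size. A tiny helper (my own illustration, not any existing API) shows the relation for 4x4-block formats like DXT1:

```c
/* For a 4x4-block format, the uncompressed view holds one texel per
 * block, so its size is the texel size rounded up to whole blocks. */
void block_grid_size(int w, int h, int *bw, int *bh)
{
    *bw = (w + 3) / 4;
    *bh = (h + 3) / 4;
}
```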