S3TC / DXTC texture compression

Can someone explain how video cards actually handle the rendering of compressed textures?

Is each texture decompressed into VRAM during rendering, added to the scene, and then freed so the next texture can be decompressed into that space?

Or are textures left in compressed form in VRAM and decoded on the fly as they are added to the scene, without being decompressed into VRAM first?

Obviously they all stay in compressed form in VRAM but how does the compressed texture get rendered?
If I fill up VRAM with compressed textures will the card need free VRAM left over to work with?
i.e., must I leave some VRAM free, equal in size to the largest texture in its uncompressed form?

BTW : Which technologies are currently active?
I see FXT1, S3TC, DXTC plus a few others.
I see DXTC and S3TC used interchangeably, but I thought DXTC was a DirectX implementation.

Thanks
Paul

Obviously they all stay in compressed form in VRAM but how does the compressed texture get rendered?
That’s not necessarily true. Drivers are allowed to decompress the texture on upload, thus saving you nothing. You should not assume anything about the underlying implementation.

However, in practical terms, most modern hardware handles compressed formats internally just fine. I wouldn’t suggest trying to second-guess the driver, or trying to count video memory bytes. You’ll probably be wrong anyway.

I see DXTC and S3TC used interchangeably, but I thought DXTC was a DirectX implementation.
DXTC and S3TC are the same compression scheme. I think Microsoft licensed it from S3 for DirectX, which is where the DXTC name comes from, but I don’t remember exactly how it all worked out.

In any case, the DXTC name has stuck, since the specific compressed formats are named after it (DXT1, DXT3, DXT5, etc.).
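Those format names also pin down the memory math exactly: each format stores every 4x4 texel block in a fixed number of bytes (8 for DXT1, 16 for DXT3 and DXT5), so compressed size is a simple function of the dimensions. A minimal sketch in C (the `dxt_size` helper name is mine, not from any API):

```c
#include <stddef.h>

/* Bytes needed to store a width x height texture in a DXT format.
   blockBytes is 8 for DXT1, 16 for DXT3/DXT5; dimensions are rounded
   up to whole 4x4 blocks, matching how the formats are defined. */
static size_t dxt_size(int width, int height, int blockBytes)
{
    int bw = (width  + 3) / 4;   /* blocks across */
    int bh = (height + 3) / 4;   /* blocks down   */
    return (size_t)bw * bh * blockBytes;
}
```

For example, a 256x256 texture is 32 KB in DXT1 versus 256 KB as uncompressed 32-bit RGBA, an 8:1 ratio (4:1 for DXT3/DXT5).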

The compressed texture formats are stored compressed in video RAM. My guess is that when the hardware fetches a block of texels to use for texturing, some circuitry in the texture-cache fill path reads the 64 bits of DXT1 data and expands it to whatever format the texture cache line actually wants.
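That guess matches how a DXT1 block is laid out: 64 bits holding two RGB565 endpoint colors and sixteen 2-bit indices into a 4-entry palette (the two endpoints plus two interpolated colors). A software sketch of the expansion such fill circuitry would do, covering only the opaque (c0 > c1) mode and using a plain shift to widen the 565 channels:

```c
#include <stdint.h>

/* Decode one 64-bit DXT1 block into 16 RGB texels (opaque mode only).
   Layout: bytes 0-1 = color0 (RGB565, little-endian), bytes 2-3 = color1,
   bytes 4-7 = 16 two-bit palette indices, texel 0 in the low bits. */
static void decode_dxt1_block(const uint8_t block[8], uint8_t out[16][3])
{
    uint16_t c0 = block[0] | (block[1] << 8);        /* endpoint color 0 */
    uint16_t c1 = block[2] | (block[3] << 8);        /* endpoint color 1 */
    uint32_t bits = block[4] | (block[5] << 8) |
                    (block[6] << 16) | ((uint32_t)block[7] << 24);

    uint8_t pal[4][3];
    /* widen the RGB565 endpoints to 8-bit channels */
    pal[0][0] = (c0 >> 11) << 3;
    pal[0][1] = ((c0 >> 5) & 63) << 2;
    pal[0][2] = (c0 & 31) << 3;
    pal[1][0] = (c1 >> 11) << 3;
    pal[1][1] = ((c1 >> 5) & 63) << 2;
    pal[1][2] = (c1 & 31) << 3;
    for (int ch = 0; ch < 3; ch++) {
        pal[2][ch] = (2 * pal[0][ch] + pal[1][ch]) / 3; /* 2/3 toward c0 */
        pal[3][ch] = (pal[0][ch] + 2 * pal[1][ch]) / 3; /* 2/3 toward c1 */
    }
    for (int i = 0; i < 16; i++) {
        int idx = (bits >> (2 * i)) & 3;  /* 2-bit palette index per texel */
        out[i][0] = pal[idx][0];
        out[i][1] = pal[idx][1];
        out[i][2] = pal[idx][2];
    }
}
```

In the c0 <= c1 mode the fourth palette entry becomes transparent black instead, which is how DXT1 gets its 1-bit alpha.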