I think it's common knowledge nowadays that all consumer cards store the whole mipmap chain in video memory as soon as the texture is used at all. In other words, if you've got a 4096x4096 mipmapped texture at 4 bytes per texel and only an area of 64x64 pixels is visible on screen, the whole ~85 MB of data will be uploaded into video memory, even though the rasterizer will only need to access 16 KB of it. I know in practice there are other considerations (trilinear filtering needs access to the two closest mip levels), but why the hell can't the drivers manage this memory better? The amount of textures you could have in your scenes would increase tremendously, and performance should even increase (no need to upload all this data from system memory when most of it is never sampled). So is there a specific reason why this isn't possible?
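To put concrete numbers on the waste, here's a quick back-of-the-envelope sketch (assuming a square power-of-two texture at 4 bytes per texel, i.e. RGBA8, and a full mip chain down to 1x1 -- the function name is just for illustration):

```python
def mip_chain_bytes(size, bytes_per_texel=4):
    """Total bytes for a square texture plus its full mip chain down to 1x1."""
    total = 0
    while size >= 1:
        total += size * size * bytes_per_texel
        size //= 2  # each mip level halves both dimensions
    return total

full_chain = mip_chain_bytes(4096)   # base level + all mips: ~85 MB
base_only  = 4096 * 4096 * 4         # base level alone: 64 MB
visible    = 64 * 64 * 4             # the one mip level actually sampled: 16 KB

print(full_chain, base_only, visible)
# The mip chain adds roughly 1/3 on top of the base level (geometric series),
# yet only 16 KB of the ~85 MB resident set is touched by the rasterizer here.
```

So the ratio of resident memory to memory actually sampled in this example is over 5000:1, which is the crux of the question.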