I wonder where the difference between the theoretical and the practical texture upload rate comes from. On Nvidia's 6800u I get approximately 1GB/s if the driver is having a good day ;-) and the format and texture size are not evil.
Consider that AGP 8x has a peak transfer rate of 2GB/s and PCI-Express x16 a peak of 4GB/s per direction (correct me if I'm wrong).
Where does the difference come from? That alone is a factor of 2 or 4, and often the factor is 8 or more; that's nearly an order of magnitude below peak. Some might say, 1GB/s, OMG, that's a hell of a lot, but consider that we need rates like that for streaming or image processing if GPUs are to be used successfully there.
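For reference, here is roughly how such a number can be measured; a minimal sketch assuming a GL context is already current. The 1024x1024 size, the iteration count and the BGRA/UNSIGNED_BYTE combination (usually a driver fast path) are illustrative choices, not necessarily my exact setup:

// Minimal upload-rate sketch; call with a current GL context.
#include <GL/gl.h>
#include <chrono>
#include <cstdio>
#include <vector>

#ifndef GL_BGRA
#define GL_BGRA 0x80E1  // GL 1.2 enum, absent from some older gl.h headers
#endif

void upload_bench()
{
    const int w = 1024, h = 1024, iters = 100;
    std::vector<unsigned char> pixels(size_t(w) * h * 4, 0);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Allocate storage once, then re-upload with glTexSubImage2D so
    // the timing covers the transfer, not the allocation.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, pixels.data());
    glFinish();

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_BGRA, GL_UNSIGNED_BYTE, pixels.data());
    glFinish();  // make sure the driver has actually moved the data
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    double gib  = double(iters) * w * h * 4 / (1024.0 * 1024.0 * 1024.0);
    std::printf("upload rate: %.2f GB/s\n", gib / secs);

    glDeleteTextures(1, &tex);
}

The glFinish() calls before and after the loop keep the driver from deferring the copies, so the timed interval really covers the transfers and not just the command submission.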