Thing is, I thought a lot of web page load time was down to bad web server config and caching-unfriendly pages: missing or wrong cache-control headers, no ETags, no date stamps, even spurious no-cache directives all help to slow things down. Lots of DNS lookups are another example. Even though the pipes have got wider, some of these crapness factors haven't changed since the days of dial-up. The modern trend of having dynamic, generated webpages for no earthly good reason doesn't help either. Web servers end up generating pages on every request rather than generating them once, storing them, and serving the stored results as static pages unless they actually need to be recalculated; you can get this either by putting a proxy cache in front of the origin server or by using caching modules inside the programming environment.
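To make that concrete, here's a minimal sketch in Python (standard library only) of what a cache-friendly response looks like: an ETag plus a Cache-Control header, and a cheap 304 for conditional requests. The page body, port, and max-age are placeholders I've invented for illustration, not a recommendation.

```python
# Sketch: serve one page with an ETag and Cache-Control, answering
# conditional requests with 304 Not Modified.
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"<html><body>Hello</body></html>"  # stand-in for a generated page
ETAG = '"%s"' % hashlib.md5(BODY).hexdigest()

class CacheFriendlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # If the client already has this version, a 304 costs a few
        # hundred bytes instead of the whole page.
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("ETag", ETAG)
        # Let browsers and proxies reuse the page for an hour
        # (the max-age here is an arbitrary example value).
        self.send_header("Cache-Control", "public, max-age=3600")
        self.end_headers()
        self.wfile.write(BODY)

if __name__ == "__main__":
    HTTPServer(("", 8000), CacheFriendlyHandler).serve_forever()
```

Anything that honours those two headers, from the browser cache to a proxy sitting in front of the origin, can then skip regenerating or re-downloading the page entirely.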
So many websites are painfully slow simply because sysadmins are bought cheap and server software stacks are badly configured or outdated. I've seen a number of shockers recently where using the back button within a site is painfully slow for no reason at all.
Since DSL links have got ten times faster, for the haves at least, latency due to interleaving should have gone right down. But no, wait, perhaps my logic is wrong: perhaps the interleave period has to stay the same length in time, so that it covers the duration of a noise burst and then some, regardless of the data rate?
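A quick back-of-envelope check of the two hypotheses (the burst length and line rates below are invented round numbers, and this ignores the codeword structure a real DSL interleaver works in):

```python
# If the interleaver buffers depth_bits before sending, the one-way
# delay it adds is roughly depth_bits / line_rate.
def interleave_delay_ms(depth_bits, rate_bps):
    return depth_bits / rate_bps * 1000

BURST_MS = 4            # assumed noise-burst length the interleaver must span
OLD_RATE = 1_000_000    # ~1 Mbit/s DSL
NEW_RATE = 10_000_000   # ~10 Mbit/s DSL

# Hypothesis 1: depth fixed in bits -> delay shrinks 10x with the rate.
depth = OLD_RATE * BURST_MS / 1000
print(interleave_delay_ms(depth, OLD_RATE))  # 4.0 ms
print(interleave_delay_ms(depth, NEW_RATE))  # 0.4 ms

# Hypothesis 2: depth fixed in *time* (it must still span the burst),
# so depth scales with the rate and the delay stays put.
depth = NEW_RATE * BURST_MS / 1000
print(interleave_delay_ms(depth, NEW_RATE))  # 4.0 ms
```

So if the second hypothesis is right, faster links buy you no interleaving latency at all: the buffer just gets ten times deeper to cover the same few milliseconds of noise.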