Sorry, I don't think I explained it very well. I was pondering the effect of the backhaul and of other types of traffic on other users. When I said networks I meant network operators & service providers.
I know things will have scaled up since, but I was using the BE* example because it was simple, and when they started up some of the satellite exchanges only had a 100Mbps backhaul. After a while you might get users seeing congestion, with speeds dropping to 5Mbps if a lot of those users were constantly downloading. If everything were equal (and I know it never is, because you also get bursty traffic), that means 20 users getting 5Mbps per connection. So there's user A using his 5Mbps, but along comes user B, who opens up p2p with multiple streams and is thus still able to get his full 24Mbps, which has the effect of pushing the available backhaul bandwidth for other traffic (and users) down even further.
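To put rough numbers on that: fair sharing on a congested link tends to work out per *flow*, not per user, so a user opening many parallel streams takes many shares. Here's a minimal sketch of that arithmetic, using the (hypothetical) numbers from above and ignoring line-rate caps, bursty traffic, and TCP dynamics:

```python
# Rough per-flow bandwidth split on a shared 100 Mbps backhaul.
# Illustrative only: real congestion behaviour is far messier.

BACKHAUL_MBPS = 100

def per_user_share(flows_per_user):
    """flows_per_user: one flow count per user.
    Returns each user's share, assuming the link splits equally per flow."""
    total_flows = sum(flows_per_user)
    per_flow = BACKHAUL_MBPS / total_flows
    return [n * per_flow for n in flows_per_user]

# 20 users with one download each: 5 Mbps apiece.
print(per_user_share([1] * 20)[0])

# Same 20 users, but one opens 10 p2p streams: he jumps to ~34 Mbps
# while everyone else drops to ~3.4 Mbps.
shares = per_user_share([10] + [1] * 19)
print(round(shares[0], 1), round(shares[1], 1))
```

The point being that user B doesn't need to "cheat" in any deep sense: per-flow fairness does the work for him.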
The article implied that QUIC could be challenging for network providers to shape. I think I recall reading somewhere else that if UDP (or the ports) is blocked, it can fall back to TCP. So whilst it's only really Google using it atm, what's to stop other applications using it in future?
I suppose the only saving grace is that bandwidth isn't cheap on mobile networks and is quite often limited. So if someone invents a new p2p-type file sharing system using QUIC, you're hardly likely to use it over a mobile connection.