I was just thinking: my 2.5 Mbps downstream per line is roughly fifty times faster than my dialup was in 2003. Fifty times! That's a phenomenal increase, yet I'm still moaning and wanting much faster speeds. I'm sure some of our number are on at least 50 Mbps downstream with FTTx and are moaning a little bit too, and 50 Mbps is itself twenty times faster downstream than one of my lines: another incredible difference.
How did we get to this situation? Let's disregard TV for the moment; we don't actually need TV to be delivered over the internet, it's just that the on-demand thing is extremely attractive, but for the purposes of argument let's put it aside. Why are uploads and downloads now so huge that the transfer time is a pain in the backside? We managed somehow in the days of dialup, and I manage somehow to do most things even with my 7.8 Mbps downstream + 1 Mbps upstream. One thing I sometimes can't do at all is have both Janet and me watching streaming video at the same time; sometimes that works, sometimes not, it's marginal. Each stream consumes around 3 Mbps, as measured by looking at the clueless.aa.net.uk CQM graphs of live streaming sessions. Other than that we do manage to do most things, it's just slow sometimes. Upstream is a real pain: doing backups is awful, and it can take 10-30 minutes to back up an iPad to Apple's cloud. The sums are sketched below.
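A quick back-of-envelope check on those figures, as a sketch; the 200 MB incremental backup size is my assumption (it back-solves from the 10-30 minute times above), the other numbers are from the paragraph above:

```c
#include <stdio.h>

int main(void)
{
    /* Figures from above: 7.8 Mbps down, 1 Mbps up, and ~3 Mbps per
       live video stream as seen on the CQM graphs. */
    double down_mbps = 7.8, up_mbps = 1.0, stream_mbps = 3.0;

    /* Two simultaneous streams leave very little downstream headroom,
       which is why it only marginally works. */
    printf("two streams: %.1f of %.1f Mbps used, %.1f Mbps spare\n",
           2 * stream_mbps, down_mbps, down_mbps - 2 * stream_mbps);

    /* Upstream: an assumed 200 MB incremental iPad backup at 1 Mbps. */
    double backup_mb = 200.0;
    double secs = backup_mb * 8 / up_mbps;   /* MB -> Mb, then Mb / Mbps */
    printf("%.0f MB backup: ~%.0f minutes\n", backup_mb, secs / 60);
    return 0;
}
```

Two streams use 6.0 of the 7.8 Mbps, leaving only 1.8 Mbps for everything else, and the assumed 200 MB backup takes about 27 minutes at 1 Mbps, squarely in that 10-30 minute range.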
How did files get so huge? Streaming video is a new thing, huge by its very nature: even after enormous compression it is still huge (some illustrative numbers follow below). In the early 1990s we just didn't do streaming video; it couldn't be done over dialup apart from in very small windows, with low x-by-y resolution and poor quality. I used to occasionally watch the live streaming feed of the TV programme Big Brother over dialup, in a really small window and with terrible quality due to truly vicious, massive compression. I say decent video is a new thing because it was a dead loss in the early 1990s, so proper video counts almost as a new data type now.
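A rough sketch of what "enormous compression, still huge" means; the frame size, bit depth and frame rate are my illustrative assumptions:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed SD-ish video: 720x576 pixels, 24 bits per pixel, 25 fps. */
    double raw_bps        = 720.0 * 576 * 24 * 25;  /* ~249 Mbps raw   */
    double compressed_bps = 3e6;                    /* the ~3 Mbps stream */

    printf("raw:        %.0f Mbps\n", raw_bps / 1e6);
    printf("compressed: %.0f Mbps (about %.0f:1 compression)\n",
           compressed_bps / 1e6, raw_bps / compressed_bps);
    /* Roughly 80:1 compression, and the result still strains a slow
       line: huge by its very nature. */
    return 0;
}
```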
The other thing that has changed is the arrival of multiple users per dwelling. Businesses were of course always multi-user, but in the 1990s a lot of domestic dialup internet connections went into just the one computer, so one user only, not into a LAN with multiple active users. Now the requirement for residential domestic connections is to accommodate multi-user households, with say two to four or even five users in some situations. That is the standard argument for why G.993.2 VDSL2 / FTTC isn't fast enough, but I don't see how it adds up: even five users each streaming at 3 Mbps only need 15 Mbps, and 50 Mbps is twenty times faster downstream than one of my lines.
Is some of this because software developers and web developers have become lazy and have got used to thinking that huge data bloat is normal? My brethren software developers have been afflicted by a madness, with instances of bloat that are beyond comprehension. I recently saw a modem's config file that was all XML, with the numbers in decimal ASCII text. The XML tags were quite unnecessary: each number could have been made a byte or a word and the lot simply packed into a C struct (there's a sketch of the comparison after this paragraph). I'm aware of the problems associated with this, padding and endianness and versioning, but they are worth dealing with, imo. There are also big speed improvements from not being lazy: no parsing, no decimal ASCII conversion, massively reduced I/O time, all good. Anyway, it's easy to see bloat factors of five or ten times, maybe more. You could do what Microsoft did with the recent Office XML document formats and ZIP everything up, but then you still have the processing time of compressing and decompressing, plus the large amount of time wasted in XML parsing when reading files. I think it's complete madness, and I would be ashamed to release software associated with huge data file bloat in the case of non-trivial files. Web development: don't get me started. The usual justification, the convenience of being able to edit files with a plain text editor because they are ASCII, doesn't hold up: you could just supply a text-to-binary format converter program and deal with the convenience issue that way.
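A minimal sketch of the struct-versus-XML comparison; the field names and values are invented for illustration, and the point is the size ratio rather than the particular fields:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical modem config; the field names are made up. */
struct modem_cfg {
    uint32_t snr_margin_db;
    uint32_t max_rate_kbps;
    uint32_t interleave_ms;
    uint16_t vpi, vci;
};

int main(void)
{
    struct modem_cfg cfg = { 6, 7800, 8, 0, 38 };

    /* Binary form: the struct itself, 16 bytes here. */
    printf("binary: %zu bytes\n", sizeof cfg);

    /* The same settings as XML with the numbers in decimal ASCII. */
    char xml[256];
    int n = snprintf(xml, sizeof xml,
        "<modem><snr-margin>%u</snr-margin><max-rate>%u</max-rate>"
        "<interleave>%u</interleave><vpi>%u</vpi><vci>%u</vci></modem>",
        (unsigned)cfg.snr_margin_db, (unsigned)cfg.max_rate_kbps,
        (unsigned)cfg.interleave_ms, (unsigned)cfg.vpi, (unsigned)cfg.vci);
    printf("XML:    %d bytes, about %.0fx the size\n",
           n, n / (double)sizeof cfg);

    /* Writing and reading the binary form is a single fwrite/fread with
       no parsing; the caveats mentioned above (struct padding, endianness,
       versioning) would need pinning down for a real on-disk format. */
    FILE *f = fopen("cfg.bin", "wb");
    if (f) { fwrite(&cfg, sizeof cfg, 1, f); fclose(f); }
    return 0;
}
```

The XML form comes out at roughly seven times the size of the 16-byte struct, before any parsing cost, which is in line with the five-to-ten-times bloat factors mentioned above. The text-to-binary converter suggested above would then just be this struct plus a trivial reader for the editable form.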
Anyway, in general, how did we get to where we are? What's going to happen in the future with file sizes, data types and use cases, and what will happen to the demand for ever faster speeds?