This is a ramble-type post - as I'm interested too... and as I start it, god knows where it will end up. These are just my random thoughts as I go through a few things in my head... it's a total "blurb and type" sort of post, so it shouldn't be seen as 100% factual, nor may it all make sense.
-------------
I will fully agree that something like MPPC/PPP Deflate had a huge part to play in the days of dial-up and could very possibly have decreased the amount of bandwidth required to transmit files. But over the past 10 years or so, development seems to have been focused on compressing and optimising everything for the internet within the file format itself. So even if you used MPPC on a .swf file you won't see any benefit... but you would if you were transferring a load of .txt files.
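You can see that point in action with a tiny sketch. I'm using Python's zlib (Deflate) here as a stand-in for an MPPC-style LZ compressor - MPPC is LZ-based but not identical, so treat this as illustrative only:

```python
import zlib

# repetitive plain text, like a .txt transfer over dial-up
text = ("The quick brown fox jumps over the lazy dog. " * 200).encode()

# already-compressed data, standing in for a .swf/.jpg/.mp3 payload
precompressed = zlib.compress(text, 9)

print(len(text), "->", len(zlib.compress(text, 9)))                   # shrinks a lot
print(len(precompressed), "->", len(zlib.compress(precompressed, 9)))  # doesn't shrink
```

The text shrinks to a fraction of its size; the second pass over the already-compressed blob typically comes back the same size or a few bytes larger.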
I don't think you would ever get near a 200-400% improvement with MPPC on adsl. At slower speeds the savings are proportionally much larger and more beneficial, and on top of that most content itself is now compressed and optimised for the internet.
I know onspeed isn't MPPC - but one of the things that does and always has irked me about its claims is the "up to..." part. In reality it doesn't deliver on today's faster adsl connections, and it seldom achieves its headline figures, particularly when so much content is now pre-compressed.
Network/Link/Transport layer compression.
Re header compression, which is performed at the network/link layer: there is already a compression method aimed at adsl and ATM cells - ATM header compression.
"The invention provides an ATM switch having a compression module for compressing ATM cell headers without affecting the virtual circuit established for the call."
http://www.freepatentsonline.com/6111871.html
I'm not sure on this, but I think DMT may also use some sort of basic DCT (Discrete Cosine Transform) type algorithm/method of data compression. Something else - unless I'm dreaming, I seem to recall reading once about how BTw can effectively treat different types of traffic differently on their network (no, I'm not talking about the traffic shaping/throttling/priority element here) to ensure that further delay isn't added to time-sensitive traffic such as VoIP - which would not work well if any further attempts were made to compress this type of data.
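I haven't read the patent in detail, so the following is NOT the patented scheme - just a toy sketch of the general idea behind header compression (the same idea used by things like RFC 2507/ROHC on IP headers): consecutive cells/packets on one flow share most header fields, so after the first full header you only send what changed. The header values below are made up:

```python
def delta_compress(prev: bytes, cur: bytes) -> list:
    """Return (offset, byte) pairs for the bytes that differ from prev."""
    return [(i, b) for i, (a, b) in enumerate(zip(prev, cur)) if a != b]

hdr1 = bytes.fromhex("45000054abcd40004001")   # made-up 10-byte header
hdr2 = bytes.fromhex("45000054abce40004001")   # only one field changed

diff = delta_compress(hdr1, hdr2)
print(diff)                             # [(5, 206)] - one changed byte
print(len(hdr1), "->", len(diff) * 2)   # rough on-the-wire cost: 10 -> 2
```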
Another area to research if you have time (sorry, I don't) is L2TP compression methods. BTw use L2TP tunnelling for presentation to IPStream ISPs.
... and whilst it doesn't really fit in here... other advances have been made in adsl technology which enable faster speeds. The limitation in adsl1 of 8128 kbps is due to the overheads of RS encoding and the maximum codeword size. By fitting 2 RS codewords into one DMT symbol using something called S=1/2 mode, we are now in the land of 12 Mbps. (This is also how some interleaved lines can report a sync speed of more than the theoretical max of 7616 kbps.)
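Those figures actually fall out of some simple arithmetic. Rough sketch below - my assumptions (4000 DMT symbols/sec, 255-byte max RS codeword, 16 check bytes plus 1 overhead byte on an interleaved line) are a simplification of the G.992.1 framing, so don't treat it as gospel:

```python
SYMBOLS_PER_SEC = 4000   # DMT symbols per second (assumed)

# S=1, fast path: 255-byte codeword minus 1 overhead byte
fast_payload = 254
print(fast_payload * 8 * SYMBOLS_PER_SEC / 1000)         # 8128.0 kbps

# S=1, interleaved: also lose 16 RS check bytes
interleaved_payload = 255 - 16 - 1
print(interleaved_payload * 8 * SYMBOLS_PER_SEC / 1000)  # 7616.0 kbps

# S=1/2: two RS codewords per DMT symbol, so the per-symbol byte
# ceiling roughly doubles - hence sync rates up around 12 Mbps.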
Platform dependent Compression.
As regards MPPC, I had a quick google and did find some mention of it being CPU intensive on high speed connections, and also that it increases latency due to the time required to compress/decompress. MPPC only works on Windows-based systems (although there are linux plug-ins); PPP Deflate seems to be the choice for other systems.
Advancements have been made in platforms specifically aimed at delivering content to end users, such as Thompson's Cobra + Triple Play Content.
As mentioned above, I didn't research compression at the protocol level, so I won't pretend to know much about that particular area... most of the real benefits of compression today seem to occur at the Application Layer.
Application Layer / File Compression.
As the internet has become more popular, more efficient algorithms specific to each data type are being applied to files themselves. Files stored on our PCs are now often in a format that is already efficient for transferring over the internet: .jpg has replaced .bmp, MP3 and other MPEG formats replaced midi files, and many new video formats are around which compress to smaller file sizes to make them quicker to send over a network.
Specific algorithms that work best for the actual data type are applied to the medium, eg VoIP, MPEG, IP-TV, streaming video etc are already compressed and optimised at source. Different algorithms work best on different types of data, and application designers will know whether the data type can afford lossy or lossless compression, and how much loss (if any) can be applied to that application's data stream. If a new algorithm is invented that works better for a particular data type, then you need to install a viewer application, a decompressor, or a new codec.
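To illustrate the lossy-vs-lossless decision, here's a toy in Python - not a real codec, just quantising some fake audio samples to show why a designer who knows the data can throw away precision and gain far more than a generic lossless pass ever could:

```python
import math, random, struct, zlib

random.seed(1)
# a 440 Hz tone plus a little noise - noise is what lossless coders choke on
samples = [int(20000 * math.sin(2 * math.pi * 440 * t / 8000)) + random.randint(-64, 64)
           for t in range(8000)]
raw = struct.pack("<8000h", *samples)

lossless = zlib.compress(raw, 9)                  # generic lossless pass
quantised = [(s >> 8) << 8 for s in samples]      # "lossy": drop the low 8 bits
lossy = zlib.compress(struct.pack("<8000h", *quantised), 9)

print(len(raw), len(lossless), len(lossy))        # lossy wins by a wide margin
```

The lossless pass struggles because the noisy low bits look random to it; the "codec" that knows the low bits don't matter simply discards them first.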
Let's take Flash for example:-
Flash has seen various improvements in compression methods, making Flash movies a popular choice for viewing online content. Each new version of Flash has better and more up-to-date compression methods... often based on, or combining, older and more traditional types of data compression... and that's why it's preferred by so many content sites today.
There's not much point compressing flash files - try doing so and see what happens. I've just chosen 3 completely random .swf files from my internet cache and tried compressing them. File 1, when compressed, increased by 112 bytes. File 2 increased by 172 bytes. File 3 stayed exactly the same size.
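If you want to repeat the experiment, something like this does it (the path is a placeholder - point it at your own cache). Worth knowing: .swf files whose first three bytes are "CWS" are already zlib-compressed internally (uncompressed ones start "FWS"), which is exactly why a second pass tends to just add container overhead:

```python
import glob, zlib

for path in glob.glob("cache/*.swf"):            # placeholder path
    data = open(path, "rb").read()
    out = zlib.compress(data, 9)
    tag = data[:3].decode("latin-1", "replace")  # "CWS" = already compressed
    print(path, tag, len(data), "->", len(out), f"({len(out) - len(data):+d} bytes)")
```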
Obviously it will depend on the original file and how much the author decided to compress it at the time of construction. If the author deliberately chose less compression because they wanted better quality, then you could possibly compress it further. But if you try compressing a flash file that has already been optimised for the internet, then it will likely increase in file size due to the compression container overheads.
It's more efficient to apply a specific algorithm best suited to the data type than an across-the-board, generic, one-size-fits-all solution. Because VoIP/IP-TV/Flash/MPEGs/JPEGs/name-the-data-type have already been compressed at creation time, adding another layer of compression will bring little or no benefit and may even add to the delay.
By applying compression to the data type itself, you also ensure that it works on all platforms, rather than depending on the network protocol in use.
In the world of today's high speeds and content types, I would imagine the only data that would benefit from something like MPPC is things like plain text or bitmaps...
... or possibly, if you are setting up your own tunnelling system/VPN which encrypts data, you could apply your own compression method relevant to the type of data transmitted... and at that point we are probably back in the land of L2TP-type compression methods.
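A quick sketch of why the ordering matters there: ciphertext looks random, so anything you want to compress has to be compressed before it goes into the tunnel's encryption. The XOR "cipher" below is only a dependency-free stand-in for a real one:

```python
import os, zlib

payload = ("GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 50).encode()

def toy_encrypt(data: bytes) -> bytes:
    # stand-in for a real cipher: XOR with a random keystream, which is
    # enough to make the output look random to a compressor
    keystream = os.urandom(len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))

compress_first = toy_encrypt(zlib.compress(payload, 9))   # small ciphertext
encrypt_first = zlib.compress(toy_encrypt(payload), 9)    # compressor finds nothing

print(len(payload), len(compress_first), len(encrypt_first))
```

(One caveat if you go down this road: compressing before encrypting can leak information about the plaintext in some setups, so it's a trade-off rather than a free lunch.)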
-------------
yee gawds, that was a ramble - sorry if it doesn't make sense, but now I've typed it all, I need to go and get something to eat... so excuse the probably many typos etc.