This suggestion has some associated design difficulties because of the usual isolation between layers in communications architectures (I need a broader term than just ‘protocol stacks’), which frustrates coordinated operation between well-separated, sometimes distant layers.
Say an application sends a ‘non-retransmitted’ packet, for example over UDP; a DNS lookup query or response would be another example. In contrast, TCP data payloads will be resent if lost, whereas the former examples are not.
I wonder whether it would be worth exploring an architecture that used multiple channels, exploiting the availability of multiple DSL latency paths, in order to get variable reliability. If I have understood correctly, this mechanism is intended to provide variable latency by using variable interleave depth, but I think you could abuse it to apply variable amounts of FEC and vary other parameters, giving far higher reliability on a selective basis.
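To make that concrete, here is a minimal sketch, assuming the modem exposes a small set of latency paths whose interleave depth and FEC overhead can be set per path; the profile names and figures are purely illustrative, not taken from any DSL specification.

```python
from dataclasses import dataclass

@dataclass
class ChannelProfile:
    """One DSL latency path configured as a reliability 'channel' (illustrative)."""
    name: str
    interleave_depth: int    # deeper interleaving: more latency, better burst protection
    fec_overhead: float      # fraction of raw bitrate spent on FEC parity
    target_loss_rate: float  # rough residual-loss goal for this channel

# Two hypothetical profiles: a fast, lightly protected bulk channel and a
# slower, heavily protected channel for packets that really matter.
PROFILES = {
    "bulk":     ChannelProfile("bulk",     interleave_depth=1,  fec_overhead=0.02, target_loss_rate=1e-3),
    "reliable": ChannelProfile("reliable", interleave_depth=16, fec_overhead=0.10, target_loss_rate=1e-7),
}
```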
Examples would be a DNS-related packet, an important UDP datagram that is short and sent as a one-off rather than as part of a flow, or even each TCP ACK: all of these would be sent on the high-reliability, high-overhead channel.
I don't know what such a scheme might be worth. Looked at negatively, you could take such a system as a way to push speed harder and harder, even to the point where reliability is reduced, in situations where retransmissions (e.g. TCP's) could fix slightly below-par reliability, whilst still giving superb reliability for the things that really matter and are exposed.
To achieve this, there would need to be either (1) some kind of side channel by which higher layers send down per-packet reliability-channel markers, or (2) a pattern-matching capability, implemented at one or possibly several layers, where a higher layer tells lower layers how to recognise SDUs and classify them, or a mixture of the two approaches. The pattern-matching approach could be a kludge, prone to missing things, or inflexible, or all three. It could also be really awkward to implement, as the patterns might get variable and messy. It seems to me that the mixed approach (1+2) is the only route to success. Since option 1 has backwards-compatibility problems, there is an alternative to it: in-band signalling, where suitable extra data is added to an SDU and then recognised and removed by the lower layer. There might be cases where this is really problematic; it's something I would need to think about more.
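As a rough sketch of option (1), an application (or a shim below it) could mark packets with an existing per-packet field that the lower layer can see and map onto a channel. Using the DSCP bits as that marker is my assumption here, not something the idea requires; any field visible to the lower layer would do.

```python
import socket

# Hypothetical convention: the lower layer maps this DSCP value onto the
# high-reliability channel. The value itself is an illustrative choice.
HIGH_RELIABILITY_DSCP = 0x2E

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS carries the DSCP in its upper six bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, HIGH_RELIABILITY_DSCP << 2)
sock.sendto(b"important one-off datagram", ("192.0.2.1", 5000))  # documentation address
```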
One way of using this would be to have standing orders, where lower layers pattern-match and categorise certain types of data, such as "all UDP", "DNS messages" or "TCP ACKs", and place them into different high-reliability channels/categories. This is the kind of thing that is sometimes done for QoS. It avoids the situation where no improvement to the whole system is possible because applications can't all be rewritten to think about such things.
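A minimal sketch of such a standing-orders classifier, assuming the lower layer has already parsed the relevant header fields; the rule set and channel names are made up for illustration.

```python
def classify(proto: str, src_port: int, dst_port: int, tcp_ack_only: bool) -> str:
    """Map one packet onto a reliability channel by simple pattern matching.

    More specific rules come first, so they could be pointed at different
    channels if more than two were available.
    """
    if proto == "udp" and 53 in (src_port, dst_port):
        return "reliable"   # DNS messages
    if proto == "udp":
        return "reliable"   # "all UDP" standing order
    if proto == "tcp" and tcp_ack_only:
        return "reliable"   # bare TCP ACKs carry no retransmittable payload
    return "bulk"           # everything else rides the faster, lossier channel

# e.g. classify("udp", 51515, 53, tcp_ack_only=False) -> "reliable"
```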
It seems to me that this would definitely not pay off if the cost of retransmissions in, say, TCP is very high. If a packet gets corrupted and TCP reacts badly, then pushing things harder at the cost of some reliability would not pay off, because the extra raw speed gained would be wiped out by the performance loss from recovering and retransmitting. The only way to answer the question would be to try it.
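A back-of-envelope sketch of that trade-off, with entirely made-up numbers (no real line measured), just to show the shape of the calculation: whether pushing harder pays off depends on how much the loss rate rises and on what each TCP recovery costs.

```python
baseline_rate = 20e6   # bit/s with conservative FEC/interleaving (illustrative)
pushed_rate   = 22e6   # bit/s when pushed harder (illustrative +10%)
packet_bits   = 1500 * 8
rtt           = 0.03   # seconds; assume one loss event stalls ~one RTT of delivery

packets_per_s      = pushed_rate / packet_bits
recovery_cost_bits = pushed_rate * rtt   # crude cost of one loss/recovery event

for loss_rate in (1e-4, 1e-2):
    wasted = packets_per_s * loss_rate * recovery_cost_bits
    effective = pushed_rate - wasted
    print(f"loss {loss_rate:g}: effective ~{effective/1e6:.1f} Mbit/s "
          f"(baseline {baseline_rate/1e6:.0f} Mbit/s)")
# With these numbers, pushing wins at 1e-4 loss but loses badly at 1e-2.
```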
However, the variable-reliability idea might have worth beyond just trying to get more speed.