What are the other solutions that are better?
1 - Ignore the conditions that triggered DLM and don't apply any stability action. Granted, this would probably be fine in many cases where DLM is reacting to short-lived events, but there would also be cases where lines have excessive dropouts and/or errors, causing things to break.
2 - Apply interleaving. This both reduces speed "and" increases latency, so it's a bigger evil than banding.
3 - Apply banding. This reduces speed but keeps the existing latency.
Of course I have posted in the past about what I think should be happening, but it will never happen due to the cost constraints placed on Openreach.
In an ideal world DLM would always be treated as a temporary bandage, not a long-term solution. However, treating every line that requires interleaving as faulty until a physical fix is applied would skyrocket Openreach's costs, and in the current regulated environment that isn't viable.
Thinking about this some more, speedtests need to become more relevant to reality. Most just test the burst speed; some now also test bufferbloat, which is a step in the right direction, but I think TCP session creation time and DNS lookup speeds need to be tested as well. Both of these will primarily respond to better latency rather than better burst speed, as burst speed is only part of what makes things feel fast, not everything.
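To illustrate what such a test could measure, here's a minimal sketch in Python that times a DNS lookup and a TCP handshake separately using only the standard library. The target host is a placeholder, and real speedtest sites would do this far more rigorously (multiple samples, warm vs cold resolver cache, and so on); this just shows that both numbers are dominated by round-trip latency, not throughput.

```python
import socket
import time

def time_dns_lookup(hostname):
    """Time a single DNS resolution for hostname, in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000

def time_tcp_connect(hostname, port=443):
    """Time a TCP three-way handshake to hostname:port, in milliseconds.

    The name is resolved first so DNS time is not counted here."""
    ip = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)[0][4][0]
    start = time.perf_counter()
    with socket.create_connection((ip, port), timeout=5):
        elapsed = (time.perf_counter() - start) * 1000
    return elapsed

if __name__ == "__main__":
    host = "example.com"  # placeholder test target
    print(f"DNS lookup:  {time_dns_lookup(host):.1f} ms")
    print(f"TCP connect: {time_tcp_connect(host):.1f} ms")
```

On an interleaved line both figures would rise by roughly the added interleaving delay per round trip, even though a burst-speed test might show no change at all.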