Colin, ok, the 3 DLM profiles are separate from the DSLAM profiles.
DLM is not run on a DSLAM; it just controls DSLAMs.
As you said, regardless of DLM profile everyone starts off the same way. The reason is that these profiles don't affect how you start; they only affect how much instability is required before you are moved to a lower profile.
So, e.g., on Speed it might require 1000 CRC errors a minute, Standard might require 600 a minute, and Stable might require 300 a minute.
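Those illustrative thresholds can be sketched as a simple check. This is a hypothetical sketch only: the policy names come from this thread, and the threshold values are the example figures above, not confirmed DLM internals.

```python
# Hypothetical sketch of a DLM stability check, using the example
# figures from the discussion above (not confirmed DLM internals).

# CRC errors per minute a line may accumulate before DLM intervenes,
# per stability policy (illustrative values only).
CRC_THRESHOLDS = {
    "speed": 1000,     # most tolerant of errors, favours sync speed
    "standard": 600,
    "stable": 300,     # least tolerant, favours stability
}

def should_downgrade(policy: str, crc_errors_per_minute: int) -> bool:
    """Return True if the line's error rate exceeds the policy's
    threshold, i.e. DLM would move it to a lower (more protected) profile."""
    return crc_errors_per_minute > CRC_THRESHOLDS[policy]

# The same line condition triggers different outcomes per policy:
print(should_downgrade("speed", 700))     # within Speed's tolerance
print(should_downgrade("standard", 700))  # exceeds Standard's 600
```

The point of the sketch is that the policy changes only the threshold, not the starting conditions, which matches the observation that everyone starts off the same way.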
Yes, thank you. This is paraphrasing how it was described in BT's original DLM patents for ADSL - EP 2 169 980 A1 & EP 2 342 902 B1 (original source 7LM) - and was what I was citing when I referred to how DLM might monitor and subsequently react to line conditions. These are referred to as Stability Policies, as discussed at file 1 below, and are, as I said, IMO the source of these 3 profiles.
The actual metrics suggested and described therein are at file 2 below, and an overview of the original DLM algorithm is at file 3 below.
What appears to be different in the VDSL algorithm is that these policies have to be mapped down to the selection and application of one of the available set of line profiles in the DSLAM. Each of these profiles in itself embodies (to use the terminology of the original patent) a set of parameters whose selection, as Kitz has referred to, adopts a distinctly different (so-called banded) approach to that contemplated (and used) by the original ADSL DLM(s).
What you have partially described is what might trigger a profile change, not which profile (and so which parameters) is selected, or why, or how these parameters relate to a putative set of stability policies.
It is likely that there are multiple profiles with the same banding but with different parameters, e.g. INP and delay, which could be selected between on the basis of these stability policies. However, it would be nice to see some evidence that, e.g., someone on a so-called Speed (stability) profile who encounters line issues is simply moved to a lower speed-banded profile, but one that does not use INP (because of the delay it introduces).
It could be argued that when issues on my line were caused by excessive forced resyncs, it behaved in that way, simply applying a lower-banded profile, i.e. as if I was on a Speed profile; but when issues subsequently arose as a result of excessive error rates instead, it behaved otherwise, selecting a profile in the same band but with INP and delay applied.
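If that behaviour generalises, the selection logic being hypothesised could be sketched as follows. This is purely speculative, encoding only the two cases observed above; the trigger names and the INP/delay figures are invented for illustration.

```python
# Speculative sketch of how DLM might map the *kind* of instability
# to a profile change, per the two cases observed above.
# Trigger names and parameter values are invented for illustration.

def select_profile(trigger: str, current_band: int):
    """Return a (band, inp, delay_ms) tuple for the replacement profile.

    trigger: "resync" (excessive forced resyncs) or
             "errors" (excessive error rates).
    current_band: index of the current banded profile (0 = lowest).
    """
    if trigger == "resync":
        # Drop to a lower band with no INP: the behaviour expected
        # of a Speed-style policy.
        return (max(current_band - 1, 0), 0, 0)
    elif trigger == "errors":
        # Stay in the same band, but apply INP and interleaving delay
        # (INP=2, delay=8 ms are illustrative values only).
        return (current_band, 2, 8)
    raise ValueError(f"unknown trigger: {trigger}")

print(select_profile("resync", 3))  # lower band, fast path retained
print(select_profile("errors", 3))  # same band, error protection added
```

On this reading, the differentiator would be the nature of the trigger rather than the stability policy itself, which is exactly the question posed below.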
So, to play devil's advocate again: what is differentiating the approach here - the nature of the trigger, or the 'applied' stability policy (which must have been selected, or defaulted, by the ISP, and remained unchanged)?