Kitz Forum

Broadband Related => Broadband Technology => Topic started by: broadstairs on July 28, 2016, 09:30:13 AM

Title: How does banding work?
Post by: broadstairs on July 28, 2016, 09:30:13 AM
It would seem my line has been banded after all my recent troubles, so I was wondering how this works. My attainable and actual d/s speeds are pretty much the same, although my u/s is 20000kbps, which is the max anyway. I am also puzzled as to why banding might be applied to a line, especially when the banded d/s is not the side that was actually causing all the trouble. Also, any ideas as to how it might be removed - is it a time period or other conditions?

Stuart
Title: Re: How does banding work?
Post by: WWWombat on July 28, 2016, 11:28:47 AM
Every line has a line profile, and one aspect of the profile is the "speed limit"; with a min/max value separately for upstream and downstream. When the modems negotiate a sync speed, the speed chosen must be within these limits.

The default "speed limit" is one that allows for the full range of speeds from 0.128Mbps up to the maximum allocated to the package purchased from Openreach. This is an "open profile".

"Banding" just means that a "speed limit" gets applied to the line profile (by DLM) that is something other than the default. It usually comes with "maximum limit" that is lower than the package and (according to BT documentation) ought to come with a minimum that is above 0.128.

Once banded, any resync is likely to result in a sync speed that is artificially low. That would mean the line runs with an SNRM above the standard target of 6dB and, that being the case, would tend to run with an attainable that is higher than the actual sync. However - that all assumes the incapacitating noise is not present at the time.

The old theory was that "banding" was applied only to the most unstable lines; a lower "maximum speed" value helped to make the line more stable when noise was present; the "minimum speed" value helped ensure the line didn't sync at too low a speed when noise was present. I have to say, I'm not sure I've ever really seen this "minimum" have that effect, so most commentary concentrates on the effect of just the "maximum" limit.

In this old theory, once DLM has put banding in place, then it would continue monitoring. If the line continued to be unstable, then the banding screw would be tightened - and a lower maximum put in place. For the theory to work out in practice, it requires the instability to be caused by noise that is semi-predictable - that cannot be controlled, but when present, is of a known magnitude.
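That monitor-then-tighten cycle can be sketched as a toy loop. The thresholds, step sizes and factors here are invented for illustration - the real DLM policies aren't public:

```python
def dlm_step(profile_max, resyncs_per_day, package_max=80_000,
             tighten_factor=0.9, relax_step=4_000, unstable_threshold=4):
    """One monitoring cycle of a toy "old theory" DLM.

    All thresholds and step sizes are made up; only the shape of the
    behaviour (tighten when unstable, relax gradually when stable) is
    taken from the description above.
    """
    if resyncs_per_day >= unstable_threshold:
        # Unstable: tighten the banding screw - apply a lower maximum.
        return int(profile_max * tighten_factor)
    # Stable: de-intervene gradually, back towards the open profile.
    return min(package_max, profile_max + relax_step)

dlm_step(50_000, resyncs_per_day=10)   # -> 45000 (band tightened)
dlm_step(50_000, resyncs_per_day=0)    # -> 54000 (band relaxed a step)
dlm_step(79_000, resyncs_per_day=0)    # -> 80000 (back to the open profile)
```

Run over successive days, a noisy line ratchets downwards, while a clean line drifts back up towards the open profile - which is exactly why this scheme only works when the noise is semi-predictable.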

Your line doesn't meet this requirement; the instability is caused by a fault that is unpredictable, and the effect is untameable. When DLM sees all your instability, it attempts to mitigate things by taking its standard actions - but they will never be enough to work around this fault. Hence the appearance of scrabbling around in the dark.

DLM works separately on the upstream and downstream, measuring errors independently, and taking action (including banding) separately.

On an unstable line, DLM would expect to see a lot of errors, which would give it a clue as to whether the instability is really based on noise on the upstream or the downstream. However, when your line goes doolally, it "just" resyncs to a tiny speed. It probably doesn't give DLM enough error information to help it figure out whether the problem is up or down ... so, sometimes, the decisions appear arbitrary. DLM just isn't designed to fix/mitigate problems like yours.

In the old theory, an improvement in errors would result in DLM de-intervening - by gradually increasing the banded speed, or by removing the band entirely.

Note my use of "old theory"? That is because DLM seems to work differently now, sometimes. It has been seen to use banding as a tactic of first choice, rather than last choice. And it seems to be sticky, being very hard to get rid of. This new style of behaviour is less predictable, and doesn't (yet) make sense to me.

Right now, your line seems to follow the "old theory" more. However, DLM still makes a pig's ear of it, because your fault isn't fixable with DLM.