The 'pro-activeness' you touch upon, i.e. "Making DLM go deaf", is a bit like closing the stable door after the horse has bolted. DLM will have done its deed already, ergo switching it off during faulting wouldn't really have much impact at all.
I don't think so - not in the kind of situation I described: for example, it becomes necessary for an engineer to work on somebody else's faulty line - perhaps in the cabinet, a joint box, or a DP - and in so doing he disturbs (in any number of ways you can imagine) other people's lines. It would make a big difference to those people if DLM were inhibited for all pairs in the DP, cable bundle, or cabinet while such problems were investigated - particularly for major works. I think you are thinking of individual line faulting, rather than the potential collateral impact on other lines that occurs (you know it does) whilst fault-finding takes place. As you have already said, it is very difficult to do otherwise.
Anyhow, my personal opinion about CSPs doing proactive maintenance, rather than reactive maintenance, is separate from the original question: should necessary reactive maintenance be allowed to have a (potentially long-lasting) collateral effect on other lines?
Define 'deterioration', Colin?
Certainly. In order to define deterioration, you would first of all have to define and agree a set of line characteristics, and measure them. Subsequently, deterioration would be a significant change in one or more of those characteristics that suggests that service on that line may be negatively impacted, or may result in a fault condition if not addressed.
What is significant will probably vary according to each individual characteristic of the line.
What might such characteristics be?
Well, they could be anything that is useful for such purposes, which may change over time, and which may cause the deterioration I referred to. Such as:
Loop resistance
Line attenuation (e.g. Hlog data)
AC balance
TDR plots
etc.
Even more exotic things such as the type and poundage of cable sections
Whatever they are, they would be measured at (whatever) intervals and recorded, and (predefined) significant changes used as thresholds for (dare I say it again) proactive maintenance.
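To make the idea concrete, here is a minimal sketch of the threshold scheme described above. All the names, units, and threshold values are illustrative assumptions for discussion, not taken from any real DLM or CSP system:

```python
# Hypothetical sketch: flag 'deterioration' as a significant change in any
# recorded line characteristic since its baseline measurement.
# Thresholds and units are invented for illustration only.

# Per-characteristic definition of a "significant" change
THRESHOLDS = {
    "loop_resistance_ohms": 50.0,   # ohms
    "attenuation_db": 3.0,          # dB (e.g. derived from Hlog data)
    "ac_balance_db": 2.0,           # dB
}

def deteriorated(baseline: dict, latest: dict) -> list:
    """Return the characteristics whose change exceeds their threshold."""
    flagged = []
    for name, limit in THRESHOLDS.items():
        if name in baseline and name in latest:
            if abs(latest[name] - baseline[name]) >= limit:
                flagged.append(name)
    return flagged

# Example: attenuation has risen 4 dB since the baseline was recorded,
# while resistance and balance changes remain within their thresholds.
baseline = {"loop_resistance_ohms": 600.0, "attenuation_db": 30.0, "ac_balance_db": 55.0}
latest   = {"loop_resistance_ohms": 610.0, "attenuation_db": 34.0, "ac_balance_db": 54.5}
print(deteriorated(baseline, latest))  # -> ['attenuation_db']
```

The point is simply that "significant" is defined per characteristic, as noted above - a 3 dB attenuation shift might warrant a look, while a 10 ohm resistance change would not.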
It could be done, if the will (and no doubt the money) was there. However, I don't see much debate about whether it should be done.
As you also previously remarked, regular routine testing of lines used to be done, even when it was just POTS. Now we have broadband, why does that appear to be less important?
If it's a lessening of DSL signal since 'switch on', then for the most part that will be down to crosstalk, or aggressive DLM behaviour due to lightning strikes, nearby road-works, Openreach engineering maintenance (joint remakes etc.) ... just because the speed has dropped, it does not immediately follow that there is a deterioration of our asset-based access network. Far from it, in fact.
First of all, you should know I have no particular axe to grind about OR. That's why I have tried to carefully say CSPs. I don't know (because I don't use them), but I suspect, that similar things happen with other CSPs, e.g. Virgin's coax networks (they always seem to me to be in a terrible state, but that's another story). But if the cap fits, and all that. Nobody's perfect, and I'm sure you're not suggesting that OR has no room at all for improvement?
Next, DLM already takes (some) account of wide-area events impacting lines in the same cabinet, e.g. lightning strikes.
IMO, crosstalk is, for the moment, a convenient excuse for all sorts of problems that may have nothing at all to do with crosstalk; but just as soon as OR rolls out vectoring, we can all be relieved of chasing that particular ghost (no technical pun intended!).
Openreach engineering maintenance (joint remakes etc) ... just because the speed has dropped it does not immediately follow there is a deterioration of our asset-based access network
Of course not, but it might. I don't think I mentioned speed at all in my original post. I do have personal experience of LOS and resyncs caused by just that, though. The problem is that those things are potentially the collateral damage of maintenance on line-plant carrying 'live' broadband. The speed reductions may then follow as a consequence. Can you give me a good reason why nobody should be bothered about that?
As I say, DLM is not my forte, but we do know it acts on a 24hr basis. If it deems umpteen interventions in the network by an engineer to be detrimental, it will apply certain parameters. In the same breath, it should see the next 24hrs as being fault-free and act accordingly?
You see, I think that is the false assumption that lies at the heart of my original post (that it will recover quickly). Of course, it will 'act', but that could take 8 weeks, for example. And if it was your line that was collateral damage, how happy would you be about that?
You could always experiment on yourself: create a dozen resyncs in, say, an hour one morning; then repeat that the next day after DLM has acted; then wait to see how quickly it recovers?
I don't have the answers I'm afraid.
Nobody expects anyone to have (all) the answers. This is just an interesting discussion on what can happen, why it happens, what happens next, and is there a better way? I'm surprised nobody so far has said something like 'Well, we might be able to do something like that, but would you (all) be prepared to pay an extra £2/month on your line rental for it?'
The floor is now open to others....