I'm sure Mr Cat or Kitz will be poring over the Westie link, to gain a better idea of how it works !!

I'll pass. Because if it's what I think it is, then I already know from what's been said so far in this thread.
1) It's an invention that came about at around the same time as system level vectoring.
2) It's something Openreach hopefully won't have any need to implement, as there are better alternatives, i.e. vectoring, g.fast, FTTP. Openreach's long term plan appears to be to drive fibre nearer to the home, rather than look at something that is best used with SRA.
3) Openreach doesn't use low power mode, thus it's not as important.
Trying to think of a really easy way to explain it:-
Problem: Some lines become unstable due to them syncing before their crosstalker. Whilst the line may sync up at the set target SNR, once the crosstalker comes online that 6.3dB margin could become 3dB, causing the line to incur far more errors at 3dB than it would have under normal circumstances at 6.3dB.
If you are in one of those countries where low power mode is used, then this is going to become even more problematic each time the crosstalker enters low power mode.
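Just to put some numbers on that (made-up figures, not from any real line or any Openreach system), a quick sketch of the dB arithmetic - noise powers add in the linear domain, which is why a crosstalker barely above the quiet noise floor can still knock over 3dB off the margin:

```python
import math

def db_to_linear(db):
    return 10 ** (db / 10.0)

def linear_to_db(x):
    return 10 * math.log10(x)

# One made-up tone: signal and quiet-line noise chosen so there is
# 6.3dB of margin above the SNR needed for the current bit load
# while the crosstalker is offline.
required_snr_db = 30.0                               # SNR needed to hold the bit load
signal_db = 50.0                                     # received signal (arbitrary reference)
quiet_noise_db = signal_db - required_snr_db - 6.3   # noise floor, crosstalker offline

# Crosstalker comes online and injects its own noise on this tone.
xtalk_db = quiet_noise_db + 0.5                      # crosstalk just above the quiet floor

# Noise powers add in the linear domain, not in dB.
total_noise_db = linear_to_db(db_to_linear(quiet_noise_db) + db_to_linear(xtalk_db))

print("margin before:", round(signal_db - quiet_noise_db - required_snr_db, 1), "dB")  # 6.3
print("margin after: ", round(signal_db - total_noise_db - required_snr_db, 1), "dB")  # ~3.0
```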
Background: When looking at a QLN graph, I or any other skilled linestats reader can easily spot signs of crosstalk.
In fact I could spot it way before we got QLN, just from the SNR per tone, as that's the way I learnt to spot it on ADSL2. The only difference is that QLN shows the crosstalk inverted, as bumps rather than troughs. It's why I will sometimes also ask for bitload graphs rather than just QLN. We look for distinctive peaks and troughs whose shape is specifically indicative of crosstalk.
Again, getting really simplistic, think how a child would draw a seagull. It doesn't always look exactly like that, but for the sake of envisioning this example, think of a seagull shape.
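Purely to picture what "looking for seagulls" could mean if a system did it in software (the tone count, window and threshold below are numbers I've made up for the example, not anything a real DLM or vendor system uses):

```python
def find_crosstalk_bumps(qln, window=64, threshold_db=6.0):
    """Flag tones where QLN sits well above its local baseline.

    qln          -- list of QLN readings in dBm/Hz, one per tone
    window       -- half-width of the local average used as the baseline
    threshold_db -- how far above the baseline a tone must sit to count

    Returns the tone indices that look like crosstalk bumps (the
    seagull shapes described above). Illustrative only.
    """
    bumps = []
    for i, value in enumerate(qln):
        lo = max(0, i - window)
        hi = min(len(qln), i + window + 1)
        baseline = sum(qln[lo:hi]) / (hi - lo)   # crude local average
        if value - baseline > threshold_db:
            bumps.append(i)
    return bumps

# Example: flat -140 dBm/Hz noise floor with a crosstalk bump around tones 860-899.
qln = [-140.0] * 2048
for t in range(860, 900):
    qln[t] = -128.0
print(find_crosstalk_bumps(qln)[:5], "...")   # tones inside the bump get flagged
```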
Solution: What if a system could analyse bit load, SNR per tone and QLN, look for seagulls, and deduce that this is crosstalk?
Over time the system recognises which tones on that individual line are impacted by crosstalk and learns what normal crosstalk behaviour looks like for that line. It is then also capable of recognising when the crosstalker is offline or in LP mode.
So, to stop the line syncing at 6.3dB and then suddenly being jolted down to 3dB when crosstalk comes online, what it does is create an artificial noise mask at the tones (and only at those specific frequencies) where the crosstalker impacts the line.
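Very hand-wavy sketch of that idea (the function names and structure are mine, purely for illustration - I've no idea how the real thing is actually implemented):

```python
def update_learned_xtalk(learned, current_qln, affected_tones):
    """Keep a per-tone running worst-case of the noise seen on the tones
    flagged as crosstalk-affected. Over time this settles on what the
    crosstalker normally does to this particular line. Illustrative only."""
    for t in affected_tones:
        if current_qln[t] > learned.get(t, float("-inf")):
            learned[t] = current_qln[t]
    return learned

def virtual_noise_floor(quiet_qln, learned):
    """Noise floor handed to bit loading: the measured quiet floor everywhere,
    but raised to the learned worst-case on the crosstalk-affected tones, so
    those tones never bit load more than they can hold with the crosstalker
    online."""
    return [max(q, learned.get(t, q)) for t, q in enumerate(quiet_qln)]
```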
Result: It masks the tones which are usually affected by crosstalk, applying artificial noise [margin] at those tones so that they won't bit load more than usual, and restricting the minimum SNR at those tones.
This stops the line from getting an artificially inflated sync speed that will only drop when the crosstalker comes back online.
Ideal for use with SRA and countries which use LP mode.
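To see why that's the sensible trade-off, here's a rough before/after with made-up numbers and a very crude bit-loading formula (Shannon capacity with an SNR gap, nothing vendor-specific): the masked line gives up a bit of headline sync speed, but it's a speed it can actually keep when the crosstalker reappears.

```python
import math

def bits_per_tone(signal_db, noise_db, target_margin_db=6.3, snr_gap_db=9.75, cap=15):
    """Rough per-tone bit load: Shannon capacity with an SNR gap, holding
    back the target margin. Purely illustrative numbers."""
    usable_db = signal_db - noise_db - target_margin_db - snr_gap_db
    if usable_db <= 0:
        return 0
    return min(cap, int(math.log2(1 + 10 ** (usable_db / 10.0))))

# One made-up line: 512 tones, flat signal, crosstalk hitting tones 200-299.
signal_db = 50.0
quiet = [10.0] * 512                      # noise per tone, crosstalker offline
xtalk = list(quiet)
for t in range(200, 300):
    xtalk[t] = 20.0                       # crosstalker adds 10dB on these tones

# Virtual noise mask: pretend the crosstalker is always on at the affected tones.
masked = [max(q, x) for q, x in zip(quiet, xtalk)]

def sync_kbps(noise_per_tone):
    total_bits = sum(bits_per_tone(signal_db, n) for n in noise_per_tone)
    return total_bits * 4000 / 1000       # 4000 symbols per second -> kbps

print("sync against quiet noise (inflated):", sync_kbps(quiet), "kbps")
print("sync against masked noise (stable): ", sync_kbps(masked), "kbps")
```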
----
Note... the above was typed completely off the top of my head without checking any tech journals, thus no tech jargon or references or googlings... just pure blurb and so could be wrong.
Hands extremely sore, and it took waaaay longer than anticipated to type the above, so hitting send without checking for errors and signing back off for the night.