I read the statement as meaning something slightly different.
e.g. right now there is a 6 dB target on all lines, yet lines that are either banded or synced at the max line rate will have a higher SNR margin (SNRM). I expect this change is not aimed at those lines, so they are excluded from the results. After all, if you lower the target SNRM of a line synced at 80 Mbit with, say, an 8 dB SNRM, it will stay at 8 dB SNRM; it doesn't need the extra SNR.
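To illustrate why a capped line is unaffected, here is a deliberately crude sketch. It assumes a linear trade-off of roughly 4 Mbit/s of sync rate per 1 dB of margin (a made-up figure for illustration; real DSL bit-loading is per-tone and not this linear), and models the sync rate as whatever hits the target margin, clamped to the 80 Mbit cap:

```python
# Toy model: each 1 dB of SNR margin trades for ~4 Mbit/s of sync rate.
# (mbit_per_db is an assumed illustrative constant, not a real DSL figure.)
def sync_and_margin(attainable_mbit, target_db, cap_mbit=80, mbit_per_db=4):
    """Return (sync rate, resulting SNRM) for a line under a rate cap."""
    wanted = attainable_mbit - mbit_per_db * target_db  # rate at target margin
    sync = min(cap_mbit, wanted)                        # cap wins if lower
    margin = (attainable_mbit - sync) / mbit_per_db     # surplus becomes SNRM
    return sync, margin

# Capped line (attainable well above 80): dropping the target from 6 dB
# to 3 dB changes nothing - it sits at 80 Mbit with 8 dB spare either way.
print(sync_and_margin(112, 6))  # (80, 8.0)
print(sync_and_margin(112, 3))  # (80, 8.0)

# Uncapped line: the same target drop actually buys extra speed.
print(sync_and_margin(90, 6))   # (66, 6.0)
print(sync_and_margin(90, 3))   # (78, 3.0)
```

So under this model, any data showing speed gains can only have come from lines that weren't already pinned at the cap, which matches the point above.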
The data is probably from lines outside those two categories.
If I were Openreach I wouldn't want to raise the 80 Mbit cap to 88, as that would only trigger more speed-related faults. What happens, for example, if someone initially gets 88 on the 3 dB target, then drops to 81 after some crosstalk appears? On the original 80 cap they would still be at 80 and wouldn't be bothered, but on an 88 cap they would be more likely to raise a fault.
Not to mention I don't think there is any marketing benefit to 88 over 80; VM already beats that headline speed.
What I expect has happened is that Openreach, after deciding not to do a blanket vectoring rollout (probably due to cost), discussed how else they could bring speeds closer to the headline rates, and dropping the target SNRM was proposed as a minimal-cost solution. I would also expect error levels to need to be pretty low before a line's SNRM is lowered: there is no point lowering a line that is already clocking 2500 errored seconds (ES) a day, only for it to go into red status at 3 dB. And what happens then? Interleaving while on 3 dB, or a move back to 6 dB with the line bouncing back and forth? More likely there will be an amber band like on the current profiles, and lines will only be moved down while in green status.
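The gating I'm guessing at above could be sketched like this. Everything here is an assumption: the green/amber/red banding mirrors DLM-style profiles, and the ES thresholds are invented for illustration (the 2500 figure is just the example from the text):

```python
# Hypothetical DLM-style gate: only clean ("green") lines get the lower
# target. Thresholds are made up for illustration, not real Openreach values.
def dlm_status(es_per_day):
    """Classify a line by daily errored seconds (assumed thresholds)."""
    if es_per_day < 100:
        return "green"
    if es_per_day < 2500:
        return "amber"
    return "red"

def new_target_snrm(current_target_db, es_per_day):
    """Drop a green line's target to 3 dB; leave amber/red lines alone."""
    if dlm_status(es_per_day) == "green" and current_target_db > 3:
        return 3
    return current_target_db

print(new_target_snrm(6, 20))    # 3 - clean line gets the lower target
print(new_target_snrm(6, 2500))  # 6 - error-heavy line stays at 6 dB
```

A one-way gate like this avoids the bounce-between-targets problem: a line that later degrades gets handled by the usual profile machinery (interleaving, re-banding) rather than flip-flopping between 3 dB and 6 dB.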