I'm away, with tablet only, so rough calculations for now.
Those readings are 900 seconds apart.
The RS block counter increases by roughly 25 million; at the line's roughly 40 Mbps, I make that roughly 180 user-data bytes per RS block, which is close to what the line stats say.
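As a quick sanity check on that figure, here is the arithmetic as a rough Python sketch; the counter values and the 40 Mbps rate are my rounded readings, not exact values:

```python
# Rough check of bytes per RS block, using rounded figures from the stats.
interval_s = 900           # time between the two snapshots
line_rate_bps = 40e6       # downstream rate, roughly 40 Mbps
rs_blocks = 25e6           # increase in the RS block counter

bytes_carried = line_rate_bps / 8 * interval_s    # ~4.5e9 bytes in 900 s
print(bytes_carried / rs_blocks)                  # ~180 bytes per RS block
```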
At the same time, RSCorr increases by roughly 19 million. That means roughly 19 in every 25 blocks - some 76% - are arriving corrupted, but only corrupted lightly enough that FEC correction always succeeds.
If we assume each corrected block contains just one bit error, happening at a regular rate, then it is 19 bit faults in every 4,500 bytes (25 blocks x 180 bytes), or 1 fault every 237 bytes, or 1 fault every ~1,900 bits. This is high!
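Spelling that out with the same rounded figures (and the same assumption of exactly one bit error per corrected block):

```python
# Fault spacing, assuming one bit error per corrected RS block.
rs_blocks = 25e6           # RS block counter increase over 900 s
rs_corr = 19e6             # RSCorr increase over the same period
bytes_per_block = 180      # user-data bytes per RS block, from above

bytes_per_fault = (rs_blocks * bytes_per_block) / rs_corr   # ~237 bytes
bits_per_fault = bytes_per_fault * 8                        # ~1,900 bits
print(bytes_per_fault, bits_per_fault)
```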
With the line running at 40 Mbps, an error every 1,900 bits would suggest an impulse rate of about 21 kHz ... but that calculation is only meaningful if the bits were transmitted serially, one at a time.
However, DSL transmits many bits in parallel: a few bits on each of many tones. I make it roughly 10,000 bits sent in the same instant, spread across 2,739 tones, with this repeated 4,000 times per second.
Our error rate of 1 bit in 1,900 means we are seeing about 5 faulty bits in each batch of 10,000. Every batch of 10,000.
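Here is the per-symbol version of that arithmetic; the 4,000 symbols/second figure is the standard DMT symbol rate, and the 2,739 tone count is my estimate for this line:

```python
# Per-DMT-symbol view of the same error rate.
line_rate_bps = 40e6
symbols_per_s = 4000                              # DMT symbol rate
bits_per_symbol = line_rate_bps / symbols_per_s   # ~10,000 bits per symbol
tones = 2739                                      # active downstream tones (estimate)

impulse_rate_if_serial = line_rate_bps / 1900     # ~21 kHz, only if bits were serial
avg_bits_per_tone = bits_per_symbol / tones       # ~3.7 bits per tone on average
faulty_bits_per_symbol = bits_per_symbol / 1900   # ~5 bad bits in every symbol
print(impulse_rate_if_serial, avg_bits_per_tone, faulty_bits_per_symbol)
```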
I'm wondering if this kind of error rate could mean that one tone is broken and transmitting duff data ... but it can't be turned off because bit swapping has stopped. It would be the kind of fault that explains the high fault rate, the consistency, and the recoverability - and it ties in with the other observable issue.
Any thoughts?
I'll be back later, when I can calculate more accurately.