Please remember that the SNRM is defined in terms of the error rate: technically, it is the maximum increase in noise power at which the modem could still meet the configured bit error ratio (usually 1 bit error in 10^7 bits). So the lower the SNRM gets, the higher the error rate gets, and squeezing more bits into the already heavily loaded tones is only going to reduce the SNRM further.
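To put a rough number on that, here's a minimal Python sketch using the textbook gap approximation (an illustration of the relationship, not any vendor's firmware logic). With the per-tone bit count b and SNR related by b = log2(1 + SNR / (gap × margin)), each extra bit loaded onto a tone costs roughly 3 dB of margin:

```python
import math

GAP_DB = 9.8  # approximate uncoded QAM SNR gap for a 1e-7 bit error ratio

def margin_db(snr_db: float, bits: int, gap_db: float = GAP_DB) -> float:
    """Margin (dB) left on a tone after loading `bits` bits, per the gap
    approximation: margin = SNR / (gap * (2^b - 1)) in linear terms."""
    return snr_db - gap_db - 10 * math.log10(2 ** bits - 1)

# At a fixed 45 dB per-tone SNR, each extra bit costs about 3 dB of margin:
for b in (10, 11, 12):
    print(f"{b} bits -> {margin_db(45.0, b):+.1f} dB margin")
```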
Of course, the modem will be doing the bit-swaps in the first place to keep the error rate down, and it would only swap bits back onto a tone based on its assessment of the current conditions.
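For illustration only, a bit-swap decision might look something like the hypothetical sketch below (the margin floor and the selection rule are my assumptions; real modems follow the ITU-T online reconfiguration procedures, which are considerably more involved):

```python
import math

GAP_DB = 9.8
MAX_BITS = 15

def margin_db(snr_db, bits):
    # Empty tones have effectively unlimited margin.
    if bits == 0:
        return float("inf")
    return snr_db - GAP_DB - 10 * math.log10(2 ** bits - 1)

def bitswap_once(snr_db, bits, floor_db=0.0):
    """Hypothetical single swap: if any tone's margin has dropped below the
    floor, move one of its bits to the tone with the most spare margin."""
    margins = [margin_db(s, b) for s, b in zip(snr_db, bits)]
    worst = min(range(len(bits)), key=margins.__getitem__)
    if margins[worst] >= floor_db:
        return bits  # every tone still meets the floor; nothing to do
    # Candidate receivers: any other tone not already at the 15-bit cap.
    receivers = [i for i in range(len(bits)) if i != worst and bits[i] < MAX_BITS]
    if not receivers:
        return bits
    best = max(receivers, key=lambda i: margin_db(snr_db[i], bits[i] + 1))
    new_bits = list(bits)
    new_bits[worst] -= 1
    new_bits[best] += 1
    return new_bits
```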
Also remember that at the start, the modem will have assigned as many bits to each tone as conditions allowed, in order to maximize bandwidth. And since the maximum is 15 bits per tone, the really good tones may be fully loaded with 15 bits from the start.
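Here's how that initial loading step might look under the same gap approximation, with an assumed 6 dB target margin (the numbers are illustrative, not taken from any real modem):

```python
import math

GAP_DB = 9.8
MAX_BITS = 15

def initial_bitload(snr_db_per_tone, target_margin_db=6.0):
    """Load each tone with as many bits as its SNR supports at the target
    margin, capped at 15 bits per tone."""
    bits = []
    for snr_db in snr_db_per_tone:
        usable_db = snr_db - GAP_DB - target_margin_db
        b = int(math.log2(1 + 10 ** (usable_db / 10))) if usable_db > 0 else 0
        bits.append(min(b, MAX_BITS))
    return bits

# A very good tone (~65 dB SNR) hits the 15-bit cap straight away:
print(initial_bitload([65.0, 40.0, 20.0, 6.0]))  # -> [15, 8, 1, 0]
```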
My point is that whilst these things can seem like a good idea, and it's tempting to wonder why they haven't been implemented, they generally haven't been widely adopted for good reason.
But monitored tones have been widely implemented on most modern DSL modems. You can see "monitorTone: On" listed in the output of the xdslcmd command.
Since we haven't seen any bitloading data from Weaver's modems, we don't actually know whether the lack of support for monitored tones is the reason the SNRM dropped sharply and then stayed low. It's a plausible explanation for the behaviour, but it may actually be due to something else.