I am assuming that the attenuation calculation depends on the bins’ usage pattern; that is, if some higher tones are in use then the calculated attenuation will be ‘worse’, because the poor attenuation figures for those tones, owing to their high frequency, will drag the overall figure towards a worse (higher dB) value. Is that correct?
If so, then the reported attenuation is not really attenuation of the line, which may or may not be what we want, but attenuation of the signal. In any case, asking for the “attenuation of the line” is meaningless on its own, because you have to say “at what frequency?”. Over the years, using various modems and improved settings with a lower downstream SNRM, I have noticed that the reported downstream attenuation figure has crept up with every speed improvement. Twelve years ago downstream attenuation was reported as 60.5 dB; now, albeit with a different modem, it’s 64.5 dB, and I attribute this to the use of additional higher tones. My change to G.992.3 (ADSL2) from G.992.1 (the original ‘ADSL1’ spec) is one likely factor, because ADSL2 supports 1-bit tones and the old ADSL1 doesn’t iirc, so additional, weaker tones at the top of the band are now in service.
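To make my assumption concrete, here’s a toy sketch (Python) of the kind of calculation I have in mind. The per-tone attenuation figures and the tone ranges are entirely made up for illustration, and I’m not claiming this is the formula the modem or the standard actually uses; the point is only that if the figure is something like a dB average over the tones in use, then bringing weaker high-frequency tones into service pushes it up:

```
# Toy model: reported attenuation as an average (in dB) over the tones in use.
# All numbers below are invented purely to illustrate the effect.

def average_attenuation_db(per_tone_attn_db, active_tones):
    """Average attenuation (dB) over the set of tones actually in use."""
    values = [per_tone_attn_db[t] for t in active_tones]
    return sum(values) / len(values)

# Hypothetical per-tone attenuation that rises with frequency (tone index).
per_tone_attn_db = {tone: 40 + 0.08 * tone for tone in range(33, 512)}

adsl1_tones = range(33, 400)   # assumed: weakest top tones unusable without 1-bit loading
adsl2_tones = range(33, 512)   # assumed: 1-bit loading brings the weakest tones into use

print(average_attenuation_db(per_tone_attn_db, adsl1_tones))  # lower figure
print(average_attenuation_db(per_tone_attn_db, adsl2_tones))  # higher (‘worse’) figure
```

On those made-up numbers the average comes out a few dB higher once the top tones are included, which is the sort of creep I’ve seen in practice.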
I don’t know enough about the differences between modem models. If you are testing by switching between models then you are bound to get different downstream sync speeds, somewhat unpredictably, because of DLM, because of changes in line conditions, and because of differences in the capabilities of the modems themselves. If the sync speed is different, that could mean the bit-loading spectrum has changed; and if that is the case, the basis for the attenuation calculation may have changed too, if additional tones have come into use, or the reverse. So, as always, switching between models has to be done with care because of the ‘noise’ that the very act of switching introduces into the results.
Are the details of the attenuation calculation set out in the standards documents somewhere?