It's entirely possible that different types of noise and noise correction have different effects. There are different types of noise and different types of error correction to handle each. For example, TCM (Trellis) is effective against background noise yet useless against REIN/SHINE. RS encoding is better at handling REIN but can't cope with SHINE. However, RS carries a lot of overhead, so it isn't efficient unless you actually have that type of noise. (INP &) G.INP is far more efficient, but I'm not sure if it's any better at coping with SHINE, or by how much. That's why in some cases you have to use all the algorithms in tandem.
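Just to illustrate the overhead point, here's a rough Python sketch comparing fixed RS parity overhead with a retransmission scheme along the lines of G.INP. The codeword sizes, DTU error rates and so on are made-up assumptions for the example, not figures from any real modem or DSLAM:

```python
# Rough sketch: fixed overhead of Reed-Solomon FEC vs on-demand overhead of a
# retransmission scheme like G.INP (ITU-T G.998.4). All numbers are
# illustrative assumptions, not values read from real line stats.

def rs_efficiency(n=255, k=239):
    """RS(n, k) always spends (n - k) parity bytes per codeword,
    whether or not any noise actually hits the line."""
    return k / n  # ~93.7% of the sync rate carries user data

def retransmission_efficiency(dtu_error_rate):
    """A retransmission scheme only resends the data units that were
    actually corrupted, so overhead tracks the real error rate."""
    return 1.0 - dtu_error_rate

if __name__ == "__main__":
    print(f"RS(255,239) efficiency, noise or not: {rs_efficiency():.1%}")
    for err in (0.0, 0.001, 0.01):
        print(f"Retransmission efficiency at {err:.2%} DTU errors: "
              f"{retransmission_efficiency(err):.1%}")
```

The point being that RS pays its parity cost all the time, while retransmission only pays when something is actually hit, which is roughly why G.INP works out more efficient on a line that isn't constantly being battered.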
As for spectral, I'm not entirely sure what that is all about or how it works - afaik it's something to do with the equipment learning which tones are noisy and taking them out of use. The idea (the same reason some DSLAMs block ham tones, or why RF filters exist) is that by taking the tones completely out of use you stop noise spreading to neighbouring tones, and stop large noise bursts perhaps taking the line out completely... or at least you stabilise the line because it records fewer CRCs/errored seconds. So I've no idea why switching spectrum off should make things better... in fact I didn't know that domestic routers had that capability. Perhaps ASUS are dabbling with it. Has anyone asked what 'spectrum' is actually doing?
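If it helps, this is roughly what I mean by "taking tones out of use" - a hypothetical sketch of a modem blanking (notching) tones rather than loading bits onto them. The ham-band ranges, tone spacing and SNR threshold here are assumptions for illustration; real tone masks are vendor and DSLAM specific:

```python
# Hypothetical illustration of notching noisy/blocked tones out of the
# bit-loading table instead of loading data onto them.

TONE_SPACING_KHZ = 4.3125          # VDSL2 sub-carrier spacing
HAM_NOTCHES_KHZ = [(1810, 2000), (3500, 3800), (7000, 7200)]  # example bands
MIN_SNR_DB = 6.0                   # assumed minimum margin to load any bits

def tone_is_usable(tone_index, snr_db):
    freq_khz = tone_index * TONE_SPACING_KHZ
    in_notch = any(lo <= freq_khz <= hi for lo, hi in HAM_NOTCHES_KHZ)
    return (not in_notch) and snr_db >= MIN_SNR_DB

def build_bit_loading(snr_per_tone):
    """Return bits per tone, with notched/noisy tones forced to zero."""
    loading = {}
    for tone, snr in snr_per_tone.items():
        if tone_is_usable(tone, snr):
            # Very rough rule of thumb: ~1 bit per 3 dB of margin, capped at 15.
            loading[tone] = min(15, int(max(snr - MIN_SNR_DB, 0) // 3))
        else:
            loading[tone] = 0   # tone taken completely out of use
    return loading
```

A tone set to zero like that can't record errors, can't drag its neighbours down, and doesn't contribute to CRCs/errored seconds - which is exactly why switching such a feature *off* improving things seems odd to me.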
Bitswap supposedly takes particularly noisy tones out anyway until the next resync - is that how it 'learns'? ... or have I completely misunderstood what ASUS are trying to do?
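My (possibly wrong) understanding of bitswap is that it doesn't so much remove tones as shuffle bits off a tone whose margin has dropped onto tones that still have spare margin, without a resync. A very loose sketch of that idea, with the margin figures and the 3 dB-per-bit rule being assumptions on my part:

```python
# Loose sketch of the bitswap idea: when a tone's SNR margin drops, move one
# bit from it onto a tone with spare margin, keeping the total bit load
# (and therefore the sync rate) unchanged until the next resync.

def bitswap(bits, margin_db, threshold_db=6.0, spare_db=9.0):
    """bits/margin_db are dicts keyed by tone index; modified in place."""
    for weak, m in margin_db.items():
        if m < threshold_db and bits[weak] > 0:
            # Find the tone with the most spare margin to absorb one more bit.
            donor = max(margin_db, key=lambda t: margin_db[t])
            if margin_db[donor] >= spare_db:
                bits[weak] -= 1
                bits[donor] += 1
                margin_db[weak] += 3.0   # ~3 dB relief per bit removed
                margin_db[donor] -= 3.0  # ~3 dB cost per bit added
    return bits
```

If ASUS's 'spectrum' feature is just remembering which tones keep needing that treatment and masking them at the next sync, then it is "learning" in that sense - but that's a guess on my part.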