@ejs
I think when it says 'double booked' it's talking about the overheads required for FEC...
and also when using INP that can further reduce the line rate.
INP is about applying sufficient protection against noise to ensure that 'x' DMT symbols can be recovered in the event of a noise burst.
AIUI, when it starts going on about the BER, I think it's trying to say that when you adjust the target Bit Error Rate to increase error protection, this 'takes away' from the SNRm. I think coding gain is what makes the difference between the true SNR and the SNR margin,
ie this is what makes the SNRm increase/decrease at each stage of INP.
eg.
INP = 3 may take an additional 3dB of SNRm from the actual SNR value
INP = 4 may take an additional 6dB of SNRm from the actual SNR value
[I made up those SNRm figures - they could vary depending on various factors such as line rate, and I'm not about to start playing with complicated formulae]
So when it says 'double booked', I'm assuming they mean the FEC overheads plus any adjustments to the BER which will adjust the SNRm further. I could be wrong; that's the way I read it.
I think what the Broadcom whitepaper is saying, is that if you switch on interleaving to fix errors, and then what interleaving does is also add coding gain,
Interleaving itself doesn't fix errors; it chops up and spreads the data so that, if there is a noise burst, FEC has an increased chance of being able to correctly re-assemble the data.
It's FEC/INP that adds coding gain. Coding gain is the difference in SNR needed by error protection to reach the desired BER, compared against a line without RS encoding.
Although you can have FEC without interleaving, interleaving on its own, without FEC, is useless. But unfortunately it is interleaving that adds latency and delay. This is a brief and crude example of why the delay occurs:
Original data packets:
abcd efgh ijkl mnop
Interleaved packets:
aeim bfjn cgko dhlp
So it's not only the time taken to chop up and re-assemble the data: the receiver has to wait longer for the data packet containing 'abcd'... because it can't re-assemble it until the packet containing 'mnop' also arrives.
If we increase the depth we get:
abcde fghij klmno pqrst uvwxy
Interleaved:
afkpu bglqv chmrw dinsx ejoty
So now we have to wait for a 5-packet spread, which takes longer to receive before the data can be re-assembled... and more delay.
The above are just examples to show, in a very simplified way, what happens. Packets don't really contain only 4 or 5 chars, but by using the alphabet it's easier to see what is happening to the data stream.
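The packet-shuffling above can be sketched in a few lines of code. This is a toy matrix-style block interleaver purely to illustrate the reordering and the wait-for-the-last-block delay; real DSL interleavers are convolutional, but the idea is the same:

```python
# Toy block interleaver: write data into rows, read it back out by columns.
# (Real DSL interleavers are convolutional; this is just to show the idea.)

def interleave(data: str, depth: int) -> list[str]:
    """Write `data` into rows of length `depth`, then read out column by column."""
    rows = [data[i:i + depth] for i in range(0, len(data), depth)]
    return ["".join(row[c] for row in rows) for c in range(depth)]

def deinterleave(blocks: list[str], depth: int) -> str:
    # The receiver can't rebuild the first original group until *every*
    # interleaved block has arrived -- this is where the extra delay comes from.
    n_rows = len(blocks[0])
    return "".join(blocks[c][r] for r in range(n_rows) for c in range(depth))

print(interleave("abcdefghijklmnop", 4))  # ['aeim', 'bfjn', 'cgko', 'dhlp']
print(deinterleave(["aeim", "bfjn", "cgko", "dhlp"], 4))  # 'abcdefghijklmnop'
```

Note that `deinterleave` touches every block before it can emit 'abcd' -- increase the depth and the receiver has to buffer proportionally more data before anything can be handed up.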
I know my ADSL2 line works better with interleaving on, and I get a little more downstream bandwidth,
I'm not disputing that you may do.
But the DSL theory is that FEC will always reduce line rate because of overheads. There were some advances in RS coding algorithms, ie S=1/2 mode, which made for more efficient overheads, but once FEC is on, those overheads are going to be there.
When on ADSL2+, FEC reduced my line rate of 24Mbps down to about 18.5Mbps
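To put a rough number on the overhead side of this, here's a back-of-the-envelope sketch using hypothetical RS codeword parameters. The actual codeword size and check-byte count negotiated on any given line vary with framing and INP, and INP/interleaving can cost more on top, so treat the figures as illustrative only:

```python
# Rough sketch: fraction of line rate consumed by Reed-Solomon check bytes,
# assuming a hypothetical codeword of `n_total` bytes with `r_check` check bytes.
# Real lines negotiate their own framing parameters.

def rs_overhead(n_total: int, r_check: int) -> float:
    """Fraction of each codeword spent on check bytes instead of user data."""
    return r_check / n_total

# e.g. 16 check bytes in a 255-byte codeword:
print(f"{rs_overhead(255, 16):.1%}")  # about 6.3% of the payload goes to FEC
```

Even this modest-looking percentage comes straight off the attainable rate, before any extra SNRm is taken for a tougher BER target.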
rather than using interleaving to optimise the line towards more bandwidth rather than lower latency, interleaving only gets used to try to correct errors, but it's not really powerful enough at that.
Interleaving can't correct any errors at all. It spreads data to help protect against burst errors; it's FEC which does all the error correcting.
When they say interleaving improves FEC, it's because FEC has more chance of being able to recover from a single noise burst.
eg if the noise burst was
abcd e--- ijkl mnop
on interleaved data, that same noise burst would be
aeim b--- cgko dhlp
after it's been de-interleaved, the actual damage from the noise burst would be
abcd e-gh i-kl m-op
FEC stands little chance of recovering the packet e---, but it can easily correct the single missing symbol in each of e-gh, i-kl and m-op, and therefore there is no data loss.
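The same burst scenario can be simulated directly. This sketch wipes the same three consecutive symbols from a plain stream and from an interleaved stream, then de-interleaves to show how the damage ends up scattered (the interleaver is the toy matrix version from earlier, not a real convolutional one):

```python
# Sketch: why interleaving helps FEC against bursts. Wipe the same three
# consecutive symbols in both cases and compare the per-packet damage.

def interleave(data: str, depth: int) -> str:
    rows = [data[i:i + depth] for i in range(0, len(data), depth)]
    return "".join(row[c] for c in range(depth) for row in rows)

def deinterleave(data: str, depth: int) -> str:
    # For this square 4x4 toy example the transpose is its own inverse.
    return interleave(data, depth)

def burst(data: str, start: int, length: int) -> str:
    # Replace `length` consecutive symbols with '-' to model a noise burst.
    return data[:start] + "-" * length + data[start + length:]

plain = "abcdefghijklmnop"
damaged_plain = burst(plain, 5, 3)
damaged_inter = deinterleave(burst(interleave(plain, 4), 5, 3), 4)
print(damaged_plain)  # abcde---ijklmnop : one packet loses 3 of its 4 symbols
print(damaged_inter)  # abcde-ghi-klm-op : three packets each lose only 1 symbol
```

A 3-symbol burst overwhelms one un-interleaved packet, but after de-interleaving the same burst leaves only one missing symbol per packet, which is well within what the RS code can repair.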