Anyone know why my line has interleave enabled?
It looks like it is becoming the default setting for every line.
I had a 100m line, running at around 50 errored seconds (ES) per day, but I was converted anyway.
From my limited understanding, interleaving has to be on for G.INP to work.
I don't think it *has* to be turned on. However, the new configuration still has two jobs to perform: fixing REIN and SHINE.
The retransmission part of G.INP is best suited to SHINE, but REIN is best handled by FEC and interleaving still. The difference is that the settings for FEC and interleaving, used alongside G.INP retransmission, give a much reduced impact.
I know but I want my fast path ping back
I don't understand why my interleave depth is so high: 16 down and 8 up.
Do you know how much latency you are complaining about? Have you calculated it?
Interleaving works by writing data into a two-dimensional array row by row, then reading it out column by column. The overall delay incurred is proportional to (depth multiplied by width); it always beats me why people worry about interleave depth when width matters just as much.
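To illustrate the row-in, column-out idea, here is a minimal block-interleaver sketch (toy parameters, not modem-accurate): a burst of consecutive errors on the wire gets spread into isolated single-symbol errors after deinterleaving, which FEC can then correct one at a time.

```python
# Toy block interleaver: write a depth x width array row by row,
# read it out column by column. Parameters are illustrative only.

def interleave(data, depth, width):
    assert len(data) == depth * width
    rows = [data[r * width:(r + 1) * width] for r in range(depth)]
    # Read column by column
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(data, depth, width):
    # Inverse: write column by column, read row by row
    rows = [[None] * width for _ in range(depth)]
    for i, symbol in enumerate(data):
        rows[i % depth][i // depth] = symbol
    return [s for row in rows for s in row]

depth, width = 4, 8
tx = interleave(list(range(depth * width)), depth, width)
tx[10:14] = ['X'] * 4            # a burst of 4 corrupted symbols in transit
rx = deinterleave(tx, depth, width)
# The burst lands as widely separated single errors:
print([i for i, v in enumerate(rx) if v == 'X'])   # -> [3, 11, 18, 26]
```

The price of that error-spreading is exactly the buffering delay being discussed: the receiver cannot release a symbol until enough columns have arrived, which is why delay scales with depth × width.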
When DLM intervened with its standard setting of INP=3 and delay=8, the (depth x width) calculation would come out at, for example, (1421 x 64) = roughly 91,000. A data volume of 91,000 therefore maps to 8ms.
With G.INP active, a typical equivalent calculation is (16 x 139) = roughly 2,200. That is about 40 times smaller than the old setting, so probably amounts to around 0.2ms.
Are you really worried about 0.2ms?
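The arithmetic above can be checked directly. This sketch just reproduces the two quoted (depth x width) figures and scales the old 8ms delay by their ratio; the 8ms-per-91,000 mapping is taken from the post above, not from any spec.

```python
# Rough comparison of interleave delay before and after G.INP,
# using the depth/width figures quoted in the thread.

def interleave_volume(depth, width):
    """Delay is proportional to depth * width."""
    return depth * width

old = interleave_volume(1421, 64)   # DLM-era INP=3, delay=8 setting
new = interleave_volume(16, 139)    # typical figures with G.INP active

print(old)                     # 90944 (~91,000)
print(new)                     # 2224  (~2,200)
print(round(old / new))        # ~41x smaller
print(round(8.0 * new / old, 2))   # ~0.2 ms estimated residual delay
```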
Do you know what this 0.2ms penalty buys you elsewhere? It turns as many noise errors as possible into FEC corrections, rather than letting them become retransmissions. A retransmitted block takes much longer to arrive and shows up as jitter in the path, which isn't an ideal result either.
This is as bad as the noise on the line can get, as my cab is now full.
Until a new subscriber is connected on a pair that lies closer to yours within the cable bundle. Or, much worse, a second cab gets added.