While looking at @BE1's issue yesterday, my eye caught a couple of earlier issues, so I thought I'd reply to them.
First, @tommy45's changes...
Yeah, so imo tommy, your US is exactly back to how it was before on fast path, with D set to 1 and R set to 16.
So Openreach did what I told them to do a month ago: leave US on the fast path config.
Having just had a quick look at the R and N values, then yep, they've rolled the upstream back to how things were before they applied G.INP.
I agree. It looks very typical of a pre-G.INP setup, with one little proviso that I'll mention later.
The MSGc value (number of bytes in the overhead channel message) has changed though, but I'm not sure what impact that will have on the line. There's more overhead than with G.INP, but less than previously.
I haven't really paid much attention to the changes of these before, but I think the overhead channel is the part that moves into bearer 1 when G.INP is running. As G.INP was turned off upstream, the overhead channel in that direction has moved back into bearer 0. This matches the relative changes seen in MSGc between the two bearers; I have no idea what a "-6" signifies though.
The I value (interleaver block size) is also lower, and it's interesting that it's now exactly half of the N value. These are config params, so I'm not sure how or if they will make any difference, because the interleaving depth is one, so it's effectively non-interleaved. But it certainly looks like they've been tweaking some parameters so that if interleaving and RS is switched on it will have less impact. WWwombat is best when it comes to working out how much of an impact.
When depth is 1, then the interleaver block size being half the RS block size is a little strange - on the face of it, there is no apparent need for it to be. But I don't think it has any impact on either the FEC overhead amounts or on the interleaving delay.
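The relationships above can be sketched in a few lines. This is a hypothetical illustration using the usual VDSL2 framing terms (N = RS codeword size in bytes, R = check bytes, D = interleaving depth, I = interleaver block size); the example numbers are illustrative, not taken from the line being discussed.

```python
# Illustrative sketch: how R/N sets the FEC overhead, and why a depth
# of D=1 makes the interleaving delay zero regardless of I.
# Parameter values below are made up for the example.

def fec_overhead(r: int, n: int) -> float:
    """Fraction of each RS codeword spent on check bytes."""
    return r / n

def interleave_delay_bytes(d: int, i: int) -> int:
    """Classic convolutional-interleaver delay in bytes: (D-1)*(I-1).
    With D=1 this is zero, which is why a depth-1 interleaved path
    behaves like fast path."""
    return (d - 1) * (i - 1)

print(fec_overhead(16, 240))          # R=16 check bytes in an N=240 codeword
print(interleave_delay_bytes(1, 120)) # D=1 -> 0 bytes of delay, whatever I is
print(interleave_delay_bytes(8, 120)) # deeper interleaving -> real delay
```

This matches the point made above: halving I changes nothing while D=1, but it would halve the delay contribution if a real depth were ever switched on.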
Note that, as far as I'm aware, the R, D, I and N parameters are all negotiated by the modems independently of DLM. The values negotiated have to achieve the settings that DLM *does* send into the DSLAM (INP, INPrein and delay), but otherwise it is for the modems to negotiate.
No idea what RRC bits are.
RRC = Retransmission Return Channel. It has 12 bits, to which 12 bits of FEC are added, and carries the ACKs and NACKs for the received blocks, telling the transmitter what needs to be re-transmitted.
If re-transmission only happens downstream, then the RRC only needs to exist upstream.
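The ACK/NACK mechanism described above can be sketched as a toy model: the receiver flags each block good or bad, and the transmitter re-queues only the flagged ones. The names and structure here are illustrative, not the G.998.4 wire format.

```python
# Toy sketch of a retransmission return channel: receiver ACKs/NACKs
# each block, transmitter resends only the NACKed indices.
# Function names and data shapes are assumptions for illustration.

def receive_blocks(blocks: list[bytes], corrupted: set[int]) -> list[bool]:
    """Return an ACK (True) / NACK (False) flag for each block index."""
    return [i not in corrupted for i in range(len(blocks))]

def blocks_to_resend(acks: list[bool]) -> list[int]:
    """The transmitter re-queues exactly the NACKed block indices."""
    return [i for i, ok in enumerate(acks) if not ok]

acks = receive_blocks([b"a", b"b", b"c", b"d"], corrupted={1, 3})
print(blocks_to_resend(acks))  # -> [1, 3]
```

It also makes the directionality point concrete: the ACK/NACK flags travel in the opposite direction to the data blocks, so downstream-only retransmission needs only an upstream return channel.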
kitz, did you miss my post? You seem to have forgotten error correction has always been enabled on US FTTC.
... But yes, I did know. I think it's mentioned in one of my posts further up in the same thread... and this is why I wanted to compare the 'R' value, which in itself is proof that Openreach have been applying FEC on the upstream for a while - which we knew anyhow, as they've been doing this for at least 18 months now.
I just wanted to point out that Openreach have had FEC running (without interleaving) on upstream forever. My earliest stats, from November 2011, show this effect on an original 40/10 line; they were taken shortly after BT had started to use the 17a profile, but shortly before they started to allow 80/20 profiles.
However, and this is the proviso I mentioned earlier, my impression has largely been that upstream FEC protection has only appeared when it is effectively free - i.e. where the FEC overhead can be applied freely because attainable is higher than the package speed. I know that isn't 100% true, as some places had upstream FEC when their speeds were lower, but "largely".
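The "effectively free" idea above can be made concrete with a rough model: R check bytes per N-byte codeword cost roughly R/N of the rate, so FEC costs the user nothing if the attainable rate minus that overhead still clears the package speed. The numbers and the simple linear model are assumptions for illustration, not Openreach's actual rule.

```python
# Rough sketch of "free" upstream FEC: the overhead only matters if it
# eats into the headroom between attainable and package speed.
# All values and the linear overhead model are illustrative assumptions.

def fec_is_free(attainable_kbps: int, package_kbps: int, r: int, n: int) -> bool:
    usable_after_fec = attainable_kbps * (n - r) / n
    return usable_after_fec >= package_kbps

print(fec_is_free(30000, 20000, 16, 240))  # plenty of headroom
print(fec_is_free(20500, 20000, 16, 240))  # overhead eats the margin
```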
Obviously FEC is added deliberately alongside interleaving when DLM intervenes properly on the upstream, but that is a different process.
What we do have to be really careful about now is stressing the difference between error correction (FEC) and interleaving... and yes, they do appear to have ramped error correction up. We may also have to be careful about the distinction between fast path and interleaved with a depth of 1, even though the result for the EU is practically the same :/
Yes. VDSL2 has two latency paths: one fast path, and one interleaved path. However, the interleaved path can be configured with a depth of 1, making it act the same as the fast path, even though it isn't actually the fast path.
Unfortunately, some people read D=1 to mean fastpath, when it isn't necessarily strictly true.
I recall reading that, with G.INP active, all user data goes down the interleaved latency path - though it might still be configured with D=1.
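A minimal sketch of the distinction drawn above: a line can be on the interleaved latency path with depth 1, which *behaves* like fast path without *being* the fast path. The field names here are assumptions for illustration, not real modem stat labels.

```python
# Illustrative model of the two VDSL2 latency paths: the question
# "does it behave like fastpath?" (D=1) is different from
# "is it actually the fastpath?".
from dataclasses import dataclass

@dataclass
class LatencyPath:
    kind: str   # "fast" or "interleaved"
    depth: int  # interleaving depth D (1 = no actual interleaving)

    def behaves_like_fastpath(self) -> bool:
        return self.kind == "fast" or self.depth == 1

    def is_fastpath(self) -> bool:
        return self.kind == "fast"

p = LatencyPath(kind="interleaved", depth=1)
print(p.behaves_like_fastpath(), p.is_fastpath())  # True False
```

This is exactly the case where reading "D=1" as "fastpath" goes wrong: the first check passes while the second fails.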
It was the attainable speed; my sync is still 19999 (20Mbps), as the attainable is still above it, now by only 2Mbps.
Looking at MDWS, that drop of US attainable speed is mysterious. It isn't caused by a resync (or any change of parameters, whether G.INP, FEC or otherwise), and neither is the recovery back to the higher speed. Power remains the same, and the only matching graph is a slight rise in upstream signal attenuation for U1.
I have seen my line do an equivalent jump between US attainable values before now. It was equally mysterious.
I note that an attainable of around 30Mbps seems out of line - far too high - compared with the downstream attainable of 83Mbps. I do wonder about the effects of UPBO on the statistics we see upstream, and whether we only ever get to see an artificial picture.
If I look at the graph of your line over 90 days - from before G.INP started - the US power appears to gradually climb over the entire period. This obviously *ought* to make a difference to the US attainable values over that time, but doesn't. That suggests your noise environment is also changing.
I'm also seeing some SES (severely errored seconds), and until this borked fix was rolled out the circuit hadn't generated any since the 18th March 2015.
As you had G.INP active in the upstream for all of the intervening period, alongside FEC+interleaving combined, your line would have experienced a very different error picture. Errors would have been more likely to be fixed (because interleaving makes FEC more effective); errors that caused a block to fail would have been retransmitted, and likely transmitted successfully. Considerably fewer ES's would have happened and, as a consequence, there's a considerably lower likelihood that ES's would ever get severe enough to be classed as SES's.
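The ES/SES distinction used above can be sketched with the usual G.997.1-style thresholds: an errored second (ES) has at least one CRC anomaly, while a severely errored second (SES) has 18 or more. The per-second counts below are made-up illustrative data, not real line stats.

```python
# Sketch of ES vs SES classification over per-second CRC anomaly counts,
# assuming the common G.997.1-style threshold of 18 anomalies for SES.
SES_CRC_THRESHOLD = 18

def classify_seconds(crc_per_second: list[int]) -> tuple[int, int]:
    """Return (ES count, SES count) for a run of per-second CRC totals."""
    es = sum(1 for c in crc_per_second if c >= 1)
    ses = sum(1 for c in crc_per_second if c >= SES_CRC_THRESHOLD)
    return es, ses

# With retransmission fixing most errors, counts stay low: ES are rare
# and almost never reach the SES threshold. Without it, bursts do.
print(classify_seconds([0, 2, 0, 1, 0]))    # (2, 0) - some ES, no SES
print(classify_seconds([0, 25, 3, 40, 0]))  # (3, 2) - bursts become SES
```

This is the mechanism behind the observation: G.INP plus FEC+interleaving keeps the per-second anomaly counts small, so seconds rarely climb past the SES threshold.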