It might be, and they might already be in the database - if I knew what it or they were.
The FEC and interleaving parameters are the R, D, I and N parameters from "--show" and "--stats":
- R and N show the amount of FEC overhead (R bytes in each block of N bytes are the extra parity data for FEC protection; this allows up to R/2 bytes per block to be corrected). 5% is low (old-style) or standard (with G.INP), 18-20% is standard old-style, and 30% is very high.
- D and I show the amount of interleaving (depth, and "width" aka interleaving block size). Multiplying D by I gives the relative scale of the delay/latency introduced.
- N should be a multiple of I.
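To make the arithmetic concrete, here's a minimal sketch of how those four parameters relate (the function name and the example values are mine, not from any real modem; the thresholds are the ones above):

```python
def fec_summary(R, N, D, I):
    """Summarise FEC/interleaving settings.

    R: parity (check) bytes per codeword
    N: total codeword size in bytes
    D: interleaver depth
    I: interleaver block size ("width")
    """
    overhead = R / N * 100      # % of bandwidth spent on parity
    correctable = R // 2        # bytes correctable per codeword
    delay_scale = D * I         # relative scale of interleaving latency
    if N % I != 0:
        print("warning: N is not a multiple of I - values look inconsistent")
    return overhead, correctable, delay_scale

# Example: 16 parity bytes in a 240-byte codeword, depth 64, width 120
overhead, correctable, delay_scale = fec_summary(R=16, N=240, D=64, I=120)
print(f"FEC overhead: {overhead:.1f}%")              # ~6.7%, i.e. low/standard
print(f"Correctable bytes per codeword: {correctable}")  # 8
print(f"Relative delay scale: {delay_scale}")        # 7680
```

The same function flags the N-vs-I sanity check from the last bullet, which is a quick way to spot mis-read stats.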
I think we all need to talk more
I'm not sure what to add here, except how I go about things...
When I analyse someone's line, I'll tend to look at data in this order:
- Attenuation, incl. pbParams - How "long" is the line?
- Hlog, QLN, SNR/tone and bits/tone ... Is the line acting right for that length? What is the noise environment? Crosstalk? UPBO?
- INP, INPRein, delay - What has DLM asked for? Is it light or heavy? Getting worse or better?
- R, D, I, N - What impact have the DLM settings had on the line settings? Bandwidth overhead and latency overhead
- Sync actual and attainable - Does this match with FEC overheads?
- BQM latency - Does this match with interleaving latency overhead?
- FEC, CRC, ES - What impact should we expect on DLM, old-style? Patterns across the day; the 24-hour ES total
- RS/RSCorr/RSUncorr - If lots of FEC, I look at the proportions of these
- OHF/OHFerr - If lots of CRC, I look at the proportion of these
- rtx_tx, rtx_c, rtx_uc - How much retransmission is going on? How much re-retransmission? How much failure?
- LEFTRS, minEFTR - Is retransmission affecting the overall throughput?
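For the proportion checks in the last few bullets, something like this is what I have in mind (the parameter names are mine, mirroring the counters above; real tools report cumulative counters, so in practice you'd difference two snapshots before computing ratios):

```python
def error_proportions(rs, rs_corr, rs_uncorr, ohf, ohf_err):
    """Proportions worth looking at when FEC or CRC counts are high.

    rs, rs_corr, rs_uncorr: total, corrected and uncorrectable RS codewords
    ohf, ohf_err: overhead frames and errored overhead frames
    """
    corr_rate = rs_corr / rs if rs else 0.0      # share of codewords needing correction
    uncorr_rate = rs_uncorr / rs if rs else 0.0  # share beyond FEC's ability (become CRCs)
    crc_rate = ohf_err / ohf if ohf else 0.0     # CRC-errored overhead frame rate
    return corr_rate, uncorr_rate, crc_rate

# Hypothetical day's counters: lots of corrections, very few uncorrectables
corr, uncorr, crc = error_proportions(
    rs=1_000_000, rs_corr=5_000, rs_uncorr=10,
    ohf=2_000_000, ohf_err=40,
)
print(f"corrected: {corr:.4%}, uncorrectable: {uncorr:.4%}, CRC: {crc:.4%}")
```

A high corrected rate with a near-zero uncorrectable rate suggests the FEC overhead is doing its job; a rising uncorrectable or CRC rate is the sort of pattern that tends to draw DLM's attention.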
When I look at the data, or at charts of the data over time, I guess I'm looking for patterns, or trying to assess some items for quality. I wonder if we can factor any of that into our own KBD, or centre of excellence?