A DTU doesn't have a fixed size. It's a multiple of either 53-byte ATM cells or 65-byte PTM codewords, wrapped in some overhead.
That makes sense, because a DTU sized that way means the code that stitches newly received retransmitted DTUs back into the saved good parts of the earlier, larger PDU never has to deal with, say, a fraction of an ATM cell's worth of new DTU payload or half a 65-byte PTM codeword. Perhaps that isn't really an important simplification, but an appropriate choice of DTU size can't hurt.
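To make the sizing concrete, here is a minimal sketch of the arithmetic as I understand it. The function name is mine, and the overhead figure is a placeholder parameter, not something taken from the spec:

```python
def dtu_size(n_units: int, ptm: bool, overhead_bytes: int = 0) -> int:
    """Hypothetical illustration: a DTU payload is a whole multiple of
    53-byte ATM cells or 65-byte PTM codewords, plus some framing
    overhead (the overhead size here is a placeholder, not from the spec)."""
    unit = 65 if ptm else 53
    return n_units * unit + overhead_bytes

# e.g. four ATM cells, ignoring overhead:
# dtu_size(4, ptm=False)  -> 212
```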
Can anyone tell me the likely value, or range of values, of the multiple? And what determines the choice of value?
I do wonder whether I ought to work out the DTU size in the current situation, because then I could report data loss exactly: "there are n uncorrected errors per b bytes of received data, and each error means the loss/corruption of d bytes", with a summary percentage shown to the user and full details available in 'verbose' mode. If there is no ARQ, then different numbers would have to be reported: the CRC count and the total size of the lost DSL PDUs versus the total received download size.
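As a sketch of the kind of verbose report I have in mind. The function and parameter names are mine, and the assumption that each uncorrected error costs exactly one DTU's worth of bytes is the simplification discussed above:

```python
def loss_report(n_errors: int, received_bytes: int, dtu_bytes: int) -> str:
    """Hypothetical sketch: assume each uncorrected error kills exactly one
    DTU of dtu_bytes, so total loss and a summary percentage follow directly."""
    lost = n_errors * dtu_bytes
    pct = 100.0 * lost / received_bytes if received_bytes else 0.0
    return (f"{n_errors} uncorrected errors in {received_bytes} bytes received; "
            f"{lost} bytes lost/corrupted ({pct:.4f}%)")
```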
Going back to the PhyR situation: one bad DTU means one entire (possibly larger) DSL layer-x PDU lost, since half a PDU is no use. I suspect that a bad DTU might ruin more than one layer-x PDU where multiple layer-x SDUs are packed into one DTU; is that right? I need to re-read the ADSL2 spec on this point, and I also need to remind myself of the correct names of the various DSL sublayers, so that the above can be made less vague and confusing. Mea culpa maxima.
I’m sure I have been told this, but once again I forget: will there be more errors if a download is in progress than if the link is idle? I’m assuming the answer could be very different for the cases of VDSL2, ADSL2+PTM and ADSL2+ATM?
I’m thinking that in ATM the link is always busy even when there’s no user data in transit, since the line is filled with idle cells? I have no idea about PTM, though.
Looking back, I see that in various places I expressed opinions that were wrong, and I think that was because I hadn’t spotted, or hadn’t understood, some of Kitz’s points.
Just to check my understanding once again: I’m assuming that ES counting is driven exactly by CRC-count increment events, and that the ES and CRC totals differ because you can occasionally get several CRC events within one 1 s time quantum, and that whole group of CRCs still counts as only one ES.
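If that reading is right, ES can be derived from timestamped CRC events simply by counting the distinct one-second buckets that contain at least one event. A minimal sketch, with names of my own invention:

```python
def errored_seconds(crc_event_times):
    """Count Errored Seconds: each 1 s interval containing one or more
    CRC events contributes exactly one ES, however many events it holds."""
    return len({int(t) for t in crc_event_times})

# Four CRC events, but only three distinct seconds (0, 1 and 3):
# errored_seconds([0.1, 0.2, 1.5, 3.9])  -> 3
```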
I presume I made the right choice in picking ES as the health metric for my modem-stats DSL link wellness-assessment program, though CRCs would have been a reasonable choice too. Do you agree?
And even errors corrected by PhyR or G.INP retransmission are not reflected in the CRC or ES values? That is, if there is an RS-uncorrected error, it is not added to the ES or CRC totals provided PhyR or G.INP successfully recovers from the situation within the specified maximum time limit?
In my iPad wellness program, I thought about looking at the PhyR-related stats that my modems show. This would be a pain, because some modems don’t have PhyR, and in some situations the DSLAM won’t support PhyR or G.INP, so I would have to handle all those alternatives, although that wouldn’t be a big pain in the code. The syntax of these stats might vary between modems, and will probably differ for G.INP stats versus PhyR stats, and that parsing would be a pain to handle. I certainly wasn’t keen to do a lot of work unless there was some real reason why looking at these stats was essential; if the ES or CRC count does effectively summarise the health of the link, including absolutely all L2 error-correction techniques in play, then there is no reason to do unnecessary extra work. It was only when I had the idea that the CRC count might show exactly how many RS-uncorrected PDUs there are, regardless of those later corrected by L2 retransmission (ARQ, i.e. PhyR or G.INP), that I seriously considered parsing the PhyR/G.INP stats.
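If I did go down that route, a try-each-known-format approach would at least keep the per-modem pain contained. To be clear, every counter name and line layout below is invented for illustration; real modems emit vendor-specific text that would need its own patterns:

```python
import re

# Hypothetical formats: these field names and layouts are made up for the
# sketch, not taken from any real modem's output.
PATTERNS = {
    "phyr": re.compile(r"rtx_c:\s*(\d+)\s+rtx_uc:\s*(\d+)"),
    "ginp": re.compile(r"CorrectedDTUs=(\d+)\s+UncorrectedDTUs=(\d+)"),
}

def parse_retx_stats(text):
    """Try each known retransmission-stats format in turn; return a
    (corrected, uncorrected) pair, or None when the modem/DSLAM pair
    exposes no retransmission counters at all."""
    for name, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            corrected, uncorrected = map(int, match.groups())
            return corrected, uncorrected
    return None
```

Unrecognised output simply yields `None`, which the wellness program could treat as "no PhyR/G.INP on this link" and fall back to ES alone.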