Kitz ADSL Broadband Information
Pages: 1 [2] 3 4 5

Author Topic: Same old upstream story for line 3  (Read 11672 times)

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #15 on: October 18, 2019, 01:22:17 AM »

Agreed. I think so. I talked to them about a year ago, but we didn’t get very far.

I don’t suppose there is any such thing as a fixed upstream sync rate, is there? Set at the BTOR end by AA? I’ve never heard of such a thing.

There are two practical problems arising from this:

1. A possibly low upstream sync rate, because a retrain happened during the low-SNRM period. This is a nuisance, but it is unavoidable, and we could say ‘it is what it is’ unless the noise problem can somehow be fixed.

2. In the case where there is a retrain during the high-SNRM period, we are going to end up with a dangerously low SNRM later on.

Since the first issue is not critical, the real problem is the second one. The question is: how much trouble is it going to cause in that case? Can we get away with running at the low SNRM? I need to look at the stats. What exactly should I use as a decision criterion?

One other thing - I wonder if I could improve matters by changing the configuration of the modem. Is there a way to prevent it reconnecting at a silly sync rate?

Logged

burakkucat

  • Respected
  • Senior Kitizen
  • *
  • Posts: 38300
  • Over the Rainbow Bridge
    • The ELRepo Project
Re: Same old upstream story for line 3
« Reply #16 on: October 18, 2019, 04:37:42 PM »

I have been thinking about the external "noise" problem on line 3.

By virtue of your ISP you have full control of the target SNRM for all lines. For three out of four lines you have found, by experiment, that a target SNRM of 3 dB leaves you with sufficient signal "headroom" (over any noise) and a corresponding increase in synchronisation speed (over that which would be obtained with the default target of 6 dB). In the case of line number three, suppose you were to configure it exactly as lines one, two and four. At some point, there is a possibility that the corresponding modem will re-train to accommodate a negative SNRM. Would that line eventually "find its own level"? Possibly.  :-\

My proposal is that you set all four lines with a target SNRM of 3 dB and for the next seven days monitor but do not react to any changes. Does your overall connection remain usable? Is Mrs Weaver able to do all she requires for her business? Are you still able to download the large files during the A&A off-peak period?
Logged
:cat:  100% Linux and, previously, Unix. Co-founder of the ELRepo Project.

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #17 on: October 18, 2019, 10:25:21 PM »

I would say that 3 dB on the downstream is fine for two reasons - firstly because the variation is small enough, and secondly, possibly, because PhyR has such a powerful beneficial effect that I can get away with a lot, covered up by L2 retransmissions. But I would expect to need a much greater upstream target SNRM to achieve the same error rate, because of the unfortunate lack of L2 retransmission capability upstream. (I think it is absent on the upstream anyway.)

It might just happen, though, that I can get away with a low upstream SNRM anyway because of the character of the noise. One other thing: shorter packets mean a lower probability of error per packet (if measured that way, rather than per byte), and the bulk of what goes upstream is TCP ACKs, which are always short unless they are piggybacked on data.

You’re absolutely correct about the ‘learn to stop worrying’ thing. The only reason I’m interested in the error rate is that it might be a source of poor performance, and the time taken to do a backup (40 mins) or to upload a photo (sometimes 1 min) is annoying.

I could do with some guidance as to how to assess the error stats and label them adequate or not, in terms of corrupt packets per unit time.

One other point: I think it’s important to have the upstream link be reliable, because if an ACK going upstream gets lost or corrupted then that can mean an avoidable TCP payload retransmission, which could well be a full 1532 bytes that I am paying for.

If I do find out that there is any serious corruption problem with line 3 though then I will think about axing it.
Logged

aesmith

  • Kitizen
  • ****
  • Posts: 1216
Re: Same old upstream story for line 3
« Reply #18 on: October 19, 2019, 09:34:23 AM »

Testing on our circuit when it was running error free, compared to when it was experiencing around 75 CRCs per minute, showed no difference in download speeds.  I'm sure I took Wireshark traces at the time, but can't think where I saved them.  Looking at the two, there were zillions of duplicate ACKs etc., showing that lots of packets were getting lost and retransmitted.  My informal conclusion was that an error rate of something like 1 in 250 wasn't significant compared to what goes on in the wider Internet.
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #19 on: October 19, 2019, 12:20:56 PM »

@aesmith So that would be a good order-of-magnitude metric to look for - CRCs per min > 250 during a flat-out download and then an upload? I would need to rescale that according to my link speed versus yours, though, as you will be getting x times more packets per second through compared with me.

If there were a problem with corruption of upstream packets, then doing an upload would of course massively increase the probability of an error per packet, because then the upstream packets would be the long ones - either ~16 times or ~10.6 times longer, depending on IPv4/IPv6 and whether TCP timestamps are in use. So that is that many times greater a probability of uncorrected corruption, per packet.

Anyway, I would want to look at both directions.


(1) The figure of 16 for IPv6 is (1500 + 32) / (2 * 48), assuming an IPv6 TCP ACK is 60 + 32 bytes long, which fits into 2 ATM cells, and that my total protocol-stack overhead is 32 bytes for everything from PPP to AAL5 CPCS. If we add TCP timestamps, at an extra 12 bytes, the total of 60 + 12 + 32 would push us into 3 ATM cells for a TCP ACK, so the full-size packets would then be 10.6 times longer.

(2) I’m telling myself that, to a first-order approximation, if the probabilities are really small then n times longer means n times greater probability; exactly, p_n = 1 - (1 - p)^n. Is that right?
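
In case anyone wants to check the cell arithmetic, here is a rough sketch of it in Python (purely my own illustration; the 32-byte overhead and the 60-byte ACK size are the assumptions above, and the names are just mine):

Code:
import math

CELL_PAYLOAD = 48     # ATM cell payload bytes (AAL5)
OVERHEAD = 32         # my PPP-to-AAL5-CPCS overhead, as above

def cells(ip_bytes):
    # Whole ATM cells needed for one IP packet plus the overhead
    return math.ceil((ip_bytes + OVERHEAD) / CELL_PAYLOAD)

def p_packet(p_cell, ip_bytes):
    # Per-packet corruption probability given a per-cell probability p_cell;
    # exact form 1 - (1 - p)^n, roughly n * p_cell when p_cell is small
    n = cells(ip_bytes)
    return 1.0 - (1.0 - p_cell) ** n

full   = cells(1500)      # 32 cells for a full-size packet
ack    = cells(60)        # 2 cells for a bare IPv6 TCP ACK
ack_ts = cells(60 + 12)   # 3 cells once TCP timestamps are added
print(full, ack, ack_ts, full / ack, full / ack_ts)   # 32 2 3 16.0 ~10.67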
« Last Edit: October 19, 2019, 12:24:13 PM by Weaver »
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #20 on: October 19, 2019, 12:43:03 PM »

I’m suddenly having my doubts. I was talking about L3 PDUs before, and about doing an upload/download to generate some activity, but I’m completely wrong about that, am I not? In DSL with ATM, do idle cells count for CRC-counting purposes? Do I actually need to do a flat-out transfer or not? Even when no L3 PDUs are being sent, that doesn’t mean we cannot rack up a CRC count, does it?

The CRC counter - it is counting CRCs in what, per stated time interval? Is it per frame of L bits passed down to the lower layer, as in the FEC frame structure table - so per call to a PMD.Bits.confirm primitive? If so, I need to look up the L framing parameter and then use it, together with the number of bits in a packet, to convert the CRC stats into the probability of a packet of length n being corrupted.
Logged

aesmith

  • Kitizen
  • ****
  • Posts: 1216
Re: Same old upstream story for line 3
« Reply #21 on: October 19, 2019, 05:34:52 PM »

I must say I haven't looked at where the CRC is applied, whether at PPP or ATM or what.  My reference example was based on a presumption that each CRC error would directly or indirectly kill off one IP packet.  On that basis, with an IP profile of 3,500,000, downloading at full speed with full-size packets would be about 290 packets per second.  An error rate of 72 per minute means the loss of one in 241, or around 0.4%.
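
For what it's worth, here is the rough arithmetic behind those figures as a quick sketch (only an illustration of the assumptions just stated; the exact result differs a little from my rounded numbers):

Code:
ip_profile_bps = 3_500_000      # IP profile, bits per second
packet_bits    = 1500 * 8       # assuming full-size packets
crc_per_min    = 72             # observed uncorrected errors per minute

pps = ip_profile_bps / packet_bits        # ~292 pps (~290 with my rounding)
packets_per_min = pps * 60                # ~17,500 (17,400 as quoted)
loss = crc_per_min / packets_per_min      # ~1 in 243, i.e. ~0.4%
print(round(pps), round(packets_per_min), round(1 / loss), f"{loss:.2%}")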

By the way, thinking about the reverse channel, packets lost (as opposed to delayed) would have much less effect.  Let's call them "ACKs" on the basis that if the higher-level data flow is unidirectional they don't increment SEQ; their only purpose is to carry the ACK number back to the sender.  If one is lost then nothing needs to be retransmitted; at worst the sender might see their window emptying sooner than it ideally should, but the situation would be fully resolved once the next ACK is received.  So, like voice and video, these packets should be treated as "better never than late".
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #22 on: October 19, 2019, 10:49:02 PM »

@aesmith I was thinking exactly like you. But referring to the framing layer in G.992.3 table 7-1, this is at a level too low to have any knowledge of, for example, PPP or IP. And if you can get CRC errors when there are no L3 PDUs being exchanged, then that proves it.

In the context of table 7-1 you could of course get several FEC errors per L3 PDU, as each frame has only L bits in it (see table 7-6). So the count of such errors would need to be converted into a number of bad L3 PDUs by rescaling according to n_bytes_in_L3_PDU and the L framing parameter.
Logged

aesmith

  • Kitizen
  • ****
  • Posts: 1216
Re: Same old upstream story for line 3
« Reply #23 on: October 20, 2019, 08:53:50 AM »

Agreed, you could be losing fewer IP packets than the number of CRCs.  How to work that out is a bit beyond me.  Intuitively, my feeling is that it's unlikely unless there's some special pattern to the CRCs, like them coming together in rapid bursts.  If they're randomly spaced, 72 in every 60 seconds, then what is the chance of any one packet, taking around 3.5 ms, experiencing more than one?
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #24 on: October 20, 2019, 10:53:21 AM »

Indeed, the chance is low. But the probabilities still need to be rescaled - my best guess is from one event per L bits up to events per round_up((1500 + 32) / 48) * (48 + 5) * 8 bits, i.e. per L3 PDU - to give a useful scaled value which tells us how many L3 PDUs get corrupted.

For my line 1, for example,
    L downstream = 805 bits,
    L upstream = 170 bits,
so the probability scale factor would be round_up((1500 + 32) / 48) * (48 + 5) * 8 / L, which is to be multiplied by the probability of corruption of one FEC frame to get the per-L3-PDU corruption probability - assuming the probability is low enough that first order is good enough, which it is, and then some. This is all assuming that I have understood the modem stats correctly and read G.992.3 section 7 correctly, that is.
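
To make that rescaling concrete, a little sketch (purely illustrative, and it leans on my reading of the L parameter and the modem stats, which may well be wrong):

Code:
import math

def l3_pdu_bits(ip_bytes=1500, overhead=32, cell_payload=48, cell_total=53):
    # Bits on the wire for one full-size L3 PDU, counting whole ATM cells
    # including their 5-byte headers
    return math.ceil((ip_bytes + overhead) / cell_payload) * cell_total * 8

def scale_factor(L_bits):
    # Rescale: events per L bits -> approx events per full-size L3 PDU (first order)
    return l3_pdu_bits() / L_bits

print(l3_pdu_bits())        # 13568 bits = 32 cells * 53 bytes * 8
print(scale_factor(805))    # downstream, L = 805 -> ~16.9
print(scale_factor(170))    # upstream,   L = 170 -> ~79.8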
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #25 on: October 20, 2019, 11:41:03 AM »

Someone check my reading - I’m pretty sure the preceding posts are wrong. I misremembered the definition of L: isn’t that parameter the number of bits per symbol? So not what I want at all. I think that in the preceding discussion, where I wrote L, I should have written NFEC = M * K + R bytes, where K = B + 1 (from table 7-7):

Bearer 0   Downstream   Upstream
MSGc:           53          12
B:              33          62
M:               4           1
T:               3           1
R:              10          14
S:            1.4509      3.6235
L:             805         170
D:               2           8


So we have
    downstream: NFEC = 4 * (33 + 1) + 10 = 146 bytes, and
    upstream:   NFEC = 1 * (62 + 1) + 14 = 77 bytes.

So is it correct then to say that the CRC stat counter is the number of events per unit of 146 bytes d/s or 77 bytes u/s?



Edit: or would it be more appropriate to leave out R and just consider M * K alone, as then we are comparing like with like? That is, bytes from the upper-layer data stream only. We would then have 136 bytes downstream and 63 bytes upstream.
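
A trivial sketch of those two ways of counting, using the bearer 0 parameters from the table above (just to show the working; nothing here beyond the arithmetic already stated):

Code:
def n_fec(M, B, R):
    # NFEC = M * K + R bytes, with K = B + 1 (G.992.3 table 7-7)
    return M * (B + 1) + R

def mk_only(M, B):
    # Upper-layer bytes per FEC codeword only, leaving out the R check bytes
    return M * (B + 1)

print(n_fec(M=4, B=33, R=10), mk_only(M=4, B=33))   # downstream: 146, 136 bytes
print(n_fec(M=1, B=62, R=14), mk_only(M=1, B=62))   # upstream:    77,  63 bytes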
« Last Edit: October 20, 2019, 11:59:57 AM by Weaver »
Logged

aesmith

  • Kitizen
  • ****
  • Posts: 1216
Re: Same old upstream story for line 3
« Reply #26 on: October 20, 2019, 12:23:55 PM »

I'm not convinced you need that level of detail to analyse the impact of a given error rate.  My understanding is that CRC errors are uncorrected errors, meaning that data is lost.  I can't see that it makes any difference at what layer these are detected and recorded, until you reach a higher-level correction.  Using my figures, I see this simply as 60 seconds during which 17,400 packets were transmitted and there were 72 uncorrected errors.  Would it be over-simplistic to say that the chance of one packet receiving two errors is one in 241 x 241, i.e. one in 58,000?
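
To put numbers on it, a quick sketch under the assumption that the errors land independently and at random (my assumption, not anything taken from the stats):

Code:
packets_per_min = 17_400     # full-speed download, full-size packets
errors_per_min  = 72

p = errors_per_min / packets_per_min     # chance any given packet is hit, ~1 in 242
p_two = p * p                            # same packet hit twice, assuming independence
print(f"1 in {1 / p:.0f}", f"1 in {1 / p_two:,.0f}")   # ~1 in 242, ~1 in 58,000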
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #27 on: October 20, 2019, 01:06:07 PM »

You’re absolutely 100% right. I’m working from one end, low level up to high, and you’re working down from the other direction. My L values give the link speed, which is what you are using. I only looked into this because I wanted to know the answer to the question ‘errors in what, per what unit?’, that’s all. The probabilities are so low that first order is good enough, and there’s effectively no chance of under-counting because two errors occurred in the same FEC frame or even in the same L3 PDU. And if there is an error burst, then so what.

I only looked into this because I realised that I had forgotten what the units of the figures being quoted in the stats are.
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #28 on: October 28, 2019, 09:26:55 AM »

Now, for some reason, the upstream rate is far better: 1.44 Mbps from speedtest2.aa.net.uk (the journey is from the miserable ~1.2 Mbps up to what I used to attain, ~1.5 Mbps; best 1.56 Mbps). I don’t know why it is, say, 0.15-0.20 Mbps better. Also, all of the downstream results are quite a lot better than they were; they were around 2.7-2.8 Mbps before.

Live sync rates:
  #1: Up   Sync=560kb/s  LoopLoss=40.4dB SNR=5.9dB ErrSec=0 HECErr=0   Cells=0
      Down Sync=3034kb/s LoopLoss=65dB   SNR=3.0dB ErrSec=0 HECErr=N/A Cells=0
  #2: Up   Sync=532kb/s  LoopLoss=40.9dB SNR=6.4dB ErrSec=0 HECErr=0   Cells=0
      Down Sync=2836kb/s LoopLoss=64.5dB SNR=3.8dB ErrSec=0 HECErr=N/A Cells=0
  #3: Up   Sync=452kb/s  LoopLoss=40.4dB SNR=1.7dB ErrSec=1 HECErr=0   Cells=0
      Down Sync=3005kb/s LoopLoss=64dB   SNR=3.5dB ErrSec=0 HECErr=N/A Cells=0
  #4: Up   Sync=502kb/s  LoopLoss=40.9dB SNR=6.3dB ErrSec=0 HECErr=0   Cells=0
      Down Sync=3004kb/s LoopLoss=64.5dB SNR=3.4dB ErrSec=0 HECErr=N/A Cells=0


I always take the highest results and discard the rest, because we are trying to measure the capabilities of the link itself, not a particular software implementation or protocol or a particular capability in a certain scenario.
« Last Edit: October 28, 2019, 09:30:15 AM by Weaver »
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Same old upstream story for line 3
« Reply #29 on: February 01, 2020, 02:44:11 PM »

Latest 24-hour pic for line #3 SNRM; the upper (green) trace is upstream, the red is downstream:

[attached image: 24-hour SNRM graph for line 3]

So the question: what happened at approximately 23:45, and then unhappened at 10:35?
« Last Edit: February 01, 2020, 02:53:01 PM by Weaver »
Logged