The lines dropped in the early hours of this morning. I don't know whether this could be due to BTOR working overnight; is that even possible?
Line 4's sync rate, which was bad last week, went back to normal when the line was repaired, although last week its downstream had been down to 80% of its earlier value. Now, following the blips, it is a bit below its normal downstream, as are all three of the repaired lines. Lines 2, 3 and 4 should each be at least around 2.8 to 2.9 Mbps downstream:
Live sync rates:
#1: down 2854 kbps, up 666 kbps
#2: down 2496 kbps, up 547 kbps
#3: down 2652 kbps, up 579 kbps
#4: down 2563 kbps, up 505 kbps
The upstream rates are miraculous though: topping out at 666 kbps on line 1, a mile higher than anything ever seen before (the previous best was 560 kbps). And line 3's upstream is now cured, massively increased from roughly 350 kbps to 579 kbps, which brings it level with the others. That should presumably bring an enormous improvement too, one would hope: more nearly equal speeds, without one line being so much slower than the rest that its odd round-trip times cause problems for TCP.

The sum of the upstream IP PDU rates, taking a 96.5% modem loading factor and 0.884434 protocol efficiency, is 1.96 Mbps, which is significantly higher than the highest ever seen before, around 1.6 to 1.7 Mbps (depending on how sickly line 3 was). Correcting that figure for TCP and IP header overheads should give an idea of the maximum possible combined TCP throughput in an ideal world, something like 95 to 97% of it depending on IP version and TCP options (see the table further down).
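To make that arithmetic reproducible, here is a minimal sketch; the 96.5% loading factor and 0.884434 protocol efficiency are the figures quoted above, and the per-line upstream sync rates are the live ones listed earlier:

# Sketch: combined upstream IP PDU rate from the live sync rates above.
UPSTREAM_SYNC_BPS = {1: 666_000, 2: 547_000, 3: 579_000, 4: 505_000}
LOADING_FACTOR = 0.965          # modem loading factor (from the text)
PROTOCOL_EFFICIENCY = 0.884434  # IP PDU fraction of sync rate (from the text)

ip_pdu_bps = {
    line: sync * LOADING_FACTOR * PROTOCOL_EFFICIENCY
    for line, sync in UPSTREAM_SYNC_BPS.items()
}
total_bps = sum(ip_pdu_bps.values())
print(f"combined upstream IP PDU rate: {total_bps / 1e6:.6f} Mbps")
# prints ~1.960 Mbps, matching the Firebrick total below to within rounding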
However speedtest2.aa.net.uk does not give good results, at only 1.28 Mbps upstream.
The Firebrick's current upstream rate limiters' IP PDU tx rates (egress speeds), in force right now:
#1: 568416 bps
#2: 466852 bps
#3: 494164 bps
#4: 431006 bps
Total combined rate: 1.960438 Mbps
Fractional speed contributions:
#1: 28.994% [█████████████ ‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒]
#2: 23.814% [███████████ ‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒]
#3: 25.207% [███████████ ‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒]
#4: 21.985% [██████████ ‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒]
Obtained from live-querying the Firebrick.
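For completeness, a small sketch that turns those per-line limiter rates into the share-of-total bars shown above; the 45-character bar width is my guess at the display width, not something the Firebrick reports:

# Sketch: percentage contributions and bars from the per-line egress rates.
rates_bps = {1: 568_416, 2: 466_852, 3: 494_164, 4: 431_006}
total = sum(rates_bps.values())
WIDTH = 45  # assumed bar width

for line, bps in rates_bps.items():
    share = bps / total
    filled = round(share * WIDTH)
    bar = "█" * filled + "‒" * (WIDTH - filled)
    print(f"#{line}: {share:7.3%} [{bar}]")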
So the speedtest result is pretty awful: only 68.6% of the expected throughput figure for IPv6+TCP with timestamps, or 67% for IPv4+TCP without timestamps. I just don't know why; at one time it was fine, then it wasn't, and now we don't have the speed inequalities to blame. Earlier I was absolutely certain that data corruption was to blame, and I suppose that is still possible. Maybe the sync rates are so high because some kit was turned off, giving an atypically quiet noise environment which, speculating further, was not sustainable: once the upstream sync rates had been established the noise sources came back, leaving the links struggling to cope. So I will need to look at the stats.
| Case        | Efficiency (of IP PDU) | Rate (of sync rate) | TCP bps @ 96.5% loading | TCP bps @ 100% | 1.28 Mbps speedtest ÷ TCP @ loading |
| TCP TS+IPv6 | 95.200000%             | 84.198113%          | 1866337 bps             | 1934031 bps    | 0.68584 |
| TCP+IPv6    | 96.000000%             | 84.905660%          | 1882020 bps             | 1950283 bps    | 0.68012 |
| TCP TS+IPv4 | 96.533333%             | 85.377358%          | 1892476 bps             | 1961118 bps    | 0.67636 |
| TCP+IPv4    | 97.333333%             | 86.084906%          | 1908160 bps             | 1977370 bps    | 0.67080 |
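The efficiency column is consistent with a 1500-byte MTU and full-size segments (40-byte IPv6 or 20-byte IPv4 header, 20-byte TCP header, plus 12 bytes when TCP timestamps are on); multiplying by the 0.884434 protocol efficiency gives the fraction of sync rate, and from there the expected TCP throughputs. A sketch of that arithmetic, with the 1500-byte MTU being my assumption rather than something measured:

# Sketch: reproduce the TCP-efficiency table from header sizes.
MTU = 1500                      # assumed path MTU
PROTOCOL_EFFICIENCY = 0.884434  # IP PDU fraction of sync rate (from the text)
LOADING = 0.965                 # modem loading factor
SYNC_TOTAL_BPS = 666_000 + 547_000 + 579_000 + 505_000  # upstream sync rates
SPEEDTEST_BPS = 1_280_000       # measured upstream speedtest result

cases = {
    "TCP TS+IPv6": 40 + 20 + 12,
    "TCP+IPv6":    40 + 20,
    "TCP TS+IPv4": 20 + 20 + 12,
    "TCP+IPv4":    20 + 20,
}

for name, overhead in cases.items():
    eff = (MTU - overhead) / MTU          # TCP payload fraction of each IP PDU
    of_sync = eff * PROTOCOL_EFFICIENCY   # payload fraction of the sync rate
    at_100 = SYNC_TOTAL_BPS * of_sync     # ideal TCP bps at 100% loading
    at_load = at_100 * LOADING            # ideal TCP bps at 96.5% loading
    print(f"{name:12s} {eff:10.6%} {of_sync:10.6%} "
          f"{at_load:9.0f} bps {at_100:9.0f} bps  speedtest={SPEEDTEST_BPS / at_load:.5f}")

Running this reproduces the table to within a few bits per second of rounding, and the final column shows where the 68.6% and 67% figures come from.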