Recently I have lost a lot of upstream speed - down from 1.55 Mbps to 1.36 Mbps combined total TCP payload (I think) - at any rate, that is what speedtest2.aa.net.uk and the testmy.net upload test report.
This has had a very noticeable bad effect on the quality of FaceTime video calls. Idiot FaceTime routes traffic out to the internet and back (I am pretty certain) even when all it needs to do is cross the 300 Mbps WLAN. In fact, the way I have the WLAN set up, I default to being on a different SSID from my wife, so if I FaceTime call her in the house then, if FaceTime were only sane, there would be a full 300 Mbps allocation for each of us and we would not be experiencing contention between the two of us. (I presume that each of us would still have to divide that bandwidth between our own tx and rx, as I am telling myself that this is not a full-duplex WLAN medium?) But incredibly I am having to make do with half of ~1.36 Mbps as the FaceTime bottleneck. And there is the cost of all those bytes received and transmitted, incurred for no reason.
But in addition to the upstream loss there has been a loss of very roughly 1 Mbps of IPv4 TCP payload downstream, down from an earlier 10.6 Mbps to around 9.69 Mbps. Measurements seem to be all over the place on this. The lines are now syncing at around 2.2-3.0 dB SNRM downstream and 6 dB SNRM upstream. Sync rates are as follows:
Live sync rates:
#1: down 2737 kbps, up 522 kbps
#2: down 2716 kbps, up 505 kbps
#3: down 2788 kbps, up 412 kbps
#4: down 2883 kbps, up 499 kbps
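As a sanity check, totalling those sync rates and applying the 0.884434 'protocol efficiency factor' described further down gives the theoretical best-case combined IP payload. A quick Python sketch of my own working (the kbps figures are simply copied from the list above):

```python
# Aggregate the four live sync rates and apply the best-case protocol
# efficiency factor (0.884434, for 1500-byte packets over ATM) to estimate
# the combined IP PDU throughput ceiling.
down = [2737, 2716, 2788, 2883]  # kbps, lines 1-4
up = [522, 505, 412, 499]        # kbps, lines 1-4

EFFICIENCY = 0.884434  # best-case ATM/PPP overhead factor, derived below

total_down = sum(down)
total_up = sum(up)
print(f"combined downstream sync: {total_down} kbps "
      f"(~{total_down * EFFICIENCY / 1000:.2f} Mbps best-case IP payload)")
print(f"combined upstream sync:   {total_up} kbps "
      f"(~{total_up * EFFICIENCY / 1000:.2f} Mbps best-case IP payload)")
```

That best-case downstream figure sits a little above the ~9.69 Mbps actually being measured, which seems about right once real-world losses are taken into account.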
For upstream, the egress rate limiters are set to IP PDU rates of
upstream_sync * 0.884434 * 0.965
where:
* 0.884434 = 'protocol efficiency factor': my estimate of the protocol bloat - the expansion in the number of bytes sent due to the wasteful extra junk added by ATM and the other protocol layers below IP (exclusive), i.e. PPP (inclusive) and below, based on a 1500-byte packet being sent. This is the most optimistic, least bad case. The waste will be far, far worse in some scenarios, e.g. for short packets, or for an ATM AAL5 payload whose length does not fit neatly into a multiple of 48 bytes - a whole number of ATM cells - so leaving empty space in a cell. This factor therefore gives the fastest theoretically possible rate at which you can drive the modems, expressed as an IP PDU rate. It may well be impractical and could overload the modem, both because of considerations missing from my calculation and because in a realistic scenario not all packets will be of the length chosen here (1500 bytes, the easiest packet to deal with, as efficiency is then optimal). If some packets have other lengths, the modem will be overloaded and won't be able to keep up.
and
* 0.965 = 'modem loading factor' (MLF): my choice for the amount of stress put on the modem in the upstream direction, a multiplier applied to the sync rate times the protocol efficiency factor to give the IP PDU egress rate. This has been changed. It used to be 0.70 or 0.75 for the slowest line (line 3), and ~0.95 for all other lines. For reasons unknown, the 'low factor for the slowest line' strategy no longer seems to give any advantage and may in fact give slightly worse results, although despite a large number of tests it is difficult to get accurate results, as there is always a lot of statistical 'noise' in upstream speed measurements, even with quite large uploads. So that strategy has now been abandoned, and currently all lines use the same MLF, since experiments show that equal MLFs are now optimal.
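For what it's worth, the arithmetic can be sketched like this (Python, my own working; the PPPoA-over-AAL5 framing assumption is mine, but it does reproduce the 0.884434 figure exactly):

```python
import math

# Where the protocol efficiency factor comes from, assuming PPPoA over
# AAL5/ATM: a 1500-byte IP packet gains a 2-byte PPP header and an 8-byte
# AAL5 trailer, then is padded up to a whole number of 48-byte ATM cell
# payloads; each cell costs 53 bytes on the wire (48 payload + 5 header),
# and the wire bytes are what the sync rate counts.
IP_PDU = 1500
cells = math.ceil((IP_PDU + 2 + 8) / 48)  # 1510 bytes -> 32 cells
PEF = IP_PDU / (cells * 53)               # 1500 / 1696 = 0.884434
MLF = 0.965                               # modem loading factor

# Apply the formula above to the live upstream sync rates:
# egress limit = upstream_sync * PEF * MLF, as an IP PDU rate.
upstream_sync = {1: 522, 2: 505, 3: 412, 4: 499}  # kbps, from the list above
for line, sync in upstream_sync.items():
    print(f"line #{line}: egress limiter = {sync * PEF * MLF:.1f} kbps IP PDU")
```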
As far as I can work out, the reason things are now the way they are is the effect of the unequal speeds of the lines on the behaviour of the TCP implementations at one end or the other - in particular the misbehaviour of line 3 upstream, which was syncing at only 330 kbps last week.
In earlier posts, we were talking about seasonal temperature change. I think this might have to be the answer: conductivity vs temperature, or else the state of some joints vs temperature. But regarding the latter, it can't be dodgy occasional joints, as the results are the same across all lines. The phenomenon also appears to be frequency-independent, because the fractional change is the same (bearing in mind the error bars) for upstream and downstream. I haven't gone through the bits-per-bin allocations and compared them with an old snapshot though, which I should do to check any ideas about frequency independence.
* Two attached files - zipped up full stats on all modems:
I have attached complete stats for all four modems - see the .zip files below, which simply contain zipped-up plain text taken straight from the modems' Broadcom CLI raw output.
- The first file is the raw command-line output as captured.
- The 'Burakkucat' file is the same stuff, still plain text, but formatted in a way that I hope will prove helpful to the Kuro Neko's tools and save a bit of finger wear - assuming I have got the formatting correct, that is - should my friend want to peruse these stats.
[Moderator edited to remove the size zero files.]