What file format will Wireshark accept if I feed a captured traffic file into it? I have a .pcap format file attached to the earlier post.
Sounds as if Wireshark would be able to help with interpretation.
What is so weird is that the performance with these same speed testers hasn't always been bad. I got 1.56 Mbps, not 1.25-1.3, with https://speedtester2.aa.net.uk on 2019-07-31. I just don't understand how it can vary so much when the sync rates haven't varied. Well, in that case they have varied, to tell the truth, but in the opposite direction: the lowest sync rates are now much higher than they were back then.
Back then, at a 96.5% modem loading factor, the expected total IP PDU throughput was 1640384 bps (or 1699880 bps at 100% mlf), which means the measured performance was 95.10% of the theoretical maximum. That is incredibly good, because we still have to take some off for the overhead of TCP and IP headers, so it is roughly perfection. (The rate used for egress limiting = modem loading factor × protocol efficiency factor (i.e. allowance for bloat, additional bytes) × sync rate.)
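To make that arithmetic concrete, here is a minimal Python sketch of the two calculations above. The function and variable names are my own, for illustration only; the formula and the figures come from the post itself.

```python
# Sketch of the egress-limit formula and the "percent of theoretical
# maximum" check described above. Names are illustrative, not Firebrick's.

def egress_limit_bps(sync_rate_bps: float,
                     modem_loading_factor: float,
                     protocol_efficiency: float) -> float:
    """Rate used for egress limiting = mlf * protocol efficiency * sync rate."""
    return modem_loading_factor * protocol_efficiency * sync_rate_bps

# Figures quoted above for the 2019-07-31 test:
expected_at_100pct_mlf = 1_699_880          # bps, theoretical IP PDU throughput
mlf = 0.965
expected_at_mlf = expected_at_100pct_mlf * mlf
print(f"expected at {mlf:.1%} mlf: {expected_at_mlf:,.0f} bps")   # ~1,640,384

measured = 1.56e6                           # bps, speedtester2 result
print(f"measured / expected = {measured / expected_at_mlf:.2%}")  # ~95.10%
```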
= Current situation =
The Firebrick's internet access links have the following _upstream speeds_ (expressed as IP PDU rates) right now:
#1: 542777 bps
#2: 459596 bps
#3: 416745 bps
#4: 421786 bps
Total combined rate: 1.840904 Mbps
- These are the data rates used by the Firebrick on each link, expressed as IP PDU rates. They are always well below the upstream sync rate of each link because the overheads due to protocol layers below L3 have been accounted for. The reduced upstream rate given here is the theoretical maximum calculated for each link in the most optimistic possible scenario, based on sending a large packet of a chosen size. A further reduction, the so-called 'modem loading factor', has also been applied. This reduction is based on experience and has been tuned to get the best upstream performance whilst avoiding overloading the modems.
A modem loading factor of 95% was used on each link.
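As an illustration of how a per-link IP PDU rate might be derived from a sync rate, here is a sketch assuming PPPoA/VC-mux over ATM, a common ADSL stack. The overhead figures, the 1500-byte packet size, and the example sync rate are my assumptions, not necessarily the Firebrick's exact model.

```python
import math

# One plausible overhead model (my assumption, not necessarily Firebrick's):
# PPPoA/VC-mux over ATM. Each ATM cell carries 48 of 53 bytes; AAL5 adds
# an 8-byte trailer and pads to a cell boundary; PPP adds a 2-byte header.

def ip_pdu_rate_bps(sync_rate_bps: float,
                    packet_size: int = 1500,   # a "large packet", per the text
                    mlf: float = 0.95) -> float:
    payload = packet_size + 2 + 8              # PPP header + AAL5 trailer
    cells = math.ceil(payload / 48)            # 48 payload bytes per ATM cell
    wire_bytes = cells * 53                    # 53 bytes on the wire per cell
    efficiency = packet_size / wire_bytes      # protocol efficiency factor
    return mlf * efficiency * sync_rate_bps

# Example with a hypothetical 640 kbps upstream sync rate:
print(f"{ip_pdu_rate_bps(640_000):,.0f} bps")  # ~537,736 bps
```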
Fractional speed contributions:
#1: 29.484% [█████████████ ‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒]
#2: 24.966% [███████████ ‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒]
#3: 22.638% [██████████ ‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒]
#4: 22.912% [██████████ ‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒‒]
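The percentages and bars above follow directly from the four per-link rates. A minimal sketch of the derivation (my own rendering, with an assumed 44-character bar width):

```python
rates = {1: 542_777, 2: 459_596, 3: 416_745, 4: 421_786}  # bps, from above
total = sum(rates.values())
print(f"Total combined rate: {total / 1e6:.6f} Mbps")      # 1.840904 Mbps

WIDTH = 44  # assumed bar width; the real rendering may differ
for link, bps in rates.items():
    frac = bps / total
    filled = round(frac * WIDTH)
    bar = "█" * filled + "‒" * (WIDTH - filled)
    print(f"#{link}: {frac:.3%} [{bar}]")
```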
I tried an experiment: I set all the rates to the slowest rate above, and only got a 1.28 Mbps figure from https://speedtester2.aa.net.uk. So having uneven egress rates doesn't seem to be the cause.
However, that doesn't mean that the arrival times of the packets will be 'right' enough to make a stupid TCP receiver happy. I'm assuming that unequal actual line speeds, as opposed to egress rates, will still look like bad 'jitter' at the receiving end? (Because the ends of the incoming packets will arrive earlier if the link speed is faster, and I am only controlling the times at which the packets are sent, i.e. controlling the arrival time of the start of each packet, not the end.) I need either a more intelligent receiving TCP in every machine I ever talk to, or more intelligent scheduling of the load splitting, so that the arrival time of the end of each packet can be set so as to keep stupid receivers happy?
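To illustrate the end-of-packet skew being described, here is a small sketch computing the serialization delay of a full-size packet on links of different speeds. The two sync rates are hypothetical values of my choosing, since the real ones are not given in this post:

```python
PACKET_BITS = 1500 * 8  # a full-size 1500-byte packet

# Hypothetical upstream line speeds (bps) for two bonded links:
links = {"fast": 800_000, "slow": 500_000}

for name, speed in links.items():
    # Serialization delay: the gap between the start and the end of the
    # packet arriving, which the sender's egress scheduling cannot control.
    delay_ms = PACKET_BITS / speed * 1000
    print(f"{name} link: last bit arrives {delay_ms:.1f} ms after the first")
```

With these made-up speeds the delays come out at 15.0 ms and 24.0 ms, so even with perfectly scheduled packet starts the packet ends arrive up to 9 ms apart, which is exactly the sort of per-packet skew that would look like jitter to the receiver.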