@bogof I understand your point. It all depends on exactly what it is that you’re trying to measure, and most people say one thing but do another. There are two distinct goals: (1) ‘measure my line’, and (2) ‘measure the performance of two TCP software implementations and the path between them’, and (2) isn’t the same as ‘my line’ at all.
I hear what Chrys says, and he has made his needs clear, so I now understand properly. He specifically wants to investigate (2), not (1), which is what I had initially assumed because I wasn’t listening to him properly.
With genuine respect, bogof, I do want to know (1), hence the UDP test, which has no interactions with TCP implementations’ characteristics. And of course there are transport protocols other than TCP: I might be using QUIC or some application-layer protocol over UDP for a particular case. My lines remain what they are, though, regardless of the choice of software and protocols.
I don’t understand why speedtester authors don’t give users a choice depending on what they want to measure. For (1), a carefully designed UDP-based testing protocol that ramps up the rate until the pipe is just full would give a measurement of the bottleneck, which should be your line, provided the test server isn’t overloaded; I’ve seen some servers with a reservation system to prevent that from happening. And since what you want, i.e. (2), is a very reasonable and important request, we also need a ‘real-world scenario’ test of the payload throughput a user might expect, using IPv4 or IPv6, and TCP with or without timestamps: so we do use TCP, with four possible combinations.
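To illustrate what I mean by ‘ramps up the rate until the pipe is just full’, here is a toy sketch in Python. The start rate, step size, and the simulated 650 kbps bottleneck are all made-up numbers of mine, and a real tester would of course need a cooperating UDP receiver reporting loss back, not a lambda:

```python
def probe_bottleneck(send_at, start_kbps=100, step_kbps=100, max_kbps=10_000):
    """Ramp the offered UDP rate until the path starts dropping packets.

    send_at(rate_kbps) -> fraction of probe packets lost at that rate.
    Returns the highest rate that showed no loss, i.e. the pipe size.
    """
    best = 0
    rate = start_kbps
    while rate <= max_kbps:
        loss = send_at(rate)
        if loss > 0:          # pipe is just full: stop ramping
            break
        best = rate
        rate += step_kbps
    return best

# Stand-in for a real sender/receiver pair: a simulated 650 kbps bottleneck
simulated = lambda kbps: 0.0 if kbps <= 650 else (kbps - 650) / kbps
print(probe_bottleneck(simulated))  # -> 600
```

A production version would ramp in finer steps near the knee and watch queueing delay as well as loss, but the shape of the search is the same.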
Chrys makes a good point about bonding sensitivity. I suspect that the Firebrick doesn’t behave in quite such a remote-end-TCP-friendly fashion during upstream tests because I happen to have ill-matched upstream speeds on my lines: currently one line is about 650 kbps upstream and another only 350 kbps. An appropriately designed bonding system could be built, though, with a packet egress scheduler that ensures there is no packet reordering and that also keeps remote-end TCP happy by giving stable arrival times, so that RTT measurements don’t go wrong and confuse TCP’s algorithms. The latter is probably why I sometimes don’t get such great upstream performance with TCP. But the other odd thing is that the upstream performance reported by speedtesters varies quite a lot from month to month, occasionally by as much as total=1.0 Mbps one month vs 1.6 Mbps another, with the same software. I’ve posted a lot on these mysteries in the past, so the interested reader might search for old threads.
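To make the egress scheduler idea concrete, here is a toy Python model. This is my own sketch of the principle, not how the Firebrick actually does it: each packet goes on the line that can deliver it earliest, but its arrival is never allowed to precede the previous packet’s, so the remote end sees in-order delivery with monotonic arrival times even across ill-matched lines:

```python
class BondedScheduler:
    """Toy egress scheduler for bonded upstream lines (illustrative only)."""

    def __init__(self, line_rates_kbps):
        # next_free[i] = time (s) at which line i finishes its current queue
        self.rates = [r * 1000 / 8 for r in line_rates_kbps]  # bytes/sec
        self.next_free = [0.0] * len(self.rates)
        self.last_arrival = 0.0

    def schedule(self, size_bytes, now=0.0):
        # Finish time if this packet were sent on each line
        finishes = [max(f, now) + size_bytes / r
                    for f, r in zip(self.next_free, self.rates)]
        i = finishes.index(min(finishes))      # earliest-delivery line wins
        self.next_free[i] = finishes[i]
        # Never let a later packet arrive before an earlier one
        arrival = max(finishes[i], self.last_arrival)
        self.last_arrival = arrival
        return i, arrival

# Two ill-matched lines like mine: 650 kbps and 350 kbps upstream
sched = BondedScheduler([650, 350])
for n in range(4):
    line, t = sched.schedule(1500)
    print(n, line, round(t * 1000, 1))  # packet no., chosen line, arrival ms
```

The point of the `last_arrival` clamp is exactly the stable-arrival-times property: the slower line still gets used in proportion to its rate, but the receiver’s RTT samples stay smooth instead of sawing between the two line speeds.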
In contrast, AA’s Firebricks handle downstream traffic splitting highly efficiently and in a very TCP-friendly fashion, even though my lines are not perfectly matched. Mind you, when there are no faults, the lines’ downstream sync rates are usually reasonably closely matched. Right now my line 2 is dead and my line 4 is getting sicker by the day, both with hollow curve disease, line 4 giving a downstream sync rate of about 2.4 Mbps, while line 3, my new line, manages an incredible 3.04 Mbps downstream. Taking away all known overheads, a little arithmetic shows that IPv6 TCP payload throughput comes to about 84% of sync rate.
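For anyone wanting to check that arithmetic, here is the overhead sum in Python. The encapsulation model is my assumption: PPPoA (VC-mux) over ATM, a 1500-byte IP MTU, IPv6 with a plain 20-byte TCP header and no timestamps; the ~84% then falls out of the ATM cell tax plus headers:

```python
import math

MTU = 1500                 # IP packet size, bytes
HEADERS = 40 + 20          # IPv6 header + TCP header (no options)
PPP = 2                    # PPP protocol field (VC-mux, no extra framing)
AAL5_TRAILER = 8           # AAL5 trailer bytes
ATM_CELL_PAYLOAD = 48
ATM_CELL_SIZE = 53

payload = MTU - HEADERS                        # TCP payload per packet
aal5_pdu = MTU + PPP + AAL5_TRAILER            # bytes before cell segmentation
cells = math.ceil(aal5_pdu / ATM_CELL_PAYLOAD) # last cell is padded
wire_bytes = cells * ATM_CELL_SIZE             # bytes actually on the line

efficiency = payload / wire_bytes
print(f"{efficiency:.1%}")                     # ~84.9% of sync rate

sync_mbps = 3.04                               # line 3's downstream sync
print(f"{sync_mbps * efficiency:.2f} Mbps")    # ~2.58 Mbps payload
```

Enabling TCP timestamps adds 12 bytes per segment and shaves the figure to roughly 84.2%, so either way ‘about 84%’ holds.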