Kitz Forum

Broadband Related => ISPs => Topic started by: bogof on November 02, 2022, 10:09:15 PM

Title: 900Mbps+ Single thread throughput testing
Post by: bogof on November 02, 2022, 10:09:15 PM
I'm interested to see what factors might be limiting single thread performance across t'interweb.  My 900/115 BTW FTTP connection from AAISP performs great, but I'm interested to explore the limits.  I've noted, for example, performance dropping off the further the remote server is from home, with transfers from Hetzner's test servers being a bit bursty and prone to being backed off.  From reading around the subject a bit (thanks @Chrysalis), I gather this is perhaps expected for loss-based algorithms.

For some testing I've set up 4 x iperf3 servers at AWS, 2 in London and 2 in Frankfurt.  Of the pairs at each geographic location, one has cubic and one has BBR congestion control.
At the time of writing, single-thread downloads from the cubic speed test servers are around 757Mbps (London) and 464Mbps (Frankfurt).
The BBR speed test servers are around 849Mbps (London) and 835Mbps (Frankfurt).
Significant benefits from BBR as you get further away. 
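
For reference, the server side is nothing exotic; enabling BBR on a stock Linux box and running an iperf3 server is roughly the following (a generic sketch rather than my exact AWS config; BBR needs kernel 4.9+, the port number is arbitrary, and the cubic servers are just left at the distro default):
Code: [Select]
# switch the sending side to BBR (fq is the usually recommended qdisc to pair with it)
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
sysctl net.ipv4.tcp_congestion_control     # confirm it took
# listen for tests; with -R on the client, this server is the sender, so its algorithm is what matters
iperf3 -s -p 5201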

I wonder how other ISPs and connections do in this respect against these 4 servers.  I'm probably only really interested in other connections in the ~gigabit+ range.  If you have such a connection, PM me the IP you'll test from and I can give you the server details to try and report back.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: EC300 on November 03, 2022, 08:23:47 AM
I think the further the data travels, the more the speed usually drops, for all sorts of reasons. Things are more likely to get buffered over a longer distance and become a bit more bursty, and I suspect it is more expensive for the ISP to reserve a large pipe for itself for international traffic. Could it be that a lot of connectivity is still allocated in 1Gig lots to international destinations? If so, with 1000Meg connections we are fighting with others and will notice this more.  I also thought TCP slowed with increasing latency, or at least needed some tweaking to receive window sizes and things like that.  Just for fun, see the speed test to Sydney, Australia below.

AAISP peering arrangements are here: https://www.peeringdb.com/net/2077

IDNet (https://www.peeringdb.com/net/1108), who boast of their international peering arrangements, do have peering into Europe, but only at 1Gig, and I found the same with them using speedtest servers abroad: they were half the speed or less, presumably because it was just 1Gig of capacity.

(https://pic.nperf.com/r/3414966778353675-MoAamqC2.png)
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 03, 2022, 09:10:40 AM
Might be interesting to try setting up some AWS instances further away than Frankfurt too, for experimentation purposes, and to see how much difference BBR makes on links that far away.

It makes you wonder whether you are better off with an ISP peering in the UK with fat interconnects to the peering sites, sharing foreign connectivity once you get into the UK peering networks, or one peering into European locations directly but with only 1G links.  I wonder which is better if you're trying to run fat single threads.

If you look at someone with scale like TalkTalk they're peered into Frankfurt with 100G, but spread across a squillion users..

PeeringDB is pretty eye-opening.  ROBLOX have 400G into Linx - wow.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: EC300 on November 03, 2022, 10:00:33 AM
You could ask AAISP what sort of arrangements they have and what speeds you might expect to those destinations; they like techy questions.

Yes, PeeringDB is quite a useful tool to see what capacity ISPs have, certainly for the smaller ones that are real ISPs with their own connectivity, as opposed to those just reselling.  Cerberus, for example, have several 1G links, which seems a bit slow to me given they sell 1G fibre products, but it could be that their faster customers are routed over the faster links and slower customers over the slower peering links.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Chrysalis on November 03, 2022, 10:46:29 AM
BBR is better in my opinion because it can detect that congestion is on the way rather than reacting after it is already happening, which means it is less likely to over-saturate a connection, as happens with packet-loss-based algorithms. Because it relies less on packet loss it can also overcome certain types of problems, as seen on my AAISP connection where BBR seemed to bypass the issue there.  Cubic is loss-based, so if it sees packet loss it will back off, and it can back off a huge amount. The problem, of course, is that it is reacting to problems rather than foreseeing them, so it is prone to over-saturating lines; on narrow pipes like DSL it can act like a DoS and overwhelm the connection, breaking streaming etc. whilst downloading a game.  If there is continuous loss, cubic throughput will probably go very low.
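
If anyone wants to compare the two from their own Linux box, iperf3 can select the congestion algorithm per test on the sending side; a rough sketch (the server address here is a placeholder, and -C only works on Linux/FreeBSD where the chosen algorithm is available):
Code: [Select]
# algorithms the kernel currently offers
sysctl net.ipv4.tcp_available_congestion_control
sudo modprobe tcp_bbr                       # load bbr if it isn't listed (kernel 4.9+)
# two upload runs to the same server, cubic then bbr (the sender's algorithm is what matters)
iperf3 -c speedtest.example.net -t 30 -C cubic
iperf3 -c speedtest.example.net -t 30 -C bbr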
Title: Re: 900Mbps+ Single thread throughput testing
Post by: XGS_Is_On on November 03, 2022, 02:07:09 PM
I'm interested to see what factors might be limiting single thread performance across t'interweb.  My 900/115 BTW FTTP connection from AAISP performs great, but I'm interested to explore the limits.  I've noted, for example, performance dropping off the further the remote server is from home, with transfers from Hetzner's test servers being a bit bursty and prone to being backed off.  From reading around the subject a bit (thanks @Chrysalis), I gather this is perhaps expected for loss-based algorithms.

https://en.wikipedia.org/wiki/Bandwidth-delay_product is the big one.
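
As a rough worked example (round numbers, not measured RTTs): the amount of data a single stream needs in flight to keep 900Mbps going is bandwidth x RTT, so it roughly doubles between a London-ish and a Frankfurt-ish path, and the TCP buffers have to cover it:
Code: [Select]
# BDP in bytes = bandwidth (bit/s) x RTT (s) / 8
echo $(( 900000000 * 15 / 1000 / 8 ))   # ~1.7MB in flight at 15ms RTT
echo $(( 900000000 * 30 / 1000 / 8 ))   # ~3.4MB in flight at 30ms RTT
# the kernel's TCP receive buffer limits (min / default / max, in bytes) need to cover this
sysctl net.ipv4.tcp_rmem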
Title: Re: 900Mbps+ Single thread throughput testing
Post by: XGS_Is_On on November 03, 2022, 02:15:58 PM
To add... I'd also recommend checking out the following to see who an ISP peers with:
- https://bgp.tools/ (I prefer this one as some data is real-time)
- https://bgp.he.net/ (Can be delayed by 24-48 hours)

I wouldn't pay too much attention to this. It only covers public peering sessions, and it's very situational, depending on the ISP, how much of their edge is via public peering, how much is private peering (not shown on those), how much on-net CDNs are handling and how much is IP transit (also not shown).

Hop count is irrelevant, so not peering directly isn't a big deal, though it can occasionally be problematic. Capacity and latency are much better guides than which AS is connected to which, and given that transit is charged by the megabit per second per month, it's in the transit provider's interest to ensure that there's no congestion either on their network or on their connections to ISPs. ISPs congesting their transit are breaking their agreements and will be admonished to upgrade by the transit provider.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Ixel on November 03, 2022, 02:33:19 PM
I wouldn't pay too much attention to this. It only covers public peering sessions, and it's very situational, depending on the ISP, how much of their edge is via public peering, how much is private peering (not shown on those), how much on-net CDNs are handling and how much is IP transit (also not shown).

Hop count is irrelevant, so not peering directly isn't a big deal, though it can occasionally be problematic. Capacity and latency are much better guides than which AS is connected to which, and given that transit is charged by the megabit per second per month, it's in the transit provider's interest to ensure that there's no congestion either on their network or on their connections to ISPs. ISPs congesting their transit are breaking their agreements and will be admonished to upgrade by the transit provider.

Nevertheless, it's not irrelevant or entirely useless information; I'd consider it more supplemental. PeeringDB does list capacities at public peering exchange points, similar to the information shown under the IX tab on bgp.tools for a specific ASN. PeeringDB has the advantage of also showing a list of private peering facilities, notes and, if provided on the profile, estimated traffic levels and traffic ratio.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: XGS_Is_On on November 03, 2022, 03:34:25 PM
Someone like Virgin Media, AS 5089, really breaks this. Lots of private peering not mentioned, mostly IP transit as they're a tier 1.

Certainly has interesting information, just trying to ensure people don't consider it authoritative as far as an ISP's connectivity goes. It won't cover the majority of any ISP's capacity.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Chrysalis on November 03, 2022, 06:14:31 PM
Regarding the slow peak-time speeds reported yesterday for Clouvider: it's 110ms latency right now from AAISP.

Code: [Select]
Tracing route to lon.speedtest.clouvider.net [5.180.211.133]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  192.168.1.1
  2     9 ms     9 ms     8 ms  218.53.155.90.in-addr.arpa [90.155.53.218]
  3     8 ms     8 ms     8 ms  k-aimless.thn.aa.net.uk [90.155.53.101]
  4     9 ms     9 ms     9 ms  linx-lon1.thn2.peering.clouvider.net [195.66.227.14]
  5     9 ms     9 ms     9 ms  10.1.10.77
  6     9 ms    41 ms    12 ms  10.106.66.255
  7     *        *        *     Request timed out.
  8     *        *        *     Request timed out.
  9     *        *        *     Request timed out.
 10   110 ms   109 ms   110 ms  10.1.10.19
 11   100 ms   100 ms    99 ms  10.1.10.80
 12   110 ms   131 ms   110 ms  185.245.80.0
 13  2528 ms  2118 ms  2154 ms  185.245.80.1
 14   103 ms   102 ms    98 ms  5.180.211.133
Title: Re: 900Mbps+ Single thread throughput testing
Post by: EC300 on November 03, 2022, 06:42:16 PM
Regarding the slow peak-time speeds reported yesterday for Clouvider: it's 110ms latency right now from AAISP.

Code: [Select]
Tracing route to lon.speedtest.clouvider.net [5.180.211.133]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  192.168.1.1
  2     9 ms     9 ms     8 ms  218.53.155.90.in-addr.arpa [90.155.53.218]
  3     8 ms     8 ms     8 ms  k-aimless.thn.aa.net.uk [90.155.53.101]
  4     9 ms     9 ms     9 ms  linx-lon1.thn2.peering.clouvider.net [195.66.227.14]
  5     9 ms     9 ms     9 ms  10.1.10.77
  6     9 ms    41 ms    12 ms  10.106.66.255
  7     *        *        *     Request timed out.
  8     *        *        *     Request timed out.
  9     *        *        *     Request timed out.
 10   110 ms   109 ms   110 ms  10.1.10.19
 11   100 ms   100 ms    99 ms  10.1.10.80
 12   110 ms   131 ms   110 ms  185.245.80.0
 13  2528 ms  2118 ms  2154 ms  185.245.80.1
 14   103 ms   102 ms    98 ms  5.180.211.133

Seeing the same on my AAISP line:

Code: [Select]
Pinging 5.180.211.133 with 32 bytes of data:
Reply from 5.180.211.133: bytes=32 time=99ms TTL=56
Reply from 5.180.211.133: bytes=32 time=104ms TTL=56
Reply from 5.180.211.133: bytes=32 time=108ms TTL=56
Reply from 5.180.211.133: bytes=32 time=109ms TTL=56

Ping statistics for 5.180.211.133:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 99ms, Maximum = 109ms, Average = 105ms

Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 04, 2022, 06:28:27 AM
You could ask AAISP what sort of arrangements they have and what speeds you might expect to those destinations; they like techy questions.
They do have a direct link with Amazon, as they told me (and all the tests I was running to AWS go over that link), though I don't know how much capacity.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 04, 2022, 07:34:30 AM
So, had some interesting results in.
@Chrysalis's results against my servers were not overly dissimilar to mine, though in their case BBR didn't seem to improve matters.
AAISP got a staffer to run tests on their line; one on FTTP BTW did manage to hit single-thread line rate to the two London AWS servers, and the Frankfurt ones were not far off.

So this gave me a thread to pull on, as that is a similar connection but with better performance.

The testing I had been doing previously was predominantly on the router itself (Unifi Dream Machine SE), logged in via SSH. There, AAISP's server would be line rate, but speeds dropped to the AWS servers and improved with BBR.

I installed Ubuntu on the gigabit laptop I have here so I could use it as a PPPoE client.
From the Ubuntu laptop, routing via the Dream Machine as a DHCP client, even the single thread to the AAISP speedtest server was down to about 600Mbps (though oddly, the single-thread test to the AWS London non-BBR server was notably faster than this).  Wired clients can, however, achieve line rate in multithreaded tests.
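
(For anyone curious, the PPPoE client on the laptop is just the standard Linux pppd; something along these lines, where the interface name and login are placeholders and this is a generic sketch rather than my exact config:)
Code: [Select]
sudo apt install ppp pppoeconf
# /etc/ppp/peers/aaisp -- interface name and username are placeholders
sudo tee /etc/ppp/peers/aaisp >/dev/null <<'EOF'
plugin rp-pppoe.so enp0s31f6
user "example@a.1"
noipdefault
defaultroute
persist
hide-password
mtu 1492
mru 1492
EOF
# matching secret (client / server / password); add to pap-secrets too if the ISP uses PAP
echo '"example@a.1" * "examplepassword"' | sudo tee -a /etc/ppp/chap-secrets
sudo pon aaisp     # bring the session up; poff aaisp drops it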

So at this point it looks like the setup is on the edge once the overhead of routing through the Dream Machine is added.

Running PPPoE directly on the laptop, my results are much more like their staffer's, with the two London tests and even the Frankfurt BBR test very close to line rate.  The only odd anomaly is that it often takes a while (maybe 10-15s) for the AA speedtest to "ramp up" to line rate.

I also used the same laptop routed via the AAISP Technicolor router; that's similar to PPPoE on the laptop, though it seems more prone to the connection startup ramp effect.

I think what this says to me at the moment is that for gigabit single threads to make it all the way through from client to server, everything has to be just right, and the Dream Machine doesn't really seem to be there, particularly when shovelling data between interfaces.  It can do gigabit speed across multiple connections, but a single connection seems to hit some kind of variability that prevents full rates.  I'm not sure what that ramping behaviour is, but as I was just running desktop Ubuntu, with GUI etc., I wouldn't really like to bet on what is going on there.  It was seen both with PPPoE on the laptop and via the Technicolor, so perhaps it is a function of the laptop / Ethernet card, but it did not appear when running the iperf3 tests on the Dream Machine router itself (though I did see it with a Hetzner file download).  I have also seen big file downloads from Hetzner to an AWS instance ramp like that over many seconds, so it's not unique to this network or equipment.  It was interesting that testing from my laptop via the Dream Machine router was a bit faster to my AWS London cubic iperf3 server than to AAISP's own server; I don't know if that points to any possible tweaks to that server.

I think what I'm going to do is try and get another router set up and retest with that, though it will be a while before that happens.

I've been very impressed with AAISP's response to this; they've been very helpful.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 04, 2022, 08:08:17 AM
Seeing the same on my AAISP line:

Code: [Select]
Pinging 5.180.211.133 with 32 bytes of data:
Reply from 5.180.211.133: bytes=32 time=99ms TTL=56
Reply from 5.180.211.133: bytes=32 time=104ms TTL=56
Reply from 5.180.211.133: bytes=32 time=108ms TTL=56
Reply from 5.180.211.133: bytes=32 time=109ms TTL=56

Ping statistics for 5.180.211.133:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 99ms, Maximum = 109ms, Average = 105ms
I'll look again this evening, but I note that  iperf3 tests from Clouvider to an AWS instance also got much slower at peak times, so I don't think it's unique to AAISP.

Edit: no need to wait til this evening, does look like that London clouvider server is a bit under the weather or poorly connected:

London server:
Code: [Select]
ubuntu@ip-172-31-15-201:~$ iperf3 -R -c lon.speedtest.clouvider.net -p 5209
Connecting to host lon.speedtest.clouvider.net, port 5209
Reverse mode, remote host lon.speedtest.clouvider.net is sending
[  5] local 172.31.15.201 port 45954 connected to 5.180.211.133 port 5209
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  23.9 MBytes   201 Mbits/sec                 
[  5]   1.00-2.00   sec  26.0 MBytes   218 Mbits/sec                 
[  5]   2.00-3.00   sec  26.5 MBytes   222 Mbits/sec                 
[  5]   3.00-4.00   sec  53.0 MBytes   445 Mbits/sec                 
[  5]   4.00-5.00   sec  64.1 MBytes   538 Mbits/sec                 
[  5]   5.00-6.00   sec  42.6 MBytes   357 Mbits/sec                 
[  5]   6.00-7.00   sec  34.3 MBytes   288 Mbits/sec                 
[  5]   7.00-8.00   sec  46.6 MBytes   391 Mbits/sec                 
[  5]   8.00-9.00   sec  35.0 MBytes   294 Mbits/sec                 
[  5]   9.00-10.00  sec  60.1 MBytes   504 Mbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   413 MBytes   347 Mbits/sec    0             sender
[  5]   0.00-10.00  sec   412 MBytes   346 Mbits/sec                  receiver

iperf Done.

VS Manchester:
Code: [Select]
ubuntu@ip-172-31-15-201:~$ iperf3 -R -c man.speedtest.clouvider.net -p 5209
Connecting to host man.speedtest.clouvider.net, port 5209
Reverse mode, remote host man.speedtest.clouvider.net is sending
[  5] local 172.31.15.201 port 55526 connected to 103.214.44.130 port 5209
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   239 MBytes  2.01 Gbits/sec                 
[  5]   1.00-2.00   sec   347 MBytes  2.91 Gbits/sec                 
[  5]   2.00-3.00   sec   355 MBytes  2.98 Gbits/sec                 
[  5]   3.00-4.00   sec   355 MBytes  2.98 Gbits/sec                 
[  5]   4.00-5.00   sec   355 MBytes  2.97 Gbits/sec                 
[  5]   5.00-6.00   sec   354 MBytes  2.97 Gbits/sec                 
[  5]   6.00-7.00   sec   354 MBytes  2.97 Gbits/sec                 
[  5]   7.00-8.00   sec   354 MBytes  2.97 Gbits/sec                 
[  5]   8.00-9.00   sec   354 MBytes  2.97 Gbits/sec                 
[  5]   9.00-10.00  sec   354 MBytes  2.97 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.04  sec  3.34 GBytes  2.86 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  3.34 GBytes  2.87 Gbits/sec                  receiver

iperf Done.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Chrysalis on November 04, 2022, 10:08:18 AM
That higher latency isn't unique to AAISP; the trace I posted showed the jump well after it left AAISP's area of control.  I pinged it today on VM and it was 50ms and jumping around, when not long ago it was 23ms.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: craigski on November 04, 2022, 10:25:28 AM
How come I see private IP addresses (10.x.x.x) in some of the posts above showing the routes to the servers you are testing against?
Title: Re: 900Mbps+ Single thread throughput testing
Post by: EC300 on November 04, 2022, 10:54:53 AM
How come I see private IP addresses (10.x.x.x) in some of the posts above showing the routes to the servers you are testing against?

As I understand it, it is because at those points we are routed over someone's internal network and see the private IP address returned in the ICMP response as the sender address.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Alex Atkin UK on November 04, 2022, 11:31:13 AM
As I understand it, it is because at those points we are routed over someone's internal network and see the private IP address returned in the ICMP response as the sender address.

Yeah, it's confusing, but routers on the Internet don't need to have a public IP address; they just need to know where to send the packet next to reach the destination.

It's actually surprising this isn't done more, as it's an easy way to avoid wasting public addresses within your backbone network, and I'd imagine it adds a level of security as there is no way to address that router directly from outside the network.

Actually it's possible it IS more common, just that those routers are configured not to decrement the TTL, so they will be invisible to traceroutes.  A lot of consumer routers used to do this, so your first hop was always the ISP and your local router would not be shown.  I assume it fell out of fashion because seeing whether your home router is the cause of latency is useful to know, especially if you're using WiFi.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: craigski on November 04, 2022, 12:36:06 PM
Yeah, it's confusing, but routers on the Internet don't need to have a public IP address; they just need to know where to send the packet next to reach the destination.
Yes, confusing, as there is no reverse lookup for a private IP. How do you know who owns that private IP, where it is located, etc.?

eg from one of the traceroutes above:

Code: [Select]
  5     9 ms     9 ms     9 ms  10.1.10.77
  6     9 ms    41 ms    12 ms  10.106.66.255
  7     *        *        *     Request timed out.
  8     *        *        *     Request timed out.
  9     *        *        *     Request timed out.
 10   110 ms   109 ms   110 ms  10.1.10.19
 11   100 ms   100 ms    99 ms  10.1.10.80
 12   110 ms   131 ms   110 ms  185.245.80.0

The above tells me that starting at 10.1.10.19 there may be some congestion, but where/what is 10.1.10.19 ?

Wouldn't it be better to test against servers that have a network path that use known registered networks, rather than going via an 'unknown' network?


Title: Re: 900Mbps+ Single thread throughput testing
Post by: Alex Atkin UK on November 04, 2022, 12:46:04 PM
Yes, confusing, as there is no reverse lookup for a private IP. How do you know who owns that private IP, where it is located, etc.?

You don't, but likewise you have no idea how many routers you are going over that are invisible due to not decrementing the TTL, or where those not responding to ping are located.

Actually one thing I don't understand is how we get a ping response in a traceroute from a router on a private network address?
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Chrysalis on November 04, 2022, 01:00:59 PM
Iperf seems a bit weird: e.g. using it on Windows the cwnd won't go above 256k, and throughput is very low as a result, but grabbing a file off FTP on the same server came down at 800Mbit with a much larger cwnd.  I think it might be struggling with modern network stacks.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 04, 2022, 01:02:24 PM
Iperf seems a bit weird: e.g. using it on Windows the cwnd won't go above 256k, and throughput is very low as a result, but grabbing a file off FTP on the same server came down at 800Mbit with a much larger cwnd.  I think it might be struggling with modern network stacks.
Iperf on Windows is I think maybe compiled using Cygwin, perhaps that's an issue?
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Ixel on November 04, 2022, 07:19:27 PM
I don't know if anyone here is still suffering some kind of an issue (congestion?) but for comparison about 10 minutes ago I did a traceroute as well. Fine here to Clouvider London.

Code: [Select]
Tracing route to lon.speedtest.clouvider.net [5.180.211.133]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  home.router [x.x.x.x]
  2     4 ms     4 ms     4 ms  mythicbeasts.router [x.x.x.x]
  3     4 ms     4 ms     4 ms  lo.router-sov-aggr-b.mythic-beasts.com [93.93.133.13]
  4     5 ms     5 ms     4 ms  172.16.3.4
  5     4 ms     4 ms     4 ms  172.16.2.0
  6     4 ms     5 ms     4 ms  lo.router-sov-a.mythic-beasts.com [93.93.133.0]
  7     5 ms     5 ms     5 ms  linx-lon1.eq-ld8.peering.clouvider.net [195.66.225.184]
  8    21 ms    40 ms    47 ms  no-ptr.local [10.1.10.148]
  9     5 ms     5 ms     5 ms  no-ptr.local [10.1.10.69]
 10     6 ms     5 ms     6 ms  no-ptr.local [10.106.66.255]
 11     *        *        *     Request timed out.
 12     *        *        *     Request timed out.
 13     *        *        *     Request timed out.
 14     7 ms     5 ms    11 ms  no-ptr.local [10.1.10.19]
 15     6 ms     6 ms     6 ms  no-ptr.local [10.1.10.80]
 16     5 ms     5 ms     5 ms  185.245.80.0
 17     *        *        *     Request timed out.
 18     6 ms     4 ms     5 ms  5.180.211.133

Recent result from my automated route optimiser:
Code: [Select]
IP Address:       5.180.211.133
Preferred Route:  Not Set

Upstream Name           Packets Recvd   Avg Latency (ms)        Jitter (ms)
UK Dedicated Servers    6               6                       0
Misaka Networks         6               6                       0
Mythic Beasts           6               5                       0
The Constant Company    6               5                       0

Last Queried: 2022-11-04 19:10:58

Also from my Lightning Fibre IP address:
Code: [Select]
#  ADDRESS         LOSS  SENT  LAST     AVG  BEST  WORST  STD-DEV  STATUS                       
 1                  100%     4  timeout                                                           
 2                  100%     3  timeout                                                           
 3  194.24.162.40   0%       3  4.3ms    4.1  3.9   4.3    0.2                                   
 4  194.24.162.1    0%       3  4.1ms    3.9  3.4   4.3    0.4                                   
 5  195.66.225.184  0%       3  4.7ms    4.7  4.6   4.8    0.1                                   
 6  10.1.10.148     0%       3  4.6ms    4.6  4.5   4.6    0        <MPLS:L=704,E=0 L=21,E=0,T=1>
 7  10.1.10.69      0%       3  3.9ms    4.9  3.9   6.1    0.9      <MPLS:L=1278,E=0 L=21,E=0,T=2>
 8  10.106.66.255   0%       3  4ms      4.4  4     4.6    0.3                                   
 9                  100%     3  timeout                                                           
10                  100%     3  timeout                                                           
11                  100%     3  timeout                                                           
12  10.1.10.19      0%       3  4.9ms    5.2  4.9   5.7    0.4      <MPLS:L=263,E=0 L=20,E=0,T=1>
13  10.1.10.80      0%       3  5ms      5    4.9   5.1    0.1      <MPLS:L=19,E=0 L=20,E=0,T=2> 
14  185.245.80.0    0%       3  4.6ms    4.5  3.9   4.9    0.4                                   
15                  100%     3  timeout                                                           
16  5.180.211.133   0%       3  4.7ms    4.6  4.5   4.7    0.1
Title: Re: 900Mbps+ Single thread throughput testing
Post by: XGS_Is_On on November 05, 2022, 01:48:05 AM
Unifi Dream Machine kit doesn't seem well suited to routing. Great for other shiny functionality but below par compared with a Mikrotik with a base of the same A57 quad-core ARM CPU.

Mikrotik's CCR2004-1G-12S+2XS uses an Amazon Annapurna Labs Alpine v2 CPU with 4x 64-bit ARMv8-A Cortex-A57 cores running at 1.7 GHz.

The UDM SE uses 'Quad-Core ARM® Cortex®-A57 at 1.7 GHz'. A bit more digging and it's the same Annapurna AL32400 found in the Mikrotik.

The Mikrotik can push over 3.5 Gbit/s in each direction over each core simultaneously. I've done so with one, running 4 iPerf threads and getting 15 Gbit/s throughput, 3.75 Gbit/s/core. Each thread ran on a single core.
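
One way to reproduce that sort of multi-stream test so that each stream definitely gets its own process (and so can land on its own core; older iperf3 builds are single-threaded, so -P on its own may not spread the load) is simply to run several client instances against separate server ports. A rough sketch with a placeholder target address, not necessarily how my test was run:
Code: [Select]
# far side: one listener per port, e.g.  for p in 5201 5202 5203 5204; do iperf3 -s -p $p -D; done
for p in 5201 5202 5203 5204; do
    iperf3 -c 192.0.2.10 -p $p -t 30 &     # four parallel senders through the router under test
done
wait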

No idea why the UDM SE runs so far short but from the tests I've seen it's way slower single thread and multiple thread. The UDM Pro was abysmal as a router in my experience too. Mine is sitting in my loft where it's been living for a year in shame. My great thanks to Ubiquiti for making me move to Mikrotik: I've not regretted it.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Alex Atkin UK on November 05, 2022, 08:06:07 AM
I've been of the opinion for a while that Ubiquiti aren't great with software, given their reputation for launching devices full of bugs and taking a few years for the firmware to stabilise.

Of course the simple answer could be they aren't letting you properly turn off IPS and DPI, or are configuring it poorly internally causing more overhead than necessary.

Also it seems their management app is still Java based, which is going to waste a ton of resources (posted on their forum).
Code: [Select]
6891 1778 902   S  4284m 107% 20% /usr/bin/java <- This is the Java process in a IDLE UDM.
I've also seen claims that they use much cheaper board designs than their competitors, though I'm not sure if/how that would reduce performance; it would just potentially impact reliability.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 05, 2022, 12:45:52 PM
Unifi Dream Machine kit doesn't seem well suited to routing. Great for other shiny functionality but below par compared with a Mikrotik with a base of the same A57 quad-core ARM CPU.

Mikrotik's CCR2004-1G-12S+2XS uses an Amazon Annapurna Labs Alpine v2 CPU with 4x 64-bit ARMv8-A Cortex-A57 cores running at 1.7 GHz.

The UDM SE uses 'Quad-Core ARM® Cortex®-A57 at 1.7 GHz'. A bit more digging and it's the same Annapurna AL32400 found in the Mikrotik.

The Mikrotik can push over 3.5 Gbit/s in each direction over each core simultaneously. I've done so with one, running 4 iPerf threads and getting 15 Gbit/s throughput, 3.75 Gbit/s/core. Each thread ran on a single core.

No idea why the UDM SE runs so far short but from the tests I've seen it's way slower single thread and multiple thread. The UDM Pro was abysmal as a router in my experience too. Mine is sitting in my loft where it's been living for a year in shame. My great thanks to Ubiquiti for making me move to Mikrotik: I've not regretted it.
Maybe they're just trying to do too much, with too much SW cruft running all the time.  Just idling, an average of 25% of each core is gone, and I think things like the traffic identification add some overhead, as I believe they all run Suricata.  But even with that disabled I seem to struggle to get much through it.  I don't think the PPPoE overhead helps; I've not tried a DHCP-only setup.

It's a shame really, as the hardware makes some nice choices; the UDM Pro SE is quiet (if you don't put a disk in it), outwardly has a good-spec CPU, and has good connectivity with 2x 10G SFP into the AL32400, an 8-port PoE gig switch and a single 2.5G copper port (albeit the latter is on a Realtek PCIe device).  It's really, for me, a bit of a sweet-spot device for a nice home setup, and the management UI is nice.

I wonder how the Mikrotik devices fare with PPPoE added into the mix on a WAN interface?  Maybe they're better at allowing you to allocate things to particular cores.

I'm not sure whether any of these ARM devices are really powerful enough if you end up doing much in SW (as opposed to offloaded).  As a noddy test I just ran iperf3 as a server on the Dream Machine, and connected to it over the localhost interface from itself.  The max throughput was 12Gb/sec.  Have you tried that out of interest on your Mikrotik? 
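
(If anyone wants to run the same noddy loopback test, it's just the following; no NIC is involved, so it's a rough measure of the CPU and network stack only:)
Code: [Select]
iperf3 -s -D                # start a server on the box itself, in the background
iperf3 -c 127.0.0.1 -t 10   # single-stream test over loopback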

By comparison, a not particularly flash laptop (Lenovo E14 AMD Ryzen 4500U) is doing around 45Gb/sec on the same test in Ubuntu.  Passmark for that CPU is 11000 all / 2400 single core.  A decrepit Mac Mini 2014 1.4GHz i5 (2400 all / 1400 single) is able to do 18Gbps in OSX.  None of these machines are useful to me for routing duties though, owing to lack of ports.

For now I've just bought a Lenovo M720q to experiment with; it's got a pretty good CPU benchmark at 7500 all / 1900 single (much better than the Celerons in most of the Chinese Amazon router boxes that seem so common now), it's a tiny and nicely built unit, and it has a PCIe slot that will take a quad Intel GbE LAN card that I've picked up.   It will be an interesting data point to see how it behaves in some testing.   I don't need to route more than 1Gbps, but I think I do need it to happen in a timely fashion to facilitate fast single-thread speeds.

Title: Re: 900Mbps+ Single thread throughput testing
Post by: Alex Atkin UK on November 05, 2022, 01:58:38 PM
Yeah, but the power consumption on those N5105 boxes, and how dirt-cheap they are, makes them great for up to 2.5Gbit.  Not sure where they top out on PPP though.

Having a full PC box that can be powered over PoE is really nice.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 05, 2022, 04:56:19 PM
Yeah, but the power consumption on those N5105 boxes, and how dirt-cheap they are, makes them great for up to 2.5Gbit.  Not sure where they top out on PPP though.

Having a full PC box that can be powered over PoE is really nice.
They do look pretty neat.  I see the CPU isn't that bad: 4000 all / 1400 single core.  I wonder where the limits are with PPPoE; I understand they're different on BSD vs Linux.  I've always used OpenWRT in the past when rolling my own routers, but I am intrigued by OPNsense etc.
 
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 05, 2022, 05:07:07 PM
@skyeci ran a Windows box and a Linux box against my AWS test servers. The Windows iperf3 again topped out just below 200Mbps, but Linux was line rate to the London cubic server, marginally below for the London BBR, and 632/772Mbps for Frankfurt cubic/BBR respectively.  Nice results from Zen FTTP BTW; it obviously can work great :)  Shame it didn't work out for me. 
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Chrysalis on November 05, 2022, 05:36:58 PM
Ixel, it's still over 100ms here; they seem to have transit issues. 

For the benefit of this thread, bogof has now managed to get decent speeds on a download from one of my Linux Hetzner servers.  Likewise, I got my performance over 800Mbps on it as well.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Ixel on November 05, 2022, 06:54:57 PM
Ixel, it's still over 100ms here; they seem to have transit issues.

I've just tried again and can confirm that I'm now getting this too with them sadly. The problem appears to become noticeable from the 14th hop (10.1.10.19).

Code: [Select]
  8     6 ms     5 ms     5 ms  no-ptr.local [10.1.10.148]
  9     5 ms     5 ms     5 ms  no-ptr.local [10.1.10.69]
 10     4 ms     5 ms     5 ms  no-ptr.local [10.106.66.255]
 .....
 14   100 ms    95 ms   116 ms  no-ptr.local [10.1.10.19]
 15    59 ms    62 ms    67 ms  no-ptr.local [10.1.10.80]
 16   106 ms   106 ms   106 ms  5.180.211.133

Code: [Select]
IP Address:       5.180.211.133
Preferred Route:  UK Dedicated Servers

Upstream Name           Packets Recvd   Avg Latency (ms)        Jitter (ms)
UK Dedicated Servers    59              92                      17
Misaka Networks         58              87                      22
Mythic Beasts           58              92                      17
The Constant Company    59              92                      17

Last Queried: 2022-11-05 18:47:12

They are definitely having some kind of a problem. :D
Title: Re: 900Mbps+ Single thread throughput testing
Post by: XGS_Is_On on November 07, 2022, 09:52:06 AM
I wonder how the Mikrotik devices fare with PPPoE added into the mix on a WAN interface?  Maybe they're better at allowing you to allocate things to particular cores.

I'm not sure whether any of these ARM devices are really powerful enough if you end up doing much in SW (as opposed to offloaded).  As a noddy test I just ran iperf3 as a server on the Dream Machine, and connected to it over the localhost interface from itself.  The max throughput was 12Gb/sec.  Have you tried that out of interest on your Mikrotik?

The CCR2004 doesn't have any offload, it's all CPU. PPPoE seems to have some impact though I couldn't be specific. It's not huge.

I haven't done a loopback iPerf test. I have run iPerf through the CCR at 15 Gbit/s with a fair amount of the CPU spent on I/O wait. It is good for 25 Gbit/s under certain circumstances.

The CCR is now sitting in a corner waiting to go on Fleabay. It's been supplanted by their CCR2116. This beast contains an AL73400 16 core, 2 GHz CPU and a Marvell Aldrin switch chip with some L3 HW offload functionality too. It laughs at what I ask it to do, so I'm hoping to find more for it soon. Screenshot of this endlessly abused bit of kit attached. Busy, busy, busy.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Chrysalis on November 07, 2022, 02:28:13 PM
looks real busy. :)
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Alex Atkin UK on November 07, 2022, 05:59:54 PM
I wonder if it can run Folding at Home? ;)
Title: Re: 900Mbps+ Single thread throughput testing
Post by: XGS_Is_On on November 10, 2022, 10:38:45 AM
I wonder if it can run Folding at Home? ;)

Certainly can. Supports containers. :)
Title: Re: 900Mbps+ Single thread throughput testing
Post by: Chrysalis on November 18, 2022, 08:38:48 PM
Just had to reboot the server I had tinkered with to get high single-threaded gigabit speeds from Germany; some kernel OOM issues from allocating too much memory to TCP buffers. :p

I think I found the culprit; I'd tinkered with one too many things.  It might be a kernel bug though, as the memory stayed allocated even with no open TCP sockets.
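
For anyone poking at the same area, the standard knobs and the easiest ways to see what TCP has actually got allocated are roughly (generic Linux sysctls, not my exact values):
Code: [Select]
# per-socket buffer limits (min / default / max, bytes) and the system-wide TCP memory limits (pages)
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.ipv4.tcp_mem
# how much memory TCP currently holds -- the "mem" figure here is in pages
cat /proc/net/sockstat
# per-connection socket memory and TCP info
ss -tmi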
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 24, 2022, 11:43:50 AM
The CCR2004 doesn't have any offload, it's all CPU. PPPoE seems to have some impact though I couldn't be specific. It's not huge.

I haven't done a loopback iPerf test. I have run iPerf through the CCR at 15 Gbit/s with a fair amount of the CPU spent on I/O wait. It is good for 25 Gbit/s under certain circumstances.

The CCR is now sitting in a corner waiting to go on Fleabay. It's been supplanted by their CCR2116. This beast contains an AL73400 16 core, 2 GHz CPU and a Marvell Aldrin switch chip with some L3 HW offload functionality too. It laughs at what I ask it to do, so I'm hoping to find more for it soon. Screenshot of this endlessly abused bit of kit attached. Busy, busy, busy.
Be interesting to know what the loopback iperf3 looks like on either of those. 

--

I just built a new router box using a Lenovo M720q Tiny 1L PC and an Intel i340-T4 quad gigabit card; its loopback iperf3 under OpenWRT is 54Gbits/sec, so the CPU shouldn't ever become a bottleneck for those adapters! It idles at around a percent or so CPU usage, only spiking a bit when running a speedtest over PPPoE, but still nowhere near maxing out any of the six cores, whereas the Unifi box would be at 100% with gigabit PPPoE.

Thanks to having a separate /29 block via AAISP, I've been able to make the Lenovo box do the PPPoE to AAISP via one of the gigabit ports, and just present an IP connection with a static address from the /29 block to the Unifi on another port, without having to do any double NAT.  This seems to work nicely; it has given me a bit more control over the connection than the Unifi box allows, and reduced the load on the Unifi since it no longer has to do gigabit PPPoE. 

This seems like quite an acceptable solution, as I do like the Ubiquiti management side; it's just a shame the boxes are a bit underwhelming for fat PPPoE connections.

Title: Re: 900Mbps+ Single thread throughput testing
Post by: XGS_Is_On on November 24, 2022, 01:33:29 PM
Be interesting to know what the loopback iperf3 looks like on either of those. 

Attached from the 2116 using TCP. If I up the thread count a bit it still maxes at 70 but averages over 69. Fnarf.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 24, 2022, 03:34:27 PM
Phwoaarh would you get a load of that throughput... :)
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on November 24, 2022, 04:41:13 PM
Though multiple threads on this Intel box still have it beat: close to 160Gbits/sec...
Of course, it has naff-all IO, so it's kind of useless for routing.  It does show how these ARM boxes can still lag behind in raw grunt against even fairly pedestrian PC HW.


Title: Re: 900Mbps+ Single thread throughput testing
Post by: XGS_Is_On on November 25, 2022, 12:16:15 AM
Not surprised. The CPU is there to ensure the box can saturate all its ports if L3HW acceleration isn't being used, and can converge a full BGP table quickly.

EDIT: Ah I didn't mention, the number I gave was 70 each way. The Intel still has it beaten of course but I guess not surprising. The ARM CPU is low power, low cost and is 2018 vintage. It's the same CPU in Mikrotik's 2216 router.
Title: Re: 900Mbps+ Single thread throughput testing
Post by: bogof on October 07, 2023, 11:34:04 PM
Just got a new FTTP 1000/115 connection with Unchained ISP.
Very impressed with single thread performance.

A handful of tests show I'm getting close to line rate single threads to London and Manchester Clouvider hosts, and ~800Mbps to Germany, which in all cases seem to beat the AAISP line I still have running.  Tests done at 11pm on a Saturday night.

Unchained:
Code: [Select]
root@OpenWrt:~# iperf3 -c man.speedtest.clouvider.net -R -p 5208 -t 30 -i 30
Connecting to host man.speedtest.clouvider.net, port 5208
Reverse mode, remote host man.speedtest.clouvider.net is sending
[  5] local 185.250.11.xx port 34260 connected to 103.214.44.130 port 5208
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  3.21 GBytes   919 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.04  sec  3.21 GBytes   918 Mbits/sec   68             sender
[  5]   0.00-30.00  sec  3.21 GBytes   919 Mbits/sec                  receiver

iperf Done.
root@OpenWrt:~# iperf3 -c lon.speedtest.clouvider.net -R -p 5208 -t 30 -i 30
Connecting to host lon.speedtest.clouvider.net, port 5208
Reverse mode, remote host lon.speedtest.clouvider.net is sending
[  5] local 185.250.11.xx port 33474 connected to 5.180.211.133 port 5208
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  3.24 GBytes   929 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.04  sec  3.25 GBytes   928 Mbits/sec   73             sender
[  5]   0.00-30.00  sec  3.24 GBytes   929 Mbits/sec                  receiver

iperf Done.
root@OpenWrt:~# iperf3 -c speedtest.wtnet.de -p 5209 -R -t 30 -i 30
Connecting to host speedtest.wtnet.de, port 5209
Reverse mode, remote host speedtest.wtnet.de is sending
[  5] local 185.250.11.xx port 38750 connected to 213.209.106.95 port 5209
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  2.79 GBytes   800 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.05  sec  2.82 GBytes   805 Mbits/sec  153             sender
[  5]   0.00-30.00  sec  2.79 GBytes   800 Mbits/sec                  receiver

iperf Done.

AAISP:
Code: [Select]
root@OpenWrt:~# iperf3 -c man.speedtest.clouvider.net -R -p 5208 -t 30 -i 30
Connecting to host man.speedtest.clouvider.net, port 5208
Reverse mode, remote host man.speedtest.clouvider.net is sending
[  5] local 81.2.116.xx port 48290 connected to 103.214.44.130 port 5208
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  2.25 GBytes   645 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.04  sec  2.25 GBytes   645 Mbits/sec  373             sender
[  5]   0.00-30.00  sec  2.25 GBytes   645 Mbits/sec                  receiver

iperf Done.
root@OpenWrt:~# iperf3 -c lon.speedtest.clouvider.net -R -p 5208 -t 30 -i 30
Connecting to host lon.speedtest.clouvider.net, port 5208
Reverse mode, remote host lon.speedtest.clouvider.net is sending
[  5] local 81.2.116.xx port 35468 connected to 5.180.211.133 port 5208
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  2.43 GBytes   697 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.04  sec  2.44 GBytes   697 Mbits/sec  604             sender
[  5]   0.00-30.00  sec  2.43 GBytes   697 Mbits/sec                  receiver

iperf Done.
root@OpenWrt:~# iperf3 -c speedtest.wtnet.de -p 5209 -R -t 30 -i 30
Connecting to host speedtest.wtnet.de, port 5209
Reverse mode, remote host speedtest.wtnet.de is sending
[  5] local 81.2.116.xx port 56904 connected to 213.209.106.95 port 5209
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  1.65 GBytes   472 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.04  sec  1.67 GBytes   478 Mbits/sec  1806             sender
[  5]   0.00-30.00  sec  1.65 GBytes   472 Mbits/sec                  receiver

iperf Done.