Author Topic: 900Mbps+ Single thread throughput testing  (Read 7521 times)

craigski

  • Reg Member
  • ***
  • Posts: 294
Re: 900Mbps+ Single thread throughput testing
« Reply #15 on: November 04, 2022, 10:25:28 AM »

How come I see private IP addresses (10.x.x.x) in some of the posts above showing the routes to the servers you are testing against?
Logged

EC300

  • Member
  • **
  • Posts: 47
Re: 900Mbps+ Single thread throughput testing
« Reply #16 on: November 04, 2022, 10:54:53 AM »

Quote from: craigski on November 04, 2022, 10:25:28 AM
How come I see private IP addresses (10.x.x.x) in some of the posts above showing the routes to the servers you are testing against?

As I understand it, it is because at those points we are routed over someone's internal network and see the private IP address returned in the ICMP reply as the sender address.
Logged

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5289
    • Thinkbroadband Quality Monitors
Re: 900Mbps+ Single thread throughput testing
« Reply #17 on: November 04, 2022, 11:31:13 AM »

Quote from: EC300 on November 04, 2022, 10:54:53 AM
As I understand it, it is because at those points we are routed over someone's internal network and see the private IP address returned in the ICMP reply as the sender address.

Yeah, it's confusing, but routers on the Internet don't need to have a public IP address; they just need to know where to send the packet next to reach the destination.

It's actually surprising this isn't done more, as it's an easy way to avoid wasting public addresses within your backbone network, and I'd imagine it adds a level of security, as there is no way to address that router directly from outside their network.

Actually, it's possible it IS more common, just that those routers are configured not to decrement the TTL, so they are invisible to traceroutes. A lot of consumer routers used to do this, so your first hop was always the ISP and your local router would not be shown. I assume it fell out of fashion because being able to see whether your home router is the cause of latency is useful, especially if you're using WiFi.
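For illustration, here is a minimal traceroute-style sketch in Python (an assumption on my part: it needs scapy installed and root privileges for raw sockets). The address shown for each hop is simply whatever source address the router puts in its ICMP Time Exceeded reply, public or RFC1918 alike, and routers that don't decrement the TTL never generate one, so they never appear.

Code: [Select]
#!/usr/bin/env python3
# Minimal TTL-probe sketch (assumes scapy is installed and it is run as root).
# Each probe goes out with an increasing TTL; the router that decrements the
# TTL to zero replies with ICMP Time Exceeded, and its source address is what
# traceroute displays - whether it is public or private.
from scapy.all import IP, ICMP, sr1

TARGET = "5.180.211.133"  # example destination taken from the traceroutes in this thread

for ttl in range(1, 21):
    reply = sr1(IP(dst=TARGET, ttl=ttl) / ICMP(), timeout=2, verbose=0)
    if reply is None:
        print(f"{ttl:2}  *  (no reply)")
    elif reply.type == 11:        # ICMP Time Exceeded from an intermediate hop
        print(f"{ttl:2}  {reply.src}")
    else:                         # echo reply (or anything else) from the destination
        print(f"{ttl:2}  {reply.src}  (destination reached)")
        break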
« Last Edit: November 04, 2022, 11:36:36 AM by Alex Atkin UK »
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

craigski

  • Reg Member
  • ***
  • Posts: 294
Re: 900Mbps+ Single thread throughput testing
« Reply #18 on: November 04, 2022, 12:36:06 PM »

Quote from: Alex Atkin UK on November 04, 2022, 11:31:13 AM
Yeah, it's confusing, but routers on the Internet don't need to have a public IP address; they just need to know where to send the packet next to reach the destination.

Yes, confusing, as there is no reverse lookup for a private IP. How do you know who owns that private IP, where it is located, etc.?

E.g. from one of the traceroutes above:

Code: [Select]
  5     9 ms     9 ms     9 ms  10.1.10.77
  6     9 ms    41 ms    12 ms  10.106.66.255
  7     *        *        *     Request timed out.
  8     *        *        *     Request timed out.
  9     *        *        *     Request timed out.
 10   110 ms   109 ms   110 ms  10.1.10.19
 11   100 ms   100 ms    99 ms  10.1.10.80
 12   110 ms   131 ms   110 ms  185.245.80.0

The above tells me that starting at 10.1.10.19 there may be some congestion, but where/what is 10.1.10.19?

Wouldn't it be better to test against servers whose network path goes over known, registered networks, rather than via an 'unknown' network?
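To illustrate why those hops are so opaque, here is a small Python sketch (the hop list is copied from the traceroute above) that flags RFC1918 addresses and attempts a reverse DNS lookup for each one; for the private hops there is no global PTR or whois data, so nothing tells you whose network they sit in.

Code: [Select]
#!/usr/bin/env python3
# Flags private (RFC1918) hops and tries a reverse DNS lookup for each one.
# Private addresses have no global PTR/whois data, so there is no way to tell
# from outside which network or location they belong to.
import ipaddress
import socket

# Hop addresses copied from the traceroute quoted above.
hops = ["10.1.10.77", "10.106.66.255", "10.1.10.19", "10.1.10.80", "185.245.80.0"]

for hop in hops:
    addr = ipaddress.ip_address(hop)
    try:
        name = socket.gethostbyaddr(hop)[0]
    except OSError:
        name = "(no PTR record)"
    kind = "private" if addr.is_private else "public"
    print(f"{hop:15}  {kind:7}  {name}")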


Logged

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5289
    • Thinkbroadband Quality Monitors
Re: 900Mbps+ Single thread throughput testing
« Reply #19 on: November 04, 2022, 12:46:04 PM »

Quote from: craigski on November 04, 2022, 12:36:06 PM
Yes, confusing, as there is no reverse lookup for a private IP. How do you know who owns that private IP, where it is located, etc.?

You don't, but likewise you have no idea how many routers you are going over that are invisible because they don't decrement the TTL, or where the ones not responding to ping are located.

Actually, one thing I don't understand is how we get a ping response in a traceroute from a router on a private network address?
« Last Edit: November 04, 2022, 03:01:29 PM by Alex Atkin UK »
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7411
  • VM Gig1 - AAISP CF
Re: 900Mbps+ Single thread throughput testing
« Reply #20 on: November 04, 2022, 01:00:59 PM »

Iperf seems a bit weird, e.g. using it on Windows the cwnd won't go above 256k and throughput is very low as a result, but then grabbing a file off FTP on the same server came down at 800Mbit with a much larger cwnd. I think it might be struggling with modern network stacks.
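A 256k window cap would explain the low numbers on its own: a single TCP stream can't exceed roughly window divided by round-trip time. A rough worked example (the RTTs are just illustrative figures in the same ballpark as the traceroutes in this thread):

Code: [Select]
#!/usr/bin/env python3
# Rough bandwidth-delay-product estimate: a single TCP stream tops out at
# about window / RTT, so a cwnd stuck at 256 KiB caps throughput hard on
# anything but a very short path.  The RTT figures are only illustrative.
def max_rate_mbit(window_bytes: int, rtt_ms: float) -> float:
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

for rtt in (5, 10, 100):
    print(f"256 KiB window @ {rtt:3} ms RTT -> ~{max_rate_mbit(256 * 1024, rtt):6.0f} Mbit/s")
    print(f"  4 MiB window @ {rtt:3} ms RTT -> ~{max_rate_mbit(4 * 1024 * 1024, rtt):6.0f} Mbit/s")

At around 10 ms that works out to roughly 210 Mbit/s for a 256 KiB window, which is in the same region as the sub-200Mbps Windows results reported elsewhere in this thread.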
Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: 900Mbps+ Single thread throughput testing
« Reply #21 on: November 04, 2022, 01:02:24 PM »

Quote from: Chrysalis on November 04, 2022, 01:00:59 PM
Iperf seems a bit weird, e.g. using it on Windows the cwnd won't go above 256k and throughput is very low as a result, but then grabbing a file off FTP on the same server came down at 800Mbit with a much larger cwnd. I think it might be struggling with modern network stacks.

Iperf on Windows is, I think, compiled using Cygwin; perhaps that's an issue?
Logged

Ixel

  • Kitizen
  • ****
  • Posts: 1282
Re: 900Mbps+ Single thread throughput testing
« Reply #22 on: November 04, 2022, 07:19:27 PM »

I don't know if anyone here is still suffering some kind of issue (congestion?), but for comparison I did a traceroute about 10 minutes ago as well. It's fine from here to Clouvider London.

Code: [Select]
Tracing route to lon.speedtest.clouvider.net [5.180.211.133]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  home.router [x.x.x.x]
  2     4 ms     4 ms     4 ms  mythicbeasts.router [x.x.x.x]
  3     4 ms     4 ms     4 ms  lo.router-sov-aggr-b.mythic-beasts.com [93.93.133.13]
  4     5 ms     5 ms     4 ms  172.16.3.4
  5     4 ms     4 ms     4 ms  172.16.2.0
  6     4 ms     5 ms     4 ms  lo.router-sov-a.mythic-beasts.com [93.93.133.0]
  7     5 ms     5 ms     5 ms  linx-lon1.eq-ld8.peering.clouvider.net [195.66.225.184]
  8    21 ms    40 ms    47 ms  no-ptr.local [10.1.10.148]
  9     5 ms     5 ms     5 ms  no-ptr.local [10.1.10.69]
 10     6 ms     5 ms     6 ms  no-ptr.local [10.106.66.255]
 11     *        *        *     Request timed out.
 12     *        *        *     Request timed out.
 13     *        *        *     Request timed out.
 14     7 ms     5 ms    11 ms  no-ptr.local [10.1.10.19]
 15     6 ms     6 ms     6 ms  no-ptr.local [10.1.10.80]
 16     5 ms     5 ms     5 ms  185.245.80.0
 17     *        *        *     Request timed out.
 18     6 ms     4 ms     5 ms  5.180.211.133

Recent result from my automated route optimiser:
Code: [Select]
IP Address:       5.180.211.133
Preferred Route:  Not Set

Upstream Name           Packets Recvd   Avg Latency (ms)        Jitter (ms)
UK Dedicated Servers    6               6                       0
Misaka Networks         6               6                       0
Mythic Beasts           6               5                       0
The Constant Company    6               5                       0

Last Queried: 2022-11-04 19:10:58

Also from my Lightning Fibre IP address:
Code: [Select]
#  ADDRESS         LOSS  SENT  LAST     AVG  BEST  WORST  STD-DEV  STATUS                       
 1                  100%     4  timeout                                                           
 2                  100%     3  timeout                                                           
 3  194.24.162.40   0%       3  4.3ms    4.1  3.9   4.3    0.2                                   
 4  194.24.162.1    0%       3  4.1ms    3.9  3.4   4.3    0.4                                   
 5  195.66.225.184  0%       3  4.7ms    4.7  4.6   4.8    0.1                                   
 6  10.1.10.148     0%       3  4.6ms    4.6  4.5   4.6    0        <MPLS:L=704,E=0 L=21,E=0,T=1>
 7  10.1.10.69      0%       3  3.9ms    4.9  3.9   6.1    0.9      <MPLS:L=1278,E=0 L=21,E=0,T=2>
 8  10.106.66.255   0%       3  4ms      4.4  4     4.6    0.3                                   
 9                  100%     3  timeout                                                           
10                  100%     3  timeout                                                           
11                  100%     3  timeout                                                           
12  10.1.10.19      0%       3  4.9ms    5.2  4.9   5.7    0.4      <MPLS:L=263,E=0 L=20,E=0,T=1>
13  10.1.10.80      0%       3  5ms      5    4.9   5.1    0.1      <MPLS:L=19,E=0 L=20,E=0,T=2> 
14  185.245.80.0    0%       3  4.6ms    4.5  3.9   4.9    0.4                                   
15                  100%     3  timeout                                                           
16  5.180.211.133   0%       3  4.7ms    4.6  4.5   4.7    0.1
« Last Edit: November 04, 2022, 07:23:42 PM by Ixel »
Logged

XGS_Is_On

  • Reg Member
  • ***
  • Posts: 479
Re: 900Mbps+ Single thread throughput testing
« Reply #23 on: November 05, 2022, 01:48:05 AM »

UniFi Dream Machine kit doesn't seem well suited to routing. It's great for other shiny functionality, but below par compared with a Mikrotik built on the same quad-core A57 ARM CPU.

Mikrotik's CCR2004-1G-12S+2XS uses an Amazon Annapurna Labs Alpine v2 CPU with 4x 64-bit ARMv8-A Cortex-A57 cores running at 1.7 GHz.

The UDM SE uses 'Quad-Core ARM® Cortex®-A57 at 1.7 GHz'. A bit more digging and it's the same Annapurna AL32400 found in the Mikrotik.

The Mikrotik can push over 3.5 Gbit/s in each direction on each core simultaneously. I've done so with one, running 4 iperf threads and getting 15 Gbit/s throughput, i.e. 3.75 Gbit/s per core. Each thread ran on a single core.

No idea why the UDM SE falls so far short, but from the tests I've seen it's way slower both single-thread and multi-thread. The UDM Pro was abysmal as a router in my experience too. Mine is sitting in my loft, where it's been living for a year in shame. My great thanks to Ubiquiti for making me move to Mikrotik: I've not regretted it.
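For anyone wanting to reproduce that sort of multi-core test, here is a hedged sketch of one way to do it (assumptions on my part: four iperf3 server instances already listening on ports 5201-5204 at the far end, an iperf3 build with the -A affinity option, and a placeholder server address):

Code: [Select]
#!/usr/bin/env python3
# Launches four iperf3 clients in parallel, each against its own server port
# and pinned to its own CPU core, then prints the summary lines of each run.
# SERVER and the port range are placeholders - adjust for your own setup.
import subprocess

SERVER = "192.0.2.1"   # placeholder address, not a real test server
BASE_PORT = 5201       # assumes iperf3 servers listening on 5201-5204

procs = []
for core in range(4):
    cmd = ["iperf3", "-c", SERVER, "-p", str(BASE_PORT + core),
           "-A", str(core),   # pin this client process to one CPU core
           "-t", "30"]
    procs.append(subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True))

for core, proc in enumerate(procs):
    out, _ = proc.communicate()
    summary = [line for line in out.splitlines() if "sender" in line or "receiver" in line]
    print(f"core {core}:")
    print("\n".join(summary))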
Logged
YouFibre You8000 customer: symmetrical 8 Gbps.

Yes, more money than sense. Story of my life.

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5289
    • Thinkbroadband Quality Monitors
Re: 900Mbps+ Single thread throughput testing
« Reply #24 on: November 05, 2022, 08:06:07 AM »

I've been of the opinion for a while that Ubiquiti aren't great with software, given their reputation for launching devices full of bugs and taking a few years for the firmware to stabilise.

Of course, the simple answer could be that they aren't letting you properly turn off IPS and DPI, or are configuring them poorly internally, causing more overhead than necessary.

Also, it seems their management app is still Java-based, which is going to waste a ton of resources (as posted on their forum):
Code: [Select]
6891 1778 902   S  4284m 107% 20% /usr/bin/java <- This is the Java process in an IDLE UDM.
I've also seen claims of them using much cheaper board designs than their competitors, though I'm not sure if/how that would reduce performance; it would just potentially impact reliability.
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: 900Mbps+ Single thread throughput testing
« Reply #25 on: November 05, 2022, 12:45:52 PM »

Quote from: XGS_Is_On on November 05, 2022, 01:48:05 AM
UniFi Dream Machine kit doesn't seem well suited to routing. It's great for other shiny functionality, but below par compared with a Mikrotik built on the same quad-core A57 ARM CPU.

Mikrotik's CCR2004-1G-12S+2XS uses an Amazon Annapurna Labs Alpine v2 CPU with 4x 64-bit ARMv8-A Cortex-A57 cores running at 1.7 GHz.

The UDM SE uses 'Quad-Core ARM® Cortex®-A57 at 1.7 GHz'. A bit more digging and it's the same Annapurna AL32400 found in the Mikrotik.

The Mikrotik can push over 3.5 Gbit/s in each direction on each core simultaneously. I've done so with one, running 4 iperf threads and getting 15 Gbit/s throughput, i.e. 3.75 Gbit/s per core. Each thread ran on a single core.

No idea why the UDM SE falls so far short, but from the tests I've seen it's way slower both single-thread and multi-thread. The UDM Pro was abysmal as a router in my experience too. Mine is sitting in my loft, where it's been living for a year in shame. My great thanks to Ubiquiti for making me move to Mikrotik: I've not regretted it.

Maybe they're just trying to do too much, with too much SW cruft running all the time. Just idling, there is an average of 25% of each core gone, and I think things like traffic identification add some overhead, as I believe they all run Suricata. But even with that disabled I seem to struggle to get much through it. I don't think the PPPoE overhead helps; I've not tried a DHCP-only setup.

It's a shame really, as the hardware makes some nice choices; the UDM Pro SE is quiet (if you don't put a disk in it), outwardly has a good-spec CPU, and has good connectivity with 2x 10G SFP into the AL32400, an 8-port PoE gigabit switch and a single 2.5G copper port (albeit the latter is on a Realtek PCIe device). It's really, for me, a bit of a sweet-spot device for a nice home setup, and the management UI is nice.

I wonder how the Mikrotik devices fare with PPPoE added into the mix on a WAN interface? Maybe they're better at allowing you to allocate things to particular cores.

I'm not sure whether any of these ARM devices are really powerful enough if you end up doing much in software (as opposed to offloaded). As a noddy test I just ran iperf3 as a server on the Dream Machine and connected to it over the localhost interface from itself. The max throughput was 12Gb/sec. Have you tried that on your Mikrotik, out of interest?
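For reference, a minimal sketch of that loopback test (assuming iperf3 is installed on the box); since it never touches a NIC, it only measures the host's TCP stack and CPU, which is what makes it a useful ceiling figure for comparison:

Code: [Select]
#!/usr/bin/env python3
# Starts a local iperf3 server, runs a client against 127.0.0.1 for ten
# seconds, prints the result, then lets the server exit.  Pure loopback, so
# it exercises the host's TCP stack and CPU rather than any real interface.
import subprocess
import time

server = subprocess.Popen(["iperf3", "-s", "-1"],   # -1: exit after one test
                          stdout=subprocess.DEVNULL)
time.sleep(1)                                       # give the server a moment to start listening

result = subprocess.run(["iperf3", "-c", "127.0.0.1", "-t", "10"],
                        capture_output=True, text=True)
print(result.stdout)

server.wait(timeout=5)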

By comparison, a not particularly flash laptop (Lenovo E14, AMD Ryzen 4500U) does around 45Gb/sec on the same test in Ubuntu. Passmark for that CPU is 11000 all / 2400 single core. A decrepit Mac Mini 2014 1.4GHz i5 (2400 all / 1400 single) is able to do 18Gbps in OSX. None of these machines are useful to me for routing duties though, owing to a lack of ports.

For now I've just bought a Lenovo M720q to experiment with; it's got a pretty good CPU benchmark at 7500 all / 1900 single (much better than the Celerons in most of the Chinese Amazon router-computer boxes that seem so common now), it's a tiny and nicely built unit, and it has a PCIe slot that will take a quad Intel GbE LAN card I've picked up. It will be an interesting data point to see how it behaves in some testing. I don't need to route more than 1Gbps, but I do need it to happen in a timely fashion to facilitate fast single-thread speeds.

« Last Edit: November 05, 2022, 12:48:40 PM by bogof »
Logged

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5289
    • Thinkbroadband Quality Monitors
Re: 900Mbps+ Single thread throughput testing
« Reply #26 on: November 05, 2022, 01:58:38 PM »

Yeah, but the power consumption on those N5105 boxes, and how dirt-cheap they are, makes them great for up to 2.5Gbit. Not sure where they top out on PPP though.

Having a full PC box that can be powered over PoE is really nice.
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: 900Mbps+ Single thread throughput testing
« Reply #27 on: November 05, 2022, 04:56:19 PM »

Quote from: Alex Atkin UK on November 05, 2022, 08:06:07 AM
Yeah, but the power consumption on those N5105 boxes, and how dirt-cheap they are, makes them great for up to 2.5Gbit. Not sure where they top out on PPP though.

Having a full PC box that can be powered over PoE is really nice.

They do look pretty neat. I see the CPU isn't that bad: 4000 all / 1400 single core. I wonder where the limits are with PPPoE; I understand they're different between BSD and Linux. I've always used OpenWrt in the past when rolling my own routers, but I am intrigued by OPNsense etc.
 
Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: 900Mbps+ Single thread throughput testing
« Reply #28 on: November 05, 2022, 05:07:07 PM »

@skyeci ran a Windows box and a Linux box against my AWS test servers. The Windows iperf3 did top out just below 200Mbps again, but Linux was line rate for the London CUBIC server, marginally below for the London BBR one, and 632/772Mbps for Frankfurt CUBIC/BBR respectively. Nice results from Zen FTTP BTW; it obviously can work great :) Shame it didn't work out for me.
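For anyone repeating the comparison, a tiny Linux-only sketch to check which TCP congestion control algorithm is in use; the setting that matters for a download test is on whichever end is sending the data (the test servers above, in this case):

Code: [Select]
#!/usr/bin/env python3
# Reports the current and available TCP congestion control algorithms on a
# Linux host (e.g. cubic or bbr).  For download tests the sender's setting is
# the one that matters.  Linux only - these proc files don't exist elsewhere.
from pathlib import Path

base = Path("/proc/sys/net/ipv4")
print("current:  ", (base / "tcp_congestion_control").read_text().strip())
print("available:", (base / "tcp_available_congestion_control").read_text().strip())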
Logged

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7411
  • VM Gig1 - AAISP CF
Re: 900Mbps+ Single thread throughput testing
« Reply #29 on: November 05, 2022, 05:36:58 PM »

Ixel, it's still over 100ms here; they seem to have transit issues.

For the benefit of this thread, bogof has now managed to get decent speeds on a download from one of my Linux Hetzner servers. Likewise, I got my performance over 800Mbps on it as well.
Logged