Kitz ADSL Broadband Information

Author Topic: 900Mbps+ Single thread throughput testing  (Read 7550 times)

bogof

  • Reg Member
  • ***
  • Posts: 436
900Mbps+ Single thread throughput testing
« on: November 02, 2022, 10:09:15 PM »

I'm interested to see what factors might be limiting single-thread performance across t'interweb.  My 900/115 BTW FTTP connection from AAISP performs great, but I'd like to explore the limits.  I've noticed, for example, performance dropping off the further the server is from home, with transfers from Hetzner's test servers being a bit bursty and prone to backing off.  From reading around the subject a bit (thanks @Chrysalis), this is perhaps expected for loss-based algorithms.

For some testing I've set up 4 x iperf3 servers at AWS: 2 in London and 2 in Frankfurt.  Of the pair at each location, one runs cubic and one runs BBR congestion control.
At the time of writing, downloads from the cubic speed test servers are around 757 Mbps (London) and 464 Mbps (Frankfurt).
The BBR speed test servers are around 849 Mbps (London) and 835 Mbps (Frankfurt).
So there are significant benefits from BBR as you get further away.

I wonder how other ISPs and connections do in this respect against these 4 servers.  Probably only really interested to look at other connections in the ~gigabit+ range.  If you have such a connection, PM me the IP you'll test from and I can give you the server details to try and report back.
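For anyone wanting to reproduce the setup, switching a Linux server to BBR is roughly this (a sketch, assuming a kernel with the tcp_bbr module available; the iperf3 invocations are standard ones, with -R so the server under test is the sender):

```shell
# On the server: enable BBR (Linux 4.9+); fq pacing is the recommended qdisc for BBR
sudo modprobe tcp_bbr
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
sysctl net.ipv4.tcp_congestion_control   # verify it took

# Start the server side
iperf3 -s

# On the client: a single stream (-P 1 is the default), reverse mode so the
# server sends and its congestion control algorithm is the one being exercised
iperf3 -c <server-ip> -R -t 30
```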
« Last Edit: November 02, 2022, 10:21:20 PM by bogof »
Logged

EC300

  • Member
  • **
  • Posts: 47
Re: 900Mbps+ Single thread throughput testing
« Reply #1 on: November 03, 2022, 08:23:47 AM »

I think the further the data travels, the more the speed usually drops, for all sorts of reasons. Things are more likely to get buffered over a longer distance and become a bit more bursty, and I suspect it is more expensive for an ISP to reserve a large pipe for itself for international traffic. It could be that a lot of connectivity is still allocated in 1 Gig lots to international destinations, so with 1000 Meg connections we are fighting with others and will notice this more.  I also thought TCP slowed with increasing latency, or at least needed some tweaking to receive window sizes and the like.  Just for fun, try a speed test to Sydney, Australia.

AAISP peering arrangements are here: https://www.peeringdb.com/net/2077

IDNet (https://www.peeringdb.com/net/1108), who boast of their international peering arrangements, peer into Europe, but only at 1 Gig, and I found the same with them using speedtest servers abroad: they were half the speed or less, presumably because of that 1 Gig of capacity.

« Last Edit: November 03, 2022, 08:35:37 AM by EC300 »
Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: 900Mbps+ Single thread throughput testing
« Reply #2 on: November 03, 2022, 09:10:40 AM »

Might be interesting to set up some AWS instances further away than Frankfurt, too, for experimentation purposes.  It'd be interesting to see how much difference BBR makes on links that far away.

It makes you wonder whether you're better off with an ISP peering in the UK with fat interconnects to the peering sites, sharing foreign connectivity once you get into the UK peering networks, or one peering into European locations directly but with only 1G links.  I wonder which is better if you're trying to run fat single threads.

If you look at someone with scale like TalkTalk they're peered into Frankfurt with 100G, but spread across a squillion users..

PeeringDB is pretty eye-opening.  ROBLOX have 400G into Linx - wow.
Logged

EC300

  • Member
  • **
  • Posts: 47
Re: 900Mbps+ Single thread throughput testing
« Reply #3 on: November 03, 2022, 10:00:33 AM »

You could ask AAISP what sort of arrangements they have and speeds you might expect to those destinations, they like techy questions.

Yes, PeeringDB is quite a useful tool to see what capacity ISPs have, certainly for the smaller ones that are real ISPs with their own connectivity, as opposed to those just reselling.  Cerberus, for example, have several 1G links, which seems a bit slow to me given they sell 1G fibre products, but it could be that their faster customers are routed over the faster links and slower customers over their slower peering links.
Logged

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7414
  • VM Gig1 - AAISP CF
Re: 900Mbps+ Single thread throughput testing
« Reply #4 on: November 03, 2022, 10:46:29 AM »

BBR is better in my opinion because it can detect congestion on the way, rather than after it's already happening, which means it's less likely to over-saturate a connection, as happens with packet-loss-based algorithms. Because it relies less on packet loss it can also overcome certain types of problem, as seen on my AAISP connection where BBR seemed to bypass the issue there.  Cubic is loss-based, so if it sees packet loss it will back off, and can back off a huge amount. The problem, of course, is that it is reacting to problems rather than foreseeing them, so it is prone to over-saturating lines; on narrow pipes like DSL it can act like a DoS and overwhelm the connection, breaking streaming etc. while downloading a game.  If there is continuous loss, cubic throughput will probably go very low.
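As an aside, on Linux the algorithm can be picked per socket, which makes cubic-vs-BBR comparisons easy to script without changing the system default. A minimal sketch (Linux-only: `socket.TCP_CONGESTION` is a Linux socket option, and picking `bbr` would raise `OSError` if the module isn't loaded):

```python
import socket

def make_socket(algo: str = "cubic") -> socket.socket:
    """TCP socket pinned to a specific congestion control algorithm (Linux)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo.encode())
    return s

s = make_socket("cubic")
# Read back what the kernel actually applied (returned as a null-padded buffer)
in_use = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print(in_use.rstrip(b"\x00").decode())
s.close()
```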
« Last Edit: November 03, 2022, 10:50:26 AM by Chrysalis »
Logged

XGS_Is_On

  • Reg Member
  • ***
  • Posts: 479
Re: 900Mbps+ Single thread throughput testing
« Reply #5 on: November 03, 2022, 02:07:09 PM »

Quote from: bogof on November 02, 2022, 10:09:15 PM
I'm interested to see what factors might be limiting single thread performance across t'interweb. [...] I suppose from reading around the subject a bit that this is perhaps expected for loss based algorithms.

https://en.wikipedia.org/wiki/Bandwidth-delay_product is the big one.
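To put numbers on that: a single TCP stream can only carry bandwidth × RTT worth of in-flight data, so the window (and buffer) needed grows linearly with distance. A quick illustrative calculation (the RTT figures are rough assumptions, not measurements):

```python
def bdp_bytes(mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return (mbps * 1e6 / 8) * (rtt_ms / 1e3)

# A 900 Mbps line at assumed round-trip times
for place, rtt in [("London", 5), ("Frankfurt", 20), ("Sydney", 280)]:
    print(f"{place:9s} {rtt:3d} ms -> {bdp_bytes(900, rtt) / 1e6:5.2f} MB in flight")
```

At 280 ms that's over 31 MB, well past the default receive-buffer ceilings on most stock installs, which is one reason long-haul single-thread tests fall over without tuning.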
Logged
YouFibre You8000 customer: symmetrical 8 Gbps.

Yes, more money than sense. Story of my life.

XGS_Is_On

  • Reg Member
  • ***
  • Posts: 479
Re: 900Mbps+ Single thread throughput testing
« Reply #6 on: November 03, 2022, 02:15:58 PM »

To add... I'd also recommend checking out the following to see who an ISP peers with:
- https://bgp.tools/ (I prefer this one as some data is real-time)
- https://bgp.he.net/ (Can be delayed by 24-48 hours)

I wouldn't pay too much attention to this. It only covers public peering sessions, and it's very situational per ISP how much of their edge is public peering, how much is private peering (not shown on those sites), how much is handled by on-net CDNs, and how much is IP transit (also not shown).

Hop count is irrelevant, so not peering directly isn't a big deal, though it can occasionally be problematic. Capacity and latency are much better guides than which AS is connected to which, and given transit is charged by the megabit per second per month, it's in a transit provider's interest to ensure there's no congestion either on their network or on their connections to ISPs. ISPs congesting their transit are breaking their agreements and will be admonished to upgrade by the transit provider.
Logged
YouFibre You8000 customer: symmetrical 8 Gbps.

Yes, more money than sense. Story of my life.

Ixel

  • Kitizen
  • ****
  • Posts: 1282
Re: 900Mbps+ Single thread throughput testing
« Reply #7 on: November 03, 2022, 02:33:19 PM »

Quote from: XGS_Is_On on November 03, 2022, 02:15:58 PM
I wouldn't pay too much attention to this. This only covers public peering sessions [...] Capacity and latency are much better guides than which AS is connected to which.

Nevertheless, it's not irrelevant or entirely useless information; I'd consider it potentially supplemental. PeeringDB does list capacities of public peering exchange points, similar to the information shown under the IX tab on bgp.tools for a specific ASN. PeeringDB has the advantage of also showing a list of private peering facilities, notes and, if provided on the profile, estimated traffic levels and traffic ratio.
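For what it's worth, PeeringDB exposes all of this over a public JSON API too, so the capacity check can be scripted rather than eyeballed. A sketch (the `netixlan` endpoint and its Mbps `speed` field are from PeeringDB's API docs; the sample records below are made up for illustration):

```python
import json
from urllib.request import urlopen

def fetch_netixlan(asn: int) -> list[dict]:
    """Live query of an ASN's public exchange ports (needs network access)."""
    with urlopen(f"https://www.peeringdb.com/api/netixlan?asn={asn}") as resp:
        return json.load(resp)["data"]

def total_ix_speed_mbps(records: list[dict]) -> int:
    """Sum advertised public-exchange port speeds; PeeringDB 'speed' is in Mbps."""
    return sum(r.get("speed", 0) for r in records)

# Illustrative records in the API's shape (not real data)
sample = [{"name": "LINX LON1", "speed": 10000},
          {"name": "LONAP", "speed": 10000}]
print(total_ix_speed_mbps(sample), "Mbps of public peering")
```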
« Last Edit: November 03, 2022, 02:40:19 PM by Ixel »
Logged

XGS_Is_On

  • Reg Member
  • ***
  • Posts: 479
Re: 900Mbps+ Single thread throughput testing
« Reply #8 on: November 03, 2022, 03:34:25 PM »

Someone like Virgin Media, AS 5089, really breaks this: lots of private peering not mentioned, and mostly IP transit, as they're part of a tier 1.

It certainly has interesting information; I'm just trying to ensure people don't consider it authoritative as far as an ISP's connectivity goes. It won't cover the majority of any ISP's capacity.
Logged
YouFibre You8000 customer: symmetrical 8 Gbps.

Yes, more money than sense. Story of my life.

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7414
  • VM Gig1 - AAISP CF
Re: 900Mbps+ Single thread throughput testing
« Reply #9 on: November 03, 2022, 06:14:31 PM »

The slow peak-time speeds reported yesterday for Clouvider: it's 110ms latency right now from AAISP.

Code:
Tracing route to lon.speedtest.clouvider.net [5.180.211.133]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  192.168.1.1
  2     9 ms     9 ms     8 ms  218.53.155.90.in-addr.arpa [90.155.53.218]
  3     8 ms     8 ms     8 ms  k-aimless.thn.aa.net.uk [90.155.53.101]
  4     9 ms     9 ms     9 ms  linx-lon1.thn2.peering.clouvider.net [195.66.227.14]
  5     9 ms     9 ms     9 ms  10.1.10.77
  6     9 ms    41 ms    12 ms  10.106.66.255
  7     *        *        *     Request timed out.
  8     *        *        *     Request timed out.
  9     *        *        *     Request timed out.
 10   110 ms   109 ms   110 ms  10.1.10.19
 11   100 ms   100 ms    99 ms  10.1.10.80
 12   110 ms   131 ms   110 ms  185.245.80.0
 13  2528 ms  2118 ms  2154 ms  185.245.80.1
 14   103 ms   102 ms    98 ms  5.180.211.133
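The jump at hop 10 stands out by eye, but when comparing lots of traces it's handy to flag it automatically. A rough parser for tracert-style output like the above (the format assumptions are naive: it takes the median of the probe times on each hop line and skips hops where every probe timed out):

```python
import re

def find_jumps(trace: str, threshold_ms: float = 50.0) -> list[tuple]:
    """Parse tracert-style output; return (hop, prev_rtt, rtt) where RTT jumps."""
    jumps, prev = [], None
    for line in trace.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*)", line)
        if not m:
            continue  # header lines etc.
        hop = int(m.group(1))
        # Probe RTTs; "<1 ms" is read as 1, timed-out probes ("*") yield nothing
        rtts = [float(x) for x in re.findall(r"<?(\d+) ms", m.group(2))]
        if not rtts:
            continue  # hop where all probes timed out
        rtt = sorted(rtts)[len(rtts) // 2]  # median of the probes
        if prev is not None and rtt - prev > threshold_ms:
            jumps.append((hop, prev, rtt))
        prev = rtt
    return jumps
```

Run over the trace above it would flag hop 10 (roughly 12 ms to 110 ms) and the hop 13 spike.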
Logged

EC300

  • Member
  • **
  • Posts: 47
Re: 900Mbps+ Single thread throughput testing
« Reply #10 on: November 03, 2022, 06:42:16 PM »

Quote from: Chrysalis on November 03, 2022, 06:14:31 PM
The slow peak time speeds reported yesterday for clouvider, its 110ms latency right now from aaisp. [traceroute snipped]

Seeing the same on my AAISP line:

Code:
Pinging 5.180.211.133 with 32 bytes of data:
Reply from 5.180.211.133: bytes=32 time=99ms TTL=56
Reply from 5.180.211.133: bytes=32 time=104ms TTL=56
Reply from 5.180.211.133: bytes=32 time=108ms TTL=56
Reply from 5.180.211.133: bytes=32 time=109ms TTL=56

Ping statistics for 5.180.211.133:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 99ms, Maximum = 109ms, Average = 105ms

Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: 900Mbps+ Single thread throughput testing
« Reply #11 on: November 04, 2022, 06:28:27 AM »

Quote from: EC300 on November 03, 2022, 10:00:33 AM
You could ask AAISP what sort of arrangements they have and speeds you might expect to those destinations, they like techy questions.
They do have a direct link with Amazon, as they told me this (and all the tests I was running to AWS run over that link), though I don't know how much capacity it has.
Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: 900Mbps+ Single thread throughput testing
« Reply #12 on: November 04, 2022, 07:34:30 AM »

So, some interesting results are in.
@Chrysalis's results against my servers were not overly dissimilar to mine, though in their case BBR didn't seem to improve matters.
AAISP got a staffer to run tests on their line, and one on BTW FTTP did manage to hit single-thread line rate to the two London AWS servers, with the Frankfurt ones not far off.

So this gave me a thread to pull on as that is a similar connection but better performance.

The testing I had been doing previously was predominantly on the router (Unifi Dream Machine SE), logged in via SSH. There, AAISP's server would hit line rate, but speeds to the AWS servers were reduced, improved by BBR.

I installed Ubuntu on the gigabit laptop I have here so I could use it as a PPPoE client.
From the Ubuntu laptop, routing via the Dream Machine as a DHCP client, even the single thread to the AAISP speedtest server was down to about 600Mbps (though oddly, the single-thread test to the AWS London non-BBR server was notably faster than this).  Wired clients can, however, achieve line rate in multithreaded tests.

So at this point it looks like the setup is on the edge once the overhead of routing through the Dream Machine is added.

Running PPPoE on the laptop directly, my results are much more like their staffer's: the two London tests and even the Frankfurt BBR server come very close to line rate.  The only anomaly is that the AA speedtest often takes a while (maybe 10-15s) to ramp up to line rate.

I also used the same laptop routed via the AAISP Technicolor router, and it's similar to PPPoE on the laptop, though it seems more prone to the connection-startup ramp effect.

I think what this says to me at the moment is that for gigabit single threads to make it all the way through from client to server, everything has to be just right, and the Dream Machine doesn't really seem to be there, particularly when shovelling data between interfaces.  It can do gigabit speed across multiple connections, but a single connection seems to have some kind of variability preventing full rates.

I'm not sure what that ramping behaviour is, but as I was just running desktop Ubuntu, with GUI etc., I wouldn't like to bet on what is going on there.  It was seen both with PPPoE on the laptop and via the Technicolor, so perhaps it is a function of the laptop / ethernet card; but it wasn't seen when running the iperf3 tests on the Dream Machine router itself (though I did see it with a Hetzner file download).  I have also seen big file downloads from Hetzner to AWS ramp like that over many seconds, so it's not unique to this network or equipment.

It was interesting that testing from my laptop via the Dream Machine router was a bit faster to my AWS London cubic iperf3 server than to AAISP's own server; I don't know if that points to any possible tweaks to that server.
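On the ramp-up: 10-15 s is far longer than textbook slow start would explain, for what it's worth. With the window doubling every RTT, reaching line rate should only take a handful of round trips; a toy model (the MSS and initial window are assumptions, and real stacks differ):

```python
import math

def rtts_to_rate(target_mbps: float, rtt_ms: float,
                 mss: int = 1448, initial_cwnd: int = 10) -> int:
    """RTTs of idealised slow start (cwnd doubles per RTT) to reach target rate."""
    target_cwnd = (target_mbps * 1e6 / 8) * (rtt_ms / 1e3) / mss  # in segments
    return max(0, math.ceil(math.log2(target_cwnd / initial_cwnd)))

n = rtts_to_rate(900, 9)  # 900 Mbps at a 9 ms RTT
print(f"{n} RTTs, roughly {n * 9} ms")
```

That comes out well under a second, so a multi-second ramp points at something other than slow start: receive-window autotuning, the application, or pacing somewhere in the path.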

I think what I'm going to do is try to get another router set up and retest with that; it will be a while before that happens.

Been very impressed with AAISP's response to this; they've been very helpful.
« Last Edit: November 04, 2022, 08:05:40 AM by bogof »
Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: 900Mbps+ Single thread throughput testing
« Reply #13 on: November 04, 2022, 08:08:17 AM »

Quote from: EC300 on November 03, 2022, 06:42:16 PM
Seeing the same on my AAISP line: [ping output snipped; 99-109ms to 5.180.211.133]
I'll look again this evening, but I note that  iperf3 tests from Clouvider to an AWS instance also got much slower at peak times, so I don't think it's unique to AAISP.

Edit: no need to wait until this evening; it does look like that London Clouvider server is a bit under the weather or poorly connected:

London server:
Code:
ubuntu@ip-172-31-15-201:~$ iperf3 -R -c lon.speedtest.clouvider.net -p 5209
Connecting to host lon.speedtest.clouvider.net, port 5209
Reverse mode, remote host lon.speedtest.clouvider.net is sending
[  5] local 172.31.15.201 port 45954 connected to 5.180.211.133 port 5209
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  23.9 MBytes   201 Mbits/sec                 
[  5]   1.00-2.00   sec  26.0 MBytes   218 Mbits/sec                 
[  5]   2.00-3.00   sec  26.5 MBytes   222 Mbits/sec                 
[  5]   3.00-4.00   sec  53.0 MBytes   445 Mbits/sec                 
[  5]   4.00-5.00   sec  64.1 MBytes   538 Mbits/sec                 
[  5]   5.00-6.00   sec  42.6 MBytes   357 Mbits/sec                 
[  5]   6.00-7.00   sec  34.3 MBytes   288 Mbits/sec                 
[  5]   7.00-8.00   sec  46.6 MBytes   391 Mbits/sec                 
[  5]   8.00-9.00   sec  35.0 MBytes   294 Mbits/sec                 
[  5]   9.00-10.00  sec  60.1 MBytes   504 Mbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   413 MBytes   347 Mbits/sec    0             sender
[  5]   0.00-10.00  sec   412 MBytes   346 Mbits/sec                  receiver

iperf Done.

VS Manchester:
Code:
ubuntu@ip-172-31-15-201:~$ iperf3 -R -c man.speedtest.clouvider.net -p 5209
Connecting to host man.speedtest.clouvider.net, port 5209
Reverse mode, remote host man.speedtest.clouvider.net is sending
[  5] local 172.31.15.201 port 55526 connected to 103.214.44.130 port 5209
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   239 MBytes  2.01 Gbits/sec                 
[  5]   1.00-2.00   sec   347 MBytes  2.91 Gbits/sec                 
[  5]   2.00-3.00   sec   355 MBytes  2.98 Gbits/sec                 
[  5]   3.00-4.00   sec   355 MBytes  2.98 Gbits/sec                 
[  5]   4.00-5.00   sec   355 MBytes  2.97 Gbits/sec                 
[  5]   5.00-6.00   sec   354 MBytes  2.97 Gbits/sec                 
[  5]   6.00-7.00   sec   354 MBytes  2.97 Gbits/sec                 
[  5]   7.00-8.00   sec   354 MBytes  2.97 Gbits/sec                 
[  5]   8.00-9.00   sec   354 MBytes  2.97 Gbits/sec                 
[  5]   9.00-10.00  sec   354 MBytes  2.97 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.04  sec  3.34 GBytes  2.86 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  3.34 GBytes  2.87 Gbits/sec                  receiver

iperf Done.
« Last Edit: November 04, 2022, 09:15:19 AM by bogof »
Logged

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7414
  • VM Gig1 - AAISP CF
Re: 900Mbps+ Single thread throughput testing
« Reply #14 on: November 04, 2022, 10:08:18 AM »

That higher latency isn't unique to AAISP; the trace I posted showed the jump well after it left AAISP's area of control.  I pinged it today on VM and it was 50ms and jumping around, when not long ago it was 23ms.
Logged