Kitz ADSL Broadband Information

Author Topic: Thinking over options  (Read 5470 times)

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7409
  • VM Gig1 - AAISP CF
Re: Thinking over options
« Reply #30 on: November 01, 2022, 06:39:35 PM »

Well, that endpoint is not something we control; it could be overloaded, so maybe it was when you tested.  Once speeds get as high as they are now, it's harder to provide that consistent throughput.  It's one thing at 70mbit, another entirely at 700.

How is the link I asked you to test on my server?  Does that choke back down as well?
Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: Thinking over options
« Reply #31 on: November 01, 2022, 07:45:07 PM »

That file doesn't really seem big enough to form much of an opinion, but they do look a bit different.
The Hetzner one usually starts off at around 40MB/sec, then after 20s or so might climb up, then drop sharply, then creep up, then drop, etc.
Yours starts a bit slower, eventually getting up to around 40MB/sec with no obvious drop, but by then it's just about to finish downloading :)  (it only takes about 10s to download the 350MB or so).
Logged
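[Aside for readers comparing these figures: transfers in this thread are quoted in MB/sec while line rates are quoted in Mbit/sec, so a trivial conversion helper (not from the thread) makes the comparison explicit:]

```python
def mbytes_to_mbit(mb_per_sec: float) -> float:
    """Convert a download speed in MB/sec (decimal megabytes)
    to the Mbit/sec figures used for line rates."""
    return mb_per_sec * 8.0

# The ~40MB/sec seen from Hetzner is ~320Mbit/sec, i.e. only about a
# third of what a gigabit line can carry.
print(mbytes_to_mbit(40))  # 320.0
```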

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7409
  • VM Gig1 - AAISP CF
Re: Thinking over options
« Reply #32 on: November 02, 2022, 06:25:00 AM »

Maybe email Hetzner to ask about their setup.

I can, I suppose, give you a bigger file, as you seem to want a long test, but bear in mind my server is only on a gigabit port and uses spinning rust for I/O, so test the file at least twice so it's cached the second time.

For reference, I get about 600mbit single threaded off that server now.  Hetzner's is a bit faster, and I assumed they're probably using at least a 10gig port and SSDs.

I just downloaded a GTA5 update over Steam, with the Steam constraints removed in pfSense and no throttle set in the Steam client, and it literally filled the gigabit.  No packet loss during the download, just slightly higher latency.

I think what I want to do is keep my AAISP IPs using their L2TP service and cancel the DSL.  I won't funnel everything through it because of the 200mbit cap, but I can use it for anything where I want to use their network and/or need static IP addressing.
« Last Edit: November 02, 2022, 06:28:01 AM by Chrysalis »
Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: Thinking over options
« Reply #33 on: November 02, 2022, 09:18:15 AM »

Don't trouble yourself.  I think I probably should spend my time more productively than looking at this...  But anyhow...
I'm not sure Hetzner are a great test site from the UK.  I started up a 25Gbit 24-core Amazon AWS instance, and even that often takes 20s or so to ramp up from 5-7MB/s on Hetzner's files before eventually getting stable at 165MB/s.  Probably a bit close to the wire to be using for tests.
A different file from Vodafone seems to regularly transfer at over 400MB/s (http://212.183.159.230/1GB.zip).  That file also seems quite variable for me on AAISP here, often backing off after ramping up.

I note from testing from the AWS instance that Community Fibre appear to have some of the best connectivity (speed testing at almost 25G down! speedtest -s 30690 https://www.speedtest.net/result/c/f95f02ca-dc11-47b6-a8dc-abb9bd0e4041).  Other servers often only have 10G (e.g. Zen) or even 1G (e.g. Voicehost Norwich).
Logged

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7409
  • VM Gig1 - AAISP CF
Re: Thinking over options
« Reply #34 on: November 02, 2022, 09:44:44 AM »

I use Hetzner for a few things, which is why I used them for some tests, and when I was testing my AAISP line I already knew how Hetzner behaved normally from the work I do on my servers and data transfers.

I tested your Vodafone link, and on my line it's actually slower than Hetzner, about 400mbit.  It also ramps up slower.  But to Linode it's very fast, almost 2gbit/sec.  You win some and lose some, and I still consider that very good performance.

To give you some feedback, I tested both the Hetzner test files and my server from a UK Linode instance as well.  The Hetzner test files did the same as you described: an initial slower throughput that suddenly jumps up after about 5 seconds.  However, when I tested a file on my server it ramped up as normal.  Given the feedback you gave me on my server, I consider that reasonable performance from both the AAISP side and my side.

Being honest, I prefer to test using my own servers, as I control the endpoint.  I have always preferred performance from endpoints I control over third party speedtesters; the third party speedtester I respect most is probably TBB, due to its single threaded aspect.
« Last Edit: November 02, 2022, 10:24:26 AM by Chrysalis »
Logged

Ixel

  • Kitizen
  • ****
  • Posts: 1282
Re: Thinking over options
« Reply #35 on: November 02, 2022, 12:57:33 PM »

Hetzner's never been fast for me on my connection.  It usually averages around 25 to 30MB/sec, occasionally going up to perhaps 60MB/sec or so briefly.

The 1GB.zip from the Vodafone URL appears to easily reach and maintain virtually full speed on my connection.

If I suspect that there's a single thread throughput issue, I tend to try a 10GB file at https://as62240.net/speedtest (London, Clouvider).  If there are no issues, it usually consistently maintains virtually full speed on my connection (around 110MB/sec, or nearly 900 megabits/sec).
Logged
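[The method above - pull one big file over a single connection and watch whether the rate holds - can be sketched as follows.  This is a hypothetical helper, not code from the thread; the offline demo drains an in-memory buffer, but any file-like object, such as the response from `urllib.request.urlopen()` on one of the Clouvider test files, would work the same way:]

```python
import io
import time

def read_single_stream(stream, chunk_size=1 << 16):
    """Drain a file-like object on a single thread and return
    (total_bytes, elapsed_seconds); the rate in Mbit/sec is then
    total_bytes * 8 / elapsed / 1e6."""
    total = 0
    start = time.monotonic()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        total += len(chunk)
    return total, max(time.monotonic() - start, 1e-9)

# Offline demo against a 10MB in-memory buffer; swap in
# urllib.request.urlopen("<test file URL>") for a real run.
nbytes, secs = read_single_stream(io.BytesIO(bytes(10 * 1024 * 1024)))
print(nbytes)  # 10485760
```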

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7409
  • VM Gig1 - AAISP CF
Re: Thinking over options
« Reply #36 on: November 02, 2022, 01:04:22 PM »

I suppose we have different definitions of fast; I consider 30MB/sec fast.

That is a nice link for worldwide testing though.  For reference, I got 924mbit on the London file and about 450 on the Frankfurt test, which suggests Hetzner performance is fine.
« Last Edit: November 02, 2022, 01:07:03 PM by Chrysalis »
Logged

Ixel

  • Kitizen
  • ****
  • Posts: 1282
Re: Thinking over options
« Reply #37 on: November 02, 2022, 01:08:50 PM »

What I guess I meant to say is that it's not a particularly reliable indicator of single thread throughput issues if you have a connection that goes 500 megabits or beyond.  Perhaps 'fast' was a poor choice of word on my part, as 30MB/sec is still good.
Logged

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7409
  • VM Gig1 - AAISP CF
Re: Thinking over options
« Reply #38 on: November 02, 2022, 01:16:25 PM »

Oh of course, yeah, no one source alone is really a reliable indicator, and if it's the access part of the connection you're testing, rather than the wider flow management across the internet, then I guess you want to see if it can be filled single threaded.  That UK link is the first time I have got over 900 single threaded to my PC; I hadn't thought that a realistic expectation.  fast.com single threaded is about 700 and TBB single threaded about 800, both of which I expect are low rtt.  lon.speedtest.clouvider.net is 23ms for me.

I suppose, given I already know these kinds of speeds are harder to sustain over longer distances, seeing 400-600 from Germany isn't a problem in my eyes.
« Last Edit: November 02, 2022, 03:05:28 PM by Chrysalis »
Logged
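[The RTT point above matters because a single TCP stream can carry at most one window of data per round trip, so the bandwidth-delay product sets how much must be in flight.  A back-of-envelope calculation, using the 23ms figure quoted above and an assumed 35ms for a more distant server:]

```python
def bdp_bytes(mbit_per_sec: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight
    to sustain a given rate at a given round-trip time."""
    return mbit_per_sec * 1e6 / 8 * rtt_ms / 1e3

# ~940Mbit at 23ms to lon.speedtest.clouvider.net needs ~2.7MB in
# flight; a more distant server (35ms here is an assumed figure)
# needs proportionally more window for the same rate.
for rtt in (23, 35):
    print(rtt, round(bdp_bytes(940, rtt) / 1e6, 2))
```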

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: Thinking over options
« Reply #39 on: November 02, 2022, 02:43:14 PM »

Quote from: Ixel on November 02, 2022, 12:57:33 PM
Hetzner's never been fast for me on my connection.  It usually averages around 25 to 30MB/sec, occasionally going up to perhaps 60MB/sec or so briefly.

The 1GB.zip from the Vodafone URL appears to easily reach and maintain virtually full speed on my connection.

If I suspect that there's a single thread throughput issue, I tend to try a 10GB file at https://as62240.net/speedtest (London, Clouvider).  If there are no issues, it usually consistently maintains virtually full speed on my connection (around 110MB/sec, or nearly 900 megabits/sec).
Great resource for files, thanks!

AAISP did set up an iperf3 instance on their network and that works fine at line rate single thread, which is great.
Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: Thinking over options
« Reply #40 on: November 02, 2022, 03:00:23 PM »

So I'd be interested to know what folk typically see from "iperf3 -R -c lon.speedtest.clouvider.net -p 5209" (the server listens on ports 5200-5209; I picked 5209).

I get a bit over half line rate to there, 550-600Mbps.

From AWS I get about 4000Mbps to the same server, so it's clearly capable.
Logged
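[For anyone collecting replies to the request above, the receiver-side summary figure can be pulled out of iperf3's plain-text output with something like this.  A sketch only: it matches the summary line format shown in this thread, and `iperf3 -J` (JSON output) is the more robust option:]

```python
import re

def receiver_mbps(iperf_output: str):
    """Extract the receiver-side summary bitrate (Mbits/sec) from
    plain-text iperf3 output, or None if no summary line is found."""
    m = re.search(r'([\d.]+)\s+Mbits/sec\s+receiver', iperf_output)
    return float(m.group(1)) if m else None

sample = "[  4]   0.00-10.00  sec   221 MBytes   185 Mbits/sec                  receiver"
print(receiver_mbps(sample))  # 185.0
```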

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7409
  • VM Gig1 - AAISP CF
Re: Thinking over options
« Reply #41 on: November 02, 2022, 03:08:39 PM »

Tested for you: locally on pfSense, just shy of 600.  You're getting great performance. :)

Code: [Select]
# iperf3 -R -c lon.speedtest.clouvider.net -p 5209
Connecting to host lon.speedtest.clouvider.net, port 5209
Reverse mode, remote host lon.speedtest.clouvider.net is sending
[  5] local <ip> port 58364 connected to 5.180.211.133 port 5209
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  33.5 MBytes   281 Mbits/sec                 
[  5]   1.00-2.00   sec  70.1 MBytes   588 Mbits/sec                 
[  5]   2.00-3.00   sec  70.3 MBytes   590 Mbits/sec                 
[  5]   3.00-4.00   sec  70.9 MBytes   595 Mbits/sec                 
[  5]   4.00-5.00   sec  70.1 MBytes   588 Mbits/sec                 
[  5]   5.00-6.00   sec  69.7 MBytes   584 Mbits/sec                 
^C[  5]   6.00-6.91   sec  64.9 MBytes   595 Mbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-6.91   sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-6.91   sec   449 MBytes   545 Mbits/sec                  receiver
iperf3: interrupt - the client has terminated

If you are still bothered by this, ping the IP from AWS; it may well just be going direct over a short peering link or something, with extremely low single digit rtt.

With verbose enabled, to add a bit more info: it confirms CDG (on the receiver, not the sender) and also that the MSS was kept at 1240, not negotiated upwards.  I was hoping to see the size of the congestion window (which limits throughput), but I think that's only shown on the sender side of iperf3.  The cwnd hitting its limit is the likely reason for the 600 top out.

Code: [Select]
# iperf3 -V -R -c lon.speedtest.clouvider.net -p 5209
iperf 3.10.1
FreeBSD PFSENSE.home 12.3-STABLE FreeBSD 12.3-STABLE RELENG_2_6_0-n226742-1285d6d205f pfSense amd64
Control connection MSS 1240
Time: Wed, 02 Nov 2022 15:13:31 UTC
Connecting to host lon.speedtest.clouvider.net, port 5209
Reverse mode, remote host lon.speedtest.clouvider.net is sending
      Cookie: trqa6jlqz2grdavr4xwsumoqgioyk6nn2yb5
      TCP MSS: 1240 (default)
[  5] local <ip> port 65131 connected to 5.180.211.133 port 5209
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  31.7 MBytes   266 Mbits/sec                 
[  5]   1.00-2.00   sec  71.2 MBytes   597 Mbits/sec                 
[  5]   2.00-3.00   sec  71.1 MBytes   596 Mbits/sec                 
[  5]   3.00-4.00   sec  71.9 MBytes   603 Mbits/sec                 
[  5]   4.00-5.00   sec  71.3 MBytes   598 Mbits/sec                 
[  5]   5.00-6.00   sec  70.6 MBytes   592 Mbits/sec                 
[  5]   6.00-7.00   sec  71.8 MBytes   602 Mbits/sec                 
[  5]   7.00-8.00   sec  71.1 MBytes   596 Mbits/sec                 
[  5]   8.00-9.00   sec  71.3 MBytes   598 Mbits/sec                 
[  5]   9.00-10.00  sec  71.8 MBytes   602 Mbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   677 MBytes   568 Mbits/sec    0             sender
[  5]   0.00-10.00  sec   674 MBytes   565 Mbits/sec                  receiver
rcv_tcp_congestion cdg

iperf Done.

Also retested from the browser on Windows to make sure that's still at 940, and it is.  So probably a different negotiated cwnd.
« Last Edit: November 02, 2022, 03:22:15 PM by Chrysalis »
Logged
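[A back-of-envelope check on the cwnd theory above (my arithmetic, not from the thread): if the stream really is window-limited, throughput ≈ cwnd / RTT, so the ~600mbit plateau at 23ms implies a window of roughly 1.7MB, around 1390 segments at the 1240-byte MSS shown in the verbose output:]

```python
MSS = 1240  # from the verbose iperf3 output above

def implied_window_bytes(mbit_per_sec: float, rtt_ms: float) -> float:
    """For a window-limited stream, throughput ~= cwnd / RTT, so the
    observed rate implies a congestion window of this many bytes."""
    return mbit_per_sec * 1e6 / 8 * rtt_ms / 1e3

win = implied_window_bytes(600, 23)
print(int(win), round(win / MSS))  # 1725000 bytes, ~1391 segments
```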

EC300

  • Member
  • **
  • Posts: 47
Re: Thinking over options
« Reply #42 on: November 02, 2022, 03:17:04 PM »

Quote from: bogof on November 02, 2022, 03:00:23 PM
So I'd be interested to know what folk typically see from "iperf3 -R -c lon.speedtest.clouvider.net -p 5209" (the server listens on ports 5200-5209; I picked 5209).

I get a bit over half line rate to there, 550-600Mbps.

From AWS I get about 4000Mbps to the same server, so it's clearly capable.

I'm getting nowhere near that (tried IPv4 and IPv6).  (Line is 1000/120)

Code: [Select]
Connecting to host 5.180.211.133, port 5209
Reverse mode, remote host 5.180.211.133 is sending
[  4] local 192.168.1.2 port 57602 connected to 5.180.211.133 port 5209
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  22.5 MBytes   188 Mbits/sec
[  4]   1.00-2.00   sec  21.4 MBytes   180 Mbits/sec
[  4]   2.00-3.00   sec  22.0 MBytes   184 Mbits/sec
[  4]   3.00-4.00   sec  22.2 MBytes   185 Mbits/sec
[  4]   4.00-5.00   sec  22.2 MBytes   187 Mbits/sec
[  4]   5.00-6.00   sec  22.4 MBytes   188 Mbits/sec
[  4]   6.00-7.00   sec  21.7 MBytes   182 Mbits/sec
[  4]   7.00-8.00   sec  22.5 MBytes   189 Mbits/sec
[  4]   8.00-9.00   sec  22.0 MBytes   185 Mbits/sec
[  4]   9.00-10.00  sec  21.9 MBytes   183 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   223 MBytes   187 Mbits/sec    0             sender
[  4]   0.00-10.00  sec   221 MBytes   185 Mbits/sec                  receiver

iperf Done.

Logged

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7409
  • VM Gig1 - AAISP CF
Re: Thinking over options
« Reply #43 on: November 02, 2022, 03:38:11 PM »

I will share this as well: an image of a multi threaded Steam download's effect on my pfSense kit.

You can see the load is spread very evenly across two threads on both the WAN and LAN network interfaces, a fairly high interrupt load on the CPU, and overall just shy of 40% CPU utilisation.  I'm curious how this would have been on gigabit PPPoE.

Logged

bogof

  • Reg Member
  • ***
  • Posts: 436
Re: Thinking over options
« Reply #44 on: November 02, 2022, 05:08:15 PM »

Thanks all for your test results... :)

Quote from: EC300 on November 02, 2022, 03:17:04 PM
I'm getting nowhere near that (tried IPv4 and IPv6).  (Line is 1000/120)

Code: [Select]
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   223 MBytes   187 Mbits/sec    0             sender
[  4]   0.00-10.00  sec   221 MBytes   185 Mbits/sec                  receiver

iperf Done.
That does seem quite low; my most recent run was even better than the previous one:
Code: [Select]
root@Home-Dream-Machine-SE:/ssd1/test# iperf3 -V -R -c lon.speedtest.clouvider.net -p 5209
iperf 3.1.3
Linux Home-Dream-Machine-SE 4.19.152-ui-alpine #4.19.152 SMP Mon Aug 1 14:24:56 CST 2022 aarch64
Time: Wed, 02 Nov 2022 16:32:24 GMT
Connecting to host lon.speedtest.clouvider.net, port 5209
Reverse mode, remote host lon.speedtest.clouvider.net is sending
      Cookie: Home-Dream-Machine-SE.1667406744.828
      TCP MSS: 1440 (default)
[  4] local xxxx port 42286 connected to 5.180.211.133 port 5209
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  67.2 MBytes   564 Mbits/sec
[  4]   1.00-2.00   sec  88.9 MBytes   745 Mbits/sec
[  4]   2.00-3.00   sec  82.9 MBytes   696 Mbits/sec
[  4]   3.00-4.00   sec  88.1 MBytes   739 Mbits/sec
[  4]   4.00-5.00   sec  93.8 MBytes   787 Mbits/sec
[  4]   5.00-6.00   sec   101 MBytes   848 Mbits/sec
[  4]   6.00-7.00   sec  94.5 MBytes   792 Mbits/sec
[  4]   7.00-8.00   sec  92.4 MBytes   775 Mbits/sec
[  4]   8.00-9.00   sec   101 MBytes   847 Mbits/sec
[  4]   9.00-10.00  sec  86.6 MBytes   727 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   899 MBytes   754 Mbits/sec  1126             sender
[  4]   0.00-10.00  sec   897 MBytes   753 Mbits/sec                  receiver
CPU Utilization: local/receiver 52.6% (2.8%u/49.8%s), remote/sender 0.8% (0.0%u/0.8%s)

This other one in Germany, on a 40Gbps connection at their end, does show some evidence of the "choking" I mentioned previously for Hetzner - I ran it for 30s to see the drops a few times.
The AWS instance to here is reliably 1.3Gbps for a single thread.

Code: [Select]
root@Home-Dream-Machine-SE:/ssd1/test# iperf3 -R -V -t 30 -c speedtest.wtnet.de -p 5303
iperf 3.1.3
Linux Home-Dream-Machine-SE 4.19.152-ui-alpine #4.19.152 SMP Mon Aug 1 14:24:56 CST 2022 aarch64
Time: Wed, 02 Nov 2022 17:05:24 GMT
Connecting to host speedtest.wtnet.de, port 5303
Reverse mode, remote host speedtest.wtnet.de is sending
      Cookie: Home-Dream-Machine-SE.1667408724.796
      TCP MSS: 1440 (default)
[  4] local xxxx port 52182 connected to 213.209.106.95 port 5303
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 30 second test
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  61.0 MBytes   512 Mbits/sec
[  4]   1.00-2.00   sec  65.6 MBytes   551 Mbits/sec
[  4]   2.00-3.00   sec  63.8 MBytes   535 Mbits/sec
[  4]   3.00-4.00   sec  62.1 MBytes   521 Mbits/sec
[  4]   4.00-5.00   sec  63.4 MBytes   532 Mbits/sec
[  4]   5.00-6.00   sec  58.9 MBytes   494 Mbits/sec
[  4]   6.00-7.00   sec  57.9 MBytes   485 Mbits/sec
[  4]   7.00-8.00   sec  63.2 MBytes   530 Mbits/sec
[  4]   8.00-9.00   sec  29.8 MBytes   250 Mbits/sec
[  4]   9.00-10.00  sec  23.6 MBytes   198 Mbits/sec
[  4]  10.00-11.00  sec  27.3 MBytes   229 Mbits/sec
[  4]  11.00-12.00  sec  37.0 MBytes   310 Mbits/sec
[  4]  12.00-13.00  sec  48.8 MBytes   409 Mbits/sec
[  4]  13.00-14.00  sec  55.2 MBytes   463 Mbits/sec
[  4]  14.00-15.00  sec  58.8 MBytes   494 Mbits/sec
[  4]  15.00-16.00  sec  48.7 MBytes   409 Mbits/sec
[  4]  16.00-17.00  sec  43.8 MBytes   368 Mbits/sec
[  4]  17.00-18.00  sec  47.5 MBytes   398 Mbits/sec
[  4]  18.00-19.00  sec  45.8 MBytes   384 Mbits/sec
[  4]  19.00-20.00  sec  60.4 MBytes   507 Mbits/sec
[  4]  20.00-21.00  sec  24.3 MBytes   204 Mbits/sec
[  4]  21.00-22.00  sec  23.9 MBytes   201 Mbits/sec
[  4]  22.00-23.00  sec  28.8 MBytes   242 Mbits/sec
[  4]  23.00-24.00  sec  38.9 MBytes   326 Mbits/sec
[  4]  24.00-25.00  sec  41.6 MBytes   349 Mbits/sec
[  4]  25.00-26.00  sec  48.6 MBytes   408 Mbits/sec
[  4]  26.00-27.00  sec  56.5 MBytes   474 Mbits/sec
[  4]  27.00-28.00  sec  61.3 MBytes   514 Mbits/sec
[  4]  28.00-29.00  sec  62.7 MBytes   526 Mbits/sec
[  4]  29.00-30.00  sec  64.4 MBytes   540 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec  1.48 GBytes   424 Mbits/sec  4112             sender
[  4]   0.00-30.00  sec  1.44 GBytes   413 Mbits/sec                  receiver
CPU Utilization: local/receiver 28.4% (1.4%u/27.0%s), remote/sender 0.1% (0.0%u/0.1%s)

iperf Done.
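[The per-second figures above can be summarised to put a number on those dips.  A trivial sketch; the list is the first ten readings from the speedtest.wtnet.de run:]

```python
def summarise(samples_mbit):
    """Return (min, mean, max) of per-second bitrates; a min far
    below the mean is the 'choking' pattern discussed above."""
    mean = sum(samples_mbit) / len(samples_mbit)
    return min(samples_mbit), mean, max(samples_mbit)

lo, mean, hi = summarise([512, 551, 535, 521, 532, 494, 485, 530, 250, 198])
print(lo, round(mean), hi)  # 198 461 551
```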
Overall though, it's pretty hard not to look at this and think it's a pretty great connection; I agree @Chrysalis.
« Last Edit: November 03, 2022, 01:10:55 PM by bogof »
Logged