
Author Topic: Speed testers - yet again, groan - thinkbroadband ‘special’  (Read 4447 times)

d2d4j

  • Kitizen
  • ****
  • Posts: 1103
Re: Speed testers - yet again, groan - thinkbroadband ‘special’
« Reply #15 on: November 17, 2018, 09:07:58 PM »

Hi

@chrysalis - the one I posted was from one node (IPv4 only) on our network; our networks cover many areas.

Here is another taken from a different node (IPv6 and IPv4) but going through the same FTTC 80/20

Many thanks

John

IPv6 result: [speed test screenshot]

IPv4 result: [speed test screenshot]

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Speed testers - yet again, groan - thinkbroadband ‘special’
« Reply #16 on: November 17, 2018, 09:35:39 PM »

I got a really large gap between single and multithreaded results, which to me shows it is all about the testing methodology’s inadequacy - it’s measuring their chosen protocol’s performance, not the link. Having said that, I’m not so surprised as I would think there is a chance that their servers are seeing out-of-order packet arrival times due to my four pipes and maybe that is freaking their TCP implementation out?
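For anyone who wants to see that kind of gap outside of any particular tester, here is a rough sketch that times the same download once as a single TCP stream and once as four ranged streams. The URL is a placeholder rather than thinkbroadband's own tester, and it assumes the server accepts HEAD and Range requests:

Code: [Select]
# Rough sketch: compare one HTTP download against the same transfer split
# into four parallel ranged requests. TEST_URL is a placeholder -- point it
# at any large static file you are allowed to download repeatedly.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TEST_URL = "http://example.com/100MB.zip"   # hypothetical large test file
CHUNK = 1 << 20                             # read in 1 MiB blocks

def fetch(byte_range=None):
    """Download the whole file, or one Range slice of it; return bytes received."""
    req = urllib.request.Request(TEST_URL)
    if byte_range:
        req.add_header("Range", f"bytes={byte_range[0]}-{byte_range[1]}")
    received = 0
    with urllib.request.urlopen(req) as resp:
        while block := resp.read(CHUNK):
            received += len(block)
    return received

def timed(label, func):
    start = time.monotonic()
    total_bytes = func()
    secs = time.monotonic() - start
    print(f"{label}: {total_bytes * 8 / secs / 1e6:.1f} Mbps")

if __name__ == "__main__":
    # Single TCP stream: limited by that one stream's window, RTT and loss recovery.
    timed("single stream", fetch)

    # Four parallel streams over quarter-sized ranges: aggregates across streams,
    # so per-stream TCP quirks (reordering, small windows) matter far less.
    head = urllib.request.Request(TEST_URL, method="HEAD")
    size = int(urllib.request.urlopen(head).headers["Content-Length"])
    quarters = [(i * size // 4, (i + 1) * size // 4 - 1) for i in range(4)]
    timed("four streams ", lambda: sum(ThreadPoolExecutor(4).map(fetch, quarters)))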

If I had had any wit at all, I would have posted a link to the specific correct page, but the URL looked somehow weird to me, whatever that means, and I somehow got it into my head that it was one of the personal or per-session calculated ones - URLs that cannot successfully be given to someone else because they are meaningless outside the context of one session, or away from the original user. Duh.

My latest test results: [speed test screenshots]

Notice the huge variation in the upstream. These are both runs of the ‘special’ test, so I was wrong earlier: the difference that I saw originally is just between one run and the next, not due to the difference between types of tests. Regarding the upload, one is a bit exaggerated and the other is way, way too low. The downstream numbers are about right. One modem is swapped out at the moment for a spare D-Link DSL-320B-Z1 instead of the usual ZyXEL VMG1312-B10A, and the downstream sync rate is about 380 kbps lower than normal, so the downstream test result shown here is expected to be down by about 320 kbps.
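That 380 kbps sync drop turning into roughly 320 kbps of lost test throughput is just the usual sync-to-IP-throughput overhead factor at work. A quick sanity check, assuming a typical ~84% throughput-to-sync ratio for ADSL2 over ATM with PPP overheads (an estimate, not a measured figure for these lines):

Code: [Select]
# Rough check of the sync-drop vs expected-throughput-drop figures above.
# The 0.84 factor is a typical IP-throughput-to-sync ratio for ADSL2 over ATM
# with PPP overheads -- an assumption, not a measured value for these lines.
sync_drop_kbps = 380
overhead_factor = 0.84
throughput_drop_kbps = sync_drop_kbps * overhead_factor
print(f"Expected test-result drop: ~{throughput_drop_kbps:.0f} kbps")  # ~319 kbps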
« Last Edit: November 17, 2018, 11:22:44 PM by Weaver »

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7382
  • VM Gig1 - AAISP L2TP
Re: Speed testers - yet again, groan - thinkbroadband ‘special’
« Reply #17 on: November 18, 2018, 07:12:05 AM »

Bonding can have performance implications though, Weaver, which the test simply exposes. Too many people think that if a test gives results they don't like then the test is broken. Of course, never rely on a single means of testing your connection. If you ever get the means to compare, a single connection providing 10/2 will outdo a bonded 10/2.

Playing with your congestion provider may make your upstream more consistent. To answer your earlier question: both live and non-live streams are single-threaded, so the likes of YouTube and Netflix, as well as iPlayer, are single-threaded. :)
« Last Edit: November 18, 2018, 07:20:41 AM by Chrysalis »

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: Speed testers - yet again, groan - thinkbroadband ‘special’
« Reply #18 on: November 18, 2018, 08:35:45 AM »

I do get excellent efficiency even with just a single TCP connection sometimes, luckily. I tried the thinkbroadband simple downloads of large test files and timed them, and the results were really good - I'd have to look back at an old thread to find the numbers, but I was pleasantly surprised by how well bonding works.

A lot could depend on how intelligent a receiving TCP system is, unless the Firebricks go out of their way to ensure that a receiver sees packets arrive in the order that the receiver expects when distributed across n pipes, even allowing for the possibly differing speeds of each pipe. A Firebrick could calculate the correct time to start sending each packet if it knows the speed of each pipe, and thus make them arrive in the order receivers feel happy with. If it just sends each packet as soon as possible though, some packets are going to arrive too early relative to others, depending on packet length, link speed and the amount of existing stuff in the queue into the pipe causing ingress queueing delay.

There is quite a bit of freedom in possible design decisions, and I do wonder what the correct answers might be, given a possible trade-off between wasting link capacity by not keeping each link always busy vs making receiving TCP happy - if it is indeed TCP that you are using at a particular time. Unfortunately, so I have been told, the Brick doesn't hold onto packets to time their transmission. A ‘flow-aware’ queue manager would be the thing: it could try to keep up full link-utilisation efficiency whilst keeping relative arrivals within each flow to something that is not going to upset receivers too much. If there are several flows, then that helps a lot, as it gives the queue manager more possible choices in packet ordering.
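As a rough illustration of the "calculate the correct time" idea, here is a toy sketch - not anything the Firebrick actually implements, just a greedy scheduler that sends each packet down whichever pipe would deliver it earliest, which keeps arrivals roughly in send order while still using every pipe:

Code: [Select]
# Toy sketch of arrival-time-aware scheduling across unequal bonded pipes.
# NOT what the Firebrick does (as noted above, it reportedly does not hold
# packets back); each packet simply goes to whichever pipe would deliver it
# earliest, given that pipe's rate and how much is already queued on it.
from dataclasses import dataclass

@dataclass
class Pipe:
    rate_bps: float          # usable rate of this pipe
    busy_until: float = 0.0  # time at which this pipe's queue drains

def schedule(packet_sizes_bytes, pipes, now=0.0):
    """Assign each packet to the pipe with the earliest estimated arrival time."""
    plan = []
    for size in packet_sizes_bytes:
        def eta(p):
            return max(now, p.busy_until) + size * 8 / p.rate_bps
        idx = min(range(len(pipes)), key=lambda i: eta(pipes[i]))
        pipes[idx].busy_until = eta(pipes[idx])
        plan.append((size, idx, pipes[idx].busy_until))
    return plan

# Example: four unequal pipes (illustrative rates) and a burst of 1500-byte packets.
pipes = [Pipe(2.5e6), Pipe(2.2e6), Pipe(1.9e6), Pipe(1.6e6)]
for size, idx, arrival in schedule([1500] * 8, pipes):
    print(f"{size} B -> pipe {idx}, estimated arrival t = {arrival * 1e3:.2f} ms")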

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7382
  • VM Gig1 - AAISP L2TP
Re: Speed testers - yet again, groan - thinkbroadband ‘special’
« Reply #19 on: November 18, 2018, 10:23:55 AM »

Quote from: d2d4j on November 17, 2018, 09:07:58 PM
Hi
@chrysalis - the one I posted was from one node (IPv4 only) on our network; our networks cover many areas.


Yeah, I have servers which will probably get similar results. In some cases it's optimal to have a small RWIN, which will of course reduce single-threaded speeds. I expect the node you tested from had a small RWIN buffer, hence the result you got.
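For anyone wondering why a small RWIN caps a single stream: the throughput of one TCP connection is bounded by roughly window size divided by round-trip time. A quick illustration with made-up window and RTT figures:

Code: [Select]
# Why a small receive window (RWIN) caps a single TCP stream:
# throughput is bounded by roughly window_size / round_trip_time.
# The window and RTT values below are illustrative, not measurements.
def max_single_stream_mbps(rwin_bytes, rtt_ms):
    return rwin_bytes * 8 / (rtt_ms / 1000) / 1e6

print(max_single_stream_mbps(64 * 1024, 40))   # ~13 Mbps -- too small for an 80/20 FTTC line
print(max_single_stream_mbps(512 * 1024, 40))  # ~105 Mbps -- enough headroom for 80 Mbps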

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7382
  • VM Gig1 - AAISP L2TP
Re: Speed testers - yet again, groan - thinkbroadband ‘special’
« Reply #20 on: November 18, 2018, 10:28:09 AM »

Quote from: Weaver on November 18, 2018, 08:35:45 AM
I do get excellent efficiency even with just a single TCP connection sometimes, luckily. [...]

The congestion provider would be on the endpoint you are testing from, not the Firebrick. E.g. on Windows, CTCP is better than the default NewReno. On Linux, HTCP or Westwood are both good choices; CDG is a newer congestion provider which you might have access to if you have a new kernel. These primarily control upstream traffic (uploads) from the endpoint: how fast it ramps up, how polite it is towards other TCP streams when there is congestion, and how quickly it recovers from packet loss.
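On Linux you can also pick the algorithm per socket rather than system-wide. A minimal sketch using Python's TCP_CONGESTION socket option (Linux-only; which names work depends on the congestion-control modules your kernel has available):

Code: [Select]
# Minimal Linux-only sketch: inspect and override the congestion control
# algorithm for one TCP socket via the TCP_CONGESTION socket option.
# Which names work ("cubic", "htcp", "westwood", "cdg", ...) depends entirely
# on the modules available in your kernel.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    current = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("default:", current.rstrip(b"\x00").decode())  # usually "cubic"

    try:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"westwood")
        print("now using westwood for this socket")
    except OSError:
        print("westwood not available; check sysctl net.ipv4.tcp_available_congestion_control")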