Kitz Forum
Internet => General Internet => Topic started by: Weaver on November 16, 2018, 02:22:29 AM
-
Buried in the thinkbroadband website there is a page that says
For those who want to run a test that is more sensitive to provider congestion try this special test version.
The words ‘special test version’ form a link.
When I tried this, I found that the upload result from the normal tester was horrendously inaccurate, under-reporting by 50%. It gave me an upload figure of 0.6 - 0.8 Mbps, whereas the truth is around 1.5 Mbps. But when I chose the special test version, mysteriously the upload figure was then roughly correct, reporting 1.6 Mbps for IPv4 upload, which I think is exaggerated a little.
So it seems that the special test version, whatever that means, is the one to go for, because the normal one is utterly broken. I don't know what on earth they are doing when one version of the tester can come out with numbers that are more than double those reported by another version, supposedly measuring the same thing.
-
Curiosity led me to try them too, but, hardly unusually, I never worked out how to use them with IPv4.
-
Doesn't that test just run one thread, as opposed to the usual test, which also uses six?
Single-thread tests are quite rare, as most speed testers tend to be multi-threaded.
Single-thread speed tests are much better for diagnosing congestion issues, whereas you would expect a multi-threaded test to show near-maximum throughput even if the SP has some slight congestion.
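The effect described above can be illustrated with a toy model (the figures are purely illustrative, not from this thread): suppose provider congestion caps what any one TCP flow can sustain, while the aggregate of all flows is still limited by the line itself. A single-thread test then exposes the per-flow cap, while a six-thread test hides it:

```python
def measured_throughput(n_threads: int, line_mbps: float,
                        per_flow_cap_mbps: float) -> float:
    """Toy model: congestion (loss/RTT) caps each TCP flow;
    the aggregate is still limited by the line rate."""
    return min(line_mbps, n_threads * per_flow_cap_mbps)

# Hypothetical 80 Mbps line where congestion caps each flow at 15 Mbps:
print(measured_throughput(1, 80.0, 15.0))  # 15.0 -> congestion visible
print(measured_throughput(6, 80.0, 15.0))  # 80.0 -> congestion hidden
```

In this model the multi-threaded result looks like a healthy line even though every individual flow is badly throttled, which is exactly why the single-thread figure is the diagnostic one.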
-
I personally just do lots of transfers from actual hosts on the internet.
I have a host I can download one or many files at once from - hence single versus multi-threaded.
I am starting to really detest "speed testers". They are completely synthetic.
-
Hi
I believe this has been covered many times previously
I also think most hosting is multi-threaded, so a single-thread test is rare, and whilst it may show congestion, it does not mean your connection is slower, due to multi-threaded hosting.
Also, to me it is pointless to test whether you have full bandwidth throughout the internet. Your correct bandwidth is that between your connection and your SP, and that does not mean you attain it throughout the internet, because once your traffic leaves your SP's network, the SP has no control over other networks.
So my best advice for measuring throughput (not connected speed) is to use the same speedtest site for all the comparison tests you make, as throughput is fluid and there are many different reasons for speed variations: your computer, anywhere between you and the test site, the test site's resources, etc.
I would advise not getting fixated on throughput speed, as it is fluid.
Many thanks
John
-
It is indeed good advice to use relative assessments within a single speedtester.
The ‘special’ tester cannot just be single-threaded, judging from the graphs it shows and from the fact that the downstream results for the single-thread test in the normal case were very different from the multi-thread case, and both of those numbers were also reported in the ‘special’ results.
-
The thinkbroadband tester is the one to go back to, because it runs separate single- and multi-threaded tests - it's the difference between the two that can be the most informative.
-
d2d4j, you sound like an ISP spokesperson ;)
I wouldn't consider single-threaded throughput "rare".
Streaming services are the biggest legal consumers of bandwidth, and you know how many threads streaming services use? One.
-
It's very unusual for streaming to use all of a line's bandwidth, especially for people on fibre or cable, so until the single-threaded speed drops below the speed needed for streaming, the result of a single-threaded test isn't all that relevant. When it does, the difference between the single and multi results becomes highly significant. That's why I stick to the TBB test - it covers all the angles.
-
I am assuming that ‘streaming’ means live real-time video, not content pre-sent, saved and delivered from a gigantic buffer or a file, and I assume that means a fixed-rate protocol other than TCP?
So the techniques used by this speedtester presumably involve TCP, and are therefore not the same at all?
When I look at stuff on Netflix or Amazon, these services seem to pick some fixed rate, which I assume is chosen according to the quality level, image size and resolution, and then the server just sends data at that constant rate. In my experience the rate is way below the capacity of my pipe - perhaps 3, 4 or 6 Mbps on my 10 Mbps pipe. It isn't trying to get anywhere near maxing out my link. That would make no sense unless it were trying to achieve maximum quality matched to the pipe capacity, but then there would be a constant risk of failure if the link got busy or if there were errors, so it would be madness; the only safe way is to run with a substantial buffer and at a rate well below that of the link.
When the services offer downloads, those use best-effort maximum-speed protocols, so probably TCP. The services I am familiar with, such as Netflix, appear to run several TCP connections at once, as they download multiple episodes of a series simultaneously, for example.
Having multiple TCP connections on the go simultaneously is a good way of maxing out the link, because if one TCP connection stalls, faltering for a while because of packet loss, then another will doubtless take advantage, or at least will certainly go ahead unaffected; it counters the temporary speed loss by ensuring that at least something is always making progress. I have seen four transfers with a movie download, and many more parallel transfers with FTP clients. Another reason it is done could be to help in cases like my own, where the risk of packet reordering is a problem because I have multiple physical pipes.
Users with high latency may be helped by the multiple-transfer strategy too, if the TCP implementations are not well tuned to use large windows that keep enough data in flight so that the link is completely filled with data at all times. In the ‘water pipe’ analogy, the pipe needs to be filled with water (data) and have ‘no air bubbles’ (time periods where there are gaps in the data in transit).
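The ‘no air bubbles’ condition corresponds to the bandwidth-delay product: a single TCP connection can only fill the pipe if its window covers all the data in flight. A rough calculation, with illustrative figures rather than measurements from this thread:

```python
def bdp_bytes(link_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to keep
    the pipe full ("no air bubbles") on one TCP connection."""
    return link_mbps * 1e6 / 8 * (rtt_ms / 1e3)

# A 10 Mbps link with a 30 ms round-trip time needs ~37.5 kB in flight:
print(bdp_bytes(10, 30))  # 37500.0
```

If a sender's window is smaller than this, the link idles between bursts, which is precisely the gap that running several connections in parallel papers over.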
-
Hi
@chrysalis - sorry, no, we are not an ISP; however, we attained ESP status in the 90s and have maintained it ever since.
I have just run the tests at TBB, for http and https, with results that surprised me slightly, as I would not expect https to be as low - but then there are a lot of known/unknown systems in between.
My advice still remains as is, though.
Many thanks
John
http
(https://www.thinkbroadband.com/_assets/speedtest/button/1542469744720865955-mini.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542469744720865955)
https
(https://www.thinkbroadband.com/_assets/speedtest/button/1542469495328101455-mini.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542469495328101455)
FTTC 80/20
https
(https://www.thinkbroadband.com/_assets/speedtest/button/1542468235694904355-mini.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542468235694904355)
http
(https://www.thinkbroadband.com/_assets/speedtest/button/1542469435745317055.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542469435745317055)
-
Quote: "It's very unusual for streaming to use all a lines bandwidth ... That's why I stick to the TBB test - it covers all angles."
True, you don't need hundreds of Mbit/s to stream, but I felt the need to point out that it is single-threaded. Some people get really low single-threaded results, below 10 Mbit/s, at which point streaming could be affected.
Personally, if I see a big difference between single- and multi-threaded results I will investigate it on my connection. With that said, I haven't run a TBB speedtest for several months now; normally I would only run one if I noticed bad performance on my connection, or following a discussion about it.
HTTP downloads are also single-threaded without a download manager, and e.g. Microsoft still distributes its software over http(s).
FTP is also single-threaded by default.
Now, do I agree with John's and your advice? Yes and no. I feel it's a way to shut people up and turn them away from reporting potential congestion, but if you have a really fast connection in the hundreds of Mbit/s and single-threaded is still above 50 Mbit/s, then it probably isn't a big issue.
E.g. I wouldn't be happy with the speedtest John just posted: single-threaded of 10 Mbps followed by 17 Mbps - pretty poor figures.
With all this said, I feel TBB is still one of the best speed testers out there, but you should never rely on just one single speedtest, as a transit/peering issue or a server-side problem could cause results on that test that are not relevant to other parts of the internet. Other tests that support single-stream testing are the dslreports speedtest and speedof.me.
Bear in mind, though, that slow single-threaded speeds can also be down to bad network equipment or bad network configuration. If it's one of those, though, I would expect the problem 24/7.
-
I really don't attach much relevance to comparing results from different throughput speed testers . . .
However, here is yet another one (http://uk-london.privateinternetaccess.com:8888/speedtest/) to add to the list.
-
Not a fan - that test is multi-threaded :(
Well, since we are discussing it, I ran the "special" test: A/A on quality (latency during test) - quality of 0.10 (A) is better than the 0.47 (A) average for VDSL2/FTTC.
(https://www.thinkbroadband.com/_assets/speedtest/button/1542474613864427155.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542474613864427155)
-
Special test link: http://labs.thinkbroadband.com/speedtest/?site=omegaUEKWiwklw392
(https://www.thinkbroadband.com/_assets/speedtest/button/1542485183430615455.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542485183430615455)
-
Hi
@chrysalis - the one I posted was from one node (IPv4 only) on our network; our networks cover many areas.
Here is another, taken from a different node (IPv6 and IPv4) but going through the same FTTC 80/20.
Many thanks
John
IPV6
(https://www.thinkbroadband.com/_assets/speedtest/button/1542487902488870055-mini.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542487902488870055)
IPV4
(https://www.thinkbroadband.com/_assets/speedtest/button/1542488598638716555-mini.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542488598638716555)
-
I got a really large gap between single- and multi-threaded results, which to me shows it is all about the inadequacy of the testing methodology - it's measuring their chosen protocol's performance, not the link. Having said that, I'm not so surprised, as I would think there is a chance that their servers are seeing out-of-order packet arrivals due to my four pipes, and maybe that is freaking their TCP implementation out?
If I had had any wit at all, I would have posted a link to the specific correct page, but the URL looked somehow weird to me, whatever that means, and I somehow got it into my head that it was one of those personal or per-session calculated URLs that cannot successfully be given to someone else, as they are meaningless outside the context of one session or away from the original user. Duh.
My latest test results:
(https://www.thinkbroadband.com/_assets/speedtest/button/1542494325755553455.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542494325755553455)(https://www.thinkbroadband.com/_assets/speedtest/button/1542494608777675855.png) (https://www.thinkbroadband.com/speedtest/results.html?test=1542494608777675855)
Notice the huge variation in the upstream. These are both runs of the ‘special’ test, so I was wrong earlier: the difference I saw originally is just between one run and the next, not due to the difference between types of test. Regarding the upload, one figure is a bit exaggerated and the other is way, way too low. The downstream numbers are about right. One modem is currently swapped out for a spare D-Link DSL-320B-Z1 instead of the usual ZyXEL VMG 1312-B10A, and the downstream sync rate is about 380k lower than normal, so the downstream test result shown here is expected to be down by about 320 kbps.
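The 380k-sync/320k-throughput relationship is roughly the usual sync-to-IP-rate scaling on ADSL: throughput is commonly taken as around 84% of the sync rate once ATM cell and PPP framing overheads are removed. The exact factor depends on encapsulation and packet sizes, so treat the 0.84 here as an assumption:

```python
def ip_rate_delta_kbps(sync_delta_kbps: float,
                       efficiency: float = 0.84) -> float:
    """Scale a change in DSL sync rate to the expected change in
    IP-level throughput. efficiency ~0.84 is an assumed ATM/PPP
    framing factor; the true value varies with encapsulation."""
    return sync_delta_kbps * efficiency

# A sync rate ~380 kbps lower implies roughly 320 kbps less throughput:
print(round(ip_rate_delta_kbps(380)))  # 319
```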
-
Bonding can have performance implications, though, Weaver, which the test simply plays out. Too many people think that if a test gives results they don't like then the test is broken. Of course, never rely on a single means of testing your connection. If you ever get the means to compare, a single connection providing 10/2 will outdo a bonded 10/2.
Playing with your congestion provider may make your upstream more consistent. And to answer your earlier question: both live and non-live streams are single-threaded, so the likes of YouTube and Netflix, as well as iPlayer, are single-threaded. :)
-
I do get excellent efficiency even with just a single TCP connection sometimes, luckily. I tried the thinkbroadband simple downloads of large test files (https://www.thinkbroadband.com/download) and timed them, and the results were really good - I'd have to look back at an old thread to find the numbers, but I was pleasantly surprised by how well bonding works.
A lot could depend on how intelligent a receiving TCP system is, unless the Firebricks go out of their way to ensure that a receiver sees packets arrive in the order the receiver expects when they are distributed across n pipes, even allowing for the possibly differing speeds of each pipe. A Firebrick could calculate the correct time to start sending each packet if it knows the speed of each pipe, and thus make them arrive in the order receivers feel happy with. If it just sends each packet as soon as possible, though, some packets are going to arrive too early relative to others, depending on packet length, link speed and the amount of existing traffic in the queue into the pipe causing ingress queueing delay.
There is quite a bit of freedom in the possible design decisions, and I do wonder what the correct answers might be, given a possible trade-off between wasting link capacity by not keeping each link always busy versus making the receiving TCP happy - if it is indeed TCP that you are using at a particular time. Unfortunately, so I have been told, the Brick doesn't hold on to packets to time their transmission. A ‘flow-aware’ queue manager would be the thing: it could try to keep up full link utilisation whilst keeping relative arrivals within each flow to something that is not going to upset receivers too much. And if there are several flows, that helps a lot, as it gives the queue manager more possible choices in packet ordering.
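The "calculate the correct time to start sending each packet" idea can be sketched as a greedy scheduler that puts each packet on whichever pipe would finish serializing it soonest, using only serialization delay (packet bits divided by pipe rate). This is a toy illustration of the design question, not how the Firebrick actually works:

```python
def schedule(packet_sizes_bytes, pipe_rates_bps):
    """Greedily assign each packet (in send order) to the pipe that
    would finish sending it earliest, tracking when each pipe's
    queue drains. Returns (pipe_index, completion_time_s) per packet."""
    free_at = [0.0] * len(pipe_rates_bps)  # when each pipe next falls idle
    plan = []
    for size in packet_sizes_bytes:
        finish = [free_at[i] + size * 8 / pipe_rates_bps[i]
                  for i in range(len(pipe_rates_bps))]
        best = finish.index(min(finish))
        free_at[best] = finish[best]
        plan.append((best, free_at[best]))
    return plan

# Four 1500-byte packets over two equal 1 Mbps pipes alternate pipes,
# and their completion times never go backwards (order preserved):
print(schedule([1500] * 4, [1e6, 1e6]))
```

With unequal pipe rates the same greedy rule starts favouring the faster pipe, and keeping per-flow arrivals in order while still filling every pipe is exactly the trade-off a flow-aware queue manager would have to negotiate.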
-
Hi
Quote: "@chrysalis - the one I posted was from 1 node (IPv4 only) from our network, which our networks cover many areas."
Yeah, I have servers which would probably get similar results; in some cases it's optimal to have a small RWIN, which will of course reduce single-threaded speeds. I expect the node you tested from had a small RWIN buffer, hence the result you got.
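The RWIN point follows from the same window arithmetic as before: a single TCP flow can never exceed the receive window divided by the round-trip time, whatever the line rate. A quick check with illustrative numbers:

```python
def single_flow_cap_mbps(rwin_bytes: int, rtt_ms: float) -> float:
    """One TCP flow is capped at RWIN / RTT, regardless of line speed."""
    return rwin_bytes * 8 / (rtt_ms / 1e3) / 1e6

# A classic 64 kB window over a 50 ms path caps one flow near 10.5 Mbps,
# no matter how fast the line is:
print(round(single_flow_cap_mbps(65535, 50), 1))  # 10.5
```

This is why a server tuned with a deliberately small window can produce a low single-thread speedtest result on a line that is otherwise healthy.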
-
Quote: "I do get excellent efficiency even with just a single TCP connection sometimes ... it give the queue manager more possible choices in packet ordering."
The congestion provider would be on the endpoint you are testing from, not the Firebrick. E.g. on Windows, CTCP is better than the default NewReno; on Linux, HTCP and Westwood are both good choices, and CDG is a new congestion provider which you might have access to if you have a new kernel. These primarily control upstream traffic (uploads) from the endpoint: how fast it ramps up, how polite it is against other TCP streams under congestion, and how quickly it recovers from packet loss.
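On Linux, the per-socket congestion-control provider chrysalis describes can be inspected or switched with the `TCP_CONGESTION` socket option (Linux-only; switching to e.g. "westwood" or "htcp" only succeeds if that module is available in the kernel). A minimal sketch:

```python
import socket

def current_congestion_control() -> str:
    """Read the congestion-control algorithm the kernel will use for
    a new TCP socket (typically "cubic" on modern Linux)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        return raw.split(b"\x00", 1)[0].decode()
    finally:
        s.close()

def set_congestion_control(sock: socket.socket, name: str) -> None:
    """Switch one socket to another provider, e.g. "westwood" or
    "htcp". Raises OSError if the algorithm isn't loaded/permitted."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, name.encode())

print(current_congestion_control())
```

System-wide, the same knob is `net.ipv4.tcp_congestion_control` via sysctl; the per-socket option above just overrides it for one connection.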