Kitz ADSL Broadband Information
Pages: [1] 2

Author Topic: Stupid question  (Read 1054 times)

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Stupid question
« on: April 16, 2019, 12:41:49 PM »

Is there anything at all that I can do to improve my upstream speed ? I know it’s a stupid question.

I’m trying to really think creatively about this though.

  • Tweaking, tuning, hardware improvements that might affect upstream.
  • I am even wondering whether it is possible to use 4G for upstream. I already have a USB 3G NIC (a Huawei model, 3G only), but I know that 4G ones are available.
  • As well as that, I already have a Solwise 4G router which I am hoping I can turn into a modem. If not, I could address this with tunnelling, though I don’t know whether that would be practical; the downside might be too great.

    The Solwise router speaks 4G only and currently does NAT onto one single IPv4 address, but I have a block of eight addresses lying around which I could use for it instead, and perhaps get it to talk to my Firebrick. It speaks only wireless (at 2.4 GHz), so I would have to get some hardware to talk to that. So many hassles. Probably too many; better to start clean with the right kit.
  • other kit?

I don’t know if there is any Annex M available? Can I check? Otherwise I will email sales at my ISP, AA.

Tuning ideas first, I suppose, if there are any, because they are likely to be cheap. And non-existent. ::) ;D

Logged

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Stupid question
« Reply #1 on: April 16, 2019, 02:00:23 PM »

I see I have been down this road before. https://forum.kitz.co.uk/index.php/topic,21765.msg375735.html#msg375735

I’m wondering (a) whether I could have missed anything, and (b) how to widen the range of my thinking.
Logged

re0

  • Reg Member
  • ***
  • Posts: 550
Re: Stupid question
« Reply #2 on: April 16, 2019, 03:13:42 PM »

Tweaking, tuning, hardware improvements that might affect upstream.
I was under the impression you already had things pretty close to ideal considering your distance, hardware and technology available.

I am even wondering if it is possible to use 4G for upstream. I already have a USB 3G NIC, but despite the fact that the model of NIC is 3G only, a Huawei one, I know that 4G ones are available.
It is possible to load-balance two separate links with different external IP addresses for downstream, which can utilise both links if the place you download from is multithreaded. But upstream... Hmm... You'd need some way to tunnel it, because otherwise it'll just pick whichever link based on metrics. Though I have limited experience with balancing and bonding.

I don’t know if there is any annex M available ? Can I check ? Otherwise I will email sales at my ISP AA.
I'm not sure where Annex M is mentioned on the new website, but they do sell it. I believe there is no one-off charge to add it, but it costs £12 per month, with a £10 fee if you wish to remove it.

Problem is, Annex M won't benefit your upstream at all, since you are just too far from the exchange. You'll also lose a sizeable chunk of your downstream in the process, since the upstream band is expanded into frequencies that are downstream under Annex A (here is a good image for example), and those are the frequencies giving you most of the downstream you receive now. Furthermore, not all modems play nicely with Annex M anyway.
Logged
Zen Unlimited Fibre 4 G.fast (330/50 Mbps sync, 352/60 Mbps attainable) @ ~200m - Zyxel XMG3927-B50A
Three 4G LTE with Huawei B525 (peak down/up ~75/~27 Mbps, typ. 35-55 Mbps down)

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Stupid question
« Reply #3 on: April 16, 2019, 04:34:37 PM »

As for tuning: the speed testers may be indicating TCP payload throughput, and if they are using timestamps and IPv6 then the figure that I get is pretty close to the optimum. If they are using IPv4 TCP without timestamps then the figure I get is not quite as good. The variation in upstream that I get with a half-decent speed checker is sometimes zero and sometimes about 150 k.

There might be a little more that I could do here, because currently I am running at 96% of the calculated maximum IP PDU rate when running flat out with only 1500-byte IP PDUs. As for how much, if any, of that 4% I can get back by tweaking, I have done loads of experiments, but the variability in the speed testers’ results makes it very hard to see what is going on. In the end I settled on this 96% figure by looking at latency, reducing the figure a bit to stop worst-case latency going through the roof; I aimed to keep worst-case latency at ~250 ms.
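That worst-case-latency reasoning can be sketched roughly as follows. The queue size used here is a made-up illustration, not a known Firebrick buffer value; the point is just that drain time of a full queue at the shaped rate sets the worst-case added delay.

```python
# Sketch: worst-case added latency is roughly the time to drain a
# full upstream queue at the shaped (rate-limited) sending rate.
def drain_ms(queue_bytes: int, rate_kbps: float) -> float:
    # bytes * 8 gives bits; bits / (kbit/s) gives milliseconds
    return queue_bytes * 8 / rate_kbps

# e.g. ~13.75 kB of queued upstream traffic at ~440 kbps is 250 ms of
# worst-case added latency, around the ceiling aimed for above
print(drain_ms(13_750, 440))
```

So lowering the loading factor trades a little throughput for a bound on how long a full queue can hold up latency-sensitive traffic.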

I’m afraid I don’t understand your point about Annex M. (I have seen that diagram before.)
Logged

ejs

  • Kitizen
  • ****
  • Posts: 1949
Re: Stupid question
« Reply #4 on: April 16, 2019, 05:29:18 PM »

I thought Annex M was not available for long lines.

Quote
Annex M can only be used on loops which are classed as “short” or “medium” in length due to ANFP (Access Network Frequency Plan) restrictions. This broadly equates to lines with a 40dB or less line loss, an estimated 59% of all WBC lines
Logged

re0

  • Reg Member
  • ***
  • Posts: 550
Re: Stupid question
« Reply #5 on: April 16, 2019, 06:07:48 PM »

I’m afraid I don’t understand your point about annex M.

Basically, all I was trying to say was that Annex M would provide no upstream benefit and, at your distance, would actually cost you downstream if it were provisioned. The diagram says it all: with Annex M the upstream band is extended into what would be the strongest part of your Annex A downstream, which would leave you with extremely anaemic bits on the downstream, and half of the upstream band would go unused in your case.

ejs is right, Annex M is not available for long lines. Even on medium-length lines the upstream improvement can be minimal, at the cost of some downstream, and at a price that is not worth it for most. The cost point is all the stronger since 3G can match it, and 4G and FTTC can beat it while being roughly the same cost, if not cheaper.

I know you don't have FTTC available to you, but it's an example of value.
Logged
Zen Unlimited Fibre 4 G.fast (330/50 Mbps sync, 352/60 Mbps attainable) @ ~200m - Zyxel XMG3927-B50A
Three 4G LTE with Huawei B525 (peak down/up ~75/~27 Mbps, typ. 35-55 Mbps down)

aesmith

  • Reg Member
  • ***
  • Posts: 841
Re: Stupid question
« Reply #6 on: April 17, 2019, 09:16:00 AM »

Do you operate any sort of QoS or classification on the upstream? Although that won't increase the actual bandwidth available, it may make it work more efficiently. For example, a problem that we face is that any sort of upload is liable to kill download performance, as the TCP ACKs get lost among the other upload traffic. This can be resolved by a policy that prioritises (or reserves, if that's the model your gear uses) small TCP packets.

Depends what you're trying to do with upload of course.  In our case pretty much all upload traffic is background rather than interactive, stuff like Google Drive sync or Icloud backup.
Logged

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Stupid question
« Reply #7 on: April 17, 2019, 07:29:49 PM »

@aesmith My Firebrick routers are suspiciously lacking in QoS. However, they do have a fixed prioritisation for small packets, so this should, I hope, catch ACKs.
Logged

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Stupid question
« Reply #8 on: April 17, 2019, 11:45:54 PM »

I’m wondering if I should get another line installed and then cease line #3, as that is the one whose upstream SNRM flaps every day: good for n hours, bad for m hours. Currently it is upstream-synced at 376 kbps, whereas the next slowest is at 499 kbps, both at 6 dB actual upstream SNRM (and 6 dB upstream targets).

Is there a way of capping the upstream sync rate?

The other thing I am thinking about is trying rate-limiter values that are not constant fractions of the sync rate. Could there be any value at all in this?

What will a remote-end TCP do when it sniffs the timing of responses coming back from a pipe built from unevenly matched speeds in a bonded set? The downstream links happen to be very closely matched, but that one rogue line 3 upstream is only 75% of the next fastest.
Logged

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Stupid question
« Reply #9 on: April 18, 2019, 02:50:32 AM »

I tried some seemingly insane tweaking parameters which, bizarrely, were effective in improving the upstream result from the AA speed tester (speedtester2.aa.net.uk). I have no idea what it is reporting, but if it is the TCP payload for TCP + timestamps + IPv6, then it is delivering about 101.5% of what I calculate it should at my chosen rates.

The bizarre settings I tried were 95% / 95% / 90% / 95% for the modem loading factors on the four lines.

Each of those numbers is multiplied by the calculated protocol-overhead factor for all protocol layers below IP PDUs, and then by the sync rate, thereby converting sync rate into an IP PDU rate.
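That conversion can be sketched as below. The 0.8844 efficiency figure is back-calculated from line 1 of the measurements table in a later post (443.632 kbps limiter at 528 kbps sync and 95% loading); it is an inference for illustration, not a documented Firebrick constant.

```python
# Sketch of the limiter calculation described above:
#   IP-layer rate cap = sync rate x protocol efficiency x loading factor
# where 'protocol efficiency' is the fraction of the sync rate left
# for IP PDUs after all lower-layer overheads (an assumed figure here).
def limiter_kbps(sync_kbps: float, loading: float, ip_efficiency: float) -> float:
    return sync_kbps * loading * ip_efficiency

print(round(limiter_kbps(528, 0.95, 0.8844), 1))
```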

This turned out to be faster than the current and longstanding all-96% rates, which scored 1.51 Mbps on the AA speed tester, while the new lower numbers, with an exception for the sluggish line, scored 1.54 Mbps.

There was a lot of variation in these results, and the figures given are the very best values out of half a dozen runs or so. Sometimes the value reported would be very consistent, exactly the same figure from one test run to the next; sometimes not. A generous number of runs was carried out at each setting, until I could get a feel that I was not going to top the high score attained for that setting.

So I gain 30 k by reducing the rates; how on earth does that work? That is 2% more AA-reported speed obtained by turning the numbers down substantially.

And I made sure that this was not just a statistical aberration. If anything, the all-96% setting, the previous orthodoxy, rarely hit 1.51 Mbps; 1.45 Mbps was a common result, so the reported 1.51 Mbps high score was comfortably at the top end of its distribution of results.
Logged

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Stupid question
« Reply #10 on: April 19, 2019, 03:53:43 AM »

I made some actual measurements using the rate counters available from AA’s clueless.aa.net.uk server. It gives upstream/downstream rate samples and min/avg/max latency figures, based on timing the PPP LCP echo request (‘ping’-like) CQM test messages that it sends every second or so.

                  Firebrick                                 Individual tx measured
    Sync   Load   Limitr    measure   meas/lim   1-m/l      mtx1    mtx2    mtx3    mtx4    mtx5
1    528   0.95   443.632   437.200   0.985501   0.014499   437.0   436.7   437.9
2    519   0.95   436.070   425.633   0.976067   0.023933   424.7   425.9   426.3
3    376   0.90   299.292   296.900   0.992008   0.007992   296.5   297.8   295.8   296.5   297.9
4    499   0.95   419.265   409.933   0.977743   0.022257   408.9   411.2   409.7


The ‘measure’ column is the measured upstream rate for each line, and the ‘Limitr’ column is the intended rate, i.e. the rate that is supposed to be imposed by the Firebrick’s upstream rate limiters. All figures are in kbps. The rightmost columns are individual speed tests, whose arithmetic mean gives the ‘measure’ column.
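The ratio columns follow directly from the ‘Limitr’ and ‘measure’ figures; a quick check (rates in kbps):

```python
# Recomputing the meas/lim and 1-m/l columns from the table above.
# Keys are line numbers; values are (limiter, measured) in kbps.
lines = {
    1: (443.632, 437.200),
    2: (436.070, 425.633),
    3: (299.292, 296.900),
    4: (419.265, 409.933),
}
for n, (limiter, measured) in sorted(lines.items()):
    ratio = measured / limiter
    print(f"line {n}: meas/lim = {ratio:.6f}, shortfall = {1 - ratio:.6f}")
```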

This is an idea that I had which has turned out very successful: using a much lower modem loading factor on the slowest line. Note that line 3 is at 90% whereas all the others are at 95%. In various tests, this arrangement proved superior to all-equal modem loading factors, versus, say, the all-96% arrangement.

The results:

* You can see that the true ‘measure’ rates are a bit off, but not hugely. The ‘meas/lim’ column shows the ratio of the two, and the ‘1-m/l’ column gives one minus that ratio as a mental aid.

I should perhaps redo this using max() instead of averaging the data points; that would give an idea of the links’ true capacities, including the effect of the limiters if they are kicking in. I thought the limiters might show some overshoot and hunting, which is why I averaged the data points: I wanted to see what effective rate the limiters were really settling on and what the true long-term data rate was. However, I now see that both would be useful.

* I can’t see much of a pattern in the aberrations in ‘meas/lim’. Maybe some are a bit low because I am running the modems ‘too hot’, and the modems themselves are limiting throughput, not just the Firebrick.

* I can’t see a lot to be gained here by tweaking the loading factors (the percentages). I calculate there is only perhaps ~50 kbps more to be had from that kind of twiddling, if I am very, very lucky. But I wouldn’t know how to proceed anyway.

It took about 15 minutes to back up my iPad Pro to the Apple iCloud network file system yesterday morning. Felt like forever. Uploading pictures is a nightmare.

* It means, anyway, that the load-splitting in the Firebrick really does work, and throughput is good.

* I still don’t know exactly what the modem load factors should all be, in that I haven’t, say, raised them all a bit to try to reclaim some of the last remaining speed, if there is any to be had, and redone this kind of true throughput measurement. I don’t know whether 100% or 99% doesn’t work at all, or works but is over the top, so that 100% is no faster than, say, 98%.

* Those numbers are based on a calculation of protocol overheads, and there may be other real-world timing overheads, as opposed to bits-bloat overheads, which I don’t know about. I don’t know about the effect of PhyR: is there a variable slowdown when it has to work harder? A high error rate means many DTUs get L2-retransmitted some number of times; I haven’t accounted for that, and if it’s variable then I would just have to guess. Maybe input buffering in the modem handles that anyway and I shouldn’t care about it. One other bad thing is that the protocol-overhead fraction (used to convert sync rate into a maximum theoretical IP PDU rate) assumes a particular packet size. I chose the maximum-size IP PDU, 1500 bytes in my case, to give the most optimistic and favourable rate. Although that is bad, I have the modem loading factor to bring it down to reality as needed.

Short packets, however, are much less efficient because of the huge overhead: 32 bytes in my case, for PPPoEoA, ATM and so on. Also, since I am unlucky enough to be using ATM, there is the wobble in efficiency created by the 0-47 bytes of unknown additional bloat from ATM cell padding. If the idiots who ordered the DSLAMs for G.992.3/G.992.5 had only insisted on PTM support as well, then I could have used that instead of ATM, getting 10% more speed straight away (minus the small overheads of PTM). <dream>[And I’ll have a large fries, SRA and G.INP with my PTM too. Ah, to hell with it, I’ll ‘go large’ too and have Annex I with that lot as well.]</dream> If an application sent a lot of short packets back-to-back flat out, there would be potential trouble, because my numbers would be miles out: a modem loading factor of 95% would not be low enough when protocol overheads can exceed 50% [!] for very short packets.
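That cell-padding wobble can be sketched as below, assuming the 32-byte per-packet PPPoEoA overhead quoted above, plus the standard 8-byte AAL5 trailer and 53-byte ATM cells (48 bytes payload, 5 bytes header); the exact overhead breakdown on a given line may differ.

```python
import math

# Sketch of ATM (AAL5) framing efficiency for a given IP PDU size.
def atm_wire_bytes(ip_pdu_bytes: int, per_packet_overhead: int = 32) -> int:
    # IP PDU + PPPoEoA-etc. overhead + AAL5 trailer, padded up to a
    # whole number of 48-byte cell payloads, each sent in a 53-byte cell
    payload = ip_pdu_bytes + per_packet_overhead + 8
    cells = math.ceil(payload / 48)
    return cells * 53

def efficiency(ip_pdu_bytes: int) -> float:
    return ip_pdu_bytes / atm_wire_bytes(ip_pdu_bytes)

print(round(efficiency(1500), 3))  # full-size PDU: roughly mid-0.8s
print(round(efficiency(40), 3))    # ACK-sized packet: well under half
```

This is why a loading factor calibrated against 1500-byte PDUs falls apart for a stream of tiny packets: the wire cost per useful byte more than doubles.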

I don’t know enough about this, but a rate limiter could be parametrised to account for protocol overheads in a more complex way, given enough parameters.
« Last Edit: April 20, 2019, 01:23:14 AM by Weaver »
Logged

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Stupid question
« Reply #11 on: April 20, 2019, 11:21:34 PM »

I did some more tests, which merely showed that the previous results were repeatable.

I have still not yet been able to make out any pattern in the differences between rate limiter values and the measured rates. Can anyone else see anything?
Logged

burakkucat

  • Global Moderator
  • Senior Kitizen
  • *
  • Posts: 26396
  • Over the Rainbow Bridge
    • The ELRepo Project
Re: Stupid question
« Reply #12 on: April 20, 2019, 11:30:09 PM »

I did some more tests, which merely showed that the previous results were repeatable.

That is good news, as it shows you are not just chasing shadows.

Quote
I have still not yet been able to make out any pattern in the differences between rate limiter values and the measured rates. Can anyone else see anything?

I am rather puzzled. Nothing seems obvious, to me.

Would plotting the results graphically show up a trend, maybe?
Logged
:cat:  100% Linux and, previously, Unix. Co-founder of the ELRepo Project.


Weaver

  • Addicted Kitizen
  • *****
  • Posts: 7262
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Stupid question
« Reply #13 on: April 21, 2019, 02:16:46 AM »

@burakkucat - I would need to get a lot more data points, perhaps by varying the rate-limiter values across a wider range and using additional patterns for the differences between link rates. I have only looked at two patterns in detail: (i) all rates equal, and (ii) slowest link at a lower rate.

I must not get sucked into obsessing over nonsense, as calculations lead me to believe that there is not that much additional speed left there to be gained, as I mentioned earlier.

But I would like to understand how best to determine the rate-limiter values. Line 2 is 2.4% low, which is quite a significant discrepancy, especially since the values do not have to be that far ‘off’: look at line 3, for example, which is only 0.8% low.

Recall that the 100% base value (i.e. modem loading factor = 100%) is based purely on calculated protocol inefficiency below the sync rate, and knows nothing about any other real-world factors that may slow things down further in practice. So it is hardly surprising if this basis rate turns out to be a bit unrealistically high. A candidate for an alternative basis rate might be -0.8%.

Why is line 2 so low? I wonder if it is perhaps struggling a bit more than the others. If I look into its detailed stats right now, I wonder what I might find. There is no upstream PhyR, so I believe, so that is out as a possible contributor.

I think a significant test would then be to resync line 2 and see if the numbers change. Ideally it would resync at the same rate, otherwise we are changing other unwanted parameters. The ideal would be a resync that sorts out any problems link 2 might be having (and gives it a better bit loading or a better set of framing parameters, for example) but does nothing else. If the difference between real-world performance and the rate limiter’s calculated assigned value then decreases, then we maybe have the right idea.
Logged

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 5719
Re: Stupid question
« Reply #14 on: April 21, 2019, 09:08:39 AM »

The increase in speed (assuming it's consistent) is likely related to retransmits and congestion control. If TCP is too aggressive, it can hurt performance.

Perhaps not related to your question, but over the years more aggressive TCP congestion-control algorithms have been developed, which help a lot with long-distance, high-latency downloads, e.g. from the USA or Asia to the UK. But regional CDN networks are now widespread, so if you are downloading from a mainstream service it is pretty likely to be coming from inside the UK anyway. You then have an aggressive TCP algorithm running over low latency, which in my opinion is too aggressive, especially when you throw many threads on top as a bonus.

Also, from my days running high-bandwidth Linux servers on gigabit ports (before moving to 10 gig), I used to do port bonding (like your line bonding) and had to fight packet-ordering issues. On Linux you have a choice of different bonding algorithms; the one I used actually put effort into maintaining proper packet order, but I found that if the ports got saturated it wouldn't work properly, which could cause performance dips.
« Last Edit: April 21, 2019, 09:15:07 AM by Chrysalis »
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab - LINE STATISTICS CLICK HERE