Author Topic: Is it possible to force on interleaving (upstream) on a VDSL modem in the UK?  (Read 7082 times)

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5289
    • Thinkbroadband Quality Monitors

As I understand it, they try to avoid interleaving on the upstream as some modems are buggy and don't like it.
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

cbdeakin

  • Reg Member
  • ***
  • Posts: 101

It's a shame Plusnet is no longer letting people trial G.INP on ECI cabinets (this was part of an Openreach trial).

EDIT - Actually, I think G.INP never worked on the upstream on this equipment anyway  ::)
« Last Edit: April 06, 2022, 12:59:03 AM by cbdeakin »
Logged
Aluminium line. ECI cabinet. Fun times.

tubaman

  • Senior Kitizen
  • ******
  • Posts: 12674

Quote from: cbdeakin
It's a shame Plusnet is no longer letting people trial G.INP on ECI cabinets (this was part of an Openreach trial).

EDIT - Actually, I think G.INP never worked on the upstream on this equipment anyway  ::)

Even on Huawei cabinets it is quite unusual to see G.INP on the upstream side. I believe I've seen it twice on my connection, following high error rates, and even then it didn't last more than a day or so before DLM removed it again.
Logged
BT FTTC 55/10 Huawei Cab - Zyxel VMG8924-B10A

g3uiss

  • Kitizen
  • ****
  • Posts: 1151
  • You never too old to learn but soon I may be
    • Midas Solutions

Quote from: cbdeakin
It's a shame Plusnet is no longer letting people trial G.INP on ECI cabinets (this was part of an Openreach trial).

EDIT - Actually, I think G.INP never worked on the upstream on this equipment anyway  ::)

Of course it wasn't Plusnet that stopped G.INP; Openreach stopped the trial because it didn't work in all scenarios. G.INP has never been successfully implemented on ECI cabs, despite attempts to make it so.
Logged
Cerebus FTTP 500/70 DrayTek 2927 VOXI 4G fallback.

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7411
  • VM Gig1 - AAISP CF

Quote
Indeed. The killer is when aluminium and copper are jointed - it creates an impedance mismatch and you end up with literal signal reflections increasing noise.

Would you think it possible to create enough noise to interleave a line?

My own line was repaired after it had a very low sync (documented here on kitz), and not long after, sudden noise appeared.  What I do know about the line is that approximately the last 50m is aluminium; the rest before it is copper.  The engineer on the fault had to redo joints.  The new issue has never been reported, of course, as I don't think it falls outside of Openreach's specification (I lost some sync speed, but I'm still 25Mbit above handover, and stability is fine).

Also, talking about odd DLM profiles, if people are willing to search: back in the days when I was on Plusnet, I had a fault which had me syncing very low, and the service test itself failed on the basis of a sudden loss of speed exceeding the threshold. DLM ended up configuring FEC on the line but without interleaving delay; I have never seen a single other VDSL line configured that way. If I remember right, that configuration remained until the fault was fixed by the speed boost engineer (my install engineer), who also did a DLM reset, and I have never seen it since.
« Last Edit: April 08, 2022, 03:18:22 AM by Chrysalis »
Logged

cbdeakin

  • Reg Member
  • ***
  • Posts: 101

So, I never did manage to switch to Plusnet (I was going to choose a more stable DLM profile setting), as the online order process failed several times.

Anyway, it looks like it won't be necessary now :)

I think I may have stumbled on some router SQM settings (I use OpenWrt on a Raspberry Pi 4) that appear to prevent packet loss on my FTTC line. Here are the results in Google Stadia after a 20-minute session:

https://i.imgur.com/lKlqixt.jpg

Here are the settings I'm currently using:

https://i.imgur.com/WbqmJbw.jpg

The Ingress setting above seems particularly important. I'm guessing it basically increases the maximum number of downstream packets allowed per second, or maybe it's related to the number of traffic flows allowed.

and this:

https://i.imgur.com/UM0jKCK.jpg

I need to test this option more, but I think a value of around 256 (the highest possible setting) is necessary for maximum reliability in this case. Increasing this setting reduces bandwidth a bit and sometimes increases latency, but in my view it's well worth it.

This was tested with the fq_codel queue discipline, which gives better results than the other options on my line.
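For the curious, this is roughly what a raised 'flows' value looks like as a raw tc command (a minimal sketch, not the exact command SQM generates; 'pppoe-wan' is a placeholder device name):

Code:
# Minimal sketch: attach fq_codel with a raised 'flows' count to a
# hypothetical WAN device. Note that 'flows' sets the number of hash
# buckets packets are classified into (default 1024); it is not a
# packets-per-second limit.
tc qdisc replace dev pppoe-wan root fq_codel flows 100000 ecn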
« Last Edit: April 25, 2022, 03:22:47 AM by cbdeakin »
Logged
Aluminium line. ECI cabinet. Fun times.

cbdeakin

  • Reg Member
  • ***
  • Posts: 101

Do we have any OpenWrt experts on these forums who can explain what inputting "flows 100000" for the ingress and egress (for the settings shown above) actually does?
Logged
Aluminium line. ECI cabinet. Fun times.

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5289
    • Thinkbroadband Quality Monitors

I certainly can't make head nor tail of it; SQM is supposed to be about sharing bandwidth fairly and keeping latency low.  If it's changing packet loss, that has to be a side-effect somehow.

Quote
CAKE prevents the queue building up and ensures fair access by using a variation of codel to control delay (latency) on individual flows.

I'm not exactly sure what a "flow" is, or whether you are increasing or decreasing it from the default.  I honestly wouldn't expect it to make any difference unless the link is maxed out.

Per Packet Overhead, AFAIK, is to do with calculating the REAL bandwidth vs your link rate.  Not sure if increasing it is really any different from just setting the egress and ingress values lower.

I wonder if it's doing something with the MTU, causing packets to be smaller?
« Last Edit: April 25, 2022, 11:01:50 PM by Alex Atkin UK »
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

cbdeakin

  • Reg Member
  • ***
  • Posts: 101

That was a good tip about the Link Layer Adaptation. I've set it to 0 now, and the downstream bandwidth to 39000 kbps (down from 43000 kbps).

The latency appears to be steady now :)
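(For context, a rough sketch of what SQM-style ingress shaping does with that 39000 kbps figure under the hood — device names and handles are illustrative, loosely modelled on the ifb4pppoe-WAN device that appears in the log later in this thread, not the exact commands sqm-scripts runs:)

Code:
# You can only queue traffic you transmit, so SQM redirects downstream
# traffic to an IFB device and shapes it on that device's egress.
ip link add name ifb4pppoe-WAN type ifb
ip link set dev ifb4pppoe-WAN up
tc qdisc add dev pppoe-WAN handle ffff: ingress
tc filter add dev pppoe-WAN parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb4pppoe-WAN
# Shape to just below the sync rate so the queue forms here rather than
# upstream in the cabinet, then let fq_codel manage that queue.
tc qdisc add dev ifb4pppoe-WAN root handle 1: htb default 10
tc class add dev ifb4pppoe-WAN parent 1: classid 1:10 htb rate 39000kbit
tc qdisc add dev ifb4pppoe-WAN parent 1:10 fq_codel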
Logged
Aluminium line. ECI cabinet. Fun times.

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7411
  • VM Gig1 - AAISP CF

Quote from: cbdeakin
Do we have any OpenWrt experts on these forums who can explain what inputting "flows 100000" for the ingress and egress (for the settings shown above) actually does?

It's a limit on concurrent flows.  What happens when it's reached, though, I have no idea.  But that will be plenty for a consumer connection.
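(A quick way to see what's actually applied, and whether the limit ever matters, is to read the live qdisc stats — the ifb4pppoe-WAN name here is taken from the log quoted later in the thread:)

Code:
# Show the qdiscs and live counters on the ingress IFB device; the
# fq_codel line reports the active 'flows' value, and the drop /
# overlimit counters show whether the queue is under pressure.
tc -s qdisc show dev ifb4pppoe-WAN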
Logged

cbdeakin

  • Reg Member
  • ***
  • Posts: 101

The only drawback seems to be increased latency, which I presume would only get worse if I added another 0.

I haven't noticed increased latency during a stream with my current settings, only on Waveform's bufferbloat test, and only on the upstream (300ms under load!).

Reducing both settings to 'flows 10000' results in around 0.7% packet loss on M-Lab's speed test. When I tested it in Stadia with this value, it was a similar story.

« Last Edit: April 26, 2022, 03:56:01 PM by cbdeakin »
Logged
Aluminium line. ECI cabinet. Fun times.

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5289
    • Thinkbroadband Quality Monitors

Quote from: Chrysalis
It's a limit on concurrent flows.  What happens when it's reached, though, I have no idea.  But that will be plenty for a consumer connection.

I was thinking more about what the definition of a flow is in this context.  I was wondering if perhaps the more flows allowed, the smaller the chunks it splits the data into, and thus it sends smaller packets, which is somehow mitigating the packet loss?  That seems kind of unlikely though, as the traffic we're talking about is incoming; unless this configuration is somehow triggering the other end to use smaller packets, I don't see how it's having an impact on a connection that isn't heavily loaded.
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

cbdeakin

  • Reg Member
  • ***
  • Posts: 101

This is what shows up in OpenWrt's log if I just input '100000':

Code:
user.notice SQM: ERROR: cmd_wrapper: tc: FAILURE (1): /sbin/tc qdisc add dev ifb4pppoe-WAN parent 1: handle 110: fq_codel limit 1001 target 5000us interval 100000us noecn flows 1024 100000

user.notice SQM: ERROR: cmd_wrapper: tc: LAST ERROR: What is "100000"? Usage: ... fq_codel   [ limit PACKETS ] [ flows NUMBER ] [ memory_limit BYTES ] [ target TIME ] [ interval TIME ] [ quantum BYTES ] [ [no]ecn ] [ ce_threshold TIME ] [ drop_batch SIZE ]
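(What appears to be happening: sqm-scripts pastes the advanced option string verbatim after its own fq_codel defaults, so a bare number leaves tc with a stray token it can't parse. A sketch of the difference, assuming tc's usual last-value-wins option parsing:)

Code:
# Broken - the bare value lands after the defaults and tc chokes on it:
#   ... fq_codel limit 1001 target 5000us interval 100000us noecn flows 1024 100000
# Working - with the keyword included the string parses as an option,
# and the later 'flows 100000' should override the default 1024:
#   ... fq_codel limit 1001 target 5000us interval 100000us noecn flows 1024 flows 100000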
« Last Edit: April 26, 2022, 04:21:49 PM by cbdeakin »
Logged
Aluminium line. ECI cabinet. Fun times.

cbdeakin

  • Reg Member
  • ***
  • Posts: 101

So, the settings I'm using so far: fq_codel for the queue discipline.

ECN on for both upstream and downstream - needed to deal with network congestion, especially on the upstream.

DSCP packet options enabled.

'Advanced option string to pass to the ingress queueing disciplines' = 'flows 100000000'

With the egress equivalent option left blank.

Per Packet Overhead = 0

It's working really well, no hiccups. I don't know yet if there will be any negative side effects from setting the ingress queueing to such a high figure.
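(For anyone wanting to try the same thing, the settings above expressed in /etc/config/sqm would look roughly like this — a sketch from memory of the sqm-scripts option names, with the interface and upload rate as placeholders; check the LuCI SQM page on your build for the exact names:)

Code:
config queue 'wan'
        option enabled '1'
        option interface 'pppoe-WAN'           # placeholder WAN device
        option download '39000'                # kbps, from earlier in the thread
        option upload '9000'                   # placeholder upstream rate
        option qdisc 'fq_codel'
        option script 'simple.qos'
        option linklayer 'none'                # Link Layer Adaptation off
        option overhead '0'                    # Per Packet Overhead = 0
        option qdisc_advanced '1'
        option ingress_ecn 'ECN'               # ECN on for downstream
        option egress_ecn 'ECN'                # ECN on for upstream
        option squash_dscp '0'                 # keep DSCP markings
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'flows 100000000'   # ingress advanced option string
        option eqdisc_opts ''                  # egress left blank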
« Last Edit: April 28, 2022, 01:11:46 PM by cbdeakin »
Logged
Aluminium line. ECI cabinet. Fun times.

cbdeakin

  • Reg Member
  • ***
  • Posts: 101

Has anyone else tried to use OpenWrt (SQM in particular) to resolve/avoid packet loss issues?

I'm guessing this would probably work on other routers with SQM too, but maybe not quite so well.
Logged
Aluminium line. ECI cabinet. Fun times.