Kitz ADSL Broadband Information
Author Topic: CRC errors and "stability levels"  (Read 2320 times)

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
CRC errors and "stability levels"
« on: September 04, 2021, 06:07:46 PM »

I just read Kitz's piece on the BT DLM "stability level" config options. Very useful. All the variation in naming for these is indeed confusing, and Kitz did a good job of clearing it up.

In earlier posts I have been moaning about the sporadic problems I have had with Zoom. I assume that now I'm using Zoom, I cannot afford to have any CRC errors / ES at all. I mentioned earlier that my line 1 has been showing a modest number of ES, while the other two lines are perfect, even on 3 dB SNRM. To fix line 1 I had to raise the SNRM targets from 3 dB down / 6 dB up to 6 / 9 dB, but as I mentioned earlier, for some bizarre reason the settings revert back to 3 / 6 dB after a few days. I set these SNRM values in AA's control server clueless.aa.net.uk, and when they drop back the server still shows the expected values (I think), not the true current ones; I need to recheck that though. I asked AA why this strange reversion was happening, but they didn't know.
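When I next look into it I might log what the modem actually reports against what clueless claims, so I can see exactly when the reversion happens. Something along these lines would do it; this is just a rough Python sketch, and the readings, targets and tolerance below are made-up examples for illustration, it doesn't talk to the Firebrick or to clueless itself:

Code:
# Rough sketch: compare the SNRM the modem actually reports against the
# configured targets, and flag when the line looks to have reverted to 3/6 dB.
# The readings below are made-up examples, not real data from my lines.

TARGET_DOWN_DB = 6.0   # configured downstream SNRM target
TARGET_UP_DB = 9.0     # configured upstream SNRM target
TOLERANCE_DB = 1.0     # allow a bit of normal wander before complaining

# (timestamp, downstream SNRM dB, upstream SNRM dB) as read off the modem
readings = [
    ("2021-09-01 09:00", 6.1, 9.2),
    ("2021-09-03 09:00", 5.8, 8.9),
    ("2021-09-05 09:00", 3.2, 6.1),   # looks like it has dropped back to 3/6
]

for when, down, up in readings:
    reverted = (down < TARGET_DOWN_DB - TOLERANCE_DB or
                up < TARGET_UP_DB - TOLERANCE_DB)
    status = "REVERTED?" if reverted else "ok"
    print(f"{when}  down {down:.1f} dB  up {up:.1f} dB  {status}")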

I now wonder if this could be due to interference by DLM. Is that possible? We're used to DLM interfering to slow things down, but does it do the reverse, interfere to speed things up because the error rate is too low? Of course we're all familiar with DLM improving speed again once a sufficient period has elapsed with no errors after it has raised the SNRM in response to a problem, but does DLM know that it is doing that as a response to an earlier problem, i.e. does it act statefully rather than statelessly? A stateless design is cheaper/easier and less prone to bugs, so who knows? Actually Kitz knows, if anyone does. (A sketch of what I mean by that distinction is below.)
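Just to spell out the stateful/stateless distinction, here is a toy illustration of the two designs; this is purely for illustration of the idea, not how BT's DLM actually works:

Code:
# Illustration only: two ways a DLM-like controller could decide whether to
# lower the target margin again after a quiet spell. Not BT's actual logic.

def stateless_dlm(recent_error_rate, current_margin_db):
    """Looks only at the present: low errors => try a lower margin."""
    if recent_error_rate < 1e-7 and current_margin_db > 3:
        return current_margin_db - 3   # speed the line back up
    return current_margin_db

def stateful_dlm(recent_error_rate, current_margin_db, margin_raised_by_dlm):
    """Also remembers *why* the margin is high. If the user (or ISP) set it
    high deliberately, leave it alone; only unwind changes DLM itself made."""
    if recent_error_rate < 1e-7 and margin_raised_by_dlm and current_margin_db > 3:
        return current_margin_db - 3
    return current_margin_db

# A stateless design would happily pull a deliberately raised 6 dB margin
# back down to 3 dB after a few clean days; a stateful one would not.
print(stateless_dlm(1e-9, 6))          # -> 3
print(stateful_dlm(1e-9, 6, False))    # -> 6 (left alone)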

I should have talked to AA about this, but it's the weekend and I made a stupid mistake. In clueless there are two buttons, marked "Extra Stable" and "Super Stable". There is no documentation about these, nothing in the help pages. I should have asked AA exactly what they do. But anyway, stupidly, I hit the Extra button. I guessed that these might be something to do with DLM and also that "Extra" was in some way "less" than "Super". I had not yet read Kitz's article. The aim was to reduce the level of activity of DLM and to make it aim for higher reliability at lower speed.

There is no "Normal" button, or "Standard" or "Speed" or whatever it might be titled. I assume that appears when you are in one of the non-standard states. Clueless said it was putting in a change order with BT. I wasn't expecting that, and by the look of it this takes time. All the clueless buttons have vanished and it indicates that it is in a "changing option" state. The clueless log says ServOpt -2. This makes me think of option number 2 in Kitz's table. Kitz remarks that AA uses the "speed" DLM profile by default, option 1, so this would all fit. Surely it can't require a human with a spanner to make this happen? Is the 'order' and the associated delay just because some service process only runs in the background every so often? (Or maybe it has to synchronise with the timing of the internal operation of the boxes that make up DLM.)

Anyway, it was really stupid of me not to ask. Just have to wait now and see what the behaviour is. If all this theorising turns out to be correct, then this will be a good tip for AA Zoom users, a tip to go into the help wiki.
« Last Edit: September 05, 2021, 12:16:07 PM by Weaver »
Logged

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5260
    • Thinkbroadband Quality Monitors
Re: CRC errors and "stability levels"
« Reply #1 on: September 05, 2021, 11:51:53 PM »

It's interesting you mention that, as I found game streaming completely useless the majority of the time on both my VDSL lines, when for literally everything else they are rock-solid stable.

I'm not just talking about blocking or skipping from missing data; sometimes it would outright disconnect.

So you could be right that any errors are problematic with live streams, and more so when you have bandwidth as limited as yours.

I wonder though if bonding might make it worse?
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

aesmith

  • Kitizen
  • ****
  • Posts: 1216
Re: CRC errors and "stability levels"
« Reply #2 on: September 09, 2021, 03:45:05 PM »

Streaming stuff should actually be more tolerant of errors in a way.  This is because it doesn't retransmit or retry data, so if you have a burst of errors you just get a brief interval of poor quality.  The term I use for this sort of data is "better never than late", because latency and jitter are actually worse than drops.  Nowadays I normally do any Webex stuff on our 4G.  However it works perfectly well on our ADSL within the limits of the uplink speed, and seems to be unaffected by the almost constant 60-70 CRCs per minute.  In context that's no more than one packet in 350 being affected.
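As a back-of-envelope check of that ratio, assuming the link is carrying roughly 400 packets a second while a call is up (that packet rate is a guess for illustration, not a measurement):

Code:
# Back-of-envelope check, assuming roughly 400 packets/s on the link during
# a call (the packet rate is an assumption for illustration).
crc_per_minute = 70
packets_per_second = 400
packets_per_minute = packets_per_second * 60    # 24,000 packets per minute
ratio = packets_per_minute / crc_per_minute     # ~343, i.e. about 1 in 350
print(f"about 1 errored packet in {ratio:.0f}")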
Logged

Weaver

  • Senior Kitizen
  • ******
  • Posts: 11459
  • Retd s/w dev; A&A; 4x7km ADSL2 lines; Firebrick
Re: CRC errors and "stability levels"
« Reply #3 on: September 10, 2021, 02:13:34 AM »

> "better never than late" because latency and jitter are actually worse than drops.

Absolutely, indeed. Real-time application protocols have to be designed with a high level of error correction and without retransmissions, or else to minimise the use of retransmissions so that they stay within strictly time-bound limits. Sometimes data that arrives far too late is useless by the very nature of the application: sending the current time in a real-time clock application, for example, is of no use if it takes nearly a second to get each clock-update message to the destination; sending such data late makes no sense. Designing applications that can cope with this kind of data loss ranges from difficult to outright impossible.
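To illustrate the "better never than late" idea, here is a toy sketch of a receiver with a playout deadline; the 100 ms figure is arbitrary and this is not any particular protocol:

Code:
# Toy receiver for a real-time stream: a packet that arrives after its playout
# deadline is simply discarded rather than waited for or retransmitted.
PLAYOUT_DELAY_S = 0.10   # 100 ms jitter buffer (arbitrary for illustration)

def handle_packet(sent_at, arrived_at, payload):
    age = arrived_at - sent_at
    if age > PLAYOUT_DELAY_S:
        return None              # too late to be useful: drop it silently
    return payload               # in time: hand it to the decoder

print(handle_packet(0.00, 0.04, "frame 1"))   # arrives in time -> played
print(handle_packet(1.00, 1.95, "frame 2"))   # nearly a second late -> None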

> I wonder though if bonding might make it worse?
No, I don't think so. The increased jitter might create certain types of problems for application protocols. Increased jitter would of course only arise when the links have different latencies, and in an extreme case PDUs could arrive out of sequence, which would be more likely if retransmissions are available and are used to differing degrees on the individual links. But without increased jitter I don't see how bonding itself could cause any problems, although if those conditions do apply then bonding is associated with such errors.
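To illustrate the re-ordering point, here is a made-up two-link example (the delays and the round-robin split are assumptions for illustration, not how the Firebrick or AA's bonding actually schedules packets):

Code:
# Made-up example: one packet stream split round-robin across two links with
# different one-way delays. Packets can overtake each other, so the far end
# either re-sequences them (adding jitter) or delivers them out of order.
import itertools

LINK_DELAY_S = [0.020, 0.045]     # link 0 is 20 ms, link 1 is 45 ms (assumed)

arrivals = []
for seq, link in zip(range(8), itertools.cycle([0, 1])):   # round-robin split
    send_time = seq * 0.010                                 # one packet per 10 ms
    arrivals.append((send_time + LINK_DELAY_S[link], seq))

for arrive_time, seq in sorted(arrivals):
    print(f"t={arrive_time*1000:5.1f} ms  packet {seq}")
# With these delays packet 2 (fast link) arrives before packet 1 (slow link),
# so the receiver sees 0, 2, 1, 4, 3, ... unless it buffers and re-orders.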
« Last Edit: September 10, 2021, 02:28:07 AM by Weaver »
Logged