Kitz Forum

Broadband Related => Broadband Technology => Topic started by: GaryW on December 18, 2017, 05:51:51 PM

Title: VDSL - tones used in each band & why
Post by: GaryW on December 18, 2017, 05:51:51 PM
Hi all,

A little background may explain why I'm asking this question  :)

I'm on VDSL on a fairly long line.  In the past, my downstream sync speed varied seemingly at random between around 8000kbps and 11400kbps (and between low and high interleaving).  When I finally started to read up on how VDSL works, I realised that I was getting bursts of huge numbers of ES, so I set to work removing any sources of noise under my control.  After binning my Powerline adapters, replacing fluorescent strip lights with LEDs, fitting Delta Suppression Filters to the CH boiler and motorised valves, fitting a mk4 faceplate, running the shortest VDSL cable possible, etc., I had virtually no ES (typically 30-ish per day).  That was a little over 2 months ago and DLM has been slowly relenting - the first step up to 13096 kbps came after one month, and the second step up to 14999 kbps after a further month (on 5th Dec).  And I'm still getting fewer than 100 ES on a typical day, and fewer than 200 ES on the occasional bad day.  (I'm assuming I'm currently banded at 15M.)

My question relates to how/why VDSL decides to use a subset of the tones in a band rather than the full set of tones.  Does it use all of the tones that it possibly can on your line and spread the bits across all those tones...which would potentially result in a very high SNRM?  Or does it use just enough tones to allow enough bits to be loaded to hit the DLM banding (with a bit of wriggle-room for bit-swapping) whilst maintaining a good-enough SNRM?

Here's what I see from pbParams:

Discovery Phase (Initial) Band Plan
US: (6,31) (882,1193) (1984,2770)
DS: (33,857) (1218,1959) (2795,4083)
Medley Phase (Final) Band Plan
US: (6,31)
DS: (67,785)

i.e. it's not using the tones in DS1 from 33-66 or from 786-857.  Based on QLN, HLog and SNR/tone (the latter obviously for the tones it is using) it looks like most, if not all, of the missing tones should be usable...but they're not being used.  [ You can see my stats on MDWS under GaryW ].

Hope that all makes sense!

Cheers,
Gary
Title: Re: VDSL - tones used in each band & why
Post by: niemand on December 20, 2017, 04:03:41 PM
Regarding the number of tones in use: the more tones loaded, the lower the transmit power per tone.  Lower power means lower SNR, so spreading across every tone isn't always worthwhile; it can be better to load up a smaller number of them.
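
To put rough numbers on that (a toy Python illustration only - the power budget figure is invented, and real per-tone levels are governed by PSD masks rather than a simple even split):

Code:
import math

# Toy model: a fixed total transmit power budget shared equally across
# however many tones are loaded. Figures are invented for illustration.
TOTAL_POWER_MW = 100.0  # hypothetical total budget in milliwatts

for n_tones in (256, 512, 1024):
    per_tone_mw = TOTAL_POWER_MW / n_tones
    per_tone_dbm = 10 * math.log10(per_tone_mw)
    print(f"{n_tones:5d} tones -> {per_tone_dbm:6.2f} dBm per tone")

# Each doubling of the tone count costs ~3 dB per tone, which comes
# straight off the per-tone SNR at the receiver.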

I'm not aware of any standard governing this.  Modems vary in their behaviour depending on both hardware and firmware revisions.
Title: Re: VDSL - tones used in each band & why
Post by: GaryW on December 20, 2017, 10:45:33 PM
Interesting - so it's the modem that makes the decision rather than the DSLAM?
Title: Re: VDSL - tones used in each band & why
Post by: licquorice on December 21, 2017, 09:25:51 AM
Presumably the modem in the DS direction and the DSLAM in the upstream direction, as each is in the right position to analyse the incoming signal and make the decision.
Title: Re: VDSL - tones used in each band & why
Post by: Ixel on December 21, 2017, 10:18:39 AM
Presumably the modem in the DS direction and the DSLAM in the upstream direction, as each is in the right position to analyse the incoming signal and make the decision.

This I believe is also true.

I say this because when I used the DSL-AC68U via TC Console I was able to fully control the downstream side of things, such as the max sync rate (I could go faster than the DSLAM would allow, for example), the target SNRM, INP and delay parameters.  For the tones, I think the modem can override what the DSLAM says, at least on the ASUS: I could control both the min and max downstream and upstream tone ranges allowed, and it worked when I did that.  If the modem says it can't support tones X to Y, then the DSLAM should surely co-operate with that.  As for the upstream max sync rate, target SNRM, INP and delay, I think the DSLAM has full control of those and the modem can't have a say.  The only exception on the upstream is banding to a lower sync rate, which modems on a Broadcom chipset are capable of.
Title: Re: VDSL - tones used in each band & why
Post by: kitz on December 22, 2017, 11:40:03 AM
Quote
My question relates to how/why VDSL decides to use a subset of the tones in a band rather than the full set of tones.  Does it use all of the tones that it possibly can on your line and spread the bits across all those tones...which would potentially result in a very high SNRM?  Or does it use just enough tones to allow enough bits to be loaded to hit the DLM banding (with a bit of wriggle-room for bit-swapping) whilst maintaining a good-enough SNRM?


It uses all of the tones it can across each channel.  Certain tones will be subject to PSD masking and Power Cut Back, and thus have a lower [true] SNR on those particular tones.

DSL uses the waterfill method to fill up the tones in a sub-channel.  E.g. every bin in the channel range with sufficient SNRm* will get 1 bit.  It then goes across the sub-channel again, and those with sufficient SNRm will get 2 bits...  and so on.
When sufficient bits to meet the maximum speed have been loaded, it simply stops filling any more bins.
This method allows a fairer distribution of bits to each bin... by ensuring that the lower-frequency tones don't fill completely while higher frequencies, which may be perfectly capable of carrying data, are left with nothing loaded.
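
A minimal sketch of that pass-by-pass loading in Python (my own toy code, not anything from a real modem - the 6 dB base and 3 dB-per-extra-bit figures are rule-of-thumb assumptions, not values from any standard):

Code:
def water_fill(tone_snr_db, max_bits_total, bits_per_tone_cap=15):
    """Round-robin bit loading: sweep the tones repeatedly, adding one
    bit per pass to every tone that can still support it, and stop as
    soon as the target number of bits has been loaded."""
    BASE_SNR_DB = 6.0        # assumed SNR needed for the first bit
    DB_PER_EXTRA_BIT = 3.0   # assumed cost of each additional bit
    bits = [0] * len(tone_snr_db)
    loaded = 0
    for level in range(1, bits_per_tone_cap + 1):   # pass 1, pass 2, ...
        needed = BASE_SNR_DB + DB_PER_EXTRA_BIT * (level - 1)
        for i, snr in enumerate(tone_snr_db):
            if loaded >= max_bits_total:
                return bits                         # target met: stop filling
            if bits[i] == level - 1 and snr >= needed:
                bits[i] = level
                loaded += 1
    return bits

# E.g. a line capped by DLM banding: loading stops once the bit target
# is met, leaving every tone partially filled rather than maxed out.
print(water_fill([40, 38, 35, 30, 25, 18, 12], max_bits_total=20))
# -> [3, 3, 3, 3, 3, 3, 2]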


The waterfill algorithm is a function of the modem.  The modem is responsible for maintaining the Bit Allocation Table and also Bit Swap.
PSD masks are set at the DSLAM.


-----
Although I've said SNRm when talking about water-filling, this is just for the sake of simplicity; there's also an allowance for something called the Bit Error Rate (http://www.kitz.co.uk/adsl/adsl_technology.htm#ber).  The BER is factored in when calculating the SNRM.
Title: Re: VDSL - tones used in each band & why
Post by: kitz on December 22, 2017, 11:57:52 AM
ETA.  There's a white paper here.  I haven't read this particular paper and don't have time to at the moment, but from a brief scan it discusses the benefits of various water-fill algorithms to help combat FEXT.  My previous post was based on stuff I learnt long ago about DSL, and is why I said it was a bit more complex than just SNRM.

http://flylib.com/books/en/3.206.1.99/1/

One thing I have noticed is that on shorter lines, the Openreach DSLAMs appear to give priority to filling U2 before U1 - regardless of what PCB masks are set on U0 & U1.  By filling as much as you can in the higher frequencies, this helps ensure the lower tones never get crowded out.

Title: Re: VDSL - tones used in each band & why
Post by: Chrysalis on December 22, 2017, 05:35:11 PM
Interesting, kitz, thanks. To me waterfill seems bad as higher tones are more likely to generate ES. I have never heard of bit loading by itself being a cause of crosstalk, but if it was combined with lowering transmit power on the lower tones then it would work. When back on the PC I'll have a read.
Title: Re: VDSL - tones used in each band & why
Post by: kitz on December 22, 2017, 08:15:10 PM
DSL in the UK has always used the water-fill method - well, at least since MaxDSL & ADSL2(+) were introduced.

Still not read that paper.. but the FEXT theory is logical to me.
Especially when combined with dynamic power management, which, as you recently found out, Openreach are using on at least some cabs... and I totally believe they are.  BT invested in new RAMBO boxes last year and it makes perfect sense that they would be doing the calculations for DSM.

Filling the higher-frequency tones as much as possible means that they can reduce power at the lower frequencies for short lines.  It's the reduction of power which reduces x-talk for any longer lines on the same cab.

Result - short lines shouldn't see any difference. It's not much different from traditional PSD masking, just more relevant to the individual line. There will still be the inbuilt BER allowance of 10^-7.
Longer lines will benefit and should get more bits loaded per bin at those lower frequencies... thus better sync speed.

I suppose what could really muck it up is if you had a modem which could over-ride any dynamic spectral management.
Title: Re: VDSL - tones used in each band & why
Post by: burakkucat on December 22, 2017, 08:58:06 PM
I suppose what could really muck it up is if you had a modem which could over-ride any dynamic spectral management.

For some reason, I wanted to type the name "Asus".  ::)
Title: Re: VDSL - tones used in each band & why
Post by: kitz on December 22, 2017, 09:03:55 PM
i.e. it's not using the tones in DS1 from 33-66 or from 786-857.  Based on QLN, HLog and SNR/tone (the latter obviously for the tones it is using) it looks like most, if not all, of the missing tones should be usable...but they're not being used.  [ You can see my stats on MDWS under GaryW ].


Just noticed this. Not looked at your stats.. but aren't those particular tones the usual guard bands, or at least near to them?  DSL uses stop bands between the different sub-channels.  It's not unusual for the DSLAM to add guard bands either side of the stop band.  Different DSLAM manufacturers may use slightly different guard and band plans.

Whilst the defined stop band between U0 and D1 may be tone 32,  all DSLAMs add further guard bands to either side of the stop band to prevent cross-talk between up and downstream.   
Title: Re: VDSL - tones used in each band & why
Post by: kitz on December 22, 2017, 09:20:14 PM
For some reason, I wanted to type the name "Asus".  ::)

Yes, me too.  There's something that's been puzzling me for ages: how those modems can over-ride some of the standard DSLAM settings, such as the target SNRm, when in theory they shouldn't be able to.  There are two types of DLM method - I can't recall the names of them without looking them up, and I'm too tired and weary today and have other things I should be doing, so I know I won't take in any new info.. which is also why I won't read that white paper today.

But back to the two different types of DLM management, and the one that supposedly doesn't use a target SNRm... how can it over-ride it?
Then I got to thinking when typing my earlier post.. yeah, the modem does do the bit allocation and is responsible for the bit allocation algorithms. Which algorithm they use is down to the modem manufacturer.
Now there is something the modem could do to perhaps over-ride the system.  What if it messed with the BER/bit allocation - could it have a side effect on the SNRM?  I don't know enough about the modem for it to give me much more than a pause for thought and make me go hmmm.

I don't know much about the modem aspect when it comes to manufacturers' specifications etc. - ejs is more into that and he may know a bit more.

   
Title: Re: VDSL - tones used in each band & why
Post by: burakkucat on December 22, 2017, 09:33:58 PM
But back to the two different types of DLM management, and the one that supposedly doesn't use a target SNRm... how can it over-ride it?
Then I got to thinking when typing my earlier post.. yeah, the modem does do the bit allocation and is responsible for the bit allocation algorithms. Which algorithm they use is down to the modem manufacturer.
Now there is something the modem could do to perhaps over-ride the system.  What if it messed with the BER - could it have a side effect on the SNRM?  I don't know enough about the modems to do anything other than give me pause for thought.

Hmm . . . that's an interesting thought, and it appears to be quite plausible.
Title: Re: VDSL - tones used in each band & why
Post by: j0hn on December 23, 2017, 01:03:35 AM
You can choose to ignore specific tones with said Asus devices. There's not much you can't override.
Title: Re: VDSL - tones used in each band & why
Post by: kitz on December 23, 2017, 02:23:42 AM
Quote
to me waterfill seems bad as higher tones are more likely to generate ES.

I was thinking about this some more and that risk is minimal.  In fact it's totally negligible on any line running at full capacity.  The water-fill method only really affects the shortest of lines capable of getting full sync, by forcing more bits to be loaded at the higher end of the spectrum.

One of the other methods of bit loading is the waterfall method, which fills each tone up completely before moving on to the next.  If you had a 40/10 provisioned line that was next to the cab, using the waterfall method this line would totally fill all 15 bits at the lower end of the spectrum and never use any of the higher frequencies.  The water-fill method forces the line to use higher frequencies across the full spectrum range.

Trying to think of an easy way to explain the two types.  Imagine all the individual bins are test tubes in a row; the test tubes are of various lengths depending upon the SNRM.  Waterfall fills the first test tube with a hose pipe and the water then overflows into the next test tube, until you've either used 80 pints of water or filled each tube to as much capacity as it can take.
Water-fill is a bottom-up approach, based on the fact that water always finds its own level: you're simultaneously pumping the 80 pints of water into each of the test tubes from a hole in the bottom.. until the tubes are full or the water runs out.
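
For anyone who prefers code to test tubes, here's a toy side-by-side of the two strategies in Python (the per-tone capacities and bit budget are invented, purely for illustration):

Code:
def waterfall(capacity, budget):
    """Fill each tone to its full capacity before moving to the next."""
    bits = [0] * len(capacity)
    for i, cap in enumerate(capacity):
        take = min(cap, budget)
        bits[i] = take
        budget -= take
        if budget == 0:
            break
    return bits

def water_fill(capacity, budget):
    """Add one bit per tone per pass, so the load spreads evenly."""
    bits = [0] * len(capacity)
    while budget > 0:
        progressed = False
        for i, cap in enumerate(capacity):
            if bits[i] < cap and budget > 0:
                bits[i] += 1
                budget -= 1
                progressed = True
        if not progressed:      # every tone already full: stop early
            break
    return bits

caps = [15, 15, 14, 12, 9, 6, 3]   # invented per-tone bit capacities
print(waterfall(caps, 40))   # [15, 15, 10, 0, 0, 0, 0] - low tones crowded
print(water_fill(caps, 40))  # [7, 6, 6, 6, 6, 6, 3] - spread across all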


As mentioned above, UK DSL has been using the water-fill method since at least rate-adaptive DSL (MaxDSL and ADSL2(+) with BE* & UKOnline).
Here's a screen cap of my MaxDSL line syncing at 8Mbps, quite clearly showing the water-fill method in use; otherwise it would easily fill at 15 bits over the lower spectrum, leaving nothing loaded in the higher frequencies.
(http://www.kitz.co.uk/routers/images/DMTv7_channels.png)

Water-filling is much more efficient, especially when you have multiple line lengths on the DSLAM, and when used in conjunction with power management and/or masks it allows more capacity for longer lines.
If you are going one step further and using DSM, you greatly reduce the risk of cross-talk on longer lines.   
Whilst it's good, it would be even better if they also used vectoring.  ;)

I'm not sure if the preference for loading U2 over U1 on shorter lines is a BT thing or global.. it could be more to do with upstream PBO in U1 rather than a direct result of water-filling.  I only mentioned it because it's considered best practice to force capable lines to use the higher tones.
Title: Re: VDSL - tones used in each band & why
Post by: kitz on December 23, 2017, 02:37:42 AM
You can choose to ignore specific tones with said Asus devices. There's not much you can't override.

Which unfortunately then degrades neighbouring lines.    >:(
There's a damn good reason why things like spectral shaping & power management are so important. It's like someone turning up their radio so the neighbour can't hear their own.

In a way I can perhaps understand people who know what they are doing wanting to tweak the target SNRM if the line is stable. Nor can I see too much harm in being able to ignore a specific tone - again, as long as the EU knows what they are doing - as there may be cases where that could be useful, such as if you live in an area that's getting RFI at a particular frequency.
But I don't think being able to mess with power or changing band plans on EU whim is a good idea at all.  :no:
Title: Re: VDSL - tones used in each band & why
Post by: Chrysalis on December 23, 2017, 06:45:23 AM
Yeah, I know it's always been used; I remember in the ADSL Max days short lines not fully loading up the lower tones :). And yes, waterfill vs waterfall would probably not have much different results on lines unable to reach full sync; the increased ES would just be an impact on lines hitting the sync cap with a lot of spare margin.  But those lines would also be the most likely to be stable anyway, so it's probably considered OK for them to run non-optimally to help weaker lines.

I will read it when I get the time and motivation. :)

Now there is something the modem could do to perhaps over-ride the system.  What if it messed with the BER/bit allocation - could it have a side effect on the SNRM?  I don't know enough about the modem for it to give me much more than a pause for thought and make me go hmmm.

I don't know much about the modem aspect when it comes to manufacturers' specifications etc. - ejs is more into that and he may know a bit more.

   

I think it works something like this kitz.

The modem at the receiving end controls the actual bit allocation, target SNRM etc.; however, the DSLAM can issue a request (instructions) on how the modem should behave. I expect in recent years modem vendors have (mostly) all cooperated and made the modem simply honour what the DSLAM requests, but for whatever reason the ASUS chipset vendor decided not to do this.  Of course, the likes of Broadcom make chipsets for both ends of the connection, so it's easier for them to make sure it all works in tandem.
Title: Re: VDSL - tones used in each band & why
Post by: kitz on December 28, 2017, 08:51:24 PM
As mentioned in another post, I started to respond to this a few days ago and had to stop.  I was going to attempt to look up some links to post but never got that far...  so rather than completely abandon the post, I'll just paste where I'd got up to before Xmas. So here it is, warts and all.

----

Quote
I think it works something like this kitz.


Yes, I know that bit and the standard DMT stuff, which I'm au fait with.. but I was referring to the more complex workings of PHY management, which is seldom ever discussed but is essential for DLM (& DSM) to work.  PHY management is part of the modem firmware and thus something I don't know too much about, but I think ejs dabbles in this area when hacking firmware, which is why I said he may know more about what ASUS have done.

The ATU-R expects to receive certain configuration parameters from the DSLAM prior to initialisation to be able to achieve a successful sync.
It's not anything to do with specific (BCM) chipsets, but is a defined ITU-T standard in its own right.
G.997.1 (Physical layer management for digital subscriber line transceivers/ PHY management) is quite an interesting document if you have time to sit down and digest it all.   

I attempted to several years ago, purely because G.997.1 was central to the BT-v-ASSIA court case and BT/Openreach RAMBO boxes' use of something called PLOAM.  PLOAM is what is referred to as the Q interface in G.997.1.

I'm not 100% certain, but I think (or at least thought)* it was impossible to over-ride some of these settings, which is why I always wondered how ASUS modems were managing to do what they do on the Openreach system.
 
However, I believe it's possible that the modem could adjust GAIN and bit loading during showtime (i.e. after the modem has sync'd). It has to be able to do this because that's in part what the bitswap process is about.  So my pause-for-thought moment yesterday was that these modems could be messing with BER/GAIN/bit load, perhaps during showtime.  We know that, as part of the bitswap process, the modem can increase GAIN for specific tones.  Now suppose, for example, you forcibly increased GAIN over the full range of tones: that would increase the dB power, which in turn increases the SNRM.
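
As a toy back-of-envelope version of that idea (invented figures - the 6 dB base and 3 dB-per-bit requirement are the usual rules of thumb, not anything out of G.997.1):

Code:
def margin_db(snr_db, bits, gain_boost_db=0.0):
    """Toy margin estimate: assume a tone carrying b bits needs roughly
    6 + 3*(b - 1) dB of SNR; margin is whatever SNR is left over."""
    required = 6.0 + 3.0 * (bits - 1)
    return (snr_db + gain_boost_db) - required

print(margin_db(30.0, 7))                     # 6.0 dB margin as trained
print(margin_db(30.0, 7, gain_boost_db=3.0))  # 9.0 dB after a +3 dB gain hike

# In reality any such boost would still be bounded by the PSD mask, but
# it shows how a blanket gain increase maps directly onto the margin.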


*Back to the two different types of DLM management I was talking about yesterday, which I still can't recall the names of.
Type 1 uses a target SNRm as a parameter, so it could be possible to get the modem to adjust the SNRM.   20CN/21CN uses this type of system.
Type 2 uses capping/banding rather than a target SNRm as its primary parameter.  NGA uses this type of system, and thus a straightforward over-ride of the target SNRM should be impossible on a DSM-based system.

_______

As a side note, the courts threw out any breach-of-patent claims on the Type 1 system.  ASSIA won for Type 2, but from what I read it was more to do with the ILQ aspect - ILQ being the DLM status colour which I explained long ago, and which we knew about before the ASSIA court case.
Reading between the lines, ASSIA's successful claim was based on the fact that the NGA system monitored some data via RAMBO to get information for ILQ to decide when to change DLM parameters.
 - ASSIA systems also use Type 2 DLM and DSM
 - Courts dismissed claims of patent breach on the ILQ method used on BT's Type 1 DLM, because it was general practice
 - Since the court case, Openreach DLM removal of banding/capping has been borked, yet ILQ continues to work for INP.


Title: Re: VDSL - tones used in each band & why
Post by: GaryW on December 29, 2017, 12:12:14 PM
Just noticed this. Not looked at your stats.. but aren't those particular tones the usual guard bands, or at least near to them?  DSL uses stop bands between the different sub-channels.  It's not unusual for the DSLAM to add guard bands either side of the stop band.  Different DSLAM manufacturers may use slightly different guard and band plans.

Whilst the defined stop band between U0 and D1 may be tone 32,  all DSLAMs add further guard bands to either side of the stop band to prevent cross-talk between up and downstream.   

That's the weird thing: they aren't that close to the guard bands, certainly at the high end.  Looking at other people on ECI cabinets, their DS1 starts in the 40s rather than the high 60s, and runs through to the upper limit of 857 rather than stopping at 785 as I'm seeing.  A good comparison (as we're both on 15k sync speeds) is probably scarab.