ADSL1 can utilise frequencies up to 1.1 MHz
ADSL2+ can utilise frequencies up to 2.2 MHz
- This frequency range is split into sub-channels of 4.3125 kHz each; this is the 'tone' that you see on DMTtool.
- The tone is actually a carrier bin. You may also see this referred to as a bucket, because this is what carries the data bits within each frequency range.
- The number of data bits that can be carried in the bin varies depending upon the quality of the signal at that particular bin's frequency range.
- The full frequency range is split regardless of whether you can make use of those frequencies or not.
ADSL1 has 256 tones
ADSL2+ has 512 tones
- Some tones are not used: the pilot tone, tones reserved for voice or to prevent overlap of the different signal types, and tones that are simply unused because the signal received at that frequency is too weak.
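To put some numbers on the tone spacing, here's a quick sketch in plain Python (the `tone_centre_khz` helper and the assumption that tone n sits at n x 4.3125 kHz are mine, for illustration):

```python
# Each DMT sub-channel (tone) is 4.3125 kHz wide.
TONE_SPACING_HZ = 4312.5

# ADSL1 uses 256 tones, ADSL2+ uses 512 -- spacing x tone count
# gives the 1.1 MHz / 2.2 MHz figures quoted above.
for name, tones in (("ADSL1", 256), ("ADSL2+", 512)):
    top_hz = tones * TONE_SPACING_HZ
    print(f"{name}: {tones} tones x 4.3125 kHz = {top_hz / 1e6:.3f} MHz")

def tone_centre_khz(n):
    """Rough centre frequency of tone n, assuming tone n sits at n x 4.3125 kHz."""
    return n * TONE_SPACING_HZ / 1000

print(tone_centre_khz(96))   # 414.0 kHz
```

That 414 kHz figure is why tone 96 crops up in the example further down.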
For the next bit I'm talking about SNR per channel, not the SNRM that is displayed on your router. Forget all about the SNRM for this next part of the explanation.
- The better the SNR at the frequencies in the sub-channel range, the more bits can be allocated to that particular carrier bin.
- If the signal is good then 15 bits (the maximum) can be allocated to that tone.
- If the SNR is weaker at a particular frequency range, then fewer bits can be carried by the tone.
- Each 3dB of SNR equates to 1 bit (of data), but a minimum of 2 bits per bin is needed for the tone to be usable (6dB).
- If there's insufficient SNR in the channel then the carrier bin is marked by the router as unusable.
Capacity per tone surely cannot be equal. It seems fundamental to me that a low-number tone of around 150kHz is capable of carrying a LOT less data than a high-number one of over 2000kHz.
This is where you (or I) may be getting confused, because AIUI capacity per tone is equal. Each tone is capable of carrying up to 15 bits regardless; it doesn't matter what the frequency of that tone is. The number of bits actually allocated to the bin depends purely on the SNR for that particular bin.
As an example:
Say you have a bin at tone 96 @ 414kHz with an SNR of, say, 50dB; that's more than enough SNR to allocate the full 15 bits to the carrier.
Now say at tone 128 @ 552kHz there's some noise broadcasting at the same frequency, and the SNR is only 30dB; then a maximum of only 10 bits can be encoded on that channel.
But the next frequency band up may be fine and back up at 50dB, so 15 data bits are encoded on that tone.
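The 3dB-per-bit rule and the example figures above can be sketched like this (a rough rule of thumb only, not the actual modem algorithm):

```python
def bits_for_tone(snr_db):
    """Approximate bits allocated to a carrier bin from its per-tone SNR.

    Rule of thumb from the text: ~3 dB of SNR per bit, a minimum of
    2 bits (6 dB) for the tone to be usable, capped at 15 bits.
    """
    bits = int(snr_db // 3)
    if bits < 2:
        return 0          # below ~6 dB the bin is marked unusable
    return min(bits, 15)

print(bits_for_tone(50))  # 15 -- plenty of SNR, capped at the maximum
print(bits_for_tone(30))  # 10 -- the noisy tone-128 case above
print(bits_for_tone(5))   # 0  -- too weak, bin unusable
```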
However, bit allocation is not actually quite as straightforward as in the above example. There's more to it during the sync negotiation period, which has to cover an allowance for errors as defined by the Bit Error Rate (BER) and involves a fairly complicated process called Quadrature Amplitude Modulation (QAM), which is way too deep for me I'm afraid, and this is what determines the final sync speed. Somewhere in that process is the required overhead for interleaving, or more correctly error correction, and of course the target SNR, which sets some sort of baseline... but...
- The QAM symbol rate is said to be 4,000 symbols per second, therefore each 3dB of SNR available in the sub-channel over the baseline gives approx 4kbps of sync speed, subject to a maximum of 60kbps (15 x 4kbps) per carrier.
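That per-carrier cap falls straight out of the arithmetic, assuming one bit per symbol per allocated bit and the 4,000 symbols/s rate:

```python
SYMBOL_RATE = 4000  # DMT symbol rate: 4,000 symbols per second

def tone_rate_kbps(bits):
    # A tone carrying `bits` bits sends them once per symbol,
    # so its contribution to sync speed is bits * 4000 bps.
    return bits * SYMBOL_RATE / 1000

print(tone_rate_kbps(15))  # 60.0 kbps -- the per-carrier maximum
print(tone_rate_kbps(10))  # 40.0 kbps -- e.g. the noisy 30dB tone
```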
Once a line has sync'd, a Bit Allocation Table (BAT) is defined, which specifies how many bits are used/can be used within each sub-carrier channel.
If, after sync, the SNR within a specific channel falls too low to transmit its allocated number of bits, then bitswapping allows 'spare' capacity in other channels to be used, whilst still maintaining the same total number of bits in the BAT.
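A toy illustration of that (the tone numbers and dict layout are made up for the example; real modems follow a proper bit-swap protocol during showtime):

```python
def bitswap(bat, falling_tone, spare_tone):
    """Move one bit from a struggling bin to one with spare SNR headroom.

    `bat` maps tone number -> allocated bits. The total bit count
    (and hence the sync speed) is unchanged; only the distribution moves.
    """
    bat = dict(bat)  # leave the caller's table untouched
    bat[falling_tone] -= 1
    bat[spare_tone] += 1
    return bat

before = {96: 15, 128: 10, 160: 12}
after = bitswap(before, falling_tone=128, spare_tone=160)
print(after)  # {96: 15, 128: 9, 160: 13}
print(sum(before.values()) == sum(after.values()))  # True -- total preserved
```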
The higher frequencies tend to carry fewer bits purely because the SNR isn't as good for those channels. Higher frequencies are more likely to be attenuated, therefore the SNR isn't as good, therefore the carrier bins for those tones can't carry as many data bits.
With rate-adaptive DSL, it's the SNR of the sub-channels which determines your sync speed, not the frequency of the tone. As long as the SNR at that particular frequency is good, modulation will allocate that number of bits to the bin regardless of whether it's a high or low frequency.
Lines which are more attenuated will see SNR decrease more rapidly at the higher frequencies, hence less bit allocation overall and a lower actual sync speed.
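You can see that effect with a toy model. Assume (my assumption, purely for illustration) SNR falling off linearly with tone number, faster on a longer line, and the same 3dB-per-bit rule as before:

```python
def line_total_bits(snr_at_tone0_db, slope_db_per_tone, tones=512):
    """Sum bit allocation over a toy line whose SNR falls linearly with tone number."""
    total = 0
    for n in range(tones):
        snr = snr_at_tone0_db - slope_db_per_tone * n
        bits = int(snr // 3)          # ~3 dB per bit
        if bits >= 2:                 # minimum 2 bits for a usable bin
            total += min(bits, 15)    # 15-bit per-tone cap
    return total

# Same starting SNR, but the 'longer' line loses SNR three times as fast:
print(line_total_bits(55, 0.05))  # shorter line -> more bits overall
print(line_total_bits(55, 0.15))  # more attenuated line -> fewer bits, lower sync
```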
Now to speed.
Speed is the number of data bits you can carry per second, yes? That bit's easy: the more bits you can carry per second, the faster you can go.
Therefore the more bits that have been allocated at negotiation time to each individual tone, the faster your available sync speed.
Some routers will show you the number of bits allocated to each individual tone; you can see how many total bits can be carried at any one moment in time from the bit allocation table (BAT).
1. Bits per tone; yes but bits per tone per what? (e.g. millisecond - seems unlikely) i.e. how do you translate that number, displayed in DMT graphs for example, into actual transmission capacity (data rate) per tone?
If I'm wrong on the next bit please correct me, as I first worked this out by reverse engineering.
To work out your available bit rate, you'd need to work out the total number of bits that can be carried across all the tones. (Some routers will show this.)
Then:-
(available) Rate per bit = Sync speed / Total bits available to be carried across all the bins from the BAT
These are the actual figures from my line right now:
(23348kbps / 5627bits) = approx 4.15 kbps.
So each bit I can transmit is sent at roughly 4.15 kbps. Now the one thing I'm not sure about is whether everyone's is the same (re interleaving and the baseline calculation), but in all of my stats (whether upstream or downstream), the bit rate always turns out to be around 4kbps.
But this does coincide nicely with the supposed defined QAM symbol rate of 4,000 symbols per second.
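The reverse-engineering check above can be written out directly, using the figures quoted from my line (the overhead comment is my guess, per the interleaving/baseline uncertainty already mentioned):

```python
sync_kbps = 23348   # reported sync speed
bat_bits = 5627     # total bits across all bins in the BAT

# Divide sync speed by total BAT bits: each bit's share of the rate.
per_bit_kbps = sync_kbps / bat_bits
print(f"{per_bit_kbps:.2f} kbps per bit")  # ~4.15, close to the 4,000 symbols/s rate

# Going the other way, bits x symbol rate predicts the raw payload rate:
predicted_kbps = bat_bits * 4000 / 1000
print(predicted_kbps)  # 22508.0 vs the reported 23348 -- presumably the
                       # framing/error-correction overhead accounts for the gap
```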