Kitz Forum

Broadband Related => Broadband Technology => Topic started by: Weaver on June 09, 2020, 10:06:25 PM

Title: Time division multiplexing for tx vs rx
Post by: Weaver on June 09, 2020, 10:06:25 PM
When ADSL was originally standardised in G.992.1, why was frequency division between tx and rx chosen, not flexible time division between tx and rx? Is the latter not how G.Fast works?

I presume that once G.992.1 had gone that route, we were stuck with it continuing into later standards because of the demands of interoperability with older standards in a crosstalk situation on neighbouring copper lines? Not sure how G.Fast manages to be different then, if my memory serves.
Title: Re: Time division multiplexing for tx vs rx
Post by: burakkucat on June 09, 2020, 10:25:51 PM
A frequency division duplex circuit operates in full duplex mode. A time division duplex circuit, with a 50:50 time split, operates in half-duplex mode.
Title: Re: Time division multiplexing for tx vs rx
Post by: Weaver on June 09, 2020, 10:43:43 PM
Indeed, that was my understanding of the terms’ half-duplex vs full-duplex associations. I was not assuming anything about the time division split. Am I right in thinking that G.Fast uses a variable time division split? (Me being too lazy to have already read the G.Fast standards doc.) A fully flexible split, where the link is turned around near-instantly whenever there is nothing to transmit, would seem to give maximum efficiency and performance. If there’s no data to transmit in one direction then, in such a hypothetical design, continuous flat-out tx in the other direction should be possible at nearly 100% time usage.
Title: Re: Time division multiplexing for tx vs rx
Post by: burakkucat on June 09, 2020, 11:00:42 PM
I believe that the ITU-T G.Fast specification states that the time-split can be from 10:90 up to 90:10 . . . if I am remembering correctly.

I have a vague memory that BT does not use a symmetric time split but favours the DS over the US.
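
As a very rough back-of-the-envelope illustration of what such a split means for per-direction rates (the aggregate rate below is just an assumed round number, not anything from the spec or from BT's configuration):

Code:
# Rough illustration of how a TDD time split divides a link's capacity
# between downstream (DS) and upstream (US). The aggregate rate is an
# assumed round number -- not from G.9701 or any operator's configuration.

AGGREGATE_RATE_MBPS = 1000.0   # notional total rate while transmitting

def tdd_rates(ds_share):
    """Per-direction rates for a given DS share of the transmit time (0..1)."""
    return AGGREGATE_RATE_MBPS * ds_share, AGGREGATE_RATE_MBPS * (1.0 - ds_share)

for ds_share in (0.10, 0.50, 0.90):
    ds, us = tdd_rates(ds_share)
    print(f"{ds_share:.0%} DS / {1 - ds_share:.0%} US  ->  "
          f"{ds:4.0f} Mb/s down, {us:4.0f} Mb/s up")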

Perhaps ejs or j0hn (or, indeed, other members) have a better understanding of the topic?  :-\
Title: Re: Time division multiplexing for tx vs rx
Post by: Weaver on June 10, 2020, 01:28:24 AM
Can the ratio be varied dynamically while the link is up? Or does the ratio need to be specified in some config and always kept at that value (the most extremely inflexible option), though at least changeable per ISP? Or can it be set differently each time the link comes up? Or is it completely variable, alterable while the link is up at showtime?
Title: Re: Time division multiplexing for tx vs rx
Post by: burakkucat on June 10, 2020, 06:45:53 PM
Again, this is what I believe from reading about how BT deploy such a service: it is a predefined and invariable configuration setting. I.e. no change is possible.
Title: Re: Time division multiplexing for tx vs rx
Post by: sevenlayermuddle on June 10, 2020, 07:06:25 PM
Getting back to the original question as to why FDM was chosen over TDM for ADSL, or pedantically perhaps, FDD over TDD (second ‘D’ as in ‘Duplexing’)....

...isn’t ‘reduced latency’ a good answer?

Just guessing. :)
Title: Re: Time division multiplexing for tx vs rx
Post by: burakkucat on June 10, 2020, 07:30:45 PM
...isn’t ‘reduced latency’ a good answer?

Just guessing. :)

It may well have had some relevance to the duplexing method chosen, yes.

b*cat suspects that 7lm is now performing some diligent research on the topic.  ;)
Title: Re: Time division multiplexing for tx vs rx
Post by: sevenlayermuddle on June 10, 2020, 09:01:40 PM
b*cat suspects that 7lm is now performing some diligent research on the topic.  ;)

You’re not wrong.   

It’s just that ‘TDM’ in most common usage that I’ve encountered (think E1/T1) meant something different to me, hence I became curious about the question.   Once the penny dropped that time division could also be used to control duplexing, the thought occurred that it might incur a penalty in latency.   After all, if you have to wait for a period of time until your turn to send comes around, that waiting time must surely manifest as additional latency, or so I thought?

A few minutes on Wikipedia convinced me that the idea was worth sharing.  But a few minutes on Wikipedia does not make me an expert, hence my confession that I was ‘just guessing’.  :)

And I still am.  :D
Title: Re: Time division multiplexing for tx vs rx
Post by: burakkucat on June 11, 2020, 01:16:22 AM
To me, TDM is multiplexing "n" number of bi-directional circuits onto one bi-directional link, using time-slots of such a length (relative to the frequency) that, to an observer of any one of those "n" circuits, each appears to be continuous. I.e. the time quanta are so small as to not be observable as discrete micro-events by a macro-object such as a human or feline being.

And TDD, to me, is the interleaving of two mono-directional data channels into one bi-directional channel between two points.
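
A toy sketch of that distinction, with made-up stream contents, purely to illustrate the interleaving:

Code:
# Toy illustration of the two "TDx" ideas above; all stream contents invented.
# TDM: several one-directional circuits share a link by taking turns in
#      fixed time slots (simple round-robin here).
# TDD: the two directions of a single circuit take turns on the wire.

from itertools import chain, zip_longest

def interleave(*streams):
    """Round-robin one unit from each stream per time slot."""
    slots = zip_longest(*streams)   # one slot = one unit from each stream
    return [unit for unit in chain(*slots) if unit is not None]

# TDM: three independent circuits multiplexed onto one direction of a link.
print("TDM:", interleave(["A1", "A2"], ["B1", "B2"], ["C1", "C2"]))

# TDD: downstream and upstream of one circuit interleaved on one wire pair.
print("TDD:", interleave(["DS1", "DS2"], ["US1", "US2"]))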

I wonder what CarlT's view of this topic might be . . .  :-\
Title: Re: Time division multiplexing for tx vs rx
Post by: sevenlayermuddle on June 11, 2020, 06:59:08 AM
Clinging to my hope that I might justify my guess...

I am reasoning that a TDM data stream in a single direction can be split into time slots that are as short as you like.  Each sequential byte on the wire might belong to a different stream of data so that when the receiver reconstructs them, the data appears to be continuous.  Each individual stream arrives at a rate that is simply a fraction of the total bandwidth, without any significant added delay.

When TDM is used for duplexing, let’s call it TDD, I’d not have thought a byte-by-byte multiplex would work so well.   After sending a byte you’d have to wait for the propagation time until it arrived, and then wait a bit longer until the other guy could switch his hardware from rx to tx, before another byte can be put on the wire.   For that reason, I’d have expected TDD to operate in terms of bursts of data rather than single bytes, and hence the units of time may no longer necessarily be trivial.

If TDD does operate in bursts for the reasons I speculate above, then the burst size would be a trade-off between bandwidth and latency.   No such trade off exists with FDD hence I speculate that, to achieve the same combined bandwidth, TDD will have increased latency over FDD.
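
A minimal sketch of that trade-off, with every number below assumed purely for illustration:

Code:
# Rough model of the TDD burst-size / latency trade-off speculated above.
# Every number here is assumed, purely for illustration.

BURST_MS      = 1.0     # time spent transmitting in one direction per turn
TURNAROUND_MS = 0.1     # time for the far end to switch from rx to tx
PROP_MS       = 0.015   # one-way propagation delay (a few km of copper)

# One full cycle: DS burst + overheads, then US burst + overheads.
cycle_ms   = 2 * (BURST_MS + TURNAROUND_MS + PROP_MS)
efficiency = BURST_MS / cycle_ms   # share of time carrying one direction's data

# Worst case, data arriving just after our turn ended waits out the other
# direction's burst plus both turnarounds and propagation before it can go.
worst_wait_ms = BURST_MS + 2 * (TURNAROUND_MS + PROP_MS)

print(f"per-direction efficiency ~{efficiency:.0%}")
print(f"worst-case added latency ~{worst_wait_ms:.2f} ms")
print("longer bursts -> better efficiency, but a longer wait for your turn")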

Genuinely interested now.  But still guessing. :)

Edit.. changed a few TDMs/FDMs to TDD/FDD.
Title: Re: Time division multiplexing for tx vs rx
Post by: Alex Atkin UK on June 11, 2020, 05:02:11 PM
I was under the impression that the big restriction with TDM was simply CPU power required to manage those time slots.

It’s become more useful as processing power per watt has gone up dramatically.
Title: Re: Time division multiplexing for tx vs rx
Post by: burakkucat on June 11, 2020, 05:39:48 PM
On behalf of Weaver, the topic initiator -- "Happy to read and consider all views on this topic."
Title: Re: Time division multiplexing for tx vs rx
Post by: sevenlayermuddle on June 11, 2020, 11:41:51 PM
On behalf of Weaver, the topic initiator -- "Happy to read and consider all views on this topic."

I am very deeply flattered to hear that.   So nice to be appreciated. :)
Title: Re: Time division multiplexing for tx vs rx
Post by: burakkucat on June 11, 2020, 11:59:03 PM
Just a thought . . .

The two TDx entities, where x is either D or M, operate on bit-streams and not byte-streams. (That is my understanding . . . Unless I've remembered things incorrectly.)
Title: Re: Time division multiplexing for tx vs rx
Post by: sevenlayermuddle on June 12, 2020, 12:32:26 AM
Just a thought . . .

The two TDx entities, where x is either D or M, operate on bit-streams and not byte-streams. (That is my understanding . . . Unless I've remembered things incorrectly.)

I suggested earlier that, for TDD to achieve efficient overall bandwidth utilisation, the data would have to be sent in bursts.  I would argue that the same applies, whether it be bursts of bytes or bursts of bits. :)

Inspired by further reading, I am growing attached to my theory, based upon the idea that overall TDD performance would be a trade-off between optimal bandwidth usage and latency.   Early ADSL specifications targeted lines that were generally longer than typical G.Fast lines, hence longer propagation delays.  That would have tilted the trade-off in favour of bandwidth and thus be consistent with my notion that TDD would have been distinctly sub-optimal for early ADSL.
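
For a sense of scale, a back-of-the-envelope calculation (line lengths and velocity factor assumed, roughly typical values):

Code:
# Back-of-the-envelope one-way propagation delays for the line lengths
# discussed. Velocity factor and lengths are assumed, roughly typical values.

C_KM_PER_US     = 0.299792   # speed of light, km per microsecond
VELOCITY_FACTOR = 0.66       # signal speed in twisted pair, roughly 2/3 of c

def one_way_delay_us(length_km):
    return length_km / (C_KM_PER_US * VELOCITY_FACTOR)

for name, km in (("longish ADSL line", 3.0), ("short G.fast drop", 0.3)):
    print(f"{name:18s} {km:3.1f} km  ->  ~{one_way_delay_us(km):4.1f} microseconds one way")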

A search for a combination of terms such as ‘TDD FDD Latency’ yields many results suggesting that TDD inherently has worse latency than FDD.  But I can’t claim that proves my theories to be correct and I will resist providing any links, as these links tend strongly to be describing radio technology rather than wired DMT.  Subtly different, so the same arguments may not apply. :-\
Title: Re: Time division multiplexing for tx vs rx
Post by: Weaver on June 12, 2020, 11:42:25 AM
[I am back now, was completely exhausted, couldn’t wake up.]

I think I should indeed have used the term ‘duplexing’, or ‘duplex division’ or something, the latter perhaps because ‘duplex’ isn’t a verb, so I’m not sure I believe in the word ‘duplexing’, but it might be sufficiently useful to forget the rules.

What controls latency here and what values are we talking about? And in the case of G.Fast, what changed from VDSL2 ?
Title: Re: Time division multiplexing for tx vs rx
Post by: sevenlayermuddle on June 12, 2020, 12:09:14 PM
I don’t think there was anything technically incorrect with using TDM in this context.

I agree with today’s Wikipedia page:

https://en.m.wikipedia.org/wiki/Duplex_(telecommunications)
Quote
Time-division duplexing (TDD) is the application of time-division multiplexing to separate outward and return signals. It emulates full duplex communication over a half duplex communication link.

I personally prefer the more explicit TDD, simply because my brain had become lazily accustomed to equating TDM with encoding of multiple streams in a single direction, rather than encoding of multiple (two) directions.   The realisation that there was more to it than that is what drew me to the thread. :)

I note the same wiki page is among online resources that broadly support my hypothesis regarding latency, with a fleeting mention that TDM/TDD has “greater inherent latency”. But again, that text seems to be more in the context of radio transmissions, so I still can’t claim it proves me right.  Not that I’d claim that Wikipedia proves anything, ever, of course.

Glad to hear you are feeling better. :)
Title: Re: Time division multiplexing for tx vs rx
Post by: niemand on June 12, 2020, 05:30:35 PM
Mmm.

Cost and complexity. It did what it needed to do - compete with highly asymmetrical cable networks, carriers also using FDM, at the right price point. Separating the upstream and downstream was trivially done with passive filters.
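
For reference, the approximate G.992.1 Annex A (ADSL over POTS) band plan those filters carve out, derived from the 4.3125 kHz tone spacing (tone ranges quoted from memory, so treat the edges as approximate):

Code:
# Approximate ADSL1 (G.992.1 Annex A, ADSL over POTS) frequency plan,
# derived from the 4.3125 kHz DMT tone spacing. Tone ranges are quoted
# from memory -- treat the band edges as approximate.

TONE_SPACING_KHZ = 4.3125

bands = [
    ("POTS / guard", 0,   5),    # voice left alone below roughly 26 kHz
    ("upstream",     6,  31),    # roughly 26 - 138 kHz
    ("downstream",  33, 255),    # roughly 138 - 1104 kHz
]

for name, lo, hi in bands:
    f_lo = lo * TONE_SPACING_KHZ
    f_hi = (hi + 1) * TONE_SPACING_KHZ
    print(f"{name:12s} tones {lo:3d}-{hi:3d}  ~{f_lo:6.1f} - {f_hi:6.1f} kHz")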

A reminder that neither bytes nor bitstreams are seen at the physical layer; over an analogue transport such as xDSL it is QAM symbols: a combination of an I and a Q carrier in quadrature, with various possible power levels applied to both to produce various densities of bit loading in each symbol.

The symbol rate, not how many bits are inside it, is what the physical layer cares about more. G.fast doesn't adjust timings depending on the spectral density in each direction, for instance: it's a fixed number of symbols in each direction, with whatever order of modulation can be crammed in there each way.
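
A sketch of that fixed-symbol-count arithmetic; the frame parameters below are my recollection of the G.9701 defaults plus an assumed example split, not any particular deployment's configuration:

Code:
# Sketch of how a fixed per-direction symbol count sets the DS/US time split
# in a G.fast TDD frame. The frame length and split below are my recollection
# of the G.9701 defaults plus an assumed example -- illustrative only.

MF  = 36              # symbol periods per TDD frame (assumed default)
MDS = 28              # downstream symbol periods per frame (assumed example)
MUS = MF - 1 - MDS    # upstream periods; roughly one period lost to guard gaps

print(f"Mds={MDS}, Mus={MUS}: downstream ~{MDS / MF:.0%} of the frame, "
      f"upstream ~{MUS / MF:.0%}, the remainder guard time")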
Title: Re: Time division multiplexing for tx vs rx
Post by: niemand on June 18, 2020, 02:10:50 PM
Blimey. Everyone must be happy with that.  :lol:
Title: Re: Time division multiplexing for tx vs rx
Post by: sevenlayermuddle on June 18, 2020, 07:09:44 PM
Blimey. Everyone must be happy with that.  :lol:

Probably attributable to my own legendary stupidity, but I am failing to understand that comment.  ???
Title: Re: Time division multiplexing for tx vs rx
Post by: Weaver on June 18, 2020, 09:33:08 PM
Ditto, what 7LM said.
Title: Re: Time division multiplexing for tx vs rx
Post by: niemand on June 21, 2020, 12:13:25 PM
It was me lightheartedly noting that no-one had posted in 6 days on the thread to either concur or tell me I was talking nonsense :lol:

More seriously, it takes a bunch of active hardware to do TDM, while FDM can be done with passive circuitry: bandpass filters are all you need for that.

Nowadays of course the really cool kids don't even need to do TDM - DOCSIS 4.0 in full-duplex mode uses the same frequencies at the same time to do both Tx and Rx. I fully imagine that's where any future xDSL standards will have to go too, especially given the vastly more limited spectrum available over twisted pair versus coaxial.