Clinging to my hope that I might justify my guess...
I am reasoning that a TDM data stream in a single direction can be split into time slots that are as short as you like. Each sequential byte on the wire might belong to a different stream of data, so that when the receiver reconstructs them, the data appears to be continuous. Each individual stream arrives at a rate that is simply a fraction of the total bandwidth, without any significant added delay.
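A quick sketch of what I mean, in Python: round-robin byte interleaving of several equal-length streams onto one wire, and the receiver peeling them back apart. The stream count and byte values are invented for illustration, not from any real TDM framing.

```python
# Round-robin byte interleaving: each wire slot carries one byte
# from the next stream in turn, so each stream sees 1/N of the
# total bandwidth with no long gaps.
from itertools import chain

def tdm_mux(streams):
    """Interleave equal-length byte streams into one wire sequence."""
    return bytes(chain.from_iterable(zip(*streams)))

def tdm_demux(wire, n_streams):
    """Reconstruct the original streams from the interleaved bytes."""
    return [wire[i::n_streams] for i in range(n_streams)]

streams = [b"AAAA", b"BBBB", b"CCCC"]
wire = tdm_mux(streams)          # b'ABCABCABCABC'
assert tdm_demux(wire, 3) == streams
```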
When TDM is used for duplexing, let's call it TDD, I'd not have thought a byte-by-byte multiplex would work so well. After sending a byte you'd have to wait the propagation time until it arrived, and then a bit longer until the other end can switch its hardware from rx to tx, before another byte can be put on the wire. For that reason, I'd have expected TDD to operate in terms of bursts of data rather than single bytes, and hence the time slots may no longer be trivially short.
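To put rough numbers on that suspicion (the link speed, cable length, and switching time below are all invented): at byte granularity, most of the wire time goes to flight time and rx/tx turnaround rather than data.

```python
# Back-of-envelope: how much of the wire time a single byte
# actually uses if the link must turn around after every byte.
# All figures are illustrative guesses, not from any real PHY.
bit_rate = 1e6            # 1 Mbit/s link
byte_time = 8 / bit_rate  # 8 us to clock out one byte
propagation = 10e-6       # 10 us one-way flight time (a ~2 km cable)
turnaround = 5e-6         # 5 us for the far end to flip rx -> tx

slot = byte_time + propagation + turnaround
print(f"efficiency per byte: {byte_time / slot:.0%}")  # ~35%
```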
If TDD does operate in bursts for the reasons I speculate above, then the burst size would be a trade-off between bandwidth and latency. No such trade-off exists with FDD, hence I speculate that, to achieve the same combined bandwidth, TDD will have increased latency over FDD.
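Extending the same toy model (same invented link parameters as above): growing the burst amortises the fixed per-turnaround cost, so efficiency climbs toward 100%, but a sender now waits up to a full burst plus the overhead before its next chance to speak, so latency grows linearly with burst size.

```python
# Burst size vs efficiency and worst-case added wait, using the
# same invented link parameters as the per-byte example.
bit_rate = 1e6
propagation = 10e-6
turnaround = 5e-6
overhead = propagation + turnaround   # fixed cost per direction change

for burst_bytes in (1, 16, 256, 4096):
    burst_time = burst_bytes * 8 / bit_rate
    efficiency = burst_time / (burst_time + overhead)
    # Worst case: you just missed your slot and must wait out the
    # other direction's full burst plus the overhead.
    added_latency = burst_time + overhead
    print(f"{burst_bytes:5d} B: {efficiency:5.1%} efficient, "
          f"+{added_latency * 1e6:7.1f} us latency")
```

In this model FDD has no analogous knob: both directions run continuously, so the fixed turnaround cost never appears.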
Genuinely interested now. But still guessing.
Edit: changed a few TDMs/FDMs to TDD/FDD.