Just a thought . . .
The two TDx entities, where x is either D or M, operate on bit-streams and not byte-streams. (That is my understanding, unless I've remembered things incorrectly.)
I suggested earlier that, for TDD to achieve efficient overall bandwidth utilisation, the data would have to be sent in bursts. I would argue that the same applies whether it be bursts of bytes or bursts of bits.
Inspired by further reading, I am growing attached to my theory, based on the idea that overall TDD performance is a trade-off between optimal bandwidth usage and latency. Early ADSL specifications targeted lines that were generally longer than typical G.fast lines, and hence had longer propagation delays. That would have tilted the trade-off in favour of bandwidth, and is thus consistent with my notion that TDD would have been distinctly sub-optimal for early ADSL. (A rough calculation illustrating the effect is sketched below.)
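To make that trade-off concrete, here is a minimal back-of-envelope sketch in Python. It assumes that each TDD direction switch must wait out roughly one one-way propagation delay as guard time, that the signal velocity on copper is about 2/3 of c, and that the line lengths and burst durations shown are purely illustrative; none of these figures come from any ADSL or G.fast specification.

```python
# Back-of-envelope sketch (not from any spec): how one-way propagation
# delay eats into TDD efficiency. All figures below are illustrative
# assumptions, not standardised values.

V = 2.0e8  # assumed signal velocity on copper, roughly 2/3 c (m/s)

def tdd_efficiency(line_m, burst_s):
    """Fraction of time usable for payload, assuming each TDD direction
    switch must wait out one one-way propagation delay as guard time."""
    guard = line_m / V                      # one-way propagation delay (s)
    return burst_s / (burst_s + guard)

for line_m, label in [(100, "short G.fast-style line"),
                      (3000, "long early-ADSL-style line")]:
    for burst_s in (50e-6, 500e-6):        # assumed burst durations
        eff = tdd_efficiency(line_m, burst_s)
        print(f"{label}: {line_m} m, burst {burst_s * 1e6:.0f} us "
              f"-> efficiency {eff:.1%}")
```

On the assumed 3000 m line the guard time is about 15 us, so the short 50 us bursts lose roughly a quarter of the capacity; stretching the bursts to 500 us recovers the efficiency, but only at the price of longer latency. On the assumed 100 m line the guard time is negligible either way, which is consistent with TDD suiting G.fast better than early ADSL.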
A search for a combination of terms such as ‘TDD FDD latency’ yields many results suggesting that TDD inherently has worse latency than FDD. But I can’t claim that proves my theory correct, and I will resist providing any links, as those results almost invariably describe radio technology rather than wired DMT. The two are subtly different, so the same arguments may not apply.