On top of @Chrys' reply...
If rate-limiter information could be successfully propagated back along the chain to the source, then there would be no packet loss. But packet loss isn't actually bad!
The problem on the internet is that the thing that is rate-limited isn't always your ADSL connection. Congestion can arise at any point on the path from A to B, on any individual sub-link. If rate limits are to be propagated back to the source, the source really needs information about every individual sub-link en route from A to B.
Of course, connections A-C and A-D will exist in parallel, so those would all need tracking in parallel.
Looking at it in reverse, then, "the source" would be sending to hundreds or thousands of connections, and would have to track limits for dozens of sub-links on each.
All of these intermediate links behave dynamically, so congestion comes and goes, as does the supply and demand of any one active service (5 Mbps for 5 minutes, then dropping to 2 Mbps for 20 minutes).
In the end, there's just too much state to keep track of for a co-ordinated solution to work.
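A back-of-envelope count makes the scale of the problem concrete. All of the figures below are illustrative assumptions drawn from the rough numbers above, not measurements:

```python
# Back-of-envelope state count for source-side rate tracking.
# Every number here is an illustrative assumption.
connections = 1000       # "hundreds or thousands" of parallel destinations
sublinks_per_path = 20   # "dozens" of sub-links en route to each one
update_period_s = 1      # assume each sub-link's limit can change every second

state_entries = connections * sublinks_per_path
updates_per_sec = state_entries / update_period_s

print(state_entries)     # rate limits every single source must track
print(updates_per_sec)   # and how often that state could be going stale
```

Twenty thousand constantly changing entries per source, multiplied across every source on the internet, is the bookkeeping that a co-ordinated scheme would demand.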
The practical solution is, once an intermediate link nears congestion, simply to throw packets away, and to rely on the TCP implementations at the endpoints to back off a little. Things work somewhat better if the dropping is done in a relatively fair manner.
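One classic way to make the dropping "relatively fair" is Random Early Detection (RED): as the queue fills, each arriving packet is dropped with a probability that rises with queue depth, so every flow sees occasional loss in proportion to its share of the traffic. Here is a minimal sketch of that idea; the class name and thresholds are illustrative, not any real router's implementation:

```python
import random
from collections import deque

class RedQueue:
    """Sketch of a RED-style queue: probabilistic early drops as it fills."""

    def __init__(self, capacity=100, min_thresh=40, max_thresh=80):
        self.q = deque()
        self.capacity = capacity
        self.min_thresh = min_thresh   # below this depth, never drop
        self.max_thresh = max_thresh   # at or above this depth, always drop

    def enqueue(self, packet):
        depth = len(self.q)
        if depth >= self.max_thresh:
            return False               # hard drop: standing queue is too long
        if depth > self.min_thresh:
            # Drop probability rises linearly between the two thresholds,
            # spreading the loss across all flows instead of tail-dropping.
            p = (depth - self.min_thresh) / (self.max_thresh - self.min_thresh)
            if random.random() < p:
                return False           # early (probabilistic) drop
        self.q.append(packet)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

The key property is that the drops begin *before* the buffer is full, giving TCP senders their back-off signal while the queue is still short.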
The older solution was to queue packets, then to add bigger buffers to support bigger queues... until engineers discovered this was terrible for latency. They had created the phenomenon known as bufferbloat.
Bufferbloat introduction

From that page, there is a quote that is well worth remembering:
> We can start by understanding how we got into this mess; mainly, by equating “The data must get through!” with zero packet loss.
>
> Hating packet loss enough to want to stamp it out completely is actually a bad mental habit. Unlike real cars on real highways, the Internet’s foundational TCP/IP protocol is designed to respond to crashes by resending an identical copy when a packet send is not acknowledged. In fact, the Internet’s normal mechanisms for avoiding congestion rely on the occasional packet loss to trigger them. Thus, the perfect is the enemy of the good; some packet loss is essential.
The solution to bufferbloat is to stop letting the buffers fill, and to start using a method that employs packet loss in a controlled manner. One of the most important features is that this "packet loss in a controlled manner" can happen at any link in the network, without complicated tuning to cope with other parts of the network. It doesn't attempt to coordinate or propagate information around ... the lost packet is, in fact, the "thing" that is propagated.
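The essence of CoDel is that it watches how long packets *sit* in the queue (their sojourn time) rather than how full the buffer is, and drops only when the delay has stayed above a small target for a sustained interval. The sketch below is a much-simplified illustration of that idea, not the real algorithm from RFC 8289; the 5 ms / 100 ms values are the commonly cited defaults, and the class and method names are mine:

```python
from collections import deque

TARGET = 0.005      # 5 ms of queueing delay is considered acceptable
INTERVAL = 0.100    # 100 ms above TARGET before we call it a standing queue

class SimpleCodel:
    """Greatly simplified CoDel-style queue keyed on packet sojourn time."""

    def __init__(self):
        self.q = deque()          # entries are (enqueue_time, packet)
        self.above_since = None   # when sojourn time first exceeded TARGET

    def enqueue(self, packet, now):
        self.q.append((now, packet))

    def dequeue(self, now):
        while self.q:
            enq_time, packet = self.q.popleft()
            sojourn = now - enq_time
            if sojourn < TARGET:
                self.above_since = None   # delay is back under control
                return packet
            if self.above_since is None:
                self.above_since = now    # delay just crossed TARGET
            if now - self.above_since < INTERVAL:
                return packet             # tolerate briefly high delay
            # Sustained standing queue: drop this packet (the congestion
            # "signal" to the sender), restart the interval, try the next.
            self.above_since = now
        return None
```

Notice that nothing here knows about any other link: the decision uses only locally observed delay, and the dropped packet itself carries the signal back to the sender via TCP's loss response.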
That page links to a set of videos that help explain bufferbloat, and an IETF demo of the replacement algorithms (CoDel and FQ_CoDel) in operation.
I note that it includes videos on a history of network queueing theory.
Bufferbloat videos