I made some actual measurements using the rate counters available from AA’s clueless.aa.net.uk server. It gives upstream/downstream rate samples and min/avg/max latency figures based on timing the PPP LCP echo request (‘ping’-like) CQM test messages that it sends every second or so.
Firebrick individual tx measured:

Line  Sync  Load  Limitr   measure  meas/lim  1-m/l     mtx1   mtx2   mtx3   mtx4   mtx5
1     528   0.95  443.632  437.200  0.985501  0.014499  437.0  436.7  437.9
2     519   0.95  436.070  425.633  0.976067  0.023933  424.7  425.9  426.3
3     376   0.90  299.292  296.900  0.992008  0.007992  296.5  297.8  295.8  296.5  297.9
4     499   0.95  419.265  409.933  0.977743  0.022257  408.9  411.2  409.7
The ‘measure’ column is the measured upstream rate for each line; the ‘Limitr’ column is the intended rate, the rate that the Firebrick’s upstream rate limiter is supposed to impose. All figures are in kbps. The rightmost columns (mtx1–mtx5) are individual speed-test runs, and these were averaged (arithmetic mean) to give the ‘measure’ column.
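For anyone wanting to check the arithmetic, this little sketch reproduces the ‘measure’, ‘meas/lim’ and ‘1-m/l’ columns from the individual test runs in the table:

```python
from statistics import mean

# Individual upstream speed-test runs (kbps) per line, from the mtx columns above.
runs = {
    1: [437.0, 436.7, 437.9],
    2: [424.7, 425.9, 426.3],
    3: [296.5, 297.8, 295.8, 296.5, 297.9],
    4: [408.9, 411.2, 409.7],
}

# Intended rates (kbps), from the 'Limitr' column.
limitr = {1: 443.632, 2: 436.070, 3: 299.292, 4: 419.265}

for line, samples in runs.items():
    measure = mean(samples)          # the 'measure' column (arithmetic mean)
    ratio = measure / limitr[line]   # the 'meas/lim' column
    print(f"line {line}: measure={measure:.3f}  "
          f"meas/lim={ratio:.6f}  1-m/l={1 - ratio:.6f}")
```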
This is an idea of mine that has turned out to be very successful: using a much lower modem loading factor on the slowest line. Note that line 3 is at 90% whereas all the others are at 95%. In various tests this arrangement proved superior to all-equal modem loading factors, beating, say, an all-96% arrangement.
The results:
* You can see that the true ‘measure’ rates are a bit off, but not hugely. The ‘meas/lim’ column shows the ratio of the two, and the ‘1-m/l’ column shows one minus that ratio, as a mental aid.
I should perhaps redo this using max() instead of averaging the data points; that would give an idea of the links’ true capacities, including the effect of the limiters if they are kicking in. I suspected that the limiters might overshoot and hunt, which is why I averaged the data points: I wanted to see what effective rate the limiters were really settling on, and what the real long-term data rate was. I now see that both measures would be useful.
* I can’t see much of a pattern in the aberrations in ‘meas/lim’. Maybe some lines are a bit low because I am running the modems ‘too hot’, so the modems themselves are limiting throughput rather than just the Firebrick.
* I can’t see a lot to be gained by tweaking the loading factors (the percentages). I calculate there is only perhaps ~50 kbps more to be had from that kind of twiddling, if very lucky, and I wouldn’t know how to proceed anyway.
It took about 15 mins to do a backup of my iPad Pro to the Apple iCloud network file system yesterday morning. Felt like forever. Uploading pictures is a nightmare.
* It does mean, anyway, that the load-splitting in the Firebrick really works and throughput is good.
* I still don’t know exactly what the modem loading factors should all be, in that I haven’t, say, raised them all a little to try to reclaim some of the last remaining speed, if there even is any to be had, and then repeated this kind of true throughput measurement. I don’t know whether 100% or 99% simply doesn’t work at all, or works but is over the top, so that 100% is no faster than, say, 98%.
* Those numbers are based on a calculation of protocol overheads, and there may be other real-world timing overheads, as opposed to bits-on-the-wire bloat, which I don’t know about. I don’t know about the effect of PhyR, or whether there is a variable slowdown when it has to work harder or less hard: a high error rate means that many DTUs get L2-retransmitted n [?] times. I haven’t accounted for that, and if it is variable then I would just have to guess. Maybe input buffering in the modem handles that anyway and I shouldn’t care about it. One other weakness is the assumption, used in the protocol-overhead fraction calculation (which converts sync rate into a maximum theoretical IP PDU rate), of a particular packet size being sent. I chose the maximum-size IP PDU, 1500 bytes in my case, to give the most optimistic and favourable rate. Although that is bad, I have the modem loading factor to bring it down to reality as needed.
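For reference, the overhead fraction works out like this. This is a sketch of my understanding, under the assumption that the 32-byte per-PDU overhead is added before ATM cell padding; it reproduces the ‘Limitr’ column in the table above to within a thousandth or so:

```python
import math

OVERHEAD = 32       # per-IP-PDU overhead in bytes (PPPoEoA etc., see below)
CELL_PAYLOAD = 48   # usable payload bytes per ATM cell
CELL_SIZE = 53      # bytes on the wire per ATM cell

def ip_efficiency(pdu_bytes: int) -> float:
    """Fraction of the sync rate available to IP PDUs of this size."""
    cells = math.ceil((pdu_bytes + OVERHEAD) / CELL_PAYLOAD)
    return pdu_bytes / (cells * CELL_SIZE)

def limiter_rate(sync_kbps: float, load: float, pdu_bytes: int = 1500) -> float:
    """Intended limiter rate: sync rate x modem loading factor x efficiency."""
    return sync_kbps * load * ip_efficiency(pdu_bytes)

# (sync, load) pairs for lines 1-4 from the table.
for sync, load in [(528, 0.95), (519, 0.95), (376, 0.90), (499, 0.95)]:
    print(f"sync={sync}  load={load}  limiter={limiter_rate(sync, load):.3f} kbps")
```

A 1500-byte PDU plus 32 bytes fills exactly 32 cells, giving 1500/1696 ≈ 0.8844, and sync × load × 0.8844 matches the table (line 4 comes out at 419.266 vs 419.265 in the table, presumably just rounding).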
However, short packets are much less efficient because of the huge overhead: 32 bytes in my case, for PPPoEoA, ATM and so on. Also, since I am unlucky enough to be using ATM, there is the wobble in efficiency created by the 0–47 bytes of unknown additional bloat from ATM cell padding. If the idiots who ordered the DSLAMs for G.992.3/992.5 had only insisted on PTM support as well, then I could have used that instead of ATM, getting me ~10% more speed straight away (minus the small overheads of PTM). <dream>[And I’ll have a large fries, SRA and G.INP with my PTM too. Ah, to hell with it, I’ll ‘go large’ too and have Annex I with that lot as well.]</dream> If there were an application that sent a lot of short packets back-to-back, flat out, then there would be potential trouble, because my numbers would be miles out: a modem loading factor of 95% would not be low enough when protocol overheads could be 50% [!] for very short packets.
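To put numbers on that short-packet penalty and the cell-padding wobble, here is a small sweep over PDU sizes, again assuming the 32-byte overhead is added before the padding up to whole 53-byte cells:

```python
import math

OVERHEAD = 32  # per-PDU overhead in bytes (PPPoEoA etc., as above)

def atm_efficiency(pdu_bytes: int) -> float:
    """IP throughput fraction for one PDU carried over AAL5/ATM."""
    cells = math.ceil((pdu_bytes + OVERHEAD) / 48)  # 48 payload bytes per cell
    return pdu_bytes / (cells * 53)                 # 53 wire bytes per cell

# 1456 + 32 fills cells exactly; 1457 costs a whole extra cell.
for size in (40, 576, 1456, 1457, 1500):
    eff = atm_efficiency(size)
    print(f"{size:5d}-byte PDU: efficiency {eff:.3f}  overhead {1 - eff:.0%}")
```

A 40-byte PDU (a bare TCP ACK) needs two whole cells, so only 40 of 106 wire bytes are useful IP data, i.e. more than 60% overhead, and one byte either side of a cell boundary (1456 vs 1457 here) changes the efficiency by a few percent.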
I don’t know enough about this, but a rate limiter could be parametrised to convert for protocol overheads in a sophisticated way, given enough parameters.
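Something along these lines, perhaps. This is purely a hypothetical sketch of the idea, not how the Firebrick actually does it: a token bucket whose per-packet charge is the true wire cost, derived from a few parameters (a fixed per-PDU overhead, plus cell payload/size for ATM, or no cell padding at all for a PTM-like link):

```python
import math

class OverheadAwareLimiter:
    """Token bucket that charges each packet its wire cost, not its IP size.

    per_pdu:      fixed overhead added to every PDU, in bytes
    cell_payload: payload bytes per cell, or None for no cell padding (PTM-like)
    cell_size:    on-the-wire bytes per cell
    """

    def __init__(self, rate_bytes_per_s, burst_bytes,
                 per_pdu=32, cell_payload=48, cell_size=53):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes   # start with a full bucket
        self.per_pdu = per_pdu
        self.cell_payload = cell_payload
        self.cell_size = cell_size

    def wire_cost(self, pdu_bytes):
        """Bytes this PDU actually occupies on the link."""
        framed = pdu_bytes + self.per_pdu
        if self.cell_payload is None:
            return framed  # no cell padding
        return math.ceil(framed / self.cell_payload) * self.cell_size

    def tick(self, dt_seconds):
        """Refill tokens for the elapsed time, capped at the burst size."""
        self.tokens = min(self.burst, self.tokens + self.rate * dt_seconds)

    def admit(self, pdu_bytes):
        """Send the packet now if enough tokens remain, else refuse it."""
        cost = self.wire_cost(pdu_bytes)
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

With the ATM parameters above, a 40-byte PDU is charged 106 bytes and a 1500-byte PDU 1696 bytes, so short packets can no longer sneak under the limit and swamp the modem.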