Yes, it works just as it says on the tin. For me, both upstream and downstream come in at better than 90% of n × line speed; downstream is exceptionally good. The more lines you have, the worse it gets: four lines is quite a lot worse than three. For me upstream is not as good as downstream, but that could be because my lines differ a lot in speed; one of the four is exceptionally bad, and it even copes with that.
One real nuisance is that with the Firebrick you have to specify the rates for outbound traffic per line. I have been looking into this for years, and I still have no clear picture of what happens when you get these numbers wrong. You do need to get the figures right, but when they are off in one direction or the other the results don't make sense to me. I have spent eight years doing performance measurements, and part of the confusion, I now realise, is that any corruption of packets really stuffs things up in this regard.
I use the following equation for the outbound speeds, i.e. the rate limiters:
egress_rate = modem_loading_factor * protocol_efficiency_factor * sync_rate
where
protocol_efficiency_factor = ( the original size of a chosen packet ) / ( its size including all overheads due to protocols from e.g. PPP downwards ); this is the reciprocal of the bloat factor due to all headers, trailers, stuffing, escaping, encoding and any other expansion factors
and
modem_loading_factor = some tuning number < 1.0, set by experimentation to give the best performance, possibly reduced slightly to improve latency or for other reasons, such as a reduction found by experiment to be helpful to VoIP
protocol_efficiency_factor: for FTTC, I think protocol_efficiency_factor = 96.69% normally, or else 91% if G.INP = high. Kitz has some numbers for FTTC efficiency.
I use a value of protocol_efficiency_factor ≈ 0.88, assuming a standard 1500-byte IP packet. This comes from round_up( (1500 bytes + 32 bytes of overhead) / 48 bytes per ATM cell payload ) = 32 ATM cells; then protocol_efficiency_factor = 1500 / ( 32 cells × (48 + 5) bytes per cell ) ≈ 0.8844
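The ATM cell-tax arithmetic above can be checked with a few lines of Python. The 32-byte overhead figure is the one from my calculation; the 48-byte payload and 5-byte header per cell are standard ATM:

```python
import math

MTU = 1500          # IP packet size in bytes
OVERHEAD = 32       # per-packet overhead in bytes (figure used in the post)
ATM_PAYLOAD = 48    # payload bytes per ATM cell
ATM_CELL = 53       # total bytes per ATM cell (48 payload + 5 header)

# Number of cells needed to carry one packet, rounding up to a whole cell
cells = math.ceil((MTU + OVERHEAD) / ATM_PAYLOAD)

# Efficiency = useful IP bytes / total bytes actually sent on the wire
efficiency = MTU / (cells * ATM_CELL)

print(cells, round(efficiency, 4))  # → 32 0.8844
```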
Now,
modem_loading_factor = 96.5% for me currently. Chosen after many months of testing.
So for you I might recommend trying speed = 0.965 × 0.91 × sync_rate
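As a quick sketch, the whole formula boils down to one small function. The default factors below are just my current values from above, not anything universal:

```python
def egress_rate(sync_rate_kbps, mlf=0.965, pef=0.91):
    """Egress rate for the rate limiter, in the same units as the sync rate.

    mlf = modem_loading_factor (tuned by experiment, < 1.0)
    pef = protocol_efficiency_factor (depends on line type and packet size)
    """
    return int(sync_rate_kbps * mlf * pef)

print(egress_rate(20000))  # → 17563
```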
I have a program that queries the modems and gets the upstream sync rate from each one. To do this, I use an ‘easy stats’ API provided by some code written by our esteemed forum member Johnson, which is installed in the modems as part of his custom firmware. I then convert each sync rate to an egress rate to go into the speed attribute of <ppp speed="nnnn"> in the Firebrick XML according to the formula above: speed = modem_loading_factor × protocol_efficiency_factor × sync_rate. The program then puts out a snippet of Firebrick XML containing ppp elements with the correct upstream rate values, embeds this in a complete template of the current Firebrick XML config, and uploads that into the Firebrick, where it takes effect immediately without disturbing operations in progress.
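For illustration only, here is a minimal sketch of the XML-snippet generation step. The modem names and sync rates are made-up examples (the real values come from the ‘easy stats’ API), and the exact attributes on the real ppp element will differ from this toy version:

```python
# Illustrative only: per-modem upstream sync rates in kb/s (real values
# would be fetched from the modems' 'easy stats' API).
SYNC_RATES = {"modem1": 20000, "modem2": 18500}

MLF = 0.965   # modem_loading_factor, tuned by experiment
PEF = 0.91    # protocol_efficiency_factor for the line type

def ppp_snippet(rates):
    """Emit one <ppp> element per line with its computed egress speed."""
    lines = []
    for name, sync in sorted(rates.items()):
        speed = int(sync * MLF * PEF)
        lines.append(f'<ppp name="{name}" speed="{speed}"/>')
    return "\n".join(lines)

print(ppp_snippet(SYNC_RATES))
```

The snippet would then be spliced into a full config template before uploading.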
The program to do this currently runs on an iPad, does the rate calculation, and needs to be run manually whenever the speeds change. I could check for speed changes by polling, which is nasty unless the rate is kept down, and my iPad is of course not suitable as a polling monitoring server. I should really rewrite the thing for my Raspberry Pi and let that do the monitoring and push the config changes into the Firebrick. I currently also have a quick iPad program that tells me yes or no whether the modems’ speeds have changed, to let me know that I may want to upload a new config.
I have a spreadsheet that does the efficiency calculations. I can post that if that would be useful.