It is an absolute nuisance that the Brick needs to be told what rate to police the packets at. I am told by RevK that the Firebrick does not actually pump the packets out at a particular rate, holding them back until the correct release time, which is a real shame: it sends packets on to the modems straight away. RevK says it merely polices the rate, imposing a maximum. It must also split the traffic in the correct fractions between the links, though, because otherwise I can't see how it would ever work as well or as sensitively as it does.
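To make the policing-versus-shaping distinction concrete, here is a minimal sketch of a token-bucket policer in Python. This is my illustration, not the Firebrick's actual algorithm: a policer drops packets that exceed the configured rate, whereas a shaper would queue them and release them at the correct time. The class name and parameters are invented for the example.

```python
class TokenBucketPolicer:
    """Token-bucket rate policer: traffic over the configured rate is
    dropped immediately, never queued for later release (that would be
    shaping, which the Firebrick reportedly does not do)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = float(burst_bytes)   # bucket capacity in bytes
        self.tokens = float(burst_bytes)  # start with a full bucket
        self.last = 0.0                   # timestamp of the last packet

    def allow(self, packet_len, now):
        # refill tokens for the elapsed time, capped at the bucket size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True                   # within the policed rate: forward
        return False                      # over the rate: drop, don't delay
```

A policer like this needs no queue and adds no latency, which is presumably why the Firebrick takes this approach; the cost is that it relies on the sender backing off rather than smoothing the traffic itself.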
It is highly error-prone as it stands, though, and needs constant checking. AA gets feedback from BT and TT regarding the line sync rates, and that drives the routers for the downstream direction.
If there were a modular software subsystem that could grope some nasty interface exposed by the modems (scraping their HTML web pages if really desperate) and find out what the current upstream sync rates are, then there would also need to be the black magic of what I call the 'fudge factor': a parameter which I calculate to convert a sync rate into an IP data rate, using knowledge of the particular protocol stack and its parameters relating to header bloat, PDU size and ATM overheads.
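The fudge-factor arithmetic can be sketched roughly as follows. This is a generic ADSL/ATM calculation, not my exact parameters: the per-packet link overhead varies with the encapsulation in use (PPPoA, PPPoE, LLC versus VC-Mux and so on), so the `link_overhead` value here is an illustrative assumption. The fixed parts are standard: an 8-byte AAL5 trailer, padding to whole 48-byte cell payloads, and a 5-byte header on every 53-byte ATM cell.

```python
import math

def ip_rate_from_sync(sync_rate_bps, ip_packet_len=1500,
                      link_overhead=10, aal5_trailer=8):
    """Convert an ADSL sync rate (at the ATM layer) into a usable IP rate.

    `link_overhead` stands in for the protocol-stack header bloat per
    packet and is an assumed value; pick the real figure for your
    encapsulation.  The result depends strongly on packet size, which is
    why a single fudge factor is only an approximation.
    """
    payload = ip_packet_len + link_overhead + aal5_trailer
    cells = math.ceil(payload / 48)     # AAL5 frames pad to whole cells
    wire_bytes = cells * 53             # each cell carries 5 bytes of header
    fudge = ip_packet_len / wire_bytes  # the 'fudge factor'
    return sync_rate_bps * fudge
```

For full-size 1500-byte packets this gives a factor of roughly 0.88, i.e. somewhat under 90% of sync rate is available to IP; small packets fare much worse because of the cell padding.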
I have not managed to completely automate the process, although it is semi-automated: the rate calculations are done by software, and the XML snippet required is generated automatically. The last step remaining is to integrate that XML snippet, with the correct rates in it, into the rest of the XML config for the Firebrick; I already have a working tool that uploads a new XML config file into a running Firebrick and causes it to take effect immediately. I can't complete the whole chain because of stupid bugs in some of Apple's Workflow tools, which I used to write the whole lot, and I just cannot get them fixed.
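The missing splice step amounts to replacing one element of the config document with the freshly generated one. A minimal sketch using Python's standard `xml.etree.ElementTree`, with entirely hypothetical element names (the real Firebrick config schema will differ):

```python
import xml.etree.ElementTree as ET

def merge_snippet(config_xml, snippet_xml):
    """Splice a generated XML snippet into a full config document.

    Any existing element under the config root whose tag matches the
    snippet's root tag is removed, then the new snippet is appended, so
    re-running the tool replaces stale rates rather than duplicating them.
    Element names here are invented for illustration.
    """
    config = ET.fromstring(config_xml)
    snippet = ET.fromstring(snippet_xml)
    for old in config.findall(snippet.tag):   # drop any stale copy
        config.remove(old)
    config.append(snippet)
    return ET.tostring(config, encoding="unicode")
```

The output of this function would then be handed to the existing upload tool, which pushes the new config into the running Firebrick.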