The next step was to question the Firebrick itself, as its port operations haven't been investigated. I swapped the Ethernet cables between modems #1 and #3 (line @a.4) where they go into the Firebrick's ports.
Amazingly, this has cured about 99% of the problem, leaving only a microscopic amount of dripping blood just before every hour. That timing pattern suggests a software bug, perhaps.
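To make the "just before every hour" reasoning concrete, here is a minimal sketch of the kind of check involved. The timestamps and the 60-second window are invented for illustration; the real data would come from the monitoring graphs.

```python
from datetime import datetime

# Hypothetical loss-event timestamps (not real log data): moments when
# packet loss was observed, to test whether they cluster just before
# the top of each hour.
events = [
    datetime(2024, 1, 1, 9, 59, 40),
    datetime(2024, 1, 1, 10, 59, 35),
    datetime(2024, 1, 1, 11, 59, 50),
    datetime(2024, 1, 1, 13, 59, 45),
]

def seconds_before_hour(t: datetime) -> int:
    """Seconds remaining until the next top of the hour."""
    return (60 - t.minute - 1) * 60 + (60 - t.second)

# If every event falls within, say, 60 seconds of an hour boundary,
# that points at something scheduled hourly (i.e. software) rather
# than a physical line fault, which would show no clock alignment.
near_hour = all(seconds_before_hour(t) <= 60 for t in events)
print(near_hour)  # True for the sample data above
```

A fault tied to the wall clock like this is a strong hint that some periodic software task, rather than the cabling or the line itself, is responsible.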
I looked back at the Firebrick software upgrade status, and I noted that there had been a software upgrade a few days before the onset of the fault, but there is no obvious way to explain the time gap between the installation and the start of the fault.
One other thing I should look at is the network cables between the Firebrick and the modems, just in case they have been damaged.
So the current state is that the dripping blood is spectacularly reduced, down to acceptable levels. But because the cables are now plugged into the wrong ports relative to the line identifiers (@a.1, @a.3, etc.), I will have to clean this up somehow.
At the moment, the Firebrick config is wrong, because it specifies an upstream traffic limit in bps for each line, and these have got scrambled. Line 1 has traditionally had a rather higher upstream traffic allowance than the others; that limit is now incorrectly applied to line @a.4, and vice versa. I wonder whether this upstream rate-limiting system could have something to do with the fault. It would have to have been changed in the most recent release.
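The scrambling described above can be sketched as follows. This is a toy model, not FireBrick's actual config format, and the bps figures are invented; the point is simply that the limit is attached to the port, so swapping cables reassigns the limits to the wrong lines.

```python
# Hypothetical per-port upstream limits in bps, as set in the config:
# port 1 was given line 1's traditionally higher allowance.
config_limit_bps = {1: 1_000_000, 2: 448_000, 3: 448_000, 4: 448_000}

# After swapping the cables, the line actually plugged into each port
# (lines 1 and 4 swapped, matching the mix-up described above):
port_to_line = {1: 4, 2: 2, 3: 3, 4: 1}

# Effective limit each line now gets: the limit follows the port,
# not the physical line behind it.
line_limit = {line: config_limit_bps[port]
              for port, line in port_to_line.items()}
print(line_limit)  # line 4 now gets line 1's higher limit, and vice versa
```

Until the config (or the cabling) is corrected, any per-line shaping behaviour observed on the graphs has to be interpreted through this swapped mapping.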