Testing with different suppliers' tools the way you are is going to lead to inconsistent data.
There are a lot of variables, such as:
Network conditions. And yes, BT vs TT backhaul will play a part as well; in my case, if I switched to BTw backhaul, my traffic would go north before going south to London, as the BT network is not optimised for Leicester as an endpoint. I expect in other locations TT would have higher latency than BTw.
The device being tested on, e.g. a wired, powerful desktop PC vs, say, a low-end phone connected via congested 2.4GHz wifi with a weak signal to boot.
The software being used: a dedicated app vs Chrome vs Firefox vs something like speedtest-cli. There is a reason I use the good old CLI ping command when diagnosing things like latency.
Also the programming language and how well the code is optimised.
Effects of intermediate software/devices: proxies, antivirus, VPNs, etc.
The way latency is even measured: some testers measure "during" the speedtest like tbb do, others measure while idle; some may add values to what's measured, strip off peaks and troughs, etc.
Some may measure via UDP, ICMP or TCP, and with different sized packets.
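As a toy illustration of the "strip off peaks and troughs" point (not any particular tester's algorithm, and the sample values are made up), here is how trimming outliers before averaging changes the reported latency:

```python
import statistics

def summarise_latency(samples_ms, trim=0):
    """Summarise raw RTT samples in ms. 'trim' drops that many of the
    highest and lowest values first, mimicking testers that strip
    peaks and troughs before reporting a single number."""
    s = sorted(samples_ms)
    if trim:
        s = s[trim:-trim]
    return statistics.mean(s)

raw = [12.1, 11.9, 12.3, 48.7, 12.0, 12.2, 11.8]  # one congestion spike

print(round(summarise_latency(raw), 1))           # → 17.3 (spike included)
print(round(summarise_latency(raw, trim=1), 1))   # → 12.1 (spike stripped)
```

Two testers sampling the exact same line can therefore report 17 ms or 12 ms depending purely on post-processing.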
So I cannot give a reason why Ookla gives a low result, but if it's lower than what you get from a command-line ping, it has probably been skewed by something.
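One concrete source of disagreement: browser-based testers generally cannot send ICMP, so they often time a TCP connection (or an HTTP exchange) instead, which includes handshake overhead that ping does not. A rough sketch of TCP-handshake timing, demoed against a throwaway local listener so it runs offline (in practice you would point it at the test server's address):

```python
import socket
import threading
import time

def tcp_connect_latency_ms(host, port, samples=5):
    """Latency measured as TCP handshake time. This is not the same
    number an ICMP ping gives, which is one reason testers disagree."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000.0)
    return times

# Throwaway local listener purely so the sketch is self-contained.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
threading.Thread(target=lambda: [srv.accept() for _ in range(5)],
                 daemon=True).start()

rtts = tcp_connect_latency_ms("127.0.0.1", srv.getsockname()[1])
print(len(rtts), all(t >= 0 for t in rtts))
```

Comparing numbers from this kind of measurement against `ping` output is a quick way to spot when a tester's figure has been skewed.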
Also check out dslreports' tester; that one is interesting, as it lets you heavily customise how the test runs: you can pick the test servers used, the way the data is downloaded (fetch vs GET, etc.), how often latency is measured during the test, the timer used for measurement, etc., and these can significantly affect the test outcome. Bear in mind as well that Chrome has added a ton of performance-affecting features lately, doing things like throttling timers and capping the CPU utilisation of JavaScript, which can skew speedtesters.
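To see why timer behaviour matters, here is a toy Python sketch (a hypothetical stand-in, not Chrome's actual mechanism) of measuring a ~5 ms event with a precise clock versus a clock quantised to 100 ms, similar in spirit to a browser's reduced-precision timer:

```python
import time

def coarse_clock(resolution_s):
    # Hypothetical stand-in for a reduced-precision browser timer:
    # readings are snapped down to a multiple of 'resolution_s'.
    def clock():
        t = time.perf_counter()
        return t - (t % resolution_s)
    return clock

def timed_ms(clock, work_s=0.005):
    # Time a ~5 ms wait (stand-in for waiting on a latency probe reply).
    start = clock()
    time.sleep(work_s)
    return (clock() - start) * 1000.0

fine = timed_ms(time.perf_counter)    # close to the real ~5 ms
coarse = timed_ms(coarse_clock(0.1))  # snaps to a multiple of 100 ms

print(fine >= 4, round(coarse) % 100 == 0)
```

With the coarse clock a 5 ms RTT is reported as either 0 ms or 100 ms, never ~5 ms, so a JavaScript tester stuck with a throttled timer can be wildly off even when the network is steady.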