To be strictly accurate you have to say how much of what it is you’re sending. I would go for IP PDUs, which means you count the IP headers themselves as well as everything they carry; and you need to say whether you mean IPv4 or IPv6. If using TCP you also have to include both the IP and TCP headers, say whether TCP timestamps are on or off, and then allow for the slowness of the TCP protocol itself: when a packet is dropped, TCP slows down and takes some time to recover to full speed if the implementation is less than ideal. A good setup will work out the bottleneck link speed and keep close to it, rather than overloading the link, backing off, and cycling round again, which gives a sawtooth-shaped speed-vs-time graph on a detailed speed checker. That overhead is difficult to estimate because it depends on the TCP implementations in use at both ends, the path taken through the internet, the time delays, and the amount of buffering at the end-nodes and intermediate nodes.
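That sawtooth comes from TCP's additive-increase/multiplicative-decrease behaviour. A minimal sketch (not a real TCP stack; the bottleneck size, the +1-segment-per-RTT increase, and the halve-on-loss rule are the textbook idealisation, and the numbers are hypothetical):

```python
# Illustrative AIMD (additive-increase / multiplicative-decrease) sketch,
# showing why throughput traces a sawtooth: the congestion window climbs
# one segment per round trip, overshoots the bottleneck, a packet drops,
# and the window is halved. Numbers are hypothetical, not a real stack.
def aimd_sawtooth(bottleneck=100, rtts=300):
    cwnd = 1.0            # congestion window, in segments
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd > bottleneck:     # link overloaded -> a packet is dropped
            cwnd /= 2             # multiplicative decrease
        else:
            cwnd += 1             # additive increase, one segment per RTT
    return history

trace = aimd_sawtooth()
print(min(trace), max(trace))     # oscillates between ~bottleneck/2 and ~bottleneck
```

Plot `trace` against time and you get exactly the sawtooth a detailed speed checker shows: average throughput sits noticeably below the line rate because half of each cycle is spent climbing back up.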
The 940 Mbps figure might be a TCP payload-only number as reported by some speed tester. If it is counting TCP payload only, then the true link performance is rather greater, because the overheads of the IP and TCP headers have been left out, as has the slowness of the TCP protocol itself. Presumably they reckon TCP payload is how the ‘man in the street’ thinks of ‘speed’. A carefully designed UDP-based measuring tool is the right way to characterise the link itself, rather than the TCP software at each end. It would be interesting to try iperf3.
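In fact the ~940 Mbps number is exactly what falls out of gigabit Ethernet once every per-packet overhead on the wire is counted. A sketch, assuming IPv4, a standard 1500-byte MTU, and TCP timestamps on:

```python
# Where ~940 Mbps over gigabit Ethernet comes from: count every
# per-packet overhead on the wire and see what fraction is TCP payload.
# Assumes IPv4, a 1500-byte MTU, and TCP timestamps enabled.
LINE_RATE = 1_000_000_000      # gigabit Ethernet, bits per second

MTU        = 1500              # IP packet size
IP_HDR     = 20                # IPv4 header, no options
TCP_HDR    = 20                # base TCP header
TCP_TSTAMP = 12                # TCP timestamp option, with padding
payload    = MTU - IP_HDR - TCP_HDR - TCP_TSTAMP   # 1448 bytes of data

ETH_HDR    = 14                # destination, source, EtherType
FCS        = 4                 # frame check sequence
PREAMBLE   = 8                 # preamble + start-of-frame delimiter
IFG        = 12                # inter-frame gap
on_wire    = MTU + ETH_HDR + FCS + PREAMBLE + IFG  # 1538 bytes per frame

goodput = LINE_RATE * payload / on_wire
print(f"{goodput / 1e6:.1f} Mbps")   # ≈ 941.5 Mbps of TCP payload
```

So 940-odd Mbps is the theoretical best case for TCP payload on a gigabit link even before TCP's own sawtooth behaviour costs anything; a tester reporting 940 is effectively measuring a flawless link and quietly throwing away the header overhead.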
Actually you could argue that one should add in the overhead of the Ethernet headers too, if the FTTP service is delivering Ethernet rather than IPv4/IPv6 alone.
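The layer you count at changes the answer noticeably. A sketch comparing the same gigabit link measured at each layer (assuming IPv4, a 1500-byte MTU, TCP timestamps on, and the standard 38 bytes of Ethernet framing per frame, including preamble and inter-frame gap):

```python
# The same gigabit link reports a different "speed" depending on which
# layer you count at. Assumes IPv4, 1500-byte MTU, TCP timestamps on;
# Ethernet framing adds 38 bytes per frame on the wire.
LINE_RATE = 1_000_000_000                # bits per second on the wire
FRAME     = 1538                         # MTU + hdr + FCS + preamble + gap
frames_per_sec = LINE_RATE / (FRAME * 8)

rates = {}
for label, size in [("Ethernet frames (wire)", 1538),
                    ("IP packets",             1500),
                    ("TCP payload",            1448)]:
    rates[label] = frames_per_sec * size * 8 / 1e6
    print(f"{label:24s} {rates[label]:7.1f} Mbps")
```

Counting Ethernet framing the link runs at its full 1000 Mbps; counting IP PDUs it is about 975 Mbps; counting TCP payload it is about 941 Mbps. All three are honest descriptions of the same wire, which is why stating what you are counting matters.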