Oops - sorry, I missed the question.
Not always. Indeed, in most of my professional life, most of the faults I've dealt with have originated with the operator themselves, not the end-user. Appropriate use of an OMC/NMS (Operations & Maintenance Centre / Network Management System) should give a proper overview of the statistics of large-scale behaviour.
Are you talking about the UK CPs, though?
I'm really talking about operators with their own infrastructure, and the network-management functionality to monitor it, or to perform self-test on it. Such operators are perfectly capable of observing a fault themselves, reporting it themselves, and fixing it themselves.
This is, of course, an issue way beyond the scope of just a fault with one copper access pair. But even that can be seen and fixed - BT have their own self-test hardware that can be deployed automatically.
My background is, however, more from the mobile world, seen from the equipment vendor's perspective. At times, I've been involved with Vodafone, T-Mobile and O2 in the UK, and plenty of other operators around the world. I still remember my first visit to one2one's network management centre, with many status screens across the wall - including all the indications of alarms from various pieces of hardware, or various links that had failed.
On the issue at hand, I would have thought that any decision to trial, or subsequently roll out, G.INP on ECI DSLAMs would be run with monitoring of the resulting outcome: some idea of how many subscribers got faster vs slower speeds, and how many saw fewer vs more FECs (likewise for ES, CRCs etc). Given the delays, I'd expect people from Openreach, TSO and ECI to work hand-in-hand at both gathering those statistics and analysing them for problems.
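For illustration only, that kind of before/after comparison could be sketched in a few lines. Everything here is hypothetical - the field names (sync_kbps, fec_count, es_count) and the sample figures are invented; a real Openreach/TSO dataset would obviously look different:

```python
# Hypothetical sketch: tally per-line outcomes before vs after a G.INP trial.
# Field names and sample data are invented for illustration only.
from dataclasses import dataclass

@dataclass
class LineStats:
    sync_kbps: int   # downstream sync speed
    fec_count: int   # forward error corrections per day
    es_count: int    # errored seconds per day

def classify(before: dict, after: dict) -> dict:
    """Count how many lines got faster/slower and saw fewer/more FECs."""
    tally = {"faster": 0, "slower": 0, "fewer_fec": 0, "more_fec": 0}
    for line_id, b in before.items():
        a = after.get(line_id)
        if a is None:
            continue  # line dropped out of the trial sample
        if a.sync_kbps > b.sync_kbps:
            tally["faster"] += 1
        elif a.sync_kbps < b.sync_kbps:
            tally["slower"] += 1
        if a.fec_count < b.fec_count:
            tally["fewer_fec"] += 1
        elif a.fec_count > b.fec_count:
            tally["more_fec"] += 1
    return tally

before = {"line1": LineStats(39999, 120000, 40),
          "line2": LineStats(55000, 5000, 2)}
after  = {"line1": LineStats(42000, 800, 1),
          "line2": LineStats(52000, 9000, 6)}
print(classify(before, after))
# → {'faster': 1, 'slower': 1, 'fewer_fec': 1, 'more_fec': 1}
```

The point isn't the code itself, but that the aggregation is trivial once the per-line stats are being collected - which is exactly what an OMC/NMS is for.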
We know BT monitored the outcome for the Huawei estate, as we've seen graphs of a couple of outcomes:
http://postimg.org/image/bnxq1bpxd/