Gabriel Mahia
Systems · Power · Strategy

The Metrics of Neglect

Institutions normalise preventable loss by measuring the wrong things with great precision.

Precision Without Direction

Measurement precision is not the same as measurement accuracy, and the distinction is consequential. Precision means the ability to produce reliable, reproducible values for a defined indicator. Accuracy means the ability to produce indicators that reflect what actually matters about the phenomenon being managed. An institution can be highly precise — producing consistent, auditable, methodologically rigorous data — while being fundamentally inaccurate in its measurement of its own performance.
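The precision/accuracy distinction above can be made concrete with a minimal numeric sketch. All values here are hypothetical: a tightly clustered "process metric" that is highly reproducible but centred far from the outcome that matters, versus a noisier metric that actually tracks the outcome.

```python
import statistics

# Hypothetical: the true outcome value the institution should be tracking.
true_outcome = 40.0

# A precise process metric: tightly clustered, reproducible readings,
# but centred on activity volume rather than on the outcome itself.
process_metric = [92.1, 92.3, 91.9, 92.0, 92.2]

# A noisier attempt to measure the outcome directly.
outcome_metric = [38.5, 42.0, 39.8, 41.1, 37.9]

def precision(readings):
    """Lower standard deviation = more precise (more reproducible)."""
    return statistics.stdev(readings)

def accuracy_error(readings, target):
    """Distance of the mean reading from the value that matters."""
    return abs(statistics.mean(readings) - target)

print(f"process metric: stdev={precision(process_metric):.2f}, "
      f"error vs outcome={accuracy_error(process_metric, true_outcome):.1f}")
print(f"outcome metric: stdev={precision(outcome_metric):.2f}, "
      f"error vs outcome={accuracy_error(outcome_metric, true_outcome):.1f}")
```

The process metric wins on every precision test (stdev ≈ 0.16 vs ≈ 1.72) while being wildly inaccurate about the outcome (error ≈ 52 vs ≈ 0.1) — consistent, auditable, and measuring the wrong thing.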

The metrics of neglect are precise indicators of process performance that are accurate measures of nothing relevant to the outcomes the institution was built to produce. They are generated by institutions that have invested in measurement infrastructure, staffed analytical functions, and built reporting systems — but have built that infrastructure around indicators that are measurable rather than important. The result is a management system that is data-rich and outcome-blind, that produces high-quality evidence of its own activity while the losses it was designed to prevent accumulate invisibly.

How Neglect Metrics Form

Neglect metrics form through a process that has institutional logic at each step even though the aggregate result is dysfunctional. Organisations under performance pressure require evidence of performance. Evidence of performance requires measurement. Measurement infrastructure is most efficiently built around indicators that are already being collected — administrative data, activity counts, process outputs — rather than around the outcomes those activities are supposed to produce, which are frequently harder to measure, slower to observe, and more expensive to attribute.

The early choice to measure available indicators rather than important ones is individually rational and systemically path-dependent. Once the measurement infrastructure exists, the indicators it was built to measure become the organisation's de facto definition of performance. Resources are allocated toward activities that move the measured indicators. The performance management system develops around the available measurements, creating layers of targets, incentives, and accountability relationships that are all oriented toward indicators that may have little connection to the actual outcomes stakeholders care about.

What Gets Normalised

When an institution's metrics accurately track its process activity but fail to track its impact on the outcomes it was designed to affect, the outcomes can deteriorate steadily without triggering any internal alarm — because all the alarms are connected to the process metrics, which continue to look fine. The loss is preventable in the sense that the institution has the mandate, the resources, and in many cases the operational capability to address it. It is not being prevented because the loss is not appearing in the metrics that trigger management attention and resource reallocation.
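The failure mode above — every alarm wired to process metrics, none to outcomes — can be sketched as a toy dashboard. All metric names, thresholds, and figures here are hypothetical illustrations, not data from any real institution.

```python
# A minimal sketch of a dashboard whose alarms are wired only to
# process metrics; outcome data is recorded but triggers nothing.

def check_alarms(metrics, thresholds):
    """Return names of metrics that breach their alarm floor.
    Only metrics listed in `thresholds` can ever raise an alarm."""
    return [name for name, floor in thresholds.items()
            if metrics.get(name, 0) < floor]

# The institution tracks everything below, but alarms exist only
# for the two process indicators.
alarm_thresholds = {
    "cases_processed_per_month": 900,
    "reports_filed_on_time_pct": 95,
}

# Quarter by quarter, process activity holds steady while the loss
# the institution exists to prevent steadily worsens.
quarters = [
    {"cases_processed_per_month": 980, "reports_filed_on_time_pct": 97,
     "preventable_loss": 12},
    {"cases_processed_per_month": 975, "reports_filed_on_time_pct": 96,
     "preventable_loss": 31},
    {"cases_processed_per_month": 990, "reports_filed_on_time_pct": 98,
     "preventable_loss": 58},
]

for i, q in enumerate(quarters, 1):
    alarms = check_alarms(q, alarm_thresholds)
    print(f"Q{i}: alarms={alarms or 'none'}, "
          f"preventable_loss={q['preventable_loss']}")
```

No quarter ever raises an alarm: the process metrics stay comfortably above their floors while the preventable loss roughly quintuples, invisible to the only machinery that triggers management attention.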

The Way Out

The organisations that have successfully replaced neglect metrics with outcome metrics have consistently done so through external pressure that created a constituency for change with more influence than the constituency defending the existing system — funders conditioning resources on outcome evidence, regulators requiring outcome reporting, communities demanding accountability for results rather than activities. Internal reform alone, without external pressure, rarely overcomes the structural inertia of embedded measurement systems.

Organisations that measure activity instead of impact do not fail to notice their failures — they produce high-quality documentation of them, filed under metrics that say everything is fine.
