The governance gaps that algorithmic systems create are not technical failures. They reflect deliberate design choices whose consequences are borne by the people subject to those systems.
The Governance Gap Is Not Accidental
The governance gaps created by algorithmic decision systems include the absence of explainability, the lack of meaningful appeal mechanisms, and immunity from the anti-discrimination requirements that govern equivalent human decisions. These gaps are not incidental. They are structural features reflecting deliberate choices by system designers, deploying institutions, and regulatory frameworks, which together produce the specific accountability absences that characterise algorithmic governance. The choice to deploy an algorithmic system that makes consequential decisions without retaining the ability to explain any individual decision is exactly that: a choice, made by someone who weighed the cost of explainability against the performance or cost benefit of an unexplainable system and chose the latter.
Designing for Accountability
Designing algorithmic systems for accountability means building in the ability to explain individual decisions, to allow meaningful challenge to adverse ones, and to identify and address systematic bias. This is technically feasible in most domains where such systems are currently deployed without these features. Accountability does impose costs in engineering effort, performance, and operational complexity, but these are the costs of governance: the price of operating a consequential decision system with the accountability that its consequential character requires.
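To make the design requirement concrete, the sketch below shows one minimal shape such accountability could take: a decision record that retains per-feature contributions for every individual decision (explainability), accepts an attached challenge (appeal), and keeps enough state to audit outcomes later. All names, weights, and the linear scoring model are hypothetical illustrations, not a reference to any deployed system.

```python
from dataclasses import dataclass

# Hypothetical feature weights for a toy linear scoring model.
WEIGHTS = {"income": 0.5, "debt": -0.8, "tenure": 0.3}
THRESHOLD = 0.0

@dataclass
class DecisionRecord:
    """Retains everything needed to explain and challenge one decision."""
    applicant_id: str
    contributions: dict   # per-feature contribution to the score
    score: float
    approved: bool
    appeal_note: str = ""

def decide(applicant_id: str, features: dict) -> DecisionRecord:
    """Score an applicant while retaining the per-feature explanation."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return DecisionRecord(applicant_id, contributions, score, score >= THRESHOLD)

def explain(record: DecisionRecord) -> str:
    """Render an individual explanation from the retained record."""
    parts = sorted(record.contributions.items(),
                   key=lambda kv: abs(kv[1]), reverse=True)
    return "; ".join(f"{name}: {value:+.2f}" for name, value in parts)

def appeal(record: DecisionRecord, note: str) -> None:
    """Attach a challenge to the original record instead of discarding it."""
    record.appeal_note = note

record = decide("a1", {"income": 1.0, "debt": 0.5, "tenure": 2.0})
print(record.approved)   # contributions: +0.50, -0.40, +0.60 -> score 0.70
print(explain(record))
appeal(record, "debt figure is out of date")
```

The point of the sketch is that none of this is exotic: retaining contributions and accepting appeals is a modest amount of code, which supports the claim that omitting these features is a choice rather than a technical necessity.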
An algorithmic system that makes consequential decisions without accountability has not avoided the governance challenge; it has shifted the governance cost from the institution deploying the system to the people subject to its decisions. That is a governance choice, and it should be made explicitly rather than by default.
Discussion