Gabriel Mahia · Systems · Power · Strategy

Technology and Power II — The Algorithm as Governance

Algorithmic systems make consequential decisions about people's lives. That makes them governance, regardless of what their operators call them.

When Algorithms Govern

Governance is the set of processes through which consequential decisions about people's circumstances are made. Historically, governance was the domain of formal institutions — governments, courts, regulatory bodies — whose decisions were subject to legal constraints, democratic accountability, and due process requirements that protected the people subject to those decisions. The emergence of algorithmic decision systems has created a new category of consequential decision-making that operates with the scope and impact of governance but without its accountability architecture.

The algorithm that determines which job applicants are screened out before a human reviewer sees their application is making consequential decisions about people's employment prospects. The algorithm that determines which social media content is amplified and which is suppressed is making consequential decisions about what information people encounter and what public discourse looks like. The algorithm that determines creditworthiness, insurance premiums, or bail recommendations is making consequential decisions about people's economic and legal circumstances. These are governance decisions in every meaningful sense — they determine the distribution of opportunities, resources, and constraints among populations — and they are currently made without the accountability mechanisms that other forms of governance are subject to.

The Accountability Architecture Gap

The accountability architecture that governs traditional decision-making — the requirement to state reasons, the right to challenge adverse decisions, the prohibition on discriminatory criteria, the obligation to apply consistent standards — was designed for human decision-makers operating within formal institutional frameworks. Algorithmic decision systems are typically neither human nor formally institutional in the relevant sense: they make decisions at scale, at speed, and through processes that are opaque to the people subject to them and often to their operators as well.
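The four requirements named above (stated reasons, challengeability, prohibited criteria, consistent standards) can be made concrete. Here is a minimal, hypothetical sketch of what an auditable decision record might look like for a screening algorithm; the feature names, thresholds, and policy labels are illustrative assumptions, not a description of any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical protected attributes the decision rule must not read.
PROHIBITED_FEATURES = {"race", "gender", "religion"}

@dataclass
class DecisionRecord:
    """Auditable record for one algorithmic decision: the stated
    reasons give the subject something concrete to challenge, and the
    policy version identifies the standard that was applied."""
    subject_id: str
    outcome: str                # e.g. "screened_out"
    reasons: list[str]          # stated reasons, reviewable by the subject
    policy_version: str         # the consistent standard applied
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def decide(applicant: dict, policy_version: str = "v1.0") -> DecisionRecord:
    """Toy screening rule: drops prohibited features before deciding,
    and records a reason for every adverse criterion that fired."""
    features = {k: v for k, v in applicant.items()
                if k not in PROHIBITED_FEATURES}
    reasons = []
    if features.get("years_experience", 0) < 2:  # illustrative threshold
        reasons.append("years_experience below threshold of 2")
    outcome = "screened_out" if reasons else "forwarded_to_reviewer"
    return DecisionRecord(applicant["id"], outcome, reasons, policy_version)

record = decide({"id": "a-17", "years_experience": 1, "race": "redacted"})
print(record.outcome, record.reasons)
```

The point of the sketch is not the screening logic, which is trivial, but the record: a system that cannot emit something like `DecisionRecord` for each individual decision is one whose outputs cannot be explained, challenged, or audited for consistency.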

This opacity is not merely a technical feature — it is a governance design choice that insulates algorithmic decisions from the accountability that would otherwise apply. The operator who cannot explain why the algorithm produced a specific output for a specific individual has not merely failed to communicate clearly — they have implemented a governance system without the accountability architecture that governance requires.

The algorithm that makes consequential decisions about people's lives is a governance system. Calling it a product, a tool, or a recommendation engine does not change what it does. The accountability gap it creates is not a technical problem. It is a political choice about whether the people subject to algorithmic decisions have the rights that being subject to any other form of governance would entail.
