Posted by Nada on 13th May 2026
Governing the machines: Why AI legitimacy is becoming the boardroom’s biggest test

At the recent Future of Governance International Conference 2026 in Bucharest, hosted by ENVISIA, our panel, ‘Governing the Machines: A Live Boardroom Stress Test’, confronted a difficult reality facing modern boards: AI governance is rapidly becoming one of the defining governance challenges of our time.

And yet too many organisations still approach it primarily as a technical or compliance exercise, rather than a question of legitimacy.

The panel brought together complementary perspectives. Andreea explored governance architecture: the structures, processes and mechanisms boards are building to oversee AI. Stefan focused on trustworthy systems, examining the reliability, transparency and technical integrity of AI itself. My contribution addressed the issue sitting beneath both: Who actually has the moral authority to govern AI systems, and on what basis?

A legitimacy problem
Much of today’s AI governance discourse assumes that if boards establish the right framework, create an ethics committee or commission sufficient audits, governance naturally follows. But governance rarely fails because boards lack frameworks. More often, it fails because boards lack the legitimate authority to exercise ethical judgement over systems they neither fully understand nor meaningfully control.

This became the central provocation of our panel.

I argued that ethical authority can’t simply be delegated, although organisations delegate it constantly. Boards routinely outsource ethical judgement to vendors, algorithms, consultants and compliance functions.

These decisions affect employment, access to services, security, healthcare and finance, and are increasingly mediated by systems that few directors could confidently explain.

The critical issue, therefore, is not whether an AI governance framework exists, but whether boards possess both the capability and moral standing to exercise meaningful ethical oversight over decisions made in their name.

This distinction matters because governance without understanding quickly becomes theatre.

I discussed the difference between accountability and answerability. Accountability is structural. It identifies who appears on the organisational chart when something goes wrong. Answerability, however, is moral. It concerns who can genuinely explain and defend the reasoning behind a decision.

Most boards today are formally accountable for AI systems, but very few are answerable for them. This gap is where harm lives.

When an algorithm discriminates, excludes, manipulates or produces unintended consequences, organisations often retreat into procedural language: The model was tested, the framework was followed, the vendor complied with standards. And yet none of these responses addresses the deeper human question: Why was this decision right?

The inability to answer that question exposes a weakness at the centre of many governance systems. Increasingly, boards possess procedures without possessing ethical clarity.

All too often, ethics is treated as a constraint on governance, an additional hurdle after strategy and innovation. But ethics is not external to governance. It is governance’s only legitimate foundation.

The role of the board, therefore, is not merely to manage AI risk. It is to make decisions that can be morally defended to those affected by them, in terms those individuals would recognise as fair and legitimate.

That requires more than policies, audits and dashboards. It demands boards capable of genuine ethical reasoning: boards willing to confront ambiguity, wrestle with competing values, and accept responsibility for decisions that cannot simply be delegated to machines or technical experts.

As AI systems become increasingly embedded in organisational life, the challenge facing directors is no longer whether AI will transform governance. It already has. The real question is whether governance itself can remain meaningfully human.

I closed the panel with a challenge to directors in the room: “Ask your board not, ‘Do we have an AI policy?’ but ‘Could we stand in front of the people our AI has affected and explain why the decision was right?’ If you cannot, you lack governance. You have paperwork.”