Posted by Andrew & Nada on 17th June 2021
System Error: AI threatens self-governance at the expense of business

From the Post Office scandal to chatbots creating their own language, Artificial Intelligence is in danger of becoming ungovernable without urgent human intervention, say Professors Nada and Andrew Kakabadse of Henley Business School.

Businesses are facing an Orwellian nightmare that few even recognise exists. As they become increasingly dependent on Artificial Intelligence, these same technology systems are often left to run without any human intervention.

By way of example, the governance of blockchain technology is set to leave organisations distinctly vulnerable, as algorithmic blips in the system are likely to remain undetected because a counter-algorithm oversees them.

There are many lessons to be learnt from the long-running Post Office scandal.

Between 2000 and 2014 the Post Office prosecuted 736 sub-postmasters and postmistresses, an average of one a week, based on data compiled by a computer system called ‘Horizon,’ a Fujitsu development first installed in 1999.

According to the resulting High Court ruling the Horizon system contained “bugs, errors and defects.” It was identified as posing a material risk to some 2,400 postmasters and postmistresses through its handling of in-branch accounts.

Victims prosecuted on IT evidence alone
Those victims unable or unwilling to pay these supposed shortfalls were prosecuted for theft, false accounting and in some cases fraud, based on the IT evidence alone and without proof of criminal intent.

The Post Office management did not accept responsibility for any supposed system error, despite one sub-postmaster raising concerns as early as 2000, followed by a number of IT professionals external to the Post Office.

It took 20 years, numerous failed investigations and a civil class action brought by 550 sub-postmasters and postmistresses before the innocence of the claimants was acknowledged. The BBC described the convictions as “the UK’s most widespread miscarriage of justice.” This is just the tip of the iceberg. In the Post Office case, responsibility potentially reached as high as ministerial level.

What if similar malfunctions occur in the future and once again bypass all human intervention? Errors in an AI system may be far more damaging and much harder to detect.

When making a decision, AI relies on inbuilt algorithms and a massive amount of data which it processes to arrive at certain ‘conclusions.’ For an AI system to perform effectively, it is critical to create an environment which is as unambiguous and predictable as possible. This, of course, assumes that the algorithms invoked are unbiased from inception.

In real life, such an environment is a major challenge to recreate artificially. Humans have an innate safety net that enables us to cope with uncertainty, vagueness and ever-changing circumstances. Machines and software do not.

Who controls AI learning?
For AI to deliver impeccable service in the human world, systems need to learn to think like a human, which immediately raises a vexed question: who or what controls the AI-learning system?

The interaction between Facebook chatbots left to their own devices shows how systems can quickly develop a language all of their own that is incomprehensible to humans. This poses little danger in and of itself.

However, debugging AI-learning systems through a process of reverse-engineering is long and arduous. Imagine if the Post Office scandal had occurred without any humans in the loop. How many more sub-postmasters would have been prosecuted, and when, if ever, would such a miscarriage of justice have come to light?

The governance of AI-learning systems is both a board and a government concern. The reason is fundamental: when the system is challenged, the instinctive reaction is to defend it.

Independent stewardship of our institutions is needed now more than ever. The emergent AI world in which we increasingly operate requires an independent ‘Custodian of Information,’ with the function of investigating suspected AI-led injustice. This should include the freedom to dig deeply into the context of each individual circumstance and to report findings in a fair and impartial manner.

In an environment full of system errors overseen by AI, the governance of the future requires resolute humans who can pursue concerns and exercise independent oversight. Unfortunately, our present-day ‘compliance mentality’ is likely to stifle the creation of such a body, as its role would probably be reduced to completing yet another arduous but legally binding checklist.

This article first appeared in Board Agenda magazine.