Can AI be criminally accountable?

This article was written by Theo Richardson-Gool for Legal Cheek.

Artificial intelligence (AI) takes decisions away from humans, but who is accountable? Do different legal standards apply to AI?


Is a human-based legal system fit for autonomous machines? At first glance, it seems logical that a manufacturer of autonomous vehicles should be held to account for any malfunction, for example where AI makes a decision that conflicts with our laws, such as running over a pedestrian to avoid injuring the passenger.

But is the manufacturer at fault if mens rea cannot be proved? After all, AI makes decisions autonomously of its manufacturer or programmers. This raises the question: can AI have a guilty mind?

If the answer is no, how do you prove criminal liability, given that in jurisprudence a guilty act requires a guilty mind? In other words, it can be argued that, under our laws, AI lacks sufficient mental capacity to be guilty of a crime. This raises a further question: do we need different legal tests for AI?

If a different burden of proof is required for AI, is greater oversight needed of the data fed into the deep-learning process?

Autonomous machines are trained by being fed data, which in turn shapes the artificial neural network. But the user has virtually no understanding of the decision-making process, from the input of data to the output of a decision.

Sherif Elsayed-Ali of Amnesty argues, “we should always know when AI is aiding or making decisions and be able to have an explanation of why it was made.”

However, according to associate professor David Reid of Liverpool Hope University, this may not be possible: unravelling the reasoning process of AI is challenging because “the choices are derived from millions and millions of tiny changes in weights between artificial neurons.”

In other words, we may not be able to compute the reasoning process. Transparency of the data input is therefore especially important: it shines a light into the ‘black box’, giving us the oversight to reduce potential AI biases and even to re-program or re-educate AI so that faults are minimised.

Replicating human bias in AI

Any system designed by humans is likely to reflect our biases. Humans have discriminatory preferences, but do we want our prejudicial tendencies to be extended by AI? This is what happened in Britain when the police used facial recognition software that, “through replication or exacerbation of bias”, projected human prejudices into AI and discriminated against black people.

Concern about AI amplifying existing bigotry is a real problem which can lead to ‘unintentional discrimination’. Dr Schippers called this the “white guy problem”: the fear of racial and gender stereotypes being perpetuated by AI systems in policing, judicial decision-making and employment, inter alia.

Should AI promote diversification by also challenging our tastes, or is that misleading? Perhaps preference settings are an option, where we choose how much dissonance we want AI to introduce into our lives, a bit like setting the level of honesty you want when deciding which news source to read.

Difficult questions about the ethics of AI, and how it is being used, arise in the process of adopting it. Several points are clear from these findings:

First, greater consideration needs to be given to whether AI can have a guilty mind.

Second, we need transparency at the design and programming stage, when considering the data being input, so that we maintain some oversight and, as far as possible, avoid extending human prejudices into AI.

Third, we need to consider creating an ethical AI framework to guide general systems.

Lastly, when applying AI to social media and the internet, serious consideration needs to be given to whether we want a system that perpetuates echo chambers and affirms existing habits and tastes.
