AI Systems and Liability: An Assessment of the Applicability of Strict Liability & A Case for Limited Legal Personhood for AI


Louisa McDonald


Recent advances in artificial intelligence (AI) and machine learning have prompted discussion about whether conventional liability laws can apply to AI systems that exhibit a high degree of autonomy. Users and developers of such systems may meet neither the epistemic condition (sufficient awareness of what is happening) nor the control condition (control over the actions performed) of personal responsibility for the system's actions, and conventional liability schemes may therefore seem inapplicable[1].

The recently adopted AI Liability Directive [2022] has sought to adapt EU law to the challenges that AI systems pose to conventional liability schemes by imposing strict, rather than fault-based, liability for AI systems. The aim is to make it easier to hold developers, producers, and users of AI technologies accountable, requiring them to explain how their AI systems were built and trained, and thereby to make it easier for people and companies harmed by AI systems to sue those responsible for damages. However, the Directive appears to ignore the potential injustice of holding producers and developers accountable for actions of AI systems that they were neither aware of nor had sufficient control over.

In this essay, I will critically assess the Directive's system of strict liability for AI systems and argue that, whilst such a system may confer some instrumental advantages on those suing for damages caused by AI systems, it risks injustice to developers and producers by making them liable for events they could neither control nor predict. This is likely both to produce unjust outcomes and to hinder progress in AI development. Instead, following Visa Kurki's analysis of legal personhood as a cluster concept divided into passive and active incidents, I will argue that some AI systems ought to be granted a limited form of legal personhood, because they meet some of the relevant criteria for active legal personhood, such as the capacity to perform acts-in-the-law. The legal personhood I propose for AI systems is a kind of dependent legal personhood analogous to that granted to corporations. Such a form of legal personhood would not absolve developers and producers of liability for damages (where such liability is applicable), but neither would it risk unjustly holding them liable for the actions of an AI system.

[1] Mark Coeckelbergh, "Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability," Science and Engineering Ethics (2020): 2054.
