AI Systems and Liability: An Assessment of the Applicability of Strict Liability & A Case for Limited Legal Personhood for AI
Abstract
Recent advances in artificial intelligence (AI) and machine learning have prompted discussion about whether conventional liability laws can apply to AI systems that exhibit a high degree of autonomy. Users and developers of such AI systems may meet neither the epistemic condition (a sufficient degree of awareness of what is happening) nor the control condition (control over the actions performed) of personal responsibility for the actions of the system at hand, and conventional liability schemes may therefore seem inapplicable [1].
The AI Liability Directive proposed in 2022 seeks to adapt EU law to the challenges that AI systems pose to conventional liability schemes by imposing a system of strict, rather than fault-based, liability for AI systems. The aim is to make it easier to hold developers, producers, and users of AI technologies accountable, requiring them to explain how AI systems were built and trained, and thereby to make it easier for people and companies harmed by AI systems to sue those responsible for damages. However, the Directive seems to ignore the potential injustice that could result from producers and developers being held accountable for actions of AI systems of which they are neither aware nor over which they have sufficient control.
In this essay, I will critically assess the Directive's system of strict liability for AI systems and argue that, whilst such a system may confer some instrumental advantages on those suing for damages caused by AI systems, it risks doing injustice to developers and producers by making them liable for events they could neither control nor predict. This is likely both to produce unjust outcomes and to hinder progress in AI development. Instead, following Visa Kurki's analysis of legal personhood as a cluster concept divided into passive and active incidents, I will argue that some AI systems ought to be granted a limited form of legal personhood, because they meet some of the relevant criteria for active legal personhood, such as the capacity to perform acts-in-the-law. The legal personhood I propose for AI systems is a kind of dependent legal personhood analogous to that granted to corporations. Such a form of legal personhood would not absolve developers and producers from liability for damages (where such liability is applicable), but nor would it risk unjustly holding them liable for actions of an AI system that they could neither foresee nor control.
[1] Mark Coeckelbergh, "Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability," Science and Engineering Ethics (2020): 2054.
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).