In October 2017 the humanoid robot known as Sophia, equipped with artificial intelligence, obtained Saudi Arabian citizenship. In May 2018 Google showcased the capabilities of its product Google Duplex, whose AI system can arrange an appointment at the hairdresser’s or reserve a table at a restaurant, while avoiding misunderstandings on the phone and imitating the gap-filling hems and haws of human conversation. Observing the capabilities of such machines, a lawyer naturally turns to the issue of the potential legal personality of AI.
What is legal personality?
Legal personality, that is, the capacity to be the subject of rights and obligations and to determine one’s own legal situation, is ascribed by law to human beings (natural persons). A natural person has self-awareness, intelligence, free will, and feelings. Some natural persons, in light of their immaturity (children) or intellectual disability, have partially or completely limited capacity to autonomously acquire rights and assume obligations, and thus responsibility for them is borne by their legal guardians.
The notion of legal personality in the sense of the capacity to be the subject of rights and obligations and to establish one’s own legal situation has been expanded to cover entities grouping together individuals sharing common interests, such as states and commercial entities. They are “artificial” persons, known as “legal persons,” created by the humans standing behind them. The detachment of legal persons from the natural persons standing behind them (e.g. authorities and entrepreneurs) occurred over a long process, through the evolution of abstract legal concepts.
Legal persons are the subject of rights and obligations and shape their legal environment through the persons managing them (as the legal persons’ authorities). As long as we remain within the spectrum of rights and obligations arising under the civil law, the natural persons standing behind the actions of legal persons generally remain in the background. This changes in the case of criminal responsibility, however. Imagine, for example, a catastrophe blamed on a commercial entity: the entity itself cannot be sent to prison. In such a case, under Polish law, a sentence of imprisonment is imposed on the natural persons responsible for the actions of the legal entity.
Apart from the abstract constructs of legal persons, we might also consider the legal personality of living creatures other than humans. Some animals have self-awareness and complex problem-solving skills, and also experience suffering, fear, pleasure and joy. For example, the issue of the legal personhood of chimpanzees was considered by a New York court in Nonhuman Rights Project, Inc. v Stanley, in which an NGO filed a writ of habeas corpus seeking the release of Hercules and Leo, two chimpanzees confined in a laboratory at Stony Brook University.
In its petition the NGO argued that the law does not define the notion of a person for purposes of habeas corpus. Given the lack of any precedent applying habeas corpus to anyone other than a human, the court agreed to consider whether this institution could apply to a chimpanzee. An amicus curiae brief filed in the case argued that under New York law, legal personality is held by humans and certain public and private entities, but that the legal personality vested in non-human entities is justified because those entities are composed of humans; thus personhood should not be extended to cover animals.
In its judgment, the court refused to recognise the personhood of chimpanzees because they are not capable of bearing legal responsibility for their actions, and also are not capable of performing obligations. The court also pointed out that it is the capacity to assume rights and obligations, and not the physical resemblance to humans, that is decisive for recognising the legal personality of a being.
Turning to the issue of the legal personality of robots, I should note that this reasoning applies to robots equipped with artificial intelligence. AI is characterised by the following: the ability to communicate, knowledge of oneself, knowledge of the outside world, the ability to achieve identified aims, and a certain degree of creativity. These characteristics and capabilities result from code, written by humans, that programs or defines the operation of the AI.
Undoubtedly AI uses cognitive processes to achieve identified aims, but this does not seem a sufficient reason to vest it with legal personality, in light of the criterion of rights and obligations. In the case of a commercial entity, awarding it legal personality is justified by the human substratum behind it. As I mentioned, particularly in the context of criminal responsibility, it is the humans responsible for the decision-making processes and organisation of the entity who are punished.
In the case of a robot equipped with AI, it is hard to say that it has free will that could lead it to commit prohibited acts in pursuit of its own ends. Thus it cannot be ascribed any degree of fault, such as negligence or recklessness. Nor is it possible to hold it liable in damages for its errors, for example an accident caused by an autonomous car or malpractice by a surgical robot.
AI code may ensure that AI complies with certain rules, but application of such rules is not the result of an act of will, and thus cannot give rise to responsibility.
Considering their level of self-awareness, autonomy and self-determination, we may seek an analogy between robots and animals. But what makes people eager to extend legal protection to animals (and to attempt to vest them with personhood) is not just the intelligence some of them display, but also their capacity to feel pain, joy or attachment, which AI lacks.
Consequently, what distinguishes human beings is their capacity for understanding the rules governing society, as well as the intention to comply with those rules, along with the ability to feel emotions. A human understands, interprets and applies legal rules in nuanced situations of daily life, which cannot be said of animals or robots.
Thus the rights and obligations associated with possessing legal personality arise from who people are and how social relations are organised among them. Notions like freedom of expression, moral damages and responsibility make little sense in the context of artificial intelligence. For this reason, I believe that as of now there is no justification for awarding legal personality to robots. Nor is there any justification for awarding robots the right of ownership or the right to conclude contracts on their own behalf. It is more justified to treat robots as products in the context of liability for the injuries they cause.
The debate over legal personality of robots equipped with AI is not absurd, however, when we hypothesise the development of androids like those appearing in Blade Runner or Westworld. But the debate is inseparable from considerations of the essence of humanity.