Algorithmic entities are autonomous algorithms that operate without human control or interference. Recently, attention has been given to the idea of granting algorithmic entities (partial or full) legal personhood. Professor Shawn Bayern and Professor Lynn M. LoPucki popularized through their papers the idea of algorithmic entities that obtain legal personhood and the accompanying rights and obligations.
Academics and politicians have been discussing for several years whether it is possible to have a legal algorithmic entity, that is, an algorithm or AI that is granted legal personhood. In most countries, the law recognizes only natural (or real) persons and legal persons. The main argument against extending personhood further is that behind every legal person (or layers of legal persons) there is ultimately a natural person.
Some countries have made exceptions to this by granting environmental personhood to rivers, waterfalls, forests and mountains. In the past, some form of personhood also existed for certain religious constructions such as churches and temples.
Certain countries – albeit for publicity purposes – have shown willingness to grant (some form of) legal personhood to robots. On 27 October 2017, Saudi Arabia became the first country in the world to grant citizenship to a robot when it gave “Sophia” a passport. In the same year, official residency status was granted to a chatbot named “Shibuya Mirai” in Tokyo, Japan.
The general consensus is that AI cannot in any case be regarded as a natural or real person, and that granting AI (legal) personhood at this stage is unwanted from a societal point of view. However, the academic and public discussions continue as AI software becomes more sophisticated and companies increasingly implement artificial intelligence to assist in all aspects of business and society. This leads some scholars to wonder whether AI should be granted legal personhood, as it is not unthinkable that one day a sophisticated algorithm will be capable of managing a firm completely independently of human intervention.
Brown argues that the question of whether AI may be granted legal personhood is tied directly to the issue of whether AI can or should even be allowed to legally own property. Brown concludes that “legal personhood is the best approach for AI to own personal property.” This is an especially important inquiry, since many scholars already recognize AI as having possession and control of some digital assets or even data. AI can also create written text, photos, art, and even algorithms, though ownership of these works is not currently granted to AI in any country because AI is not recognized as a legal person.
Bayern (2016) argues that this is already possible under current US law. He states that, in the United States, creating an AI-controlled firm without human interference or ownership is already possible under existing legislation by creating a “zero-member LLC”:
(1) an individual member creates a member-managed LLC, filing the appropriate paperwork with the state;
(2) the individual (along, possibly, with the LLC, which is controlled by the sole member) enters into an operating agreement governing the conduct of the LLC;
(3) the operating agreement specifies that the LLC will take actions as determined by an autonomous system, specifying terms or conditions as appropriate to achieve the autonomous system’s legal goals;
(4) the sole member withdraws from the LLC, leaving the LLC without any members.

The result is potentially a perpetual LLC—a new legal person—that requires no ongoing intervention from any preexisting legal person in order to maintain its status.
Sherer (2018) argues – after analyzing New York's (and other states’) LLC laws, the Revised Uniform Limited Liability Company Act (RULLCA) and US case law on the fundamentals of legal personhood – that this option is not viable, but agrees with Bayern that a ‘loophole’ exists whereby an AI system could “effectively control a LLC and thereby have the functional equivalent of legal personhood”. Bayern's loophole of “entity cross-ownership” would work as follows:
(1) Existing person P establishes member-managed LLCs A and B, with identical operating agreements both providing that the entity is controlled by an autonomous system that is not a preexisting legal person;
(2) P causes A to be admitted as a member of B and B to be admitted as a member of A;
(3) P withdraws from both entities.
Unlike the zero-member LLC, entity cross-ownership would not trigger a legal response to a memberless entity, as what remains are two entities, each having one member. In corporations, this sort of situation is often prevented by formal provisions in the statutes (predominantly regarding voting rights for shares); however, such limitations do not seem to be in place for LLCs, as they allow more flexibility in arranging control and organization.
In Europe, academics from several countries have started to examine the possibilities in their respective jurisdictions. Bayern et al. (2017) compared UK, German and Swiss law with Bayern's (2016) earlier findings for the US to see whether similar “loopholes” for setting up an algorithmic entity exist there as well.
Some smaller jurisdictions are going further, adapting their laws to 21st-century technological changes. Guernsey has granted (limited) rights to electronic agents, and Malta is currently creating a robot citizenship test.
While it is unlikely the EU would currently allow AI to receive legal personality, the European Parliament did, in a February 2017 resolution, request that the European Commission consider “creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently”.
Not all parts of the supranational European bodies agreed: the European Economic and Social Committee issued an opposing own-initiative opinion in May 2017: “The EESC is opposed to any form of legal status for robots or AI (systems), as this entails an unacceptable risk of moral hazard. Liability law is based on a preventive, behavior-correcting function, which may disappear as soon as the maker no longer bears the liability risk since this is transferred to the robot (or the AI system). There is also a risk of inappropriate use and abuse of this kind of legal status.”
In reaction to the European Parliament's request, the European Commission set up a High-Level Expert Group to tackle issues and take initiatives on a number of subjects relating to automation, robotics and AI. The High-Level Expert Group released a draft document on AI ethical guidelines and a document defining AI in December 2018. The document on ethical guidelines was opened for consultation and received extensive feedback. The European Commission is taking a careful approach to legislating AI by emphasizing ethics, but at the same time – as the EU lags behind the United States and China in AI research – it is focusing on how to narrow the gap with competitors by creating a more inviting regulatory framework for AI research and development. Giving (limited) legal personality to AI, or even allowing certain forms of algorithmic entities, might create an extra edge.