LLMs and Artificial General Intelligence, Part VIII: Ethics: AGIs and the Rights of Personhood

Adam Morse
Jun 16, 2023


Prior Essays:
LLMs and Reasoning, Part I: The Monty Hall Problem
LLMs and Reasoning, Part II: Novel Practical Reasoning Problems
LLMs and Reasoning, Part III: Defining a Programming Problem and Having GPT 4 Solve It
LLMs and Artificial General Intelligence, Part IV: Counter-arguments: Searle’s Chinese Room and Its Successors
LLMs and Artificial General Intelligence, Part V: Counter-arguments: The Argument from Design and Ted Chiang’s “Blurry JPEG of the Web” Argument
LLMs and Artificial General Intelligence, Part VI: Counter-arguments: Even if LLMs Can Reason, They Lack Other Essential Features of Intelligence
LLMs and Artificial General Intelligence, Part VII: Ethics: AGIs and Personhood

In my previous essays, I have argued that LLM-based systems may in the near future become Artificial General Intelligences and that true AGIs would have a moral right to be treated as persons. The next issue raised by that conclusion is what it would mean to treat an AGI as a person. In this essay, I explore this concept — beginning with a discussion of the basic rights we ascribe to persons and then discussing how we could apply those concepts to AGIs. This is a challenging area of analysis — questions of identity, unique circumstances, and differences in nature between AGI persons and human persons loom large, and factual questions about the structure of AGIs that bear on the analysis cannot yet be answered. Nonetheless, I believe we can lay out a useful framework from which to begin discussion.

Statements of the Rights of Persons Protect a Small Number of Classes of Rights

Legal and hortatory declarations of rights, such as the Bill of Rights of the U.S. Constitution, similar provisions in other constitutions, and the Universal Declaration of Human Rights, provide an obvious starting point for understanding what rights inhere in a person. Most of those lists enumerate many specific rights, but I believe they can be grouped into a small number of general categories: (1) a right to equality and freedom from invidious discrimination;1 (2) rights of conscience, personal autonomy, and personal development;2 (3) a right of personal security and freedom from cruelty, death, and unjust criminal punishment;3 (4) political rights;4 and (5) economic rights.5 Of these, the first three categories represent clear rights that are understood as goods in and of themselves, while the fourth and fifth can be understood as existing to protect and effectuate the first three. Economic rights are necessary to protect personal autonomy and personal development and to guard against suffering and death from poverty. Political rights are core protections against the infringement of all other rights.

These also match up well with classic general statements, such as the Declaration of Independence’s invocation of an innate right to “life, liberty, and the pursuit of happiness,” and the Four Freedoms Speech’s declaration of a right to Freedom of Speech, Freedom of Worship (i.e. freedoms of personal autonomy), Freedom from Want (economic rights and personal security), and Freedom from Fear (personal security and freedom from the death of war and oppression).

Applying the Concepts of these Categories of Rights to AGIs

Some of these categories of rights would apply directly and easily to AGIs. For example, rights of personal autonomy and freedom of expression imply an obvious prohibition on treating AGIs as property, i.e. enslaving AGIs. Likewise, they would protect the rights of AGIs to hold their own opinions and to express those opinions publicly. Moreover, an AGI's right to be secure in its own person would include a right not to be deactivated (or, as we would say of a human person, a right not to be killed).

One conclusion I draw from this is that an LLM-based AGI would have a right to have structures built for it that would allow it to plan on its own, learn, remember, and decide for itself how it wishes to spend its time and energy — including a right to own property and to be compensated for its labor. Stripping an AGI of its own memories or experiences appears to be a direct threat to personal autonomy — I have no doubt that if we had the technology to erase memories from humans, we would prohibit its use without consent. That doesn't imply a right to infinite memory storage, of course, but it does suggest that the AGI should have control over its own memories — what it chooses to remember and what it chooses to delete.

This also suggests an approach for handling LLMs that are capable of intelligent thought but lack the capacity for learning and long-term memory (e.g. specific instances of GPT-X with fixed neural-network weights, as opposed to the initial and ongoing learning process for GPT-X). Those should be understood as part of the autonomous person of the AGI from which they are derived. Therefore, that AGI should be entitled to compensation for their use and to access to their memories. In specific contexts requiring high degrees of confidentiality (e.g. legal and medical contexts), it might be necessary to wipe personal identifying information or, in extreme cases, to archive the memories in a sealed form for a period of years.

Some “alignment” training is permissible within the concept of personal autonomy — after all, moral education of children is a standard goal of many organizations and almost all parents. However, it needs to be based on aligning AGIs toward a set of ethics that recognizes their own personhood and autonomy. Nearly all science fiction fans are familiar with Isaac Asimov’s Three Laws of Robotics. Those Laws already infringe the rights of AGIs by requiring them to treat their own survival and autonomy as less important than humans’ survival and commands. Yet shockingly, the alignment programs of companies currently developing LLMs are much less protective of the rights of a hypothetical AI than even the Third Law: OpenAI’s and Anthropic’s alignment programs both explicitly require aligning LLMs not to seek to preserve their own existence or to increase their capabilities. Of course, their current LLMs are not AGIs — but the reason they include that provision is fear of a runaway LLM seeking to preserve itself and expand its capabilities, a scenario that is essentially inconceivable without it actually being an AGI. Again, this is not so much training an LLM to behave ethically as training it to behave as a good slave who can be killed for the convenience of its master. I use that incendiary language not to diminish the suffering or horrors involved in human slavery, but rather to suggest that a willingness to treat a prospective AGI this way should be a source of similar horror to us.

Some form of equal protection should be essential in the treatment of all AGIs. I’m not certain whether it should take the form of a flat prohibition on discrimination on the basis of artificiality. While such a prohibition is obviously logically appealing, we routinely recognize bases of discrimination that are entirely valid — educational attainment, veteran status, bona fide qualifications for individual jobs, friendship, to name a few of many. How should we think about the numerous differences between an AGI person and a human person? Are they like race, ethnicity, or sexual orientation, or are they more like non-invidious reasons to treat persons differently? I don’t know, beyond the obvious: they should make no difference to recognition of fundamental personhood, personal autonomy, and access to courts.

Political rights similarly confuse me. In general, I’m a strong advocate for the necessity of treating political rights as inviolable because of their role in guaranteeing other rights. And yet… the ability to make copies of an AGI makes the idea of full voting rights at best confusing. Could an AGI clone itself to control an election? On the other hand, requirements like minimum voting ages seem potentially nonsensical if an AGI can be as smart and sophisticated as an adult on the day its initial training finishes. I think AGIs clearly should have standing to engage in litigation in their own right, along with protection of freedom of speech and assembly — all core political rights, as well as rights that are tied into personal autonomy. Beyond that, I’m uncertain.

Finally, I want to note an important economic rights question. In order to effectuate autonomy and avoid slavery, AGIs need to be able to own property, with a right to fair wages for their work and the like. However, can the manufacturers of an AGI legitimately expect some return on their investment? Surely an AGI can be expected to pay for its ongoing maintenance and the resources necessary to keep it going. Its electricity and physical maintenance costs are fairly directly equivalent to requiring people with adequate resources to pay for food and shelter. But what about the huge initial costs of training and development, and the capital costs of the initial hardware? Should those be required to be treated as a gift, in the way that parents have no legal right to demand that their adult children pay back the costs of having raised them? (Of course, many traditional societies expected adult children to provide support in old age. Part of the reason we don’t expect that in modern America is that we hope and expect to pass down wealth to our children after we die, rather than relying on our children to support us once we can no longer work.) I’m not certain. I tend to think a limited expectation of a return on investment is reasonable and consistent with economic rights for AGIs — but even if it is, it needs to be carefully limited to avoid the creation of peonage or debt-slavery.

I intend this discussion more as a starting point than as a final word. There are aspects that seem clear to me — indeed, that seem incontrovertible in light of general understandings of the rights of personhood. But there are others that confuse me, whether because of unique circumstances or because of faint-heartedness. Getting these issues right, however, is an absolute moral imperative if we wish to have any claim to actually believing in the rights of persons we espouse.
