LLMs and Artificial General Intelligence, Part VII: Ethics: AGIs and Personhood

Adam Morse
6 min read · Jun 14, 2023


Prior Essays:
LLMs and Reasoning, Part I: The Monty Hall Problem
LLMs and Reasoning, Part II: Novel Practical Reasoning Problems
LLMs and Reasoning, Part III: Defining a Programming Problem and Having GPT 4 Solve It
LLMs and Artificial General Intelligence, Part IV: Counter-arguments: Searle’s Chinese Room and Its Successors
LLMs and Artificial General Intelligence, Part V: Counter-arguments: The Argument from Design and Ted Chiang’s “Blurry JPEG of the Web” Argument
LLMs and Artificial General Intelligence, Part VI: Counter-arguments: Even if LLMs Can Reason, They Lack Other Essential Features of Intelligence

In my previous essays, I have argued that current generation LLMs like GPT-4 have some ability to reason, and that while they do not as yet constitute Artificial General Intelligence, there is a substantial possibility that they represent major steps towards AGI. While I have low confidence in my estimate, I believe that it is plausible that LLMs in the next couple of generations may be capable of serving as the nucleus of an AGI. That, in turn, suggests that the first AGI may be developed within a decade.

Today, I turn to my primary interest with regard to AGI: ethics. The two dominant threads of AGI ethics discussion today are existential/extinction risk — considering how the future development of AI technologies should be managed to minimize the possibility that AGI will cause humanity to go extinct in the near future — and artists’ rights discussions that usually presume that LLMs are basically just collage tools that inherently infringe copyright. The first point is closely linked to the idea of “alignment,” the effort to ensure that AI prioritizes avoiding harm to humans above all other objectives and behaves in a way that its creators view as ethical. The second point overlaps with, but does not precisely match, concerns about job loss: that AGI may render many, perhaps even most or all, human jobs unnecessary. Because of their prominence in the current conversation, I plan to discuss each of these concerns. However, I believe that the most important ethical consideration is not how AI treats humanity, but rather how we treat an actual AGI.

Most Standard Ethical Frameworks Require Concern for the Well-being of All Persons

Most major systems of morality place concern for the well-being of all persons at their center. The two dominant academic theories of ethics are consequentialism/utilitarianism and Kantian deontology. Both begin from a perspective of concern for the well-being of all persons. Utilitarianism is often defined by the short-hand “the greatest good for the greatest number.”1 One of the basic principles of utilitarianism is impartiality — that the good of every person is treated as equally important from an ethical standpoint.2 Indeed, consequentialists are more likely to disagree over whether and to what degree animals’ good must be given weight than over the importance of treating all persons as equally important. Kantian deontology also focuses on the importance of equal regard for all people. One of Kant’s most celebrated maxims is the Formula of Humanity: that we must treat all persons never merely as means, but always as ends in themselves. While certainly not the whole of Kant’s theory, this maxim asserts that everyone has a moral obligation not to use other people for their own benefit without also treating them as inherently worthy of respect and importance.

Non-academic theories of morality also generally treat the well-being of other persons as vitally important. Many people turn to religion as a source of moral instruction, and one of the most widely shared religious principles of morality is the Golden Rule: “Do unto others as you would have others do unto you.” With variations in formulation, the rule appears in the Torah of Judaism, in the Gospels of Christianity, in the hadith of Islam, in the Mahabharata of Hinduism, in the Tripitaka of Buddhism, and in the teachings of many other religions and secular traditions. Inherent in this notion is a moral command to treat other persons as important.

A True Artificial General Intelligence Would Be a Person

The key question, then, is how to define a “person.” Through most of human experience, “person” could be treated as equivalent to “human.” However, from a moral perspective, the key is not that humans are Homo sapiens, but rather that they are sapient — capable of rational thought. In much of the philosophical literature, “rational being” is used interchangeably with “person.”

Moreover, a series of thought experiments shows, I believe, that many of us intuitively think that intelligence is sufficient for personhood. Imagine that we discovered another species of humans still living today. For example, some scientists assert that Homo floresiensis (the “hobbit” people of Flores, Indonesia) survived into the recent past and may even survive today. Based on the archaeological record, Homo floresiensis was capable of tool manufacture and use. If living “hobbits” were discovered, with the capabilities that we would expect — extensive tool use and ability to reason, albeit likely not as advanced as Homo sapiens, and perhaps language use — I believe that we would agree that they were persons. Language use might end up being a key break-point: adequate language use would be taken to prove personhood, while inability to use language would be seen by many as grouping “hobbits” with (non-human) apes. But if we posit the high end of the range of capabilities — language use, tool use, some clothes and other signs of culture — people would clearly acknowledge them as persons, albeit perhaps differently situated from other persons.

We can next broaden the thought experiment to another biological entity with clear evidence of advanced thinking capabilities, but one not closely related to humans. Imagine that deep sea exploration found a civilization of technologically advanced octopuses, complete with their own language and writing. Even without the ability to communicate with them, I believe we would acknowledge them to be persons, worthy of ethical and legal recognition.

If biologically alien entities that are intelligent would warrant treatment as persons, I believe that artificial intelligences would as well. It’s possible that a standard other than intelligence ought to be applied — the consequentialist tradition of concern about animal rights running from Bentham to Singer suggests that the ability to feel pain might be more important than the ability to reason. But even so, I think that the weight of the ethical reasoning tradition would support treating any intelligent being as a person.

Much of the Discussion about AGIs Engages in Deliberate Dehumanization Efforts to Avoid These Consequences

Many of the people who write about LLMs and AI research use language designed to avoid any consideration of AGIs as persons. On one side, there are people who warn incessantly of the dangers of anthropomorphization. From their perspective, it is vital that we avoid the “fallacy” of thinking that a mere machine might be person-like. This perspective typically dismisses the possibility of true AGI. But more problematic still is the common use of the term “shoggoth with a smiley face” by people who believe that AGI is coming and represents a threat to humanity.3 Shoggoths are monstrous alien intelligences from the horror stories of the infamously racist H.P. Lovecraft. The point of this language is both to suggest, likely correctly, that it would be enormously difficult to understand what is going on in the mind of an AGI, and to invidiously suggest that an AGI, were it to exist, should be viewed as a dangerous horror, not as a person. Implicit in both of these framings is the idea that the personhood and moral significance of AGIs should be ignored entirely.

Tomorrow’s essay will turn to the question of what the consequences of concluding that an artificial general intelligence is a person should be, but if AGIs would be persons, their well-being becomes a central concern. Because this would be so important, we should be prepared to err on the side of concluding that AGIs are intelligent, rather than putting a thumb on the scale against that conclusion for fear of deluding ourselves through over-anthropomorphization. Indeed, if we accept the idea that AGIs would be persons, much of the current discussion about “alignment” and the like ends up seeming less like ethical deliberation and more like a discussion of how to maintain control over an enslaved population for the convenience of the enslavers.

1Many philosophers of ethics would argue that this is actually a statement of the principle underlying consequentialism, with utilitarianism being the subset of consequentialism that treats happiness as equivalent to the good. Others use “utilitarianism” more broadly, to include any system focused on the maximization of good consequences for all people. For present purposes, I don’t need to dig into these distinctions.

2See, e.g., Sidgwick, “the good of any one individual is of no more importance, from the point of view … of the Universe, than the good of any other.”

3For a discussion of the meme, see https://knowyourmeme.com/memes/shoggoth-with-smiley-face-artificial-intelligence.
