LLMs and Artificial General Intelligence, Part IV: Counter-arguments: Searle’s Chinese Room and Its Successors

Adam Morse
11 min read · Jun 10, 2023


Prior Essays:
LLMs and Reasoning, Part I: The Monty Hall Problem
LLMs and Reasoning, Part II: Novel Practical Reasoning Problems
LLMs and Reasoning, Part III: Defining a Programming Problem and Having GPT 4 Solve It

Over the previous three essays, I argued that the responses of current generation LLMs, in particular GPT 4, demonstrate at least rudimentary understanding and reasoning. In turn, I suggested that this represents meaningful progress towards Artificial General Intelligence. For my next three essays, I will focus on prominent arguments that LLMs do not have the ability to understand and reason, and therefore do not represent important steps towards AGI. I find none of these arguments fully persuasive, although I think a few have some bite. I begin by responding to John Searle’s Chinese Room argument and similar recent arguments about LLMs; I will then turn to arguments based on lack of design and Ted Chiang’s “blurry JPEG” argument; and finally I will address arguments based on lack of other capabilities beyond reasoning.

Searle’s Chinese Room

John Searle’s Chinese Room argument1 is the classic, best-known argument that electronic computers cannot be intelligent. In particular, Searle argues that an electronic computer that can pass the Turing Test — i.e., can produce text responses to questions that a human cannot distinguish from the text responses produced by another human — would still not be intelligent, because it would not actually have any understanding of the queries it received or the responses it gave. Instead, it would simply be mindlessly providing appropriate responses to symbols without thought about what those symbols mean. This would provide the appearance, but not the reality, of artificial intelligence. Searle argues that this can be made clear by considering a hypothetical in which a person who has no knowledge of a Chinese language receives slips of paper with Chinese writing on them. The person then follows a complicated set of written instructions, referring to a large number of books and reference files, and composes a response, also written in Chinese. The person then passes that response out through a slot. Searle argues that the apparent ability of the person in the room to understand Chinese, as demonstrated by providing an appropriate response in Chinese to a Chinese-language query, would be like the apparent ability of a computer to understand and respond to natural language prompts. In both cases, a series of steps is taken according to complex rules, and yet, according to Searle, no actual understanding is involved. Therefore, Searle argues, not only would passing the Turing Test be insufficient to demonstrate artificial intelligence, but artificial intelligence is impossible in a computer that applies strict rules, regardless of those rules’ complexity.

Searle’s argument has been highly influential and much discussed, despite the fact that I find it breathtakingly unconvincing. First, he assumes his conclusion. He asserts that we should imagine the existence of a system that can respond apparently intelligently without understanding. But the question is precisely whether a system that can accomplish that feat necessarily has understanding. Imagine a person in a room with a poor understanding of a foreign language, but access to various translation materials. If you like, you can assume that this person is using an LLM for translation. A question comes in; the person translates it into a language they understand; they use their understanding to construct a response; and then they translate the response back. Surely the act of translation doesn’t mean that the person in the room lacks understanding or intelligence. Alternatively, if the person is taking purely mechanical actions, following an enormously complicated set of instructions to operate a physical, rather than an electronic, computer — why would we not view that mechanical computer as having understanding, except for Searle’s assumption that it does not?

Moreover, I think a simple thought experiment suffices to rebut Searle’s argument. Start with a person who is receiving input, say written questions, and giving output, also in written form. We would all agree that this demonstrates understanding and intelligence. Now, assume that the same person is involved in a dreadful car accident that leaves their body unable to survive but their brain intact; their brain is removed, put in a jar (call it jar A), and hooked up to input and output systems. Submit the same prompts as before, and we receive the same outputs. Clearly, this is understanding in exactly the same way — all of the same “thinking” is happening in the same system. Next, imagine an artificially constructed biological brain in a second jar, jar B. All of the same neurons exist, with the same connections, and the same processes happen within it. But the brain in jar B was made by nanites, to the specifications of the brain in jar A, whereas the brain in jar A was removed from a person following a car accident. Again, when the brain in jar B produces responses that can’t be distinguished from the responses of the brain in jar A, the brain in jar B is also demonstrating understanding and thinking. Now, imagine that instead of making a biological replica of the brain in jar A, an electronic replica is made in code. Every neuron, every synapse, every connection is represented by variables and code, processed on an electronic computer. When input comes in, the electronic brain follows precisely the same processes as the biological brain, except that instead of neurons firing and chemicals slotting into receptors, virtual representations of those same neurons fire and the electronic simulation changes variables to represent chemicals slotting into receptors. I assert that this virtual brain would obviously be demonstrating understanding and intelligence. And that is sufficient to negate Searle’s argument — if an electronic computer can understand language and respond intelligently, Searle is wrong that his Chinese Room argument demonstrates that rule-based electronic computers cannot demonstrate true artificial intelligence.2

For present purposes, however, I want to go one step further and present the “hypothetical” of LLMs. Let’s assume that we now build an artificial, computer “brain” that is not identical in structure and processing to a biological brain. It is modeled in some ways on biological brains, using networks of artificial neurons whose activation patterns loosely follow those of biological neurons. But the patterns in which those neurons are linked, and the ways in which they are organized, are not even close to the way a biological brain is linked and organized. Moreover, the weights of their connections are developed in a very different way from how the weights and connections of a biological brain develop. Does the fact that this artificial brain is organized and built differently mean that it cannot understand and cannot think, while an artificial brain organized as a strict mimicry of a human biological brain can? Why should that be true? If we met an alien whose biological brain was organized and operated in ways very different from, though perhaps vaguely similar to, our own, but who demonstrated intelligence and understanding, we would not hesitate to view them as intelligent. The form of a brain does not matter as much as its function. Instead, understanding and intelligence should, under my analysis, be viewed as emergent properties that some brains — biological or virtual — demonstrate through their ability to actually receive inputs, process those inputs, and produce outputs that show intelligence and understanding.
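
To make concrete what I mean by artificial neurons with learned connection weights, here is a minimal, purely illustrative sketch in Python. The layer sizes, the tanh activation, and the random weights are arbitrary stand-ins of my own choosing, not the architecture of any particular LLM; in a real network the weights would be learned from data rather than drawn at random.

```python
# Illustrative only: a tiny network of "artificial neurons."
# Each neuron computes a weighted sum of its inputs and passes it through
# a nonlinearity, loosely echoing biological activation while being
# organized and trained in a completely different way.
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One layer of artificial neurons: weighted sum plus nonlinear activation."""
    return np.tanh(weights @ x + bias)

# Random weights stand in for learned ones; a real model adjusts these
# values during training rather than inheriting biologically grown wiring.
x = rng.normal(size=8)                       # an input vector (e.g., an embedded token)
w1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
w2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

hidden = layer(x, w1, b1)                    # first layer of artificial neurons
output = layer(hidden, w2, b2)               # second layer, wired in a fixed, regular pattern
print(output)
```

The point is only that this organization looks nothing like the wiring of a biological brain, even though each unit loosely echoes biological activation: fixed, regular layers of identical units with numeric connection weights stand in for biologically grown synapses.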

Searle’s Successors In Criticizing Use of Words Like “Understand” to Describe LLMs

Searle’s argument, despite its significance in the philosophical literature, is not very convincing, but it is representative of many arguments that we should not anthropomorphize LLMs. Many critics argue against using terms like “know,” “understand,” or “reason” when describing what LLMs do.3 They argue that an LLM cannot know anything, because it is just a machine, no more than software, and that when people describe an LLM’s confidence in its answers, they mistake mechanistic output of probabilistically selected tokens for the thought and emotion that a human would have behind the same output. For example, Profs. Emily M. Bender and Alexander Koller argue that LLMs cannot have understanding because, they assert, training on purely formal input (examples of language usage) without input that carries intrinsic meaning cannot result in understanding.4 As with Searle, they are assuming the conclusion. Bender and Koller give an extensive thought experiment about a hyper-intelligent octopus that taps into telegraphic communication between two humans and uses statistical analysis of their communication to produce responses that appear to indicate understanding. They then argue that because the octopus’s interpretation of the humans’ communication has no basis in meaning — it only mimics the form of their messages — it would inevitably fail when presented with a query that requires understanding (suggestions for how to make a better coconut catapult). That, of course, is a factual claim, and I believe that the increasing ability of LLMs to demonstrate apparent understanding and reasoning falsifies it. To the extent that the octopus is replaced with an LLM, and that LLM can in fact give responses that require understanding, the conclusion should be to reject their assertion that meaning cannot be derived from examples of language without shared actions and perceptions in the world to ground that meaning. In this regard, it is telling that they marshal evidence of GPT 2’s inability to respond in a way that demonstrated understanding and reasoning, even though their argument poses as a theoretical truth. And it’s true: GPT 2 demonstrates little evidence of understanding or reasoning. But although they base their prediction on their theoretical understanding of the relationship between language and meaning rather than on evidence, their argument ends up looking foolish when its empirical predictions cease to be true.
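
To be clear about what critics mean when they describe LLM output as “probabilistically selected tokens,” here is a minimal, purely illustrative sketch of that final step. The vocabulary and the scores below are invented for illustration; in a real LLM, scores over a vocabulary of tens of thousands of tokens are produced by a large neural network conditioned on everything that came before.

```python
# Illustrative only: turning made-up scores over a made-up vocabulary
# into a probability distribution and choosing the next token.
import math
import random

vocab = ["the", "cat", "sat", "on", "mat", "."]   # hypothetical vocabulary
logits = [2.1, 0.3, 1.7, 0.2, 0.9, -1.0]          # hypothetical scores from a model

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Greedy decoding: always take the single most probable token...
greedy = vocab[probs.index(max(probs))]

# ...or sampling: pick a token in proportion to its probability.
sampled = random.choices(vocab, weights=probs, k=1)[0]

print(f"most probable next token: {greedy!r}")
print(f"sampled next token:       {sampled!r}")
```

Whether repeating that mechanical step, backed by an enormous learned model of language, can amount to understanding is exactly the question this essay is arguing about; the sketch shows only the final selection, not the model that produces the scores.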

I agree that an LLM’s understanding and reasoning might seem very strange to us if we could fully understand it. An LLM trained only on language is very much in Plato’s cave, seeing shadows on a wall and trying to form a model of the reality that casts those shadows without being able to see the things casting them directly. Its model may be different from reality in important ways. However, a sufficiently advanced LLM’s model of reality may also ultimately be better in important ways than the imperfect human models of the reality we live in, and the imperfections of its model should not make us reject the idea of LLMs “understanding,” “knowing,” “thinking,” and “reasoning” as impossible.

The Tendency to Anthropomorphize

Because these critics believe that an LLM cannot be intelligent — cannot have actual understanding — they dismiss as anthropomorphizing evidence that would otherwise be suggestive of actual intelligence. To be fair, humans have a tremendous tendency to anthropomorphize and to ascribe intentionality and thought to systems that lack any intelligence. Even early, primitive chatbots like ELIZA led some people to think they were interacting with a program that demonstrated actual understanding, thought, and empathy — a phenomenon that has been described as the ELIZA effect.5 So to the extent that critics of LLMs argue that we should be careful not to ascribe more capability than LLMs actually demonstrate — that our tendency to see a face in the Moon should not lead us to treat the Moon as a person — their point is well taken. But at the same time, the stronger version — that because humans often anthropomorphize, we should not take seriously any claims that LLMs might demonstrate intelligence or human-like thinking — needs to be justified by actual evidence, not assumption and assertion.

My counterarguments do not demonstrate that LLMs can think. Hypotheticals cannot do that — under my analysis, only the ability of LLMs to demonstrate intelligence through their responses to actual input can. But I believe they do demonstrate that strong arguments against artificial intelligence, like Searle’s or the similar claim that LLMs cannot demonstrate understanding or intelligence because they merely take input, run it through a series of computational steps, and identify the most probable next output token, are not tenable. The proof has to be in actually examining the input and output and seeing whether they meet our standards for intelligence, not in assuming that the form of the LLM means that it cannot be intelligent. I believe that the current evidence supports the conclusion that LLMs like GPT 4 have rudimentary understanding and reasoning capabilities — not full Artificial General Intelligence, but the spark of intelligence. And while I acknowledge the risk that I am being led astray by the ELIZA effect — finding intelligence where there is none because humans are predisposed to ascribe human characteristics to entities and forces that lack those characteristics — I believe that closing our eyes to that evidence because of a philosophical commitment to a theoretical argument against understanding or the spark of intelligence in LLMs would be a greater error.

1John Searle, “Minds, brains, and programs,” Behavioral and Brain Sciences, 3(3):417–457 (1980). I feel an obligation to note that, in addition to being an influential philosopher, Searle was apparently also a serial sexual predator who was stripped of his emeritus status at UC Berkeley because of his misconduct. I have to engage with his scholarly work, but I don’t want to appear to condone his misconduct.

2Searle uses the Chinese Room thought experiment as part of a larger argument about the causal meaning of thought. I focus on the Chinese Room thought experiment itself, however, because I believe that has had a much stronger impact on thought about artificial intelligence than his broader argument.

3See, e.g., Nnedi Okorafor, tweet at https://twitter.com/Nnedi/status/1667186054779002884 (June 9, 2023) (“Stop calling it artificial intelligence. It’s not intelligent at all./It’s only being MARKETED as that phrase to manipulate people into more easily accepting the more nefarious things they plan to use it for. Pay attention.”); Brad DeLong, tweet at https://twitter.com/delong/status/1652311466815651841 (April 29, 2023) (“You are committing a category mistake: You are interacting with Chat-GPT as if it is conversing with you…”); see also Brad DeLong, “Continuing to Worry at þe Chat-GPT4 Issue, as If I Were a Dog, & It an Old Shoe,” March 23, 2023, at https://braddelong.substack.com/p/continuing-to-worry-at-e-chat-gpt4 (An LLM “is, after all, just a Chinese-Speaking Room: manipulating symbols according to rules.”). Prof. DeLong explicitly makes the Searle Chinese Room analogy but also explicitly allows for the possibility that a sufficiently powerful LLM-like system could have actual understanding. He simply denies that current LLMs are close to that threshold: “When you have a neural network 30,000 times as complex as Chat-GPT4 that has been trained by the genetic-survival-and-reproduction algorithm for the equivalent of 500 million years, I give you permission to come knocking at my door.” Id. Many of the statements that LLMs are not “intelligent” or cannot “think” or “understand” are, like Dr. Okorafor’s, more in the way of assertion than actual argument, so it can be difficult to figure out what grounds those assumptions. However, I think it is clear from the citations and structure of some of the actual arguments that at least some of these assertions draw on the Chinese Room thought experiment or similar sorts of arguments.

4Emily M. Bender and Alexander Koller, “Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data,” Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, available at https://aclanthology.org/2020.acl-main.463.pdf (July 2020). Like Prof. DeLong, Profs. Bender and Koller explicitly link their argument to the tradition of Searle’s Chinese Room. While their paper was written before the enormous leaps of the last few years and discusses in part the incapacity of GPT 2 to engage in effective reasoning, Prof. Bender’s continuing statements make it clear that they continue to hold these positions. Emily M. Bender, “I wish we didn’t need to keep reminding people [that “all large language models are good at is predict the next word in a sequence based on previous words they’ve seen. that’s all. there’s no understanding of meaning whatsoever”, quote-tweeting Abeba Birhane’s tweet], and @Abebab is commendable for being gentle about it! For the long form of this argument, see [her paper with Koller],” tweet at https://twitter.com/emilymbender/status/1598115234530885632 (Nov. 30, 2022).

5Id.
