

The Chinese Room Explanation

My favorite recommendation explanation, a real honest-to-goodness computer-generated recommendation, is the one pictured below. I was amazed when I received it even though I had a hand in designing and developing the system that generated the recommendation—the system had truly become complex and unpredictable enough to spark a delighted surprise in its creators!

The recommended item of content in the screen grab is about AI, and the explanation engine whimsically mentions, as an aside, that it has been pondering the Chinese Room Argument. The engine’s little joke is that, as AI aficionados know, the Chinese Room Argument is philosopher John Searle’s famous thought experiment suggesting that computer programs can never really understand anything in the sense that we humans do. The argument holds that all computers can ever do, even in theory, is operate by rote, and that they will therefore always be relegated to the realm of zombies, albeit increasingly clever zombies.

The Chinese Room thought experiment goes like this: you are placed in a room, and questions written in Chinese are passed in to you. You don’t know Chinese at all; the symbols are just so many squiggles to you. But in the room is a very big book of instructions that enables you to select and arrange a set of response squiggles for every set of question squiggles you might receive. To the Chinese-speaking person outside the room who receives your answers, you appear to understand Chinese fluently. Of course, you don’t understand Chinese at all; all you are doing is executing a mind-numbing series of rules and look-ups. So, even though in theory you could pass the test posed by that other famous AI thought experiment, the Turing test, you don’t really understand Chinese. And this seems to imply that, since all computers fundamentally ever do is execute look-ups and shuffle squiggles around in prescribed ways, no amount of such shuffling, however complex, can ever amount to truly understanding anything.
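To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python of a “room” that answers by lookup alone. The rulebook dictionary, the sample phrases, and the function name are invented for illustration; they are not part of the original post or of any real system.

```python
# A toy "Chinese Room": the operator matches incoming symbol strings against
# a rulebook and copies out the prescribed response, with no notion of meaning.
# The entries below are invented placeholders, not a real conversation system.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's nice today."
}

def respond(squiggles: str) -> str:
    """Return the rulebook's prescribed response for an input string.

    Pure pattern matching: nothing here represents what the symbols mean.
    """
    return RULEBOOK.get(squiggles, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    # To the person outside, the answer looks like fluent Chinese,
    # even though the "operator" only performed a dictionary lookup.
    print(respond("你好吗？"))
```

The point of the sketch is only that the dispatch step never consults anything like meaning, which is exactly the intuition the thought experiment trades on.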

That’s all there is to it—the Chinese Room Argument is remarkably simple. But it is infuriatingly slippery with regard to what it actually tells us, if anything. It has been endlessly debated in computer science and philosophy circles for decades now. Some have asserted that it proves computers can never truly understand anything in the sense that we do, and certainly could never be conscious—because, after all, it is intuitively obvious that manipulations of mere symbols, no matter how sophisticated, are not sufficient for true understanding. The counterarguments are many. Perhaps the most popular is the “systems reply,” which posits that yes, you don’t understand Chinese, but the room as a whole, including you and the book of response instructions, does.

All of these counterarguments result from taking Searle’s bait that the Chinese Room Argument somehow proves that there is a limit on what computer programs can do. What seems most clear is that these armchair arguments and counterarguments can’t actually prove anything. In fact, the only thing the Chinese Room really seems to assure us of is that there is no apparent limit on how smart we can make our zombie systems. That is, we can build systems that are impressively clever using brute-force programming approaches—even more clever than us, at least in specific domains. Deep Blue beating Garry Kasparov is a good example of that.

As I have implied previously, understanding, as we humans appreciate it, is a product of our own, unique-in-the-natural-world explanation engine, not of our powerful but inarticulate, unconscious, underlying neural-network-based zombie system. The Chinese Room doesn’t directly address explanation engines, or any capacity for learning, for that matter. The thought experiment has no provision for recursion, self-inception, or the kind of system self-modification that occurs, for example, during our sleep. These capabilities are at the core of our capacity for learning, understanding, creativity, and yes, even whimsy. We will have to search beyond our Chinese Room to understand the machine-based art of the possible for them.

And we will surely continue to search. We know we can build arbitrarily intelligent systems, but that is not enough for us to deign to attribute true understanding to them. This conceit of our own explanation engines is summed up by the old adage that you only truly understand something if you can teach it to others—that is, only if you can thoroughly explain it. Plainly, for us, true understanding is in the explaining, not just the doing.