Replies and Responses

As with any influential argument, Searle's Chinese Room has drawn various replies, counter-arguments, and opinions. Searle's responses to these replies push the thought experiment further in order to fully answer the question he began with: "Could a machine think?" (Searle 417). To understand how Searle reached his final conclusion, we should look at these replies and Searle's responses to them.


One reply argues that since the person is part of the whole system, and the communication between the people outside and the person inside is successful, the system as a whole understands. Searle answers that one can frame it that way, but the person still does not understand what anything on the pages means. He explains why the theory that "while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese" is "implausible to start with" (Searle 419). Searle's point is that there is no sense in saying that the person, or the system, understands in some way, because no matter what the rule book or program says, the person still does not understand Chinese. Searle also describes, in effect, two versions of the person in the locked room: one version that does not understand, and a second version that appears to understand by using the symbol-manipulation system. This sharpens his response to the reply: while the second version seems to understand, that is only from an outsider's point of view and only by means of the manipulation system; the person does not actually understand anything himself.


Another reply replaces the person in the locked room with a robot and argues that instead of programming the robot merely to take in and give out symbols, it would be programmed to do anything a human would do. It would even have ways to perceive and act like a human, thus having "genuine understanding and other mental states" (Searle 420). Searle argues that, regardless, all that the person in the locked room and the robot are doing is "follow[ing] formal instructions about manipulating formal symbols" (Searle 420), and that because the robot only does what it is programmed to do, it has neither mental states nor intentional states. Although the robot would be performing what look like intentional actions, such as perceiving and understanding, it does these things only by following the program, not because the robot itself chooses to do so.


A third reply argues that if you were to program a computer "with all the synapses of the human brain… the whole behavior of the robot is indistinguishable from human behavior" (Searle 421), it would be a unified system. Searle responds by comparing an animal with a robot. In short, he says that animals are like us not only because they have eyes, ears, a nose, and so on, but also because they have mental states, intentions, and consciousness. We could assume the same about robots until we had a reason to question it, and we do have such a reason: everything a robot does is based on and rooted in a program written by a human, which rules out any way for the robot to have mental states, or anything else, of its own.


What all of these replies and all of Searle's responses come down to is that robots and machines can simulate human behavior, but cannot duplicate human mental life. A crucial point Searle makes is that while AI can exist without having any cognitive states, human life cannot. Even if a machine could do everything a human does, a fundamental difference between machine and human remains: machines will never be able to think on their own or have intentionality of their own as long as they are running on man-made computer programs.