Philosopher William Lycan argues that machines can be conscious. To demonstrate this idea, he uses a thought experiment about “Harry”, a machine that passes the Turing test. The Turing test assesses intelligence in machines, but it does not assess consciousness. In this paper, I will present Lycan’s “Harry” example and then contrast it with the view of philosopher Colin McGinn. Then, I will present Lycan’s “Henrietta” example. Finally, I will assess both views and offer my own opinions on the topic.
Harry is an advanced computer that can do anything a human can do, and in the same way — but does this mean Harry has the property of consciousness? Lycan explains that despite having the same appearance and behaviors as a human, Harry would need thoughts and feelings in order to be considered a person. For Lycan, satisfying behavioral criteria is not enough to satisfy the criterion for being conscious. He posits that an intelligent creature is one that is information-sensitive: it registers, stores, manages, and uses information. Lycan also notes an objection he will go on to oppose: that a computer must be fed information, so the “appropriateness and effectiveness” of its output can only reflect whatever the programmer intended the machine to do.
McGinn, on the other hand, believes that we cannot assess consciousness in machines until we have a solid definition of what consciousness is. Even if a computer can run a program made up of a symbol-manipulating algorithm, it still does not understand the meaning of the symbols or why it is carrying out the algorithm. According to McGinn, those who attribute minds to robots view the mind as a cerebral computer. This idea reflects a false understanding of the mind, which responds to meaning and not just to syntax or symbols.
McGinn agrees with Lycan that “acting is not sufficient for being” with regard to the Turing test. In his book The Mysterious Flame: Conscious Minds in a Material World, McGinn critiques the Turing test because it “‘operationalizes’ the question of machine consciousness by formulating it as a question about the activities of consciousness, not a question about its secret essence” (p. 187). For example, a conscious being could still fail the Turing test: an infant is conscious but could not pass it. Even animals, which exhibit at least minimal consciousness, could not pass. McGinn concludes that passing the test is neither a sufficient criterion nor a necessary condition for consciousness (p. 188).
Lycan uses the “Henrietta” example to view machine consciousness from a different perspective. Imagine Henrietta, a normal human whose body is gradually replaced with synthetic materials until all of her physical elements are inorganic. Now, imagine that her personality, perception, and poetic abilities all remain the same — did Henrietta lose consciousness at some point during the switchover? Lycan’s second example raises the question of whether consciousness is restricted to organic materials and processes, or whether it could survive separation from the organic brain. Perhaps Henrietta did not lose consciousness at all; perhaps her consciousness was merely altered by the change in her physical constitution.
Although both philosophers raise many intriguing and useful points, I tend to agree more with Lycan’s view of the potential for consciousness in machines. I am not religious, but I believe that we are all intentionally created by the intelligent process itself (and only God knows what that is). I also believe that in his article, McGinn talks about computers the way slave-owners once talked about slaves: examining why they are what they are and why they should not be treated like us. Do humans ever stop and choose freely? Or do we only carry out internal “programs” that we do not know are controlling our behavior? We cannot stop and point to an inner self, so we should not expect machines, if conscious, to do so either.
If there is a machine form of consciousness, then it may be best understood by metaphor. For example, perhaps consciousness is the name we give to an information processor’s awareness of the information it is processing. Perhaps the type of information is relative to the processor itself, and machines will someday have something like thoughts and feelings — but not in the same way we experience them. As for emotions, perhaps they are necessary for us to have judgment, goals, purposes, and a sense of what is appropriate or relevant. Both Lycan and McGinn believe that computers lack these elements. At this moment in time, emotions separate humans and machines. In the future, we might need to work together with conscious machines in order to teach them what we know.
McGinn suggests that a computer operates only on the symbol itself, not on what it means. My objection is that the meaning of the symbols a computer uses is clear only to whoever programmed it. Perhaps machines will create meaning relative to their own experience of these symbols. Human beings learn by experience and association. We may not always represent the world accurately, but somehow it works for us. So we should not expect conscious machines to do things that we cannot do ourselves.
These views all rest on current understandings of consciousness, which are only theories and not facts. McGinn posits that a robot could not be conscious in virtue of running computer programs, because this is the wrong kind of property for bringing conscious minds into existence. But why must this be the case? What if a program becomes so complex that it requires something like consciousness in order to keep running? There are infinite scenarios that we can imagine, and it is up to intelligent and creative people to prepare for future possibilities — something that humans were designed to do.
Lycan, William G. “Appendix: Machine Consciousness.” Consciousness. Cambridge, MA: MIT, 1995. 123-30. Print.
McGinn, Colin. “Could a Robot Get the Blues?” The Mysterious Flame: Conscious Minds in a Material World. New York, NY: Basic, 2000. 175-203. Print.