Thursday, September 13, 2012

Chinese Room



I am a huge sci-fi fan and love exploring how far technology might be able to advance. That is one reason I enjoyed reading about John Searle’s thought experiment. It incorporates not only computer science theories but also some psychological aspects, and arguably even a hint of religion (even though religion is never mentioned in his experiment).

            His thought experiment imagines a person locked in a room with only a filing cabinet, a large supply of paper, pencils, and erasers. The person, who knows only English, is given a manual written in English that duplicates a computer program. Chinese characters are passed in through a slot in the door, and by following the manual’s instructions the person can produce Chinese sentences in reply, apparently holding a conversation in Chinese. However, the person is only following directions and does not actually understand any Chinese. This process simulates what goes on inside a computer, which emphasizes Searle’s point that computers don’t actually have minds; they only simulate them. Merely simulating a mind is what he calls “weak AI,” whereas actually having a mind would be “strong AI.”
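            To make the setup concrete, here is a minimal sketch in Python of what the person in the room is doing. This is my own illustration, not Searle’s; the rule book entries and phrases are invented, and a real program would be vastly more complicated, but the principle is the same: replies are produced by matching symbols against rules, with no understanding of what any of them mean.

            # Toy "Chinese Room": the operator matches incoming symbols against a
            # rule book and copies out the listed reply, never interpreting them.
            # The rules and phrases below are made up purely for illustration.
            RULE_BOOK = {
                "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
                "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
            }

            def chinese_room(symbols: str) -> str:
                # Look the symbols up in the manual; if nothing matches, copy out
                # a generic fallback line ("Please say that again.").
                return RULE_BOOK.get(symbols, "请再说一遍。")

            if __name__ == "__main__":
                # Slips of paper passed through the slot in the door:
                for note in ["你好吗？", "你叫什么名字？", "今天天气怎么样？"]:
                    print(note, "->", chinese_room(note))

            From the outside, the replies look like a conversation in Chinese, yet nothing in the lookup involves any grasp of meaning, which is exactly the gap Searle is pointing at.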
            
           The debate over whether strong AI is possible has been going on for a long time. Its supporters claim that humans are themselves just machines that respond a certain way solely because of the input they receive. Suppose for a moment that this is true. We would then have to discover every single complex nuance that is stored and calculated in the human brain, which does not currently seem feasible given how little we actually know about the brain. On top of that, we would have to quantify every situational input and take it into account. That is a staggering amount of information; think about how much we perceive in just a glance. Then again, if we went back a few years and described an iPhone to people, they would think we were just as crazy. Even if such a program could be created, it has been argued that strong AI is only possible if the computer has intentionality. But what is intentionality? Many terms need to be defined before the discussion can even begin, including “strong AI,” “weak AI,” “intentionality,” “to understand something,” “to know something,” “the mind,” and “the brain.” Sadly, Searle did not spell out these definitions in his paper. Realistically, if you witnessed someone holding a fluent conversation in Chinese, you would assume they knew Chinese whether or not they actually did, and the same goes for computers. On the other hand, a student can talk to a professor in a way that convinces the professor the student knows the material when in fact the student is completely lost. So the illusion can go both ways.
           
             In line with Searle’s view, accepting the possibility of strong AI implies that humans are just the same as computers: machines without a soul. I do not like that stance, but I don’t think an answer will ever be agreed upon, because it is one of those questions that can never be definitively settled. A good thing about it is that it gets people thinking: thinking about computer science, philosophy, and humanity. I will take that as a success.
