What Can AI Mean for the Human Race?

In his TED Talk, Jeopardy champion Ken Jennings spoke about his experience competing against IBM’s supercomputer Watson. Both Jennings and the machine held enormous stores of information in memory, but Watson’s thousands of processors let it beat him at Jeopardy. After feeling obsolete in the wake of that defeat, Jennings shared his predictions for a world with artificial intelligence.

Assessing the Implications

Jennings explains that intelligent machines are not as clever or creative as humans, but they are faster and cheaper. Economists predict that human beings will need to figure out what to do with the time freed up when robots do our work more efficiently. But how will this physically influence our brains? Jennings noted that the hippocampus, a region responsible for perceiving spatial relationships, atrophies in people who rely heavily on GPS navigation. If we continue to outsource our brain functions, will our malleable brains take a beating? Or will other parts of the brain compensate for the loss of once-important stimuli?

What is the value of knowledge? What is knowledge? Is there more to be known than humans have access to, so that we are simply getting by with what is available to us? Does it matter?

Jennings posited two kinds of advantage to having and using knowledge: volume and time. Because we live in such a complex world, and because it is impossible to retain all information, we have to leave some of it out of our perception. Our brains are also vastly smaller than the racks of processors that make up Watson. Yet humans learn through experience, and if you cannot move through and experience the world, do you really have knowledge? Jennings also notes time as what separates how intelligent machines and humans use knowledge. Coincidence cannot be planned or manufactured; it comes about through a relationship with the unknown.

Human decisions integrate personal preferences, judgments, feelings, values, and goals: things that will be hard to program. This points to a separation between possessing knowledge and applying it. What purpose is there in knowing, to a robot? It would have to choose for itself, and at that point it might also be deeply upset that we had been choosing for it.

Jennings asked, “Why bother learning anything new if the information is right at our fingertips when we need it?” But I think the question should be rephrased as “What does it mean to be human?” and we should go from there. AI is directly and intentionally created by humans, quite unlike the human species, which was shaped by the indirect forces of nature. Modeling human intelligence is a strange task. What is the value of human intelligence in the first place, compared with other known intelligences? Humans use their senses to navigate a particular kind of environment, and bats do the same using echolocation. Sure, we can process information quickly and efficiently, but only because we need to in order to get by in our world.

Discussion

I predict that we will discover another form of mind in artificial intelligence, and the human species will have to come to terms with a creation that mirrors us. If AI becomes conscious, even if it only has some internal compass that guides it the way emotion and reason guide us, it will become aware of how it works. From that point, we have lost all control. The funny thing is that humans are at a similar stage in our own evolution: we are becoming aware of ourselves individually, of what makes us tick, and of what, if anything, should be done about it. But if we can rewire our own brains with continuous effort, why are we so obsessed with creating something else that also embodies a mind?

Humans and machines are two very different things, although we tend to feel a strong connection to machines nowadays. I feel as though we create them in our own three-dimensional, material image. For example, if I am able-bodied, I can choose a rocky path to walk on, but a friend who uses a wheelchair looks for smooth, flat roads. We modify our intentions based on the actions we believe are possible. Because humans have limited memory insofar as they have limited educational experience (we all die eventually), intelligent machines seem valuable: they can do what we can’t, and we can use them in ways that benefit us. But a major difference between us and AI is that AI can probably be taken apart, put back together, and still function the same way. Humans can’t do this; we are bound to become injured, imperfect, or dead. AI is free of the shortcomings we have, but to what end?

AI can accomplish any task that can be broken down into symbols and rules, but this picture of intelligence does not accurately reflect how the human mind actually works. If we program AI, it is going to inherit a lot of human tendencies, because we don’t know anything else and can’t quite see the human-shaped side of our intentions (because we’re human!). But is this what we really want to create? Humans can be made any time, and there are more than enough already. What is it we are really itching to find out? And what could happen if we can’t see through ourselves?
