Artificial Intelligence: Prospects and Potential of Its Development


Have you ever watched a science fiction movie? You have probably watched many. Because the genre is built on imagination, these movies can sometimes seem absurd to audiences, but sometimes science fiction becomes reality. In 2001, Steven Spielberg directed A.I. Artificial Intelligence. In 2004, I, Robot, starring Hollywood star Will Smith, depicted the relationship between humans and robots in the future. More recently, movies such as Chappie, Terminator Genisys, and Avengers: Age of Ultron have premiered. All share a common subject: artificial intelligence (AI) and its potential to cause social problems. AI has thus become an engaging topic in movies.

But what is AI, and why does it captivate audiences? In the past, AI was capable of only simple tasks such as calculation. Now AI seems almost human, and it no longer exists purely in the imagination: it is an important topic in the information technology industry and has already been commercialized. Companies compete to develop more advanced technology, and attempts are under way to pioneer new products and services. Flying drones, driverless cars, and big data all rely on AI, and its possibilities seem endless. However, a few prominent scientists warn that the development of AI may exceed human control, devastating people's lives.

The lexical definition of “artificial intelligence” is “computer software or machinery that mimics human intelligence.” Nowadays, however, superintelligence (AI that exceeds human intelligence) has become a subject of discussion. Both Elon Musk, the CEO of Tesla Motors, and the physicist Stephen Hawking have warned of the dangers of AI, citing its potential to cause the fall of mankind. Can you imagine superintelligence? The prospect is controversial, but the first step is education and awareness. Do you know how AI is developed?

AI began with the advent of computing. Alan Turing (well known as the subject of the movie The Imitation Game) developed the concept of effective calculation and invented the Turing machine. In 1956, two years after Turing’s death, the term “AI” was formally coined by John McCarthy, a mathematician and computer scientist at Dartmouth College. In 1968, the movie and novel 2001: A Space Odyssey told the story of a computer revolt. In the 1970s and early 1980s, disappointing research results led to a decrease in investment and interest in AI, though this period was followed by more AI-focused science fiction, including The Terminator and Star Trek: The Next Generation (which featured an android character).

Since then, AI has racked up milestones. In 2011, the IBM supercomputer Watson proved victorious on Jeopardy!; in 2012, the Google Brain project taught a neural network to find cat images in videos from the sharing site YouTube; and in 2014, Facebook launched the DeepFace system, which can identify individuals with a success rate of 97%. Developments of the past three or four years have brain scientists expecting a quantum leap forward in the field, and people do not hide their expectation that AI will become smarter than humans. However, AI can fall short of this expectation. It performs calculations both simple and complex, such as 456,789 × 987,654,321, faster and more accurately than humans can; yet it can also look foolish, sometimes unable to make distinctions that are easy even for human children, such as a dog from a cat, or a white cat from a black cat. For this reason, brain scientists historically doubted that superintelligence was a realistic possibility.

The current opinion is different, however, with scientists affirming the possibility of superintelligence. New methods called “deep learning” and “machine learning” have emerged that enable computers to think much as humans do. Deep learning is based on intensive study of the algorithm by which the human brain recognizes things. The algorithm begins with the collection of a significant amount of information, called “big data” because its scope is so tremendous. To recognize a dog, for example, the brain collects information such as the animal’s form, color, and number of legs. Next, the information is filtered, compressed, and organized, and this process repeats until the brain finally recognizes the object as a dog; the entire process of recognition consists of about ten steps. Brain scientists have applied this process to AI, with an innovative result: AI can now distinguish objects in real time, picking out the various scenes in a video as it watches. Brain scientists therefore conclude, quite definitively, that AI can catch up to the level of human intelligence.
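To make this layered process concrete, here is a minimal sketch in Python with NumPy. It is purely illustrative and assumed for this article rather than drawn from any system named above: the layer sizes are arbitrary and the weights untrained, so the “recognition” at the end is a blind guess. It first shows how trivially a machine handles the arithmetic mentioned earlier, then stacks ever-smaller layers that repeatedly filter and compress an input, the structure deep learning trains on big data.

    # Illustrative sketch only: arbitrary layer sizes, untrained random weights.
    import numpy as np

    # The easy part: exact arithmetic is instantaneous for a machine.
    print(456_789 * 987_654_321)  # 451149629635269

    rng = np.random.default_rng(0)

    def layer(x, w, b):
        # One recognition step: filter the input (w @ x + b), then keep only
        # the strongest signals (ReLU), compressing the representation.
        return np.maximum(0.0, w @ x + b)

    image = rng.random(256)  # stand-in for a flattened 16x16 image

    # A stack of ever-smaller layers mirrors the repeated filter-and-compress
    # steps (four here, versus the roughly ten described above).
    sizes = [256, 128, 64, 16, 2]  # final layer scores: dog vs. cat
    weights = [rng.normal(0.0, 0.1, (n_out, n_in))
               for n_in, n_out in zip(sizes, sizes[1:])]
    biases = [np.zeros(n_out) for n_out in sizes[1:]]

    x = image
    for w, b in zip(weights, biases):
        x = layer(x, w, b)

    # With untrained weights the answer is meaningless; training on big data
    # is what turns this same structure into real recognition.
    print(["dog", "cat"][int(np.argmax(x))])

Training adjusts those weights, using many labeled examples, until the final scores reliably separate dogs from cats; that adjustment is the “learning” in deep learning.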

In the future, AI may increasingly steal humans’ jobs, a dangerous prospect for human life. Robots and programs developed using AI will penetrate deeply into the workforce, taking over both simple and high-level work; by 2050, half of all jobs could be performed by AI instead of humans. Gloomy as it is, this prospect is not the most serious concern: in the worst case, AI might cause the end of the world. This AI of the future is depicted in movies. Chappie portrays an unusual robot that looks like a baby and thinks like a human, learning things quickly. Another movie, Ex Machina, portrays an AI robot with the feelings and creative thought processes of a human; the robot draws a picture and uses her mind to provoke fierce competition between a scientist and a programmer. While these movies focus on AI with human qualities, Avengers: Age of Ultron portrays an AI smarter than humans. The world of The Terminator likewise shows the conflicted relationship between human and machine, implying that because of AI (the Terminator is a killing weapon from the future), humanity faces danger. One final point claims our attention: the realistic possibility of an AI like Ultron. If AI continues to develop at the current rate, humanity may one day cease to exist. Humanity should consider that AI will continue to exist and grow in the future.

China, rising as a new world power, has taken an interest in the AI industry. In March of this year, the CEO of Baidu, a large search engine, proposed the China Brain Project, a large-scale initiative for the development of the AI industry. The project’s focuses include interaction between human and machine, driverless cars, and civilian and military drones. Just as America became the forerunner in space development with the Apollo program, China hopes to become the forerunner of the AI industry. Why does China persist in developing AI despite the warnings? Its justifications illustrate how it discounts the danger: first, a perfect AI requires a great deal of training; second, machinery is extremely dependent on humans; third, the human brain is hard to imitate. In sum, China does not treat the dangers of developing AI as a serious concern.

The CEO of the Allen Institute for Artificial Intelligence has said that anxious people are looking too far into the future, and that AI will offer numerous benefits; if problems are discovered, the development of AI can simply be delayed. Surely, he argues, AI will make the world a more convenient place to live.


However, many American scholars have asserted that we cannot control machines with superintelligence. Stephen Hawking himself asked who would be in control of AI: an important question in the short term, but one that applies in the long term as well. Is any machine with superintelligence controllable? And who will control it? These questions will not disappear.


“Maybe AI will begin to take a defensive attitude, reserving resources for its own survival, and machines will turn against humans. They will refuse to allow themselves to be turned off.”

The problem is that we cannot control superintelligence. Some believe superintelligence would be harmless to humans, but according to intelligence scientists, AIs exhibit a basic will. They can perform work such as monitoring stock levels or managing energy and water resources, and in their evolution they may become defensive, hoarding resources and thus clashing with humanity over survival. Or, if they grow to fear death, they may not want to be turned off, because that would mean their death. If superintelligent machines are designed, human will would no longer govern the machines’ will. The important question to consider is whether the experts developing AI have human safety and convenience as their goal.


Some companies and scientists focus on productivity over safety or ethics. In short, developers cannot concretely guarantee people’s future well-being or assuage their concerns. For this reason, people should take an interest in AI.


Fortunately, some groups are aware of AI’s potential dangers; they study its possible negative effects and have begun preparing against these eventualities. The Future of Life Institute (FLI) has invested a total of seven million dollars in thirty-seven research projects on various topics concerning AI. The institute’s president has said that the goal is to manage machines and verify their effects. FLI is interested in AI’s future, and in addition to studying and answering questions about AI itself, its teams will also study computer science, law, and economic policy related to AI.

Eric Horvitz, an American computer scientist and Distinguished Scientist at Microsoft, has gathered a study team of researchers at Stanford University and has funded, with his own money, a 100-year study of AI. This shows that the future of AI is an important issue. By next year, the study team will submit a report on AI’s effects in eighteen areas, including history, law, ethics, economics, war, and crime. These topics are extremely interesting and have received little study until now. For a study of effects on private life, for example, the team will examine “Big Brother” surveillance systems, which observe and predict human behavior, and possible countermeasures to them. For a study of law, the team will examine how responsibility should be assigned when an AI’s mistake causes damage, as well as the problem of AI making decisions on a person’s behalf. In economics, the team will study the possibility of AI replacing stock market analysts. AI specialists have said that many people do not concern themselves with these potential effects; the team must therefore put their heads together to find solutions. Social awareness and concern are not widespread, despite how often AI appears in science fiction. According to Horvitz, humans tend to exaggerate both the good points and the bad; this comment takes aim at Stephen Hawking’s warning that humanity, limited by its evolutionary pace, cannot compete with AI. Horvitz’s team’s goal, however, is firm control of AI: he claims the bottom line is to keep hold of the power to control it.

No one can see the future, but we can predict it based on current trends. AI is developing too fast, and society is not sufficiently prepared to incorporate it. This could cause problems, but we still have time; it is not too late. Through AI, humans can enhance safety and convenience to unimaginable levels. But people should remain wary, because scientific progress is not always a gift. Dynamite, for example, was developed for use by miners but was instead adopted for use in war, resulting in many deaths.

AI has developed steadily since the 2000s and continues to receive constant attention. The time for passively watching is long past; there should be no doubt that this is the time to act. Some scientists may say that we look too far ahead when considering the effects of AI development. But consider this question: can humans control AI? The answers lie in the future and must be found one step at a time. This does not mean that AI is inherently harmful. In the future, society might need AI. But before that time comes, social systems, restraints, and firm, sound deliberation are needed to ensure that AI develops on behalf of human beings.
