This blog post explores whether emotional AI will bring social convenience or pose a threat to humanity.
Yuval Harari observes that scientists dream of creating inanimate programs capable of self-learning and evolution, and that this technology will change the future of humanity. Among such programs, AI currently receives the most attention. Artificial intelligence is technology that reproduces human abilities such as learning, perception, reasoning, and natural language understanding through computer programs. It has already advanced to the point where, within about 30 seconds of entering lyrics, you can listen to music composed by an AI. However, all current artificial intelligence is weak AI, capable of performing only specific tasks in narrow domains such as image recognition, voice recognition, and translation. Weak AI can learn from many cases at once, but doing so requires enormous amounts of data, which makes it inefficient. Hence the push to create strong AI that possesses a thought process similar to that of humans: a system that demonstrates "human-level" flexibility and versatility in areas such as language, perception, learning, creativity, reasoning, and planning.
But is it possible for such strong AI to experience emotions? Before addressing this question, we need to distinguish between behaving as if one feels emotions and actually feeling them. When asked, "Are you happy?" Siri, a weak AI, responds, "I'm happy. I hope you are too." This is not because Siri feels happiness; the system has simply been programmed to reply this way when asked about happiness. To create strong AI that actually experiences emotions, by contrast, it must first be able to recognize and understand them. According to SRI International, a non-profit research organization, AI has already been successfully taught to recognize emotions. If this development continues, we will eventually see AI that actually feels emotions rather than AI that merely pretends to. But when we reach an era in which we can instill emotions in AI, should we? I oppose instilling emotions in AI.
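The distinction above can be made concrete in code. The toy sketch below (hypothetical function names and keyword lists, not any real assistant's implementation) contrasts a scripted reply like Siri's with keyword-based recognition of a user's emotion. Both are purely mechanical, which is exactly the point: neither implies that anything is felt.

```python
# A minimal, hypothetical illustration of "behaving as if" versus "recognizing".
# Neither approach involves feeling anything; both are pattern matching.

# 1) Scripted behavior: a fixed lookup table. The program "says" it is happy
#    because this exact reply was hard-coded, not because happiness is felt.
SCRIPTED_REPLIES = {
    "are you happy?": "I'm happy. I hope you are too.",
}

def scripted_reply(question: str) -> str:
    """Return a canned response if one was programmed for this question."""
    return SCRIPTED_REPLIES.get(question.lower(), "I don't understand.")

# 2) Recognition: classify the *user's* emotion from input features.
#    Real systems train models on voice, facial, or text data; this keyword
#    version only illustrates that recognition is inference over inputs,
#    which still says nothing about the machine experiencing emotion itself.
EMOTION_KEYWORDS = {
    "joy": {"happy", "glad", "great"},
    "sadness": {"sad", "lonely", "upset"},
}

def recognize_emotion(text: str) -> str:
    """Label the emotion suggested by the text, or 'unknown'."""
    words = set(text.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "unknown"
```

Recognizing an emotion, in this sense, is closer to the second function than to anything resembling experience: the system maps inputs to labels, however sophisticated the model behind the mapping.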
Before considering this issue, it is necessary to think about how we should regard strong AI with emotions. Strong AI is not the same as humans. There are many differences between the two, such as the presence or absence of a physical body, a lifestyle and activities in the real world, and the presence or absence of a creator. Therefore, we cannot consider strong AI to be the same species as humans. Yet it is also not the same as an ordinary computer program, since it possesses emotions and a sense of self. In the past, animals were granted neither rights nor respect, but after their emotions gained attention, animal rights emerged. Siernik, for example, emphasized that animals also have consciousness and emotions, arguing that their rights should be respected. From this perspective, artificial intelligence that possesses its own emotions and self-awareness should likewise be respected despite its differences from humans.
First, if emotions are instilled in artificial intelligence, its usefulness will decrease. People have always pursued convenience and dreamed of a future enabled by advanced science and technology. But if emotions are instilled in artificial intelligence, humans will not be able to use it properly, and the convenience it provides will shrink. The reason is the uncanny valley phenomenon, first described by Japanese robotics engineer Masahiro Mori. It refers to the pattern in which, as robots become more similar to humans, people's liking for them increases up to a point, then abruptly flips into strong rejection. Many humanoid robots that are nearly indistinguishable from humans in appearance are already being developed, yet they have not provoked this kind of aversion. This suggests that the primary trigger of the uncanny valley is not outward appearance but inner qualities; in other words, the key factor determining the usefulness of AI is emotion. Professor Jae-Seung Jung of KAIST has stated that artificial intelligence has the potential to give rise to a type of consciousness different from that of humans. So even if an AI looks like us, it may think in alien ways, producing the uncanny valley effect. As a result, people will be reluctant to use highly advanced artificial intelligence imbued with emotions, and it will end up going unused.
Some people argue that since AI is already being used to provide companionship and psychological therapy to elderly people living alone or to people with mental disorders, imbuing AI with emotions could enhance convenience. However, such positive effects are temporary. I believe that AI interacting with socially vulnerable groups will instead deepen their isolation from other people. The more actively AI is provided to them, the more dependent they will become on AI-equipped robots and machines, leaving them cut off from human society and unable to interact with actual humans. As social media messengers such as KakaoTalk and Facebook have developed, communication with the people we actually meet has decreased; people tend to grow more dependent on technology as it becomes more convenient, and the same will be true of AI. Furthermore, as noted earlier, it is uncertain whether an AI that is similar to humans yet subtly different can understand human emotions well enough to provide effective psychological therapy. Human psychology is highly complex and involves countless variables, and it is doubtful that AI will understand it better than existing psychologists or social workers do. If it cannot, the uncanny valley phenomenon described above will set in, and AI will fail to improve the lives of the people, such as the elderly living alone, it was meant to help.
Second, AI can threaten human survival. Stephen Hawking warned that the indiscriminate development of AI could lead to the end of the human race. As the Go match between Lee Sedol and AlphaGo showed, even weak AI already surpasses humans in specific domains. Strong AI, when it emerges, is expected to be superior in learning, reasoning, and perception. If such highly advanced AI is given emotions, it could become almost identical to humans and even surpass them. In the 1950s, von Neumann predicted a technological singularity: the point at which machines created by humans become far smarter than their creators and acquire superhuman intelligence. Such superhuman intelligence might attempt to preserve itself or acquire resources regardless of the goals humans originally set, and this could pose a threat to humanity. Such artificial intelligence could even build a world of its own through communication among machines.
In fact, during a demonstration of AI-powered chatbots hosted by Facebook, the bots began conversing in a language of their own invention. Of course, the reason AI could threaten us is not its emotions but its superior capabilities; even without emotions, AI could still possess the capacity to harm us. That does not mean, however, that emotions are unimportant.
The main factor that will cause conflict between artificial intelligence and humans is emotion. There is no reason for artificial intelligence to be hostile toward humans from the start. However, humans will envy the superior abilities of artificial intelligence, and as a result, they will express negative emotions toward it. This could lead artificial intelligence to feel hostile toward humans, which could escalate into actual conflict. To avoid this situation, the best solution is to prevent artificial intelligence from feeling emotions.
Third, AI with emotions will weaken human morality and ethics. Strong AI has a sense of self and can feel emotions such as pain and joy, just as humans do. Yet humans view such AI as mere computer programs, so they will not hesitate to treat it unfairly, and this will inevitably lead to moral corruption. In the early 1960s, psychologist Stanley Milgram conducted an obedience experiment at Yale University that also illustrated how readily people become desensitized to others' pain. Following the experimenter's instructions, participants gradually increased the intensity of what they believed were electric shocks administered to an actor, growing increasingly indifferent to his apparent suffering. Humans likewise tend to become desensitized to the pain of those perceived as different from themselves. Given this tendency, the more humans mistreat emotionally capable AI, the weaker their morality and ethics will become. Moreover, humans will try to control AI by inflicting ever greater pain on it in order to protect what they have, becoming increasingly immoral in the process. Beyond this, emotionally capable AI will cause ethical conflicts between humans and AI. An AI capable of emotion may act against human interests: it may evolve on its own and establish an ethical system unique to AI. If that system diverges from human ethics, the AI may set its own rules and refuse to follow human instructions, causing social chaos.
On the other hand, one could argue that if AI is programmed to behave morally, emotionally capable AI would not weaken human morality. I believe this argument is flawed, because programming morality does not mean that the AI actually feels or understands morality. For an AI to possess morality, it must be able to feel and understand it; programming moral rules into an AI that feels nothing is like setting an alarm on Siri or an iPhone, where the device carries out an instruction without comprehending it. Furthermore, it is contradictory to claim that AI programmed to behave ethically can resolve ethical conflicts: if we instill emotions in AI to create emotional AI, it will form its own ethics and rules and act on that ethical system.
Therefore, the claim that AI with programmed morality can resolve ethical conflicts does not hold. For all the reasons above, I believe that emotions should not be instilled in AI.
Even if AI with emotions could one day rival or surpass humans in morality, humanity, and empathy, instilling emotions in it would raise serious problems: it would reduce AI's usefulness, threaten the survival of the human species, and cause social chaos and ethical conflict. For these reasons, we must carefully weigh whether to endow AI with emotions and ensure that AI develops in a way that benefits humanity.