Must artificial intelligence resemble humans in order to be recognized as useful intelligence?

In this blog post, we will revisit the criteria for evaluating artificial intelligence through the Turing test and the Chinese room argument, and consider whether intelligence can be useful even if it does not resemble humans.

 

Computers have developed at an unimaginable speed over the past few decades, and artificial intelligence, a product of computer science, has accelerated along with them. As countless novels and films attest, people have long been both fascinated by and fearful of artificial intelligence. The AI we encounter today can understand our speech and provide limited responses, but AI capable of thinking like humans could emerge within a few years. As AI has developed, there have been many efforts to evaluate it, and the most famous measure among them is the Turing test.
In 1950, shortly after the first electronic computers were built, Alan Turing proposed the Turing test to evaluate whether a computer could be as intelligent as a human. The test is simple: judges hold a text chat with either a computer or a human, and if, after five minutes of questions and answers, more than 30% of the judges cannot tell the computer from the human, the machine passes. The first claimed pass came 64 years later, in a 2014 contest organized by the University of Reading in the UK, with a chatbot named "Eugene." To make its lapses less conspicuous, "Eugene" was presented as a 13-year-old boy whose native language was not English. Some argue that this caveat disqualifies it as a proper pass, while other scholars predict that an AI capable of passing the Turing test without such restrictions could emerge as early as 2029. Considering the pace of AI development, future AI is expected to become far more human-like than we currently imagine, eventually taking over tasks, such as analyzing and processing vast amounts of information, that humans struggle to handle.
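The pass criterion described above reduces to simple arithmetic over the judges' verdicts. Here is a minimal sketch in Python; the function name and the verdict format are illustrative assumptions, not part of the original test protocol.

```python
def passes_turing_test(judge_verdicts, threshold=0.30):
    """judge_verdicts: list of booleans, True if that judge was fooled,
    i.e. could not tell the machine from a human after the five-minute chat.
    The machine passes if the fooled fraction exceeds the threshold."""
    fooled = sum(judge_verdicts)
    return fooled / len(judge_verdicts) > threshold

# Example: 10 judges, 4 fooled -> 40% > 30%, so the machine passes.
print(passes_turing_test([True] * 4 + [False] * 6))  # True
```

Note that exactly 30% is not enough: the criterion is strictly more than 30% of judges fooled.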
However, some question the direction of this development. Does AI necessarily have to resemble humans? Is an AI better the more human-like it is, and inferior otherwise? Having a standard certainly makes it easier to set goals and move forward, and it is undeniable that humans are nearly the only standard we have. Even so, I would argue that humans are not the right standard for AI.
The Turing test itself cannot measure how close AI is to human intelligence. While the test can assess whether a computer appears human-like, some scholars question whether passing it proves that the computer possesses human intelligence. The Chinese Room, a famous thought experiment proposed by the philosopher John Searle against the Turing test, makes us rethink what the test actually shows. Imagine a person who understands no Chinese locked in a room with a list of questions in Chinese and the answers to those questions. Even though the person cannot speak a word of Chinese, they can match each incoming question to its listed answer and reply to the examiner. The person in the room understands nothing of Chinese, and even if they answer perfectly, the examiner has no way of knowing this. The Turing test, Searle argues, therefore cannot distinguish a machine that possesses intelligence from one that is simply reciting stored answers. Searle's point is not that artificial intelligence is impossible, but that stricter criteria are needed when judging it. Of course, the principles by which humans understand questions and formulate responses have not yet been clarified, but I am confident they differ from the way today's AI takes in a question and returns a stored answer. Therefore, if true artificial intelligence is to think in a manner similar to humans, the first step would be to understand how human intelligence works.
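The Chinese Room can itself be sketched as a few lines of code, which makes Searle's point vivid: the program below answers questions it has answers for, yet attaches no meaning whatsoever to the symbols it shuffles. The rule book's contents are illustrative assumptions, not examples from the thought experiment itself.

```python
# A toy "Chinese room": a pure lookup table that replies without understanding.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "It's nice today."
}

def chinese_room(question: str) -> str:
    # Symbol matching only: the program never interprets the characters.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```

To an examiner who only sees the replies, this lookup is indistinguishable from understanding, which is exactly the gap Searle says the Turing test cannot close.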
Efforts to understand human intelligence and program it have been around for a long time. But before we get there, can we even be sure that human intelligence works the same way in everyone? The controversy over "the dress" earlier this year raises exactly this question. It showed the whole world that people literally see colors differently: some saw the dress as white and gold, others as blue and black. Had the controversy not spread worldwide, each group would have gone on believing its colors were the real ones. If even one small component of intelligence, color perception, varies this much from person to person, we cannot assume that everyone's intelligence functions in the same way.
Given that we ourselves do not fully understand how intelligence works, can we use human standards to evaluate computers? And even if we did understand it, would that settle the matter? Must artificial intelligence think exactly like humans to count as artificial intelligence? Imagine that technology advances to the point where we make contact with an extraterrestrial civilization, one that developed in a different environment and communicates, thinks, and perceives the world in ways unlike ours. How would we evaluate its intelligence? We could hardly dismiss it as non-intelligent simply because it thinks in ways we cannot understand. Using humans as the sole standard for intelligence leads to exactly this kind of absurdity. And if we grant that aliens with incomprehensible ways of thinking possess intelligence, then computers that respond similarly to humans, or that are simply useful to humans, can likewise be recognized as artificial intelligence.
When determining whether a human-made device such as a computer possesses artificial intelligence, it need not operate the same way we do. If it is sufficiently useful to humans and helps improve our lives, it deserves to be recognized as artificial intelligence, whether or not it functions like human intelligence.

 

About the author

Writer

I'm a "Cat Detective." I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.