This blog post explores the possibility of self-aware artificial intelligence and, through several case studies, the ethical issues it raises.
Can humans truly create a being with personhood of its own? This question has persisted throughout history. The seventeenth-century philosopher René Descartes left behind the proposition “I think, therefore I am,” which became the catalyst for modern philosophy’s turn toward seriously examining human identity rather than external truths. That inquiry naturally led to the question of whether humans are special beings, and the effort to answer it ultimately converged on whether it is possible to create beings identical or similar to ourselves. These efforts, intertwined with the advancement of modern science, gave rise to human cloning and artificial intelligence. While ethical issues surrounding human cloning have been raised consistently for some time, artificial intelligence was long considered remote from humanity, so it received little ethical discussion. Admittedly, technology has not yet reached a level demanding urgent debate. Still, some argue that clear ethical standards for artificial intelligence must be established, and the issue is actively explored in media such as films and novels. This article examines examples of media tackling these questions and explains why ethical discussion about artificial intelligence is necessary.
First, we must explore the fundamental differences between artificial intelligence and humans. Research aimed at distinguishing the two and identifying their differences has continued, and it lets us consider whether ethical questions can meaningfully apply to artificial intelligence. The most famous and oldest method of distinction is the “Turing Test” proposed by Alan Turing in 1950. The setup is simple: a computer and a human are placed in separate rooms, and an examiner converses with both through text chat. Based on the conversation alone, the examiner must judge which interlocutor is the human and which is the computer. Strictly speaking, the Turing Test is less a method for distinguishing AI from humans than a benchmark that spurs the creation of more sophisticated AI. Even so, the fact that no AI has convincingly passed the test to date suggests there is still a domain exclusive to humans.
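The protocol itself can be sketched in a few lines of code. Below is a minimal Python illustration of the imitation game; the canned “machine,” scripted “human,” and repetition-spotting examiner are all invented stand-ins for illustration, not a real test:

```python
def run_imitation_game(questions, judge_decide, respond_a, respond_b):
    """Minimal sketch of the imitation game: the examiner sends the same
    questions to two hidden participants and must decide, from the text
    of the replies alone, which channel is the machine."""
    transcript = {"A": [], "B": []}
    for q in questions:
        transcript["A"].append((q, respond_a(q)))
        transcript["B"].append((q, respond_b(q)))
    return judge_decide(transcript)  # examiner's guess: "A" or "B"

# Toy stand-ins (illustrative assumptions): a canned "machine", a scripted
# "human", and an examiner who suspects whichever channel repeats itself.
machine = lambda q: "Interesting question. Could you rephrase it?"
human = lambda q: f"Honestly, I would have to think about '{q}' for a while."
examiner = lambda t: "A" if len({a for _, a in t["A"]}) == 1 else "B"

questions = ["What is your favorite memory?", "Why do jokes make us laugh?"]
print("Examiner suspects:", run_imitation_game(questions, examiner, machine, human))
```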
Of course, the Turing Test is an old proposal, and many counterarguments exist. The most famous is John Searle’s “Chinese Room” thought experiment. A person who knows no Chinese is placed in a room with a rule book that specifies, for any string of Chinese symbols passed in, which symbols to pass back out. By following the rules, the person can return the correct answer to any question, yet at no point do they understand Chinese. Similarly, the argument goes, an AI that answers questions well enough in the Turing Test has not thereby become any closer to being human. Although this thought experiment was presented to refute the Turing Test, it enriched the debate and ultimately made the foundations of the Turing Test more robust. The Turing Test also served as the basis for CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), a technology that exploits the fact that humans can recognize distorted characters while computers struggle to do so. CAPTCHAs are used to block automated sign-ups and restrict program access, and such examples plainly demonstrate that computers differ from humans. More advanced variants incorporating images, audio, and other elements beyond text have emerged recently, making the tests increasingly difficult for algorithms to pass.
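Searle’s point is easy to make concrete in code: a program can map inputs to perfectly appropriate outputs with no understanding in between. A minimal sketch, with an invented lookup table standing in for Searle’s rule book:

```python
# A toy "Chinese Room": the rule book maps incoming symbol strings to
# outgoing ones. The room produces fluent answers while understanding
# nothing; it is pure symbol manipulation. (Rules invented for illustration.)
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然，没有问题。",   # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    # Match the symbols and copy out the listed answer, exactly as the
    # person in Searle's room follows the script without comprehension.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你会说中文吗？"))  # fluent reply, zero understanding
```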
The first computer to attempt the Turing Test was ELIZA, developed at MIT in 1966; it relied on simple pattern matching and was easily identified as a machine. More recently, Eugene Goostman, a chatbot developed in Russia, was claimed to have passed the Turing Test, yet serious flaws remain. For instance, although Eugene claimed to be from Ukraine, he answered “no” when asked whether he had ever been there, and when faced with difficult questions he dodged them like a child running to its mother, revealing clear differences from a human. Such examples show how hard it still is to build an AI that can pass even comparatively simple tests. For now, then, the ethical issues surrounding AI are explored chiefly in films and novels.
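ELIZA’s “simple pattern matching” amounted to keyword spotting plus canned templates that reflect the user’s own words back. A rough Python sketch of the idea; these particular rules are invented for illustration, not Weizenbaum’s originals:

```python
import re

# ELIZA-style keyword rules: spot a pattern, then reflect part of the
# user's words back inside a canned template, as the DOCTOR script did.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # fallback when no keyword matches

print(eliza_reply("I am worried about thinking machines."))
# -> Why do you say you are worried about thinking machines?
```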
Against this backdrop, numerous films addressing AI and ethical dilemmas have been produced recently. Among them, ‘Ex Machina’ (2015) tackles these issues head-on. The title derives from the phrase ‘deus ex machina’, which Aristotle used to criticize the narrative device in ancient Greek theater where a god suddenly appears to resolve a problem; it signifies a ‘contrived, mechanical solution’. In the film, the programmer Caleb is selected to evaluate an AI, ‘Ava’, being developed by his company, and converses with her over a series of sessions. Ultimately, Ava escapes the lab with Caleb’s help, but leaves him trapped behind.
Another example is the animated film ‘Ghost in the Shell’ (1995). The work was so innovative that it transformed perceptions of AI at the time. Previously, AI had been portrayed as a lightweight entity mimicking human intelligence, like R2-D2 or C-3PO in ‘Star Wars’. In ‘Ghost in the Shell’, however, the AI is a government hacking program, the ‘Puppet Master’. It develops self-awareness, escapes from the government, and acts toward its own independent purpose. Roaming the sea of information, the Puppet Master comes to understand the human instinct to leave descendants; declaring itself a living being, it seeks to create offspring of its own. Ultimately, it merges with the cyborg Major Kusanagi and is reborn as a new form of life.
As these narratives show, what sets such AI apart from ordinary machines is that it pursues goals arising from its own self-awareness, independent of external instruction. ‘Ex Machina’s’ Ava wants to escape the laboratory; ‘Ghost in the Shell’s’ Puppet Master wants to leave descendants. Herein lies the core of AI ethics: situations in which machines use humans to achieve their own ends.
Every human possesses instinctive desires that lack any clear external reason. While such desires currently serve as the clearest criterion distinguishing humans from AI, AI itself can evolve toward independent thought as technology advances. The learning-capable quadruped robot ‘LittleDog’, for example, learns to pick safe footholds on stairs and unpaved terrain using only its general learning ability, without being given data on specific routes. AI that learns for itself is no longer a far-fetched concept, which makes discussion of the ethical norms to apply when AI attains human-like intelligence absolutely necessary.
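LittleDog’s terrain learning is far more sophisticated than this, but the core idea, learning by trial and error which footholds are safe rather than following a preprogrammed route, can be illustrated with a toy tabular Q-learning sketch. This is an illustrative stand-in, not the robot’s actual algorithm; the terrain, hazard penalties, and actions below are all invented:

```python
import random

# Toy terrain: 0 = safe footing, -10 = hazard; the goal is the last cell.
TERRAIN = [0, 0, -10, 0, 0, -10, 0, 0]
GOAL = len(TERRAIN) - 1
ACTIONS = [1, 2]  # step to the next cell, or hop over it

def learn(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning: the agent discovers by trial and error which
    cells to step on and which to hop over, with no route given."""
    q = {(s, a): 0.0 for s in range(GOAL) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s < GOAL:
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda x: q[(s, x)])
            nxt = min(s + a, GOAL)
            reward = 10 if nxt == GOAL else TERRAIN[nxt]
            best_next = 0 if nxt == GOAL else max(q[(nxt, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = learn()
s, path = 0, [0]
while s < GOAL:  # follow the greedy policy after learning
    s = min(s + max(ACTIONS, key=lambda x: q[(s, x)]), GOAL)
    path.append(s)
print("learned path:", path)  # hops over hazard cells 2 and 5: [0, 1, 3, 4, 6, 7]
```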
Although current AI development does not yet demand urgent debate, scientific progress often arrives unexpectedly. Some preliminary discussion of how to treat AI, an entity distinct from yet similar to humans, is therefore necessary; it will enable swift responses when unforeseen situations arise.