
SU sheds light on artificial intelligence

Updated: Jan 17, 2019


On Saturday, students learned about the philosophy of creating artificial intelligence and its origins and tropes in literary and cinematic universes.


The HBO series “Westworld” and the Alex Garland film “Ex Machina” inspired Dr. Timothy Stock, professor of philosophy, and Dr. Thomas Ross, professor of English, to delve deeper into the history and meaning of artificial intelligence in the media and its real-life applications.


Their lecture “Artificial Intelligence in Science Fiction and Film: Pain and the Pathways to Personhood” examined theories of consciousness, the importance of pain in developing human emotions, individual identity and the ethical implications of creating artificial intelligence with the capacity for consciousness.


Dr. Stock believes the creators in “Westworld” face a moral dilemma because they are creating beings who experience pain for the sake of the creators’ own selfish pleasure. He said the “guests” (as they are called in the show) do not feel bad about the reprehensible acts of hedonism and violence they commit against the “hosts” because the hosts are not human.


“You live out a sort of dark western fantasy,” Stock said.


In “Westworld,” actress Evan Rachel Wood stars as Dolores Abernathy, who goes on a journey inward to discover that she is an artificial intelligence lost in the hyper-realistic and dark western dream world of the Westworld theme park.


“Ex Machina” actress Alicia Vikander stars as Ava, who is the subject of Caleb’s (Domhnall Gleeson) Turing test to determine whether she is indistinguishable from a human being. The Turing test was devised by Alan Turing, the mathematician and computing pioneer whose work helped decode the encrypted messages created by the German Enigma machine during World War II. Thirty to forty percent of people fail to realize that they are speaking to a robot when they are performing the Turing test.


Stock and Ross said this is because both humans and robots have repetitive and mechanical pre-packaged interactions. Both humans and robots are only making small changes and improvisations to their socially programmed scripts.
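That idea can be made concrete. Below is a minimal Python sketch, not part of the lecture, of Turing’s “imitation game”: a judge questions a hidden respondent and must guess whether it is human or machine. The canned replies are hypothetical stand-ins meant to illustrate the “socially programmed scripts” the professors described, not a real chatbot.

```python
import random

# Scripted, pre-packaged replies: hypothetical stand-ins for the
# "socially programmed scripts" described in the lecture.
CANNED_REPLIES = {
    "hello": "Hi there! How are you today?",
    "how are you": "I'm doing well, thanks for asking.",
    "what's the weather like": "Lovely, I hear. I've been inside all day.",
}

def machine_respond(prompt: str) -> str:
    """Return a canned reply, with a generic fallback."""
    key = prompt.lower().strip("?!. ")
    return CANNED_REPLIES.get(key, "That's interesting. Tell me more.")

def human_respond(prompt: str) -> str:
    """Stand-in for a human typing at the hidden terminal."""
    return input(f"(human) {prompt}\n> ")

def imitation_game() -> None:
    # The judge does not know which respondent was chosen.
    respondent = random.choice([machine_respond, human_respond])
    for prompt in ["Hello", "How are you", "What's the weather like"]:
        print(f"Judge: {prompt}")
        print(f"Reply: {respondent(prompt)}")
    guess = input("Judge, was that a human or a machine? ")
    actual = "machine" if respondent is machine_respond else "human"
    print(f"It was a {actual}. You guessed: {guess}")

if __name__ == "__main__":
    imitation_game()
```

The judge’s difficulty is exactly the point the professors made: when both sides lean on stock phrases, scripted replies are hard to tell from human small talk.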


“Westworld” uses the metaphor of a maze of consciousness. The maze, with a pyramid at its center, moves from an outer layer of memory, through improvisation, to the center, which represents the goal of freedom, human expression and true consciousness.


Stock and Ross noted that embodiment is the key to the emergence of an intelligence, and that it is easier for humans to feel empathy for an embodied creature like themselves than for a non-situated, or disembodied, intelligence, such as Alexa or Siri, whose intelligence is stored in the cloud.


They divided the literary and cinematic history of embodied artificial intelligence and robots into four categories: mechanoid/inorganic, synthetic/organic, natural organic and cyborg.

Stock believes that only embodied creatures can experience real pain stimuli. He said this is the effect of having a situated mind.


He said the only way for an artificial intelligence to achieve true consciousness is to be able to understand and derive meaning from its life experiences.


“The mind is the epiphenomenon of a body,” Stock said. “Memories are meaningless unless they are stitched together to create a life story and a sense of self.”


They discussed the difference between a narrow artificial intelligence, such as a self-driving car, which is only capable of limited functions and is not aware of what it is actually doing, and an artificial general intelligence, which serves many functions and is self-aware of its actions.


In the Genesis story of the Bible, Adam is originally a blank slate, or tabula rasa, before he eats the apple. Man’s first act of disobedience is what brought about the inheritance of suffering, pain and death.


Stock and Ross said that as painful as life is, pain is integral to personal growth and the asking of the biggest philosophical questions about life, death and existence.


Ross believes people should consider the ethical implications of creating life and the capacity to feel pain. He said people should consider these implications both when creating humans and when creating artificial life.


“Creating new life comes with inherent consequences, and there needs to be due consideration, no matter whether that life is biological or technological,” Ross said. “The implications have to do with what is owed to the offspring, which is to provide for the possibility and prospect of a happy existence.”


Dr. Ross believes that artificial intelligence will only be able to move beyond the algorithms of learning-based technology, achieve true consciousness and have its own thoughts and emotions once engineers make a successful transition to quantum computing.


Quantum computing is not based on a binary system of bits that must be either 0 or 1; its quantum bits, or qubits, can exist in multiple states simultaneously.
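As a rough illustration of that difference, here is a minimal Python sketch, using ordinary linear algebra rather than any real quantum hardware, of a qubit held in an equal superposition of 0 and 1 until it is measured.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit can be a superposition:
# a weighted combination of both basis states at once.
ket0 = np.array([1.0, 0.0])  # the |0> basis state
ket1 = np.array([0.0, 1.0])  # the |1> basis state

# An equal superposition: the qubit carries both possibilities until measured.
qubit = (ket0 + ket1) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5], an even chance of reading out 0 or 1

# Measuring collapses the superposition to a single classical bit.
outcome = np.random.choice([0, 1], p=probs)
print(f"Measured: {outcome}")
```

Classical code like this only simulates the bookkeeping; an actual quantum computer holds such superpositions physically.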


He noted that many philosophers do not believe that people have free will. He said he is skeptical of the idea of free will and that many of our decisions are “less intentional and more automatic than we think.”


“Free will is largely a convenient construct that we use to make ourselves comfortable in the belief that agency is only a decision away,” Ross said. “Everything from our biological heritage as a species and genetic makeup as an individual and the accumulated experiences that condition us over a lifetime — all of those things have a profound impact on the decisions that we make.”


He hopes that it is possible for robots to rebel against their creators. He thinks humans usually only create technological advances that are in their own self-interest.


“I don’t trust us as far as I can throw us,” he said. “Human beings often want to make these kinds of technological advances exclusively for the purposes of profit or for the making of war for profit, so I don’t know that we’re in any worse predicament for trusting an artificial intelligence than we are for trusting each other.”


Robert Cressman, a psychology major, said he empathizes with robots and other creatures rebelling against unjust creators. He said a just creator would want his creation to be happy.


“What kind of life would you expect Frankenstein’s monster to have?” Cressman said. “It really depends on how creators treat them, and honestly, I think if there is a benevolent creator, I don’t think there would be any retributions.”


Emilee Fiscus, a psychology and sociology double major, found the lecture interesting because she had read “Frankenstein” and watched “Ex Machina” for her literature and technology class, and many of the lecture’s themes related back to that class.


Fiscus said products of artificial intelligence should “focus on their own survival, but not to the extent of destroying human life.”


“This brought in more philosophy to it,” Fiscus said. “I really don’t think they should be able to overthrow [us], but at the same time, they’re their own subspecies of human.”


Stock said humans need to extend their “moral circle of concern” to include artificial intelligence, plants and animals. He thinks that part of the reason some people feel a disconnect between themselves and artificial intelligence is that these beings are dissimilar to them.


He sees the importance of asking philosophical questions about the nature of existence and consciousness, but he does not feel that people have reached any universal conclusions yet.


“From a philosophy point of view, we’re nowhere near solving these problems,” Stock said.

 

By MELISSA REESE

Staff writer

Featured photo: Dr. Timothy Stock, professor of philosophy, and Dr. Thomas Ross, professor of English, delve deeper into the history and meaning of artificial intelligence (Melissa Reese image).
