By Sebastian Sheath
Warning: This post contains spoilers for Alex Garland’s Ex Machina (2014).
Ex Machina is probably my favourite film of all time. Not only is it visually impressive, suspenseful, and well constructed, but it also poses interesting questions for the viewer, potentially causing them to re-evaluate their morals. With an excellent, thought-provoking script from Alex Garland, author of The Beach and screenwriter of 28 Days Later, its characters are some of the most complex and interesting in cinema. This makes the film elaborate enough to reward every rewatch, but not so much that a first viewing is confusing. Why does this film work so well? What questions and concepts does it pose, and how plausible are they?
What IS True AI?
The nature of consciousness is one of the film’s biggest themes. The film centers entirely around the Turing Test that Caleb (Domhnall Gleeson) conducts to decide whether Nathan (Oscar Isaac) has truly created a conscious machine, and it delves into how to test whether Ava (Alicia Vikander) is truly intelligent or not.
The film discusses simulated AI versus true AI early on. In a real Turing Test, the person or machine you are communicating with is hidden behind a wall, and the tester must decide whether they are talking to a human or a machine. This means that a machine can pass without being intelligent: it can simulate a human response without truly understanding the conversation. This is where Ex Machina differs from a normal Turing Test – the subject is revealed to be a machine before the test begins, and the challenge is not to seem human but to prove that the machine is truly intelligent (and not simply imitating intelligence).
In a later scene, Caleb makes a comparison to testing a chess computer by playing chess. He also, in a session with Ava, brings up the ‘Mary in the black and white room’ thought experiment (A.K.A. ‘The Knowledge Argument’). Again, this explores the difference between simulated and true AI, as it looks at the difference between simply acting as programmed and understanding what you are doing and why. The chess computer comparison illustrates that, though a computer may be able to play a perfect game, it is simply carrying out programmed instructions. Unlike the chess computer or chatbot, true AI must be self-aware.
Ava’s self-awareness is displayed early on when she makes a joke and fires one of Caleb’s comments back at him. Repeating one of Caleb’s comments shows that not only does she have a true understanding of the conversation on a larger scale, but she is also aware of herself and capable of manipulating both herself and others. This self-awareness is also shown in Ava’s desire to be human. She puts on clothes to appear human, expresses her urge to go ‘people-watching’, and, at the end, takes the skin of the earlier models to become seemingly human.
The Grey Box
As Caleb watches CCTV of Ava’s room, the idea of consciousness without sexuality is brought up. Caleb, of course, is suspicious that Nathan made Ava attractive as a sort of diversion tactic (using the analogy of a ‘magician’s hot assistant’). This is explored more in-depth as Nathan points out that there are no life forms, on any level, without sexuality or a need to reproduce. An AI without sexuality (or a ‘grey box’ as Caleb puts it) would operate in an entirely different manner to any living thing on the planet, and though it could be intelligent, it might not be conscious as we perceive it.
This is explored further in the concept of automatism. The film discusses whether sexuality is ‘nature or nurture’ – somewhere between random and logic-based – and analogizes it with a Jackson Pollock painting (above). The comparison is that Pollock had no plan: what he painted was not dictated by logic, but it wasn’t random either (similar to an improvised piece of music). This is something that a machine that is not ‘intelligent’ cannot do. Sexuality, as discussed in the film, is neither based on a ‘points-based system that you then cross-reference’ nor random, but instead sits in that space of automatism in which you simply do something because you do, not because you should or should not.
Consciousness Without Intelligence
Though not a major character, Kyoko (above right) poses some of the film’s more interesting questions. Though not intelligent, she has emotion, as shown when she stabs Nathan. She may not be intelligent, but she is still seemingly conscious. This, contrasted with Ava’s intelligence but lack of emotion, very subtly explores the idea that intelligence does not define consciousness, and that a machine could be defined as conscious without even the ability to communicate.
Ethical Uses Of AI
Kyoko (below) also raises the issue of the ethical uses of AI. Her inclusion explores the idea of AI enslavement: if you have created a being, do you own them, and do they, if they have no emotion, deserve rights? Throughout the film, Nathan sides with the idea that the machines he has created are his and, as they cannot feel, don’t need to be treated well.
Similar ideas are explored when Ava learns that she’s being tested and asks ‘why is it up to anyone to turn me off?’. This leads to another conversation between Nathan and Caleb in which they discuss what will happen to Ava after the test. Nathan says that her brain will be downloaded, reformatted, and have new code added, resulting in the loss of her memories. This explores the idea of who you really are. It suggests that memories make a person: though Ava’s body, basic framework, and functions will remain, her memories will be lost and, therefore, she will be ‘killed’.
When asked why he created Ava, Nathan replies ‘wouldn’t you if you could?’. Though brief, this interaction explores the ethical issues of the creation of AI and the idea that, if you could create a consciousness, should you, and if they don’t have emotion, should their living conditions factor into your answer?
Not Human, Not Conscious
A deleted scene looks at the idea that a consciousness entirely alien to ours may still be consciousness, and explores the notion that something does not have to be human to be defined as intelligent. Oscar Isaac (above right) said in an interview with Den of Geek:
“So in that scene, what used to happen is you’d see her talking, and you wouldn’t hear, but all of a sudden it would cut to her point of view. And her point of view is completely alien to ours. There’s no actual sound; you’d just see pulses and recognitions, and all sorts of crazy stuff, which conceptually is very interesting. It was that moment where you think, ‘Oh she was lying!’ But maybe not, because even though she still experiences differently, it doesn’t mean that it’s not consciousness. But I think ultimately that maybe it just didn’t work in the cut.”
Honestly, I’m glad that the scene did not make the cut, but it does look at many of the same questions, though in a less subtle fashion.
“Now I Am Become Death, Destroyer Of Worlds” – J. Robert Oppenheimer
For me, Nathan is the most interesting character in the film. He struggles throughout with what he has done, will do, and is capable of doing. At the very beginning of the film, his struggle with power is expressed in his misquoting of Caleb – ‘I am not a man, I am a God’. This introduces an interesting dynamic in which Nathan feels that he has too much power and responsibility, which leads to his chronic alcoholism: he cannot really function without effectively ‘handicapping’ his brain with alcohol.
It is hinted at, in the scene following the second power cut, that Nathan may have orchestrated the power cuts himself to see how Ava and Caleb would act unobserved and to test if Caleb is trustworthy or not.
Nathan’s struggle with the morals of his own actions is also shown as he quotes Oppenheimer. Both of the Oppenheimer quotes that are brought up are quoted, in turn, from Hindu scripture and explore Nathan’s belief that he may be creating beings only to kill them immediately after, causing him great guilt.
“In battle, in forest, at the precipice in the mountains, On the dark great sea, in the midst of javelins and arrows, in sleep, in confusion, in the depths of shame, the good deeds a man has done before defend him.”
Somewhere In Between
The final two points are less philosophy-based and look more at the technicality of building an AI and the idea of language being learned. First of all, in the below scene, Oscar Isaac’s Nathan shows Caleb that Ava does not use ‘hardware’ as such, but instead a substance that constantly shifts and changes, much like the human brain. This is a really interesting concept, as it overcomes the issue of a human-like brain being designed with an exclusively boolean data system (1 or 0, nothing in between).
Secondly, it is brought up in Ava’s first session that language exists from birth and that attaching words to meaning is all that is learned. This point interests me as it is the opposite of the approach that today’s chatbots take. Even simple machines today can easily put words to meaning, but no machine truly understands the meaning of what it says or types (relating back to Mary in the black and white room and the chess computer comparison).
I hope you liked this post and be sure to let me know what you think in the comments. Also, be sure to check out my sci-fi reviews at The Sci-Fi Critic, my review of Wes Anderson’s Isle Of Dogs, and my article on What Went Wrong With Annihilation.
Sources: Den of Geek