What Is Intelligence? The Philosophy Of ‘Ex Machina’

By Sebastian Sheath

Warning: This post contains spoilers for Alex Garland’s Ex Machina (2014).

Ex Machina is probably my favourite film of all time. Not only is it visually impressive, suspenseful, and well constructed, but it also poses interesting questions for the viewer, potentially causing them to re-evaluate their morals. With an excellent, thought-provoking script from Alex Garland, author of The Beach and screenwriter of 28 Days Later, its characters are some of the most complex and interesting in cinema. This makes the film elaborate enough to be interesting on every rewatch, but not so much that a first viewing is confusing. Why does this film work so well? What questions and concepts does it pose, and how practical are they?

Nathan and Caleb
[Credit: Universal Pictures]

What IS True AI?

The nature of consciousness is one of the film’s biggest themes. The film centers entirely around the Turing Test that Caleb (Domhnall Gleeson) conducts to decide whether Nathan (Oscar Isaac) has truly created a conscious machine, and it delves into how to test whether Ava (Alicia Vikander) is truly intelligent or not.

The film discusses simulated AI versus true AI early on. In a real Turing Test, the person or machine that you are communicating with is hidden behind a wall, and the tester must decide whether they are talking with a human or a machine. This means that a machine can pass without being intelligent: it only needs to simulate a human response, without truly understanding the conversation. This is where Ex Machina differs from a normal Turing Test – the subject is shown to be a machine before the test begins, and the challenge is not to seem human but to prove that the machine is truly intelligent (and not simply imitating intelligence).
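To make the distinction concrete, here is a minimal Python sketch of the classic blind test. The `judge`, `human_reply`, and `machine_reply` callables are hypothetical stand-ins invented for illustration; nothing here comes from the film or any real system:

```python
import random

def blind_turing_test(judge, human_reply, machine_reply, questions):
    """Classic imitation game: the judge sees only text, never the speakers."""
    # Hide the identities: the judge receives two transcripts, labelled A and B.
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)
    transcripts = {
        label: [(q, reply(q)) for q in questions]
        for label, (_, reply) in zip("AB", respondents)
    }
    guess = judge(transcripts)  # the judge names the label it thinks is the machine
    machine_label = "A" if respondents[0][0] == "machine" else "B"
    # The machine "passes" when the judge cannot reliably single it out.
    return guess != machine_label
```

In Ex Machina the wall is removed: Caleb knows from the start which respondent is the machine, so the question shifts from imitation to genuine intelligence.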

Black And White
[Credit: Universal Pictures]
In a later scene, Caleb makes a comparison to testing a chess computer by playing chess. He also, in a session with Ava, brings up the ‘Mary in the black and white room’ thought experiment (a.k.a. ‘The Knowledge Argument’). Again, this explores the difference between simulated and true AI, as it looks at the difference between simply acting as programmed and understanding what you are doing and why. The chess computer comparison illustrates that, though a computer may be able to play a perfect game, it is simply carrying out programmed instructions. Unlike the chess computer or chatbot, true AI must be self-aware.
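A toy version of that point, sketched in Python: a ‘chess computer’ in the film’s sense is just a rule being applied. The `legal_moves` and `apply_move` callables here are hypothetical stand-ins for a real engine’s plumbing:

```python
# A fixed, programmed rule: greedily maximise material. The program follows
# the rule flawlessly while having no notion that it is "playing chess".
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material(board):
    # Sum piece values: positive for one side (uppercase letters),
    # negative for the other (lowercase letters).
    return sum(PIECE_VALUES[p.upper()] * (1 if p.isupper() else -1)
               for p in board if p.upper() in PIECE_VALUES)

def choose_move(board, legal_moves, apply_move):
    # Pick whichever legal move maximises the number. No understanding required.
    return max(legal_moves(board), key=lambda m: material(apply_move(board, m)))
```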

Ava’s self-awareness is displayed early on when she makes a joke and fires one of Caleb’s comments back at him. The repetition of one of Caleb’s comments shows not only that she has a true understanding of the conversation on a larger scale, but also that she is aware of herself and capable of manipulating herself and others. This self-awareness is also shown in Ava’s desire to be human. She puts on clothes to appear human, expresses her urge to go ‘people-watching’, and, at the end, takes the skin of the earlier models to become seemingly human.

The Grey Box

CCTV
[Credit: Universal Pictures]
As Caleb watches CCTV of Ava’s room, the idea of consciousness without sexuality is brought up. Caleb, of course, is suspicious that Nathan made Ava attractive as a sort of diversion tactic (using the analogy of a ‘magician’s hot assistant’). This is explored more in-depth as Nathan points out that there are no life forms, on any level, without sexuality or a need to reproduce. An AI without sexuality (or a ‘grey box’, as Caleb puts it) would operate in an entirely different manner to any living thing on the planet, and though it could be intelligent, it might not be conscious as we perceive it.

Pollock
[Credit: Universal Pictures]
This is explored further in the concept of automatism. The film discusses whether sexuality is ‘nature or nurture’, placing it somewhere between the random and the logic-based, and analogizes it with a Jackson Pollock painting (above). The comparison is that Pollock had no plan. What he painted was not dictated by logic, but it wasn’t random either (similar to an improvised piece of music). This is something that a machine that is not ‘intelligent’ cannot do. Sexuality, as discussed in the film, is neither based on a ‘points-based system that you then cross-reference’ nor random, but instead sits in that space of automatism in which you simply do something because you do, not because you should or should not.
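One way to picture the space the film is pointing at is a quick Python sketch. The preference weights are invented purely for illustration:

```python
import random

preferences = {"blue": 0.7, "green": 0.2, "red": 0.1}  # hypothetical dispositions

def points_based(prefs):
    # Pure logic: always pick the highest score. Utterly predictable.
    return max(prefs, key=prefs.get)

def pure_chance(prefs):
    # Pure randomness: every option equally likely. No preference at all.
    return random.choice(list(prefs))

def automatism(prefs):
    # The in-between the film describes: biased by disposition but never
    # fully determined by it, deliberate and not deliberate at the same time.
    return random.choices(list(prefs), weights=list(prefs.values()))[0]
```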

Consciousness Without Intelligence

Ava and Kyoko
[Credit: Universal Pictures]
Though not a major character, Kyoko (above right) poses some of the film’s more interesting questions. Though not intelligent, Kyoko has emotion, such as when she stabs Nathan. Though she may not be intelligent, she is still seemingly conscious. This, contrasted with Ava’s intelligence but apparent lack of emotion, very subtly explores the idea that intelligence does not define consciousness, and that a machine could be defined as conscious without even an ability to communicate.

Ethical Uses Of AI

Kyoko (below) also brings up the issue of ethical uses of AI. Her inclusion explores the idea of AI enslavement. If you have created a being, do you own them and do they, if they have no emotion, deserve rights? Throughout the film, Nathan sides with the idea that the machines that he has created are his and, as they cannot feel, don’t need to be treated well.

Kyoko
[Credit: Universal Pictures]
Similar ideas are explored when Ava learns that she’s being tested and she asks ‘why is it up to anyone to turn me off?’. This leads to another conversation between Nathan and Caleb in which they discuss what will happen to Ava after the test. Nathan says that her brain will be downloaded, reformatted, and new bits of code will be added, resulting in the loss of Ava’s memories. This explores the idea of who you really are. It suggests that memories make a person as, though Ava’s body and basic framework and functions will remain, her memories will be lost and, therefore, she will be ‘killed’.

When asked why he created Ava, Nathan replies ‘wouldn’t you if you could?’. Though brief, this interaction explores the ethical issues of the creation of AI and the idea that, if you could create a consciousness, should you, and if they don’t have emotion, should their living conditions factor into your answer?

Not Human, Not Conscious

Get Down Saturday Night
[Credit: Universal Pictures]
A deleted scene looks at the idea that a consciousness entirely alien to us may still be consciousness, and explores the notion that something does not have to be human to be defined as intelligent. Oscar Isaac (above right) said in an interview with Den of Geek:

“So in that scene, what used to happen is you’d see her talking, and you wouldn’t hear, but all of a sudden it would cut to her point of view. And her point of view is completely alien to ours. There’s no actual sound; you’d just see pulses and recognitions, and all sorts of crazy stuff, which conceptually is very interesting. It was that moment where you think, ‘Oh she was lying!’ But maybe not, because even though she still experiences differently, it doesn’t mean that it’s not consciousness. But I think ultimately that maybe it just didn’t work in the cut.”

Honestly, I’m glad that the scene did not make the cut, but it does look at many of the same questions, though in a less subtle fashion.

“Now I Am Become Death, Destroyer Of Worlds” – J. Robert Oppenheimer

Nathan
[Credit: Universal Pictures]
For me, Nathan is the most interesting character in the film. He struggles throughout with what he has done, will do, and is capable of doing. At the very beginning of the film, his struggle with power is expressed in his misquoting of Caleb – ‘I am not a man, I am a God’. This introduces an interesting dynamic in which Nathan feels that he has too much power and responsibility, which leads to his chronic alcoholism, as he cannot really function without effectively ‘handicapping’ his brain with alcohol.

It is hinted at, in the scene following the second power cut, that Nathan may have orchestrated the power cuts himself to see how Ava and Caleb would act unobserved and to test if Caleb is trustworthy or not.


Nathan’s struggle with the morals of his own actions is also shown as he quotes Oppenheimer. Both of the Oppenheimer quotes that are brought up are, in turn, quotations from Hindu scripture, and they explore Nathan’s belief that he may be creating beings only to kill them immediately after, causing him great guilt.

“In battle, in forest, at the precipice in the mountains, On the dark great sea, in the midst of javelins and arrows, in sleep, in confusion, in the depths of shame, the good deeds a man has done before defend him.”

Somewhere In Between

The final two points are less philosophy-based and look more at the technicality of building an AI and the idea of language being learned. First of all, in the below scene, Oscar Isaac’s Nathan shows Caleb that Ava does not use ‘hardware’ as such, but instead a substance that constantly shifts and changes, much like the human brain. This is a really interesting concept, as it overcomes the issue of a human-like brain being designed on an exclusively boolean data system (1 or 0, nothing in between).

Wetware
Wetware [Credit: Universal Pictures]
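As a loose illustration of why that matters, here is a Python contrast between a hard boolean gate and a continuously shifting, brain-like unit. The sigmoid stands in for the shifting substance purely for illustration; the film never specifies any mathematics:

```python
import math

def boolean_gate(x, threshold=0.5):
    # Conventional digital hardware: the output is exactly 0 or 1, nothing between.
    return 1 if x >= threshold else 0

def graded_unit(x, steepness=4.0):
    # A continuous unit: the output shifts smoothly through every value
    # between 0 and 1 as the input changes, more like a biological neuron.
    return 1.0 / (1.0 + math.exp(-steepness * (x - 0.5)))

for x in (0.2, 0.49, 0.51, 0.8):
    print(x, boolean_gate(x), round(graded_unit(x), 3))
```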
Secondly, it is brought up in Ava’s first session that language exists from birth and that attaching words to meaning is all that is learned. This point interests me as it is the opposite of the approach that today’s chatbots take. Even simple machines today can easily put words to meaning, but no machine truly understands the meaning of what it may say or type (relating back to Mary in the black and white room and the chess computer comparison).
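The gap between mapping words and understanding them is easy to demonstrate. This ELIZA-style sketch (the rules are invented for illustration) attaches responses to keywords with no model of what any word means:

```python
# Surface pattern matching: keywords trigger canned responses.
RULES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you think you feel sad?",
    "dream": "What does that dream suggest to you?",
}

def reply(utterance):
    for keyword, response in RULES.items():
        if keyword in utterance.lower():
            return response  # matched a surface pattern, nothing more
    return "Please, go on."  # default when no keyword matches

print(reply("I had a strange dream last night."))
```

The program can hold up its end of a short conversation while understanding precisely nothing, which is exactly the distinction the chess computer comparison draws.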

I hope you liked this post and be sure to let me know what you think in the comments. Also, be sure to check out my sci-fi reviews at The Sci-Fi Critic, my review of Wes Anderson’s Isle Of Dogs, and my article on What Went Wrong With Annihilation.

Sources: Den of Geek

6 thoughts on “What Is Intelligence? The Philosophy Of ‘Ex Machina’”

  1. I recently stumbled (again) on this debate. We used to talk about it in SL a lot, a decade ago, and it was part and parcel of the EXTRO usenet group in the 90s, and it remains a compelling concern today, probably more so considering advances in self-learning algorithms. Just the other day it also came up on the reddits, and here was my response (slightly edited):

    Intelligence is an unpleasantly confusing concept. It’s a clumsy word. It’s so clumsy you’d literally be unable to define it in a way that’s even remotely unambiguous. Similar words and phrases are “having a soul”, empathic, self-aware, conscious, sapient, sentient, intuitive, creative, “problem solving”, smart, “having cognition”, “having ambitions”, “having desires”, “being able to want” and many more.

    There’s the distinct possibility that we as humans are a lot less sentient than we assume we are. It may be that our ability to process meaning is a lot more narrow and less universal than we assume. It may be that we are in essence unable to come to terms with the concept of intelligence (or all of the above), define it, properly parse it and replicate it outside the human mind, and then make predictions about it when it does emerge in the world.

    I am concluding that what we make may or may not be labelled as “smart” or “intelligent”, but it sure as hell will be capable of doing things potentially unimaginable, plausibly more unpredictable, and thus more frightening, than humans. Intelligence may be a vague conceptual field that overlaps with all the other equally nebulous blotches of concepts above. My point is that once we make what we’d call AI, it can be a bunch of different blotches that are quite well capable of functioning meaningfully and independently in the real world. These things would provide services, make money and solve problems, if they “wanted” (again, a nebulous concept) it.

    My take is that human intelligence is so specialized that it has massive emergent prejudices, narrow applications, and thus lots of stuff it can’t do that engineered, algorithmic, cobbled-together, or synthetically evolved functions or machines would be frighteningly good at.

    These things would very well be capable of doing things previously regarded as impossible, and we as humans would be confronted painfully with our own strikingly non-universal intelligence: we’d be completely incapable of understanding how these things do what they do, even without these things being super-intelligent. It may turn out that even fairly early steps in AI produce wondrous machine minds doing positively inexplicable things, achieving inexplicably wondrous results.

    And even though there would be some long-term problems with “utterly unempathic” superhuman intelligences, we’d be in major hurt well before that, as people owning these new devices would then be able to

    * circumvent laws
    * lobby highly effectively in government
    * avoid (paying) taxes
    * create technologies and devices unimaginable before
    * engage in various forms of violence
    * manipulate human minds very effectively, and on a large scale

    That sweaty asshole Mark Zuckerberg became a billionaire with a stupid ‘facebook’ that got wind-tunnelled into the biggest espionage monstrosity humanity has ever seen. In a decade. Facebook is now so important a monstrosity that it has completely escaped the ability of politicians, people, the law, science, democracy, the economy etc. to control it. And that’s just early days. There will be many more such monstrosities, and their emergence will be quicker and quicker. Eventually you will have apps (tools, algorithms, machines, whatnot) that emerge in months, sweep the human sphere in days and have unspeakable impact on our world.

    It could get ugly, especially when ugly people retain exclusive rights to command these Djinn to unilaterally do their bidding.

