Monday, April 17, 2017

"Ex Machina" ~ Embracing the Experiment

Note: This article contains spoilers for the entirety of the film. If you do not want to be spoiled, please do not read on.

The concept of artificial intelligence has been one of my favourite topics since I was old enough to create stories and world-build. One of my earliest attempts at writing covered that beloved old saw of science fiction: what if artificially intelligent beings really did care about us? What if they had motivations beyond their programming? What if they could be, all things considered, human?

The 2015 flick Ex Machina is built heavily on that concept, and by the end of the first act it seems to be leading us toward that beautiful fairy-tale notion: the idea that a robot could "grow a soul" and have desires, yearnings, needs, wants, and loves. We see Ava -- what a name for a pretty AI girl trapped underground who seems only to want to escape -- looking quite human, standing at a crosswalk in an inspiring bright light, watching the world go by as she had said early in the film she wished she could.

And in that moment we feel something. We feel her humanity. We feel that freedom she yearned for, that her fellow experiments tore themselves apart for.

But more importantly in that moment... we, like Caleb, become the subject of the experiment. And we were fooled.

Back in my post about morality in video games, I discussed the true nature of artificial intelligence. While it's a lovely suspension of disbelief to think of computers and androids with souls, that is far more science fantasy than science fiction. This even applies to harder stuff like Black Mirror, where AI tends to be the victim of human whim. It's key to remember here that that series is actually a subversion of sci-fi -- where by-the-book science fiction is man vs. technology, Black Mirror is man, pure and simple, as reflected in technology.

So what, then, is the real nature of artificial intelligence? It's actually quite boring: AI is information fed into a computer and then given the ability to progress down a decision tree using that information at the speed of human thought or better. And in that respect, we work with artificially intelligent machines every day. We have to or we'd never get anything done. AI is not a near-future development; it's in your pocket. It's in your video games. It's literally everywhere. And all it is, is the ability to run through a set series of solutions to a problem very, very quickly until the computer smacks into the one that works.
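For the curious, that "smacks into the one that works" loop really is a few lines of code. Everything in this toy sketch -- the moves, the goal, the numbers -- is invented for illustration; it's the shape of the search, not any real system:

```python
# A toy sketch of "run through solutions until one works."
# The moves and the goal here are invented for illustration.
from collections import deque

def solve(start, goal, moves, limit=1000):
    """Breadth-first search: try every programmed move, in order,
    until some sequence of them reaches the goal (or we give up)."""
    queue = deque([(start, [])])
    seen = {start}
    explored = 0
    while queue and explored < limit:
        state, path = queue.popleft()
        explored += 1
        if state == goal:
            return path
        for name, move in moves:
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None  # nothing in the programmed move set gets there

# The computer's entire "creativity" is this list the programmer wrote:
moves = [("add3", lambda x: x + 3), ("add5", lambda x: x + 5)]
print(solve(0, 10, moves))  # ['add5', 'add5']
```

The program never "understands" the problem; it just exhausts the options its programmer handed it, faster than we ever could.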

What most people are hoping for is what we might call Artificial Intuition: the ability to make a decision based not on dashing through everything it knows in order, but on observation. It's the problem with things like autonomous cars, where the vehicle might have to choose between killing the passenger and killing a pedestrian. Without some as-yet-undiscovered element of programming, a computer cannot simply look at a problem and, like a human, go, "Oh, I've dealt with this before, I should do this." Even our buddy Watson operates via Artificial Intelligence -- not Artificial Intuition. And even that intelligence is limited.

In the aforementioned video game article, I noted that even morality-driven games like Undertale and Papers, Please are driven not by a universal morality, but by the ethics of the programmer. Even "mercy good, killing bad," which seems quite universal, came from the mind of one person and was programmed into the game in question. And that is the key thing to remember about artificial intelligence -- the computer cannot think or do anything that was not programmed into it. Even if it learns new words and concepts from other people, its core programming has been arranged a certain way. Think of it like those irritating "invisible walls" in video games that don't let you explore areas the programmer doesn't want you in.
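To make that concrete, here's a deliberately crude sketch of a "morality system." The actions and point values are made up for this example (not taken from Undertale or Papers, Please), but the shape is the point: the game's entire moral universe is a table one person typed in:

```python
# A deliberately crude "morality system." The actions and weights are
# invented for this sketch; no real game is being quoted here.
MORALITY = {
    "spare": +10,   # "mercy good"
    "heal":  +5,
    "fight": -10,   # "killing bad"
}

def judge(actions):
    """Sum the author's hand-assigned weights. Anything not in the
    table is an invisible wall: the game has no opinion on it at all."""
    return sum(MORALITY.get(action, 0) for action in actions)

print(judge(["spare", "heal"]))    # 15
print(judge(["fight", "fight"]))   # -20
print(judge(["write_a_poem"]))     # 0 -- outside the programmed walls
```

However the player behaves, the judgment that comes back is the programmer's, frozen in place before the game ever shipped.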

Now let's talk about Nathan's experiments.

Nathan, despite playing the role of a Silicon Valley beer-guzzling robot-fucking tech-bro to the hilt, is not a fool. We know this. He's a genius. A genius strung up by his own hubris, but a genius nonetheless. We know by the third act that every aspect of Caleb's visit was planned, and we get slow-burn indications of just how deep Nathan's experiment goes.

To truly understand what he's up to, I have to take my NASA brat hat off and put my writer hat on. While an underground bunker with selectively locked doors is a perfect sci-fi setting, it's also a perfect model of Nathan's psyche. I'm referring to my old fave, Jung's concept that houses are representative of the owner's mind. Special rooms Caleb is allowed into and out of, dark secrets Caleb can only access when Nathan is blind drunk... it's an extremely basic, but well executed, example of the trope.

And it also serves to remind us that we, for the entirety of this movie, are in Nathan's World. Everything here is controlled by him. Everything we see, we see because he's decided we're allowed to see it. Everything. Even Ava. Even -- and a lot of people aren't going to like this -- the tragic AIs that came before her.

We learn late in the film that Ava's appearance, her kindness to Caleb, her flirtiness, were all programmed into her. Caleb was bait, and Nathan's test was to see if Ava could reason her way out of a trap, while simultaneously appearing human to Caleb. But here's the thing. One of the core points of this movie is that Nathan is in charge. That everything is under his watchful eye. And yet we're seeing that Ava and his previous projects all desire freedom. How can that be?

One potential reading is that this is a sign that Nathan is not as in control as he thinks, and that even his control is not absolute. That there are some things that he cannot account for. But then why set up the big reveal of the movie to be that absolutely everything was under his thumb?

The more likely reading is that Nathan programmed in the desire for freedom.

Why, though? Why do that to himself? In-world, there are a few potential reasons. It could have been to give him something to test -- knowing that test subjects are more motivated by a want or a need, and thus realizing these AIs would have to want something in order to give good results. Hell, it could have been a kink. We already know he only makes "operational" pretty fembots. Maybe he's just that big a sleaze and wants a girl to control. Or maybe he wants them to escape, to set into motion the AI-fueled future he described to Caleb.

On a metatextual level, though, we have to remember what I said before. AI contains the morality of the programmer. Ava and Kyoko and the others reflect Nathan, his beliefs, his needs and wants. They are equipped with his way of thinking. Perhaps, once we scratch the surface, we're actually seeing the story of a man desperate for an escape. From what? We'll never know, as he's been rather conveniently silenced.

Now that we have that laid out, let's go back to the movie's old buddy, the Turing Test. And let's remember what Nathan and Caleb keep on coming back to: the true subject of the Turing Test.

As stated in the movie, the Test doesn't determine whether or not a computer is "like a human." Rather, it tests whether the computer operates in such a way that a human believes they're speaking to another human. The human is the mouse in the maze. Not the computer. And even though we have this distinction drawn for us throughout the movie, we forget it until the truth of Nathan's test becomes clear.

The test in the movie, as we learn, was never to see if Ava was a convincing human. It was, to put it bluntly, to see if Caleb could be tricked into thinking that Ava somehow has a magical robot soul. Did it work? Yes. And it worked on us, too.

Think of the final span of the movie. Think of Ava and Kyoko as they take down the creator who trapped them. Think of Ava pensively examining the old robots, creating a human form for herself. Think of her with her flowing hair, looking like a "real girl," stepping out into the real world, standing at the crosswalk with all of life and the real world ahead of her, free from her bunker, free to explore.

And then remember that there is absolutely nothing in her computer brain that wasn't put there by Nathan.

Remember that we spent an entire movie being led slowly down a path that ended in the realisation that everything is under Nathan's control. Even Nathan's death was under his control -- he did, after all, create robots whose prime directive was Freedom. Just because Ava killed him doesn't mean she was operating outside his programming; it just means that his morality, as represented in her programming, dictates that it's okay to sacrifice others to get what you want. (Which, from what we've seen of him, is not at all surprising.)

If your heart went out to Ava as she experienced the human world in human form? Congratulations. She worked on you. As we witness Caleb being tested, we miss the very real fact that we are the test subjects. The fact that, even after knowing her true nature, many of us are willing to put aside everything we've just been told -- just as Caleb did -- and give in to the fantasy of her potential humanity.

And if you were a little bit afraid? You'll probably survive longer than I will when the robot uprising happens.