I am currently enjoying an unexpected extended holiday from work, being between jobs and waiting for a visa to start the new one, so I have had plenty of time to immerse myself in popular culture to keep occupied. I have frequented the cinema, watched a lot of TV, read many books, listened to the radio (mainly Classic FM, however well you think that fits ‘popular culture’, though it is fairly mainstream as classical music goes), and played computer games.
Ex Machina: Ethical Questions
The first film I’ve seen at the cinema that has really caused some serious thinking is Ex Machina. In a theology-of-film kind of way, it raised big ethical questions. I should probably warn about spoilers at this point – so if you are planning on seeing the film, which you absolutely should, come back and read this later. It won’t spoil anything to say that it is about Artificial Intelligence, or AI; that much is clear from the adverts. But what actually happened had us talking about it all the way home.
So, a brief summary of the plot before the issues that got us talking (don’t say I haven’t warned you about spoilers): Caleb wins a competition, goes to the fortress-like estate of Nathan, head of the “Bluebook” corporation, and is asked to conduct the Turing test on an AI named Ava. She passes, and with Caleb’s help (gained by manipulating him) escapes, killing Nathan in the process.
First, is the film a critique of Google? The easy answer is obviously yes. The organisation sounds very similar to it when first described – a search engine that has branched out into other technologies. It is revealed later that it collects data without people knowing, including from the cameras and microphones on their phones. Between the data collection and the development of AI, the parallels are hard to miss (I’m thinking of the electronic ‘dog’ Google invested in that can walk on any terrain). It got us talking about the sorts of data that can be – and probably are – collected about us. We concluded that there are cases where we would very much like our data to be available – for example, our GPS position from our mobile phone if we were kidnapped – but we also talked about how easy it may be for hackers to access webcams and microphones. (There is another good drama about trolls hacking webcams, microphones, mobiles and so on, called Cyberbully, which was on Channel 4.) Nathan uses Caleb’s data to make the AI Ava more suited to him.
Then the film plays with what you think you know about yourself. At one point, Caleb begins to question whether he is the AI being tested, and cuts his arm open to see if he is the machine. We both also questioned whether Nathan was the AI and it was a blind test, which turns out not to be the case. His housemaid does turn out to be an AI, however – this is obvious from fairly early on – and she turns against Nathan in the end too.
And now for the really big question. The culmination of the film sees Ava released into the big bad world, having murdered Nathan and left Caleb locked in Nathan’s room. We spent most of the journey home talking about whether Ava had any sort of sense of morality, whether she would feel guilty about killing Nathan, whether she would return for Caleb or if she was just simulating liking him in order to escape. This led to a further discussion about survival of the fittest, and whether Ava would be able to survive. So, Ava might have consciousness, she might have passed the Turing test, but does she have conscience? We think not, and perhaps this is what will continue to separate humans from AI.