The setup is simple: a runaway trolley hurtles down a slope towards five people tied to the track who cannot escape. The only thing that can save them is to pull a lever that diverts the trolley onto another track, causing the death of one person. What should the person at the lever do? Does it make more sense to save five human lives than one? Should they act the same way knowing that the five people who could be saved are condemned for terrible crimes? Who has the authority to make decisions of such magnitude? How much is a human life worth?
The Trolley Dilemma 1 is well known in the fields of philosophy and ethics because it poses an extreme situation that confronts many of the most sensitive areas of our values. Making a decision that definitively affects the life of another human being places us at a point of extreme difficulty, where everything we have learned as individuals, and everything humanity has built over centuries about the value of life itself, weighs on us at once. Today, the Trolley Dilemma has unexpectedly experienced an explosion of popularity linked to the digital revolution and artificial intelligence.
Self-driving cars are a reality awaiting only technological maturity and legislative reform before they fill the streets and highways of the whole world. Vehicles that, given destination coordinates, transport their passengers from a starting point to an end point following data provided by thousands of connected devices: orbital satellites, proximity sensors, traffic cameras, weather stations. This enormous amount of data is processed in timescales unthinkable for a human being and directs the car's behavior in any eventuality. So far, so good. But what happens the moment this artificial intelligence must make a decision in the face of imminent danger? It is easy to imagine the following situation: the autonomous vehicle's brakes have failed and a fatal collision is unavoidable; inside the car there are two passengers, and ahead a person is crossing the road with a baby. Should the artificial intelligence save the lives of its vehicle's occupants? Or, conversely, should it try to save the lives that potentially have more future ahead of them? What if the people crossing the street are two young medical researchers? Are their lives worth more than, say, those of two elderly retirees? The Moral Machine experiment 2, carried out by a research group at the MIT Media Lab, proposes a dizzying number of such variations that all arrive at the same point: machines making life-and-death decisions. The challenge is especially pertinent because the scientific community no longer asks whether a day will come when a machine decides the future of human lives; the only questions are when that day will come and whether we will be sufficiently prepared.
Literature gave a precocious, brilliant and creative response to managing the relationship between people and intelligent machines through Isaac Asimov's "Three Laws of Robotics" 3. The proposal was as simple as it was effective: i. A robot may not harm a human being, whether by action or by inaction; ii. A robot must obey the orders of human beings unless they conflict with the First Law; iii. A robot must protect its own existence as long as this does not conflict with the First or Second Law. However, these laws enter into fatal conflict in situations where every option fails, as in the Trolley Dilemma. Often, reactions tied to ethical and moral instinct are shaped by the specific circumstances of a particular moment, with millions of variables in play.
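The deadlock can be made concrete in a few lines of code. This is a minimal sketch, with all names hypothetical, that treats the First Law as a filter over the available actions; in a trolley-style dilemma every action harms someone, so the filter rejects them all and the laws offer no answer.

```python
def violates_first_law(action):
    """First Law: a robot may not injure a human being or,
    through inaction, allow a human being to come to harm."""
    return action["humans_harmed"] > 0

def permissible_actions(actions):
    """Return only the actions that pass the First Law filter."""
    return [a for a in actions if not violates_first_law(a)]

# A trolley-style dilemma: both available options harm humans.
trolley_options = [
    {"name": "stay on track", "humans_harmed": 5},
    {"name": "pull the lever", "humans_harmed": 1},
]

print(permissible_actions(trolley_options))  # -> [] : no permissible action
```

The empty result is the whole point: a rule system built on "never harm" simply halts when harm is unavoidable, which is exactly why the Trolley Dilemma defeats the Three Laws.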
This is just the beginning of a complex and risky journey in research on ethics and artificial intelligence. Following the trail of Asimov's work and the great literary and audiovisual science-fiction creations of the twentieth century, the most widespread image of the future of artificial intelligence is that of the humanoid machine capable of making its own decisions. Robots. Maria 4. R2D2 and C3PO 5. The Terminator 6. Short Circuit 7. Tomoko 8. Nothing to do with the reality we are already living. The traditional boundaries that delimit the communicative spaces between us and our interlocutors will melt away until they disappear; and in the narrative field, the leap proposed by André Bazin 9, locating the narrative margins outside the rectangle in search of true continuity, can be surpassed to unimaginable limits.
We are heading towards a world where the rectangle will lose its long reign as the container of information. We have lived under the control of this geometric form for centuries. Rectangular artworks, rectangular books, rectangular posters, rectangular cinema screens, rectangular computers, rectangular mobile phones. Most of the textual, visual or audiovisual supports we use share the same form. Virtual reality and, especially, augmented reality will radically change the way we visualize information, and in a very short time. Digital content supports are ready to surround us, breaking the barriers of right angles to merge with the physical world or move into a new dimensionality of spacetime: from a simple weather forecast, to a report on an armed conflict, to a scientific narrative about the corners of the universe. In this new digital universe, where distances do not exist and physical presence is not relevant, human-shaped metal robots become antiques even before being created. Artificial intelligences take unexpected forms in this imminent future; their form may even be non-form.
Beyond the domain of the visual, the future also presents itself as auditory and vocal. The film Her 10 used the romantic relationship between a man and his digital voice assistant to starkly visualize the impact of humanizing artificial intelligence devices. No physical entity or space is needed to interact in a human way with a digitally intelligent existence. The seemingly amusing conversations of children with virtual assistants like Siri or Alexa become genuinely disturbing when we realize that their ears, microphones installed in the mobile phone, the smart TV or the digital home assistant, are constantly listening to our conversations, filtering the most relevant data, learning our behaviors and creating patterns in order to anticipate our desires and needs.
Artificial intelligence will decisively influence the lives of the billions of inhabitants of the planet over the coming decades. Machines can learn, improve and make decisions, but the code through which they act will be, for the time being, written by humans. Artistic sensibility can play an especially relevant role at this beginning of the non-human intelligence revolution. The ethical and moral values that will define the behavior of these new intelligences depend on the people who create, design and program them, not on the machines themselves. The capacity to generate critical thinking, empathy, tolerance, respect and, ultimately, the humanity of that which is not human will only be possible if artists today accept the challenge of making visible the questions that no other area of society dares to raise.
1 Philippa Foot, "The Problem of Abortion and the Doctrine of the Double Effect," in Virtues and Vices (Oxford Review, 5, 1967).
2 MIT Media Lab, The Moral Machine, http://moralmachine.mit.edu/
3 Isaac Asimov, I, Robot (Gnome Press, 1950).
4 Fritz Lang, Metropolis (Universum Film AG, 1927).
5 George Lucas, Star Wars (Lucasfilm, 1977).
6 James Cameron, The Terminator (Hemdale Film Corporation, Pacific Western Productions, 1984).
7 John Badham, Short Circuit (The Turman-Foster Company, 1986).
8 Liu Cixin, The Three-Body Problem; 三体 (Chongqing Press, 2008).
9 André Bazin, Qu'est-ce que le cinéma? (Les Éditions du Cerf, 1958).
10 Spike Jonze, Her (Annapurna Pictures, 2013).