The robots are heading straight towards us, and we are inviting them into our homes, workplaces and vehicles. Not surprisingly, because they make our lives more convenient, our work more efficient, our buildings more sustainable and our roads safer. But at the same time we need to think about how we maximise the upside and minimise any downside, and we must decide who gets to make these decisions and then implement them. My column highlighted a few examples where things weren’t thought through carefully enough, and the unhelpful and harmful consequences that followed.
Consider self-driving cars. There is no doubt that these will make our roads safer, and hopefully less congested. Obviously cars need to be coded to follow the rules of the road: stop at a red traffic light, indicate when turning, don’t exceed the speed limit and so on. Already you can see why self-driving cars will be safer than those steered by humans, given our tendency to argue that the rules don’t apply to us, or that we were tired, or that we just didn’t see the cyclist.
But what about outlier incidents where choices have to be made? Where the car, or the driver, has to choose the lesser of two evils in a split second, with only the information to hand and, in the human’s case, a moral compass.
One for the philosophers
There is a philosopher’s conundrum called the Trolley Problem that starkly highlights this. In brief, a runaway train is hurtling down a track, about to kill five people. You have to choose between doing nothing or pulling a lever that diverts the train to another track, where it will kill only one person. Things get more complex: would you push a fat man off a bridge into the path of the train to save everyone? What if the fat man was a criminal who had set the whole thing up in the first place?
Google ‘Moral Machine’ to get a chilling illustration of how this thought experiment applies directly to self-driving cars. The point of the puzzle is to expose the tension between our moral duty to prevent harm, which argues for pulling the lever so that fewer people die, and our moral duty not to do harm ourselves, which argues for standing by rather than directly killing someone through our own actions. Even the philosophers haven’t got it figured out, with some arguing for the option that results in the least harm, while others argue that what we deliberately do matters more than the outcome.
It’s a difficult one, isn’t it? And yet, as humans, we make these decisions all the time, and there are countless tales of heroism where humans deliberately sacrifice themselves to prevent others from being harmed. Somehow, though, it all changes when we need to start coding this thinking to teach the machines. For instance, a study showed that while in principle we agree that a car should be programmed to sacrifice its passengers rather than passers-by, we would not want to get into that car ourselves.
Let the humans decide
Giving control to individual drivers is one option. A team at the University of Bologna designed a switch that the driver can use to choose between full altruism and full self-preservation, with varying degrees in between. Critics are concerned, though, that if we all choose full self-preservation we’ll end up in some wild dystopia, and that if we hedge and choose a neutral setting we are not actually giving the machines anything to go on.
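To make the idea concrete, here is a minimal sketch of how such a preference dial might feed into a car’s decision logic. It is purely illustrative: the knob scale, the risk estimates and the candidate manoeuvres below are all hypothetical, and the Bologna proposal itself is a design concept rather than published code.

```python
# Hypothetical "ethical knob": one setting between full self-preservation (0.0)
# and full altruism (1.0) that weights harm to passengers against harm to others.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    passenger_risk: float   # estimated probability of serious harm to occupants
    bystander_risk: float   # estimated probability of serious harm to others

def choose_manoeuvre(options: list[Manoeuvre], knob: float) -> Manoeuvre:
    """Pick the option with the lowest weighted expected harm.

    knob = 0.0 ignores bystanders (full self-preservation);
    knob = 1.0 ignores the passengers (full altruism); 0.5 is neutral.
    """
    def weighted_harm(m: Manoeuvre) -> float:
        return (1.0 - knob) * m.passenger_risk + knob * m.bystander_risk
    return min(options, key=weighted_harm)

# Example: swerve into a barrier (risky for passengers) versus braking
# hard in lane (risky for a pedestrian ahead).
options = [
    Manoeuvre("swerve into barrier", passenger_risk=0.6, bystander_risk=0.05),
    Manoeuvre("brake in lane", passenger_risk=0.1, bystander_risk=0.7),
]
print(choose_manoeuvre(options, knob=0.2).name)  # self-preserving setting favours braking in lane
print(choose_manoeuvre(options, knob=0.8).name)  # altruistic setting favours swerving
```

Even in this toy version the critics’ worry shows up: at a neutral setting the choice can swing on tiny differences in the risk estimates, which is arguably no guidance at all.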
Let the cars decide
Another argument says we should do the opposite and give all the control and decision-making to the machines. That they see the world better than we do, with always-on, 360-degree environmental sensors; that they learn better by aggregating the data of millions, or billions, of other cars; and that machines can be programmed to protect humans – thank you, Isaac Asimov and your laws of robotics.
This thinking goes that, thanks to these advantages, self-driving cars will manage to avoid the extreme cases outlined in the Trolley Problem altogether, and that they will have the information, context and experience to handle less dramatic situations. It strikes me that this argument does have some merit, specifically the ability to learn, via deep neural networks, what the best behaviour is. This could, assuming there are more people who follow the rules of the road than not, prevent the machines from picking up our bad habits. It could also reduce the need to code for differing cultural characteristics.
I’m thinking here of the story of Korean Air that Malcolm Gladwell made popular. The airline was experiencing an unusual number of crashes, and it was discovered that, because of a cultural trait of deference, crew were using indirect language with each other during emergencies. Big airliners, however, need to be flown by a team of equals; they are not suited to extreme hierarchies. The solution was ingenious: all Korean Air flight crew now have to communicate in English, cutting the ties with their encoded deference.
If self-driving cars have access to the hive mind of all other automated cars − in their manufacturer’s stable, or in the world, depending on how the ecosystem plays out − these cultural tics, which do nothing to make roads safer, could potentially be overcome.
I’m reserving judgement on this point of view though, especially because we still need to fix systemic issues with the way core technology is being implemented. Take facial recognition and the appalling way Google’s technology only recognised white faces in photographs. Transfer that into self-driving cars and lives could be lost.
Ethical guidelines
Another approach is the one taken by the German government last year. Acknowledging that self-driving cars are generally safer, and that a government has a duty of care towards its citizens and so should get behind automotive automation, it released a set of 20 ethical guidelines. These include that humans come before animals and property; that all humans are equal, so there is no hierarchy of survival − a child, an adult, an elderly person, a jaywalker and a criminal all count the same; that vehicles should be protected from hacking; and that drivers must be identified and a black-box-style computer must store all information about the trip, while personal data must still be secured.
The German government also says that humans should have the final say in morally ambiguous situations, which is reassuring but problematic, as studies show that humans’ response times lag when they have not been in full control throughout the drive.
In addition, pure-play driverless vehicles are going to be here sooner than we think. Driverless taxis, with no remote human supervision, are being trialled in Colorado and Florida right now. As a word of warning, though − in March, Uber halted its trials after one of its cars hit and killed a pedestrian. The car was driving in autonomous mode, with a human safety driver on board. At the time of writing it was unclear what caused the tragic accident, but one can speculate that it is likely to be either mechanical or related to flawed decision-making by the car. However, for a change, pilot error is unlikely to be the cause.
Unfortunately I don’t know what the answer is, or whether there even is one yet, for either driverless cars or wider digitalisation. What I do know is that at this crossroads we should be thinking very hard about society, who we are and who we want to be. We should be careful about who decides and who implements those decisions. And above all else, we should demand complete openness and transparency. We might not be driving anymore, but we should still be steering in the direction we want to go.
AUTHOR | Kevin Phillips CA(SA) is CEO of IDU Group