It will soon be easier for a self-driving car to hide in plain sight. The rooftop lidar sensors that currently mark out many of them are likely to get smaller. Mercedes vehicles with the new, partially automated Drive Pilot system, which tucks its lidar sensor behind the car’s front grille, are already visually indistinguishable from ordinary human-driven vehicles.
Is this a good thing? As part of our Driverless Futures project at University College London, my colleagues and I recently completed the largest and most comprehensive survey of citizens’ attitudes toward self-driving vehicles and the rules of the road. After more than 50 in-depth interviews with experts, one of the questions we decided to ask was whether autonomous cars should be labeled. The consensus from our sample of 4,800 UK citizens is clear: 87% agree with the statement “If a vehicle drives itself, it should be clear to other road users” (only 4% disagree, with the rest unsure).
We also sent a similar survey to a smaller group of experts. They were less unanimous: 44% agreed and 28% disagreed that a vehicle’s status should be disclosed. The question is not straightforward, and both sides have valid arguments.
We could argue that, in principle, humans should know when they are interacting with robots. That argument was made in a 2017 report commissioned by the UK’s Engineering and Physical Sciences Research Council. “Robots are manufactured artefacts,” it says. “They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent.” If self-driving cars are genuinely being tested on public roads, other road users could be considered subjects of that experiment and should be given something like informed consent. Another argument in favor of labeling is a practical one: as with a car driven by a student driver, it is safer to give a wide berth to a vehicle that may not behave like one driven by a well-practiced human.
There are also arguments against labeling. A label could be seen as an abdication of innovators’ responsibilities, implying that it is up to everyone else to accept and accommodate the self-driving vehicle. And it can be argued that a new label, without a clear shared understanding of the technology’s limitations, would only add to the confusion on roads that are already full of distractions.
From a scientific point of view, labels also affect data collection. If a self-driving car is learning to drive, and other people know this and behave differently, this could contaminate the data it collects. Something similar seemed to be on the mind of a Volvo executive who told a reporter in 2016 that, “just to be on the safe side,” the company would use unmarked cars for its proposed self-driving trial on UK roads. “I’m pretty sure that people will challenge them if they are marked, by doing really harsh braking in front of a self-driving car or putting themselves in the way,” he said.
On balance, the arguments for labeling are, at least in the short term, more persuasive. This debate is about more than just self-driving cars. It cuts to the heart of the question of how novel technologies should be regulated. Developers of emerging technologies, who often portray them as disruptive and world-changing at first, are all too keen to paint them as merely incremental and unproblematic once regulators come knocking. But novel technologies do not simply fit into the world as it is. They reshape the world. If we want to understand their benefits and make good decisions about their risks, we need to be honest about them.
To better understand and manage the deployment of autonomous cars, we need to dispel the notion that computers will drive just as humans do, but better. Management professor Ajay Agrawal, for example, argues that self-driving cars basically do what drivers do, but more efficiently: the data comes in, we process it with our monkey brains, and then we take actions, and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate.