Engineers have proven they can create cars and trucks that move without drivers at the controls, but confidence in early autonomous vehicles has been shaken in the wake of some high-profile collisions.
The “driver” of a Tesla running on its semi-autonomous controls died when the car slammed into the side of a white trailer; he was reportedly watching a movie at the time. An autonomous Uber SUV killed an Arizona pedestrian while the backup driver was streaming an episode of The Voice on her phone. In a traditional context, both situations would be described as distracted driving, but the vehicles themselves had been fitted with different levels of sensors and automated systems meant to avoid crashes.
Maybe that’s one reason people were quick to leap out of the way of an autonomous shuttle as it rolled up and down a laneway during the recent Movin’On summit on sustainable mobility.
“There are some trust kind of questions,” says Hadi Zablit, senior vice-president of an alliance between Renault, Nissan, and Mitsubishi. “Maybe we need to position this more at the expectation level – what is the definition of safety?”
It seems like an obvious thing to define. Safe drivers don’t hit things, hurt people, or run off the road. But the issue is more complex than that.
While autonomous systems need to perform as well as a very good driver, they won’t be perfect, Zablit explained during a Movin’On panel discussion on the question of trust. “I would not say a ‘flawless’ driver.” Instead, he said, it’s about finding technology that not only anticipates what others might do, but can also operate on roads where some drivers comply with established rules and others don’t.
Expecting zero accidents in that context could be unrealistic, he said.
While there’s been plenty of progress in the underlying artificial intelligence (AI), there are limitations, said Pierre Schaeffer, chief marketing officer of technology company Thales Group.
“Today the big recent hype in AI is very much based on what you would call data-based AI,” he said. “It’s great at sensing, it’s not great at sense making.”
In other words, artificial intelligence can spot a threat, but it struggles to learn the ethics of what it should do in response.
To compound matters, those who own such vehicles might not understand why the vehicles do what they do.
“We don’t operate well as an empathetic society,” said Boston urban planner Kristopher Carter. People don’t always accept that the need to move another vehicle may be more important than their own trip.
“The original design point of the car system as we know it today is the individual and individual desire, and that doesn’t bear well to an easy path toward autonomy,” Schaeffer said. It’s one of the reasons that he believes early autonomous vehicle applications will involve trains.
In the meantime, can trust in autonomous vehicles be restored? Those inside such vehicles, for example, need to feel safe but also know the system is assertive enough to move through heavy traffic, Zablit said. “You don’t want your passenger to be focused on monitoring the car. You want him to enjoy his ride.”