Intel Corp. has developed a system it claims will ensure that self-driving vehicles can’t cause accidents where they are at fault, in a bid to overcome public doubt about autonomous technology and help speed adoption of driverless cars on the road.
The world’s largest chipmaker is publishing a set of standards, based on mathematical formulas, that will govern the behaviour of robot cars and trucks. If adopted, Intel argues, they will bring certainty to questions of liability and blame in the event of an accident.
“Any useful autonomous vehicle is going to be involved in accidents,” said Dan Galves, a vice president of Mobileye, a maker of autonomous vehicle technology that Intel bought earlier this year. “One thing that is clear is that the public is going to be a lot less forgiving of accidents that are caused by machines.”
Intel is trying to come up with a framework that will help prevent the potential chaos of putting machine-driven vehicles and those piloted by unpredictable humans on the road at the same time, a necessary step on the path to a future where steering wheels become obsolete. The company has taken descriptions of the behaviour and circumstances involved in almost all accidents tracked by the U.S. National Highway Traffic Safety Administration and built mathematical models from them to define a measurable “safe state” for autonomous vehicles. The standards, if endorsed by the automotive industry, its suppliers and regulators, would also be the basis of software in the vehicles that makes sure the rules are followed.
To illustrate what Intel has in mind: under the guidelines, a robot vehicle would move past parked cars at a speed slow enough to ensure it could stop in time if a pedestrian suddenly stepped out into the road. That calculation is possible because the maximum speed at which a human can move is known and can be modelled, according to Intel. Similarly, computers can easily calculate the safe stopping distance to a vehicle in front and make sure the vehicle they’re piloting stays far enough back. If an aggressive human driver cut in front of the robot car and caused an accident, the standards would clearly show whose fault it was, even if the robot car rear-ended the other vehicle.
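The following-distance rule described above is simple enough to express in code. Below is a minimal sketch of that kind of worst-case calculation; the function name, parameter names and the numeric limits are illustrative assumptions for this article, not Intel’s published formulas:

```python
def min_safe_following_distance(v_rear, v_front, response_time,
                                a_max_accel, a_min_brake, a_max_brake):
    """Minimum gap (in metres) so the rear car can always stop in time.

    Assumes the worst case: during its response time the rear car keeps
    accelerating at a_max_accel, then brakes at only a_min_brake, while
    the front car brakes at its hardest possible rate, a_max_brake.
    Speeds are in m/s, accelerations in m/s^2.
    """
    # Speed the rear car could reach before it starts braking.
    v_rear_after = v_rear + response_time * a_max_accel
    # Total distance the rear car covers: response interval plus braking.
    d_rear = (v_rear * response_time
              + 0.5 * a_max_accel * response_time ** 2
              + v_rear_after ** 2 / (2 * a_min_brake))
    # Distance the front car covers while braking as hard as it can.
    d_front = v_front ** 2 / (2 * a_max_brake)
    # If the front car travels further than the rear car, no gap is needed.
    return max(d_rear - d_front, 0.0)

# Example: both cars at 20 m/s (~72 km/h), 0.5 s response time.
gap = min_safe_following_distance(20.0, 20.0, 0.5,
                                  a_max_accel=2.0,
                                  a_min_brake=4.0,
                                  a_max_brake=8.0)
print(round(gap, 3))  # prints 40.375
```

A vehicle maintaining at least this gap can, by construction, stop without a rear-end collision even in the worst braking scenario, which is the kind of provable “safe state” the article describes.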
Intel is arguing that the current path the industry is following won’t work or will take too long. Slow-moving, ultra-cautious vehicles are of limited use and aren’t that safe, because they clog the roads and don’t fit in with the flow of human-driven traffic. Attempts to prove that self-driving cars are safe by putting them on the road, having them learn from experience and then measuring how few accidents they have compared with human-driven vehicles are also ineffective, partly because any accident attracts huge amounts of public attention, Intel said.