A 2012 demonstration of the driver assistance system Mobileye (Baz Ratner/Reuters)

We should investigate the many ways hackers could disrupt self-driving cars before we begin deploying them.

Imagine this future scenario: Self-driving cars form an orderly procession down a highway, traveling at precisely the right following distance and speed. All the on-board computers cooperate and all the vehicles reach their destinations safely.

But what if one person jailbreaks her car, and tells her AI driver to go just a little faster than the other cars? As the aggressive car moves up on the other vehicles, their safety mechanisms kick in and they change lanes to get out of the way. It might lower the overall efficiency of the transportation system, but this one person would get ahead.

This is but one of many scenarios that Ryan Gerdes of Utah State University is exploring with a $1.2 million grant from the National Science Foundation to look at the security of the autonomous vehicle future.

"The designers of these systems essentially believe that all of the nodes or vehicles in the system want to cooperate, that they have the same goals," Gerdes said. "What happens if you don't follow the rules? In the academic theory that’s built up to prove things about this system, this hasn’t been considered."

While Google is out to create a fully autonomous vehicle some years into the future, the major carmakers are taking more incremental steps toward autonomy. Nissan, Volkswagen, Daimler, and others all have programs. Just this week, Cadillac announced that a 2017 model would include a "super cruise" feature allowing "hands-free" driving on highways.

The race to come out with self-driving technologies has drawn in regulators in several states, but it's hard to evaluate the claims of the carmakers or anyone else without independent analysis about the vehicles.

All the autonomous vehicle makers have downplayed security concerns. Chris Urmson, Google's self-driving car project lead, provided a reasonable, but largely boilerplate answer to a security question at an event earlier this year. "There is no silver bullet for security and we're taking a multilayered approach," Urmson said. "Obviously there is encryption and very narrow interfaces or no interfaces at all. You do your best to make your outside layer secure and then make your inside layer more secure."

To translate: Urmson is saying that they don't want hackers to get into any of the car's systems (the outer layer), but they also don't assume that no one will ever get in. So, the access to the controls of the car would be further quarantined from the other networked components that someone might gain access to.

But a straight-up hacking is not the only kind of threat that Gerdes is studying with his NSF grant money. "If you just look at traditional threats to a computer, you’re going to miss out on a lot bigger threats," he said.

What he's fascinated by is the way that bad actors could use the self-driving cars' algorithms against themselves. The algorithms that guide these cars—at least now—are fairly "deterministic" as he put it. A given set of inputs will yield the same outputs over and over. That makes them prone to manipulation by someone with knowledge of how they work. He can spin out scenario after scenario:

  • "What happens when you have two advanced cruise control vehicles and the one in front starts accelerating and breaking such that the one behind it starts doing the same thing in a more amplified fashion?"
  • "We’re looking at the collision avoidance systems. They rely on radar. We think we can manipulate radar sensors to some extent. Is it simple for an attacker to create an obstacle out of thin air?"
  • "Auto manufacturers always maintain the proper spacing in adaptive cruise control. You might get interesting effects if [someone] crafted certain inputs or misbehaved in a certain way so they create a very large traffic jam."
  • "If I’m a shipping company and I want to slow down the competition... I can take advantage of their sensors and keep making their cars brake and accelerate. We’ve already demonstrated in theory that it’s possible."

In all of these circumstances, Gerdes' team is trying to understand how the algorithms that guide autonomous vehicles could be exploited by hackers or other bad actors. They don't have access to the self-driving cars that carmakers are working on, so to test their ideas in the field, they're using BattleBots to stand in for full-size cars and trucks. They program the BattleBots using the algorithmic logic they imagine the car companies are using, based on published academic literature.

Because of the way the car companies work—building their specialized systems with components from large suppliers like Bosch—Gerdes' team can often get the core parts that make up the self-driving car systems.

"Experiments are really hard in this realm, but we think we have a decent analog," Gerdes told me. "We can accelerate a lot faster than most cars and they are also made for battle, so we can crash them together."

Obviously, everyone building autonomous vehicles has a major incentive to get the security issues right. But so do credit card companies and Target and Apple—and they have all experienced major problems with security over the last few years. And, Gerdes said, the traditional car companies have not inspired confidence in the security research community with some of their designs.

A 2010 paper found all kinds of security flaws in a modern automobile, including head-slappingly simple stuff like allowing the car's control system to be accessed through the radio controller. Install a hackable aftermarket radio, and a malicious entity could take control of the brakes.

"Why would you design a car to work like that?" Gerdes asked. "And these are the same people who are going to be making our automated vehicles?"

This post originally appeared on The Atlantic.
