It’s time to talk about the lesser-known savior of the autonomous car: the sensor suite. Read on to find out more about just how many eyes are included with a driving AI.
We’re so close to having self-driving cars on the market. Of course, the closer we get to a robot-designated driver, the more flaws we find in that approach.
Don’t get me wrong–everything has flaws, and we’re bound to see them in any fancy, world-changing tech. It’s just that the flaws around the idea of putting an AI in the driver’s seat are, well, pretty darn interesting.
Have you ever caught yourself thinking about what hackers will do to self-driving cars? Beyond the AI simply failing to avoid an accident, hacking seems like the biggest danger. Yet, how would it happen?
As it turns out, there are a few ways.
For starters, don’t get too worried. The complexity of these self-driving systems is immense, and developers have all kinds of tools to keep the AI from getting fooled. Mostly this comes in the form of sensors. Many, many glorious sensors.
So if someone wants to hack MY self-driving car, they’ll have to contend with a bunch of moving parts, since many components make up its computer vision abilities. An attack is unlikely, but still possible, and fooling the sensors is probably the best vector.
But let’s not get ahead of ourselves. First, we need to talk about the sensors themselves.
Autonomous Cars Confused by Edge Cases
Self-driving cars have more sensors than you might think. Actually, it’s easier to say that whatever sensors you can think of are probably integrated into a self-driving car.
The reasons are many, but one good one has to do with the above image. What you’re seeing is a decal applied to the back of a vehicle. To the eye of a driving AI, however, it looks just like three bikers.
It’s called an “edge case”, which is a fancy way of saying “unforeseen problem”. This particular edge case was caught by researchers at a company named Cognata.
To combat problems like this one, autonomous cars carry every kind of sensor you can think of: everything from radar to lidar, up to and including the camera that everyone thinks of as the AI’s “eye.”
To put this into perspective, Tesla Motors drew criticism over its early vehicles, which relied only on radar, camera, and ultrasonic sensors. One day, one of them couldn’t tell the difference between a truck trailer and a bright sky, leading to the death of a driver.
So, by cross-referencing every sensor we can stuff into an autonomous car, we’ll have a much better chance for safety.
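That cross-referencing idea can be sketched as a simple majority vote over what each sensor reports. This is only a toy illustration (real fusion stacks use calibrated probabilistic filters, and every name below is hypothetical):

```python
# Toy sketch of sensor cross-referencing via majority vote.
# All sensor names and labels are hypothetical illustrations.
from collections import Counter

def fuse_detections(detections):
    """Majority-vote over per-sensor object classifications.

    detections: dict mapping sensor name -> the label that sensor reports.
    Returns the label most sensors agree on and the agreement ratio.
    """
    votes = Counter(detections.values())
    label, count = votes.most_common(1)[0]
    return label, count / len(detections)

# One camera is fooled by a decal of cyclists, but radar and lidar
# both report a solid vehicle surface; the fused result outvotes it.
readings = {"camera": "cyclist", "radar": "vehicle", "lidar": "vehicle"}
label, agreement = fuse_detections(readings)  # -> ("vehicle", 2/3)
```

The point of the sketch is just that no single fooled sensor gets the final say; a disagreement like the one above can also be flagged for extra caution rather than silently outvoted.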
Which is great, sure, but this leads us to another problem. Given the right input, you can hack a self-driving car.
Hacking the Sensors of a Self-Driving Car
Self-driving car hacks are a nightmare for a budding industry. Before they are even out on the road, people are figuring out how to confuse and exploit them.
Take the above photo, for instance. Let’s face it, street signs get defaced.
Sometimes it’s funny, sometimes it’s crude, but it can happen just about anywhere. In one case, adding “love/hate” to that stop sign confused a driving AI into recognizing it as a speed limit notice. And that’s terribly unsafe, for obvious reasons.
We can see the potential problems with this, but there isn’t a definitive solution quite yet. It may come down to how sign defacement is enforced, or how street signs are made. More likely, it will be solved by an intrepid team of researchers who can give an AI enough contextual verification to avoid getting fooled.
I’m hoping for the latter, as the greatest trend in AI is teaching systems how to make judgment calls. If they keep getting better at that (as they are, every day), we won’t have to change our infrastructure to accommodate them.