But what was the cause? Technology or human error?
In the wake of Uber's self-driving car crash a few months ago, which resulted in the death of a pedestrian, people immediately assume the worst whenever the words "self-driving car" and "crash" appear together. But as always, the details matter.
According to Bloomberg, Apple has disclosed in a filing with the California Department of Motor Vehicles that one of its autonomous test vehicles was involved in a crash. The modified Lexus SUV was in self-driving mode when it was rear-ended by another vehicle while waiting to merge onto a highway in the San Francisco Bay Area.
Both vehicles suffered some damage, but no one was injured. So, what can we take from this? The cause was human error, nothing more than a fender bender. On August 24, the driver of a Nissan Leaf traveling at about 15 mph rear-ended Apple's self-driving Lexus SUV, which was moving at less than 1 mph.
Because the self-driving vehicle wasn't the cause of the crash, this story went mostly unreported. But think about it: what if the vehicles' roles had been reversed? The headlines would be screaming "Apple Self-Driving Car Rear-Ends Another Vehicle." So it's no big deal when a human driver causes a crash, but it's major news when a computer is to blame?
Tens of thousands of car accidents are caused by human error every day, yet people point fingers at an algorithm in self-driving software, one that can be fixed so it never makes the same mistake again. Humans, by contrast, make the same driving errors over and over, and there is no software patch for that. So the next time a self-driving car crashes or is involved in a crash, bear in mind all of the unreported accidents that occurred that very same day, every one of them caused by human mistakes.