Self-driving cars are lots of things. They're cool and intriguing. They're certainly revolutionary. But more than anything else, they are uncertain. Google's self-driving Lexus is a step, possibly even a leap, toward a future filled with automated driving, but there are plenty of kinks still to be worked out. In February, Google's Lexus crashed into a bus: as the car pulled away from the curb, its software assumed the bus coming up behind it would slow down and let it in. The bus driver didn't, and the two collided. Break that down logically and it doesn't bode well for Google's Lexus, or for other autonomous cars, because the crash happened when the car made an assumption about what a human would do.

Humans simply aren't consistent enough for a computer's assumptions about them to be reliable. Then, of course, there's a major ethical dilemma to consider. In the event of an accident, when an autonomous car has to weigh life-and-death scenarios, what will it do? The logical thing is to minimize casualties, but that means the driver voluntarily got into a car that could deliberately put them in danger. What this comes down to is something called general intelligence: the flexible, judgment-making intelligence humans have. The AI (artificial intelligence) driving these cars is far narrower than that; it can only operate within its programming. Unfortunately, the ethical dilemmas don't stop with the car deciding who should live between its driver and, say, a school bus full of children.
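To make that "operates within its programming" point concrete, here is a deliberately simplistic, purely hypothetical sketch of what a hard-coded "minimize casualties" rule might look like. This is not Google's code or any real system; every name and number in it is invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Maneuver:
    """A hypothetical candidate action the car could take."""
    name: str
    expected_casualties: float  # estimate from some upstream model (invented here)
    risk_to_occupants: float    # 0.0 (safe for the driver) to 1.0 (likely fatal)


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # A rigid rule: pick whatever minimizes total expected casualties,
    # regardless of whether those casualties include the car's own occupants.
    return min(options, key=lambda m: m.expected_casualties)


if __name__ == "__main__":
    options = [
        Maneuver("brake hard and take the collision", expected_casualties=0.2, risk_to_occupants=0.2),
        Maneuver("swerve into the oncoming lane", expected_casualties=1.5, risk_to_occupants=0.9),
    ]
    print(choose_maneuver(options).name)
```

The point of the sketch is its rigidity: whatever values the programmers encode, the car will apply them the same way every time, including when the math works out against its own occupants.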

Here's another scenario: while driving on a relatively unsafe road, like a two-lane highway, what is a self-driving car going to do when a deer runs out into the road? Is it going to swerve and put everyone else in danger, or punt the deer? If it's Google's car, do its programmers come out and say they value a human life over a deer's? If that's the case, they probably need some lawyers at the ready. The same dilemma applies on a mountain road, where the twists and turns may force the self-driving car to decide between hitting a tree and going off a cliff. Driving fast doesn't need to be a factor for these scenarios to happen. All it takes is some wet pavement, a blown tire or, again, an animal running in front of the car.

Finally, there's the big question of what happens when the driverless car crashes into another car. The question for the car becomes: is it "my" pilot who survives, or theirs? Which choice makes the most sense? Since a self-driving car operates within its programming, logic takes over, and suddenly the pilots of self-driving cars are subject to a HAL-like decision-making process. Wasn't one of the big reasons for building self-driving cars to create a safer driving environment by cutting down on DUIs and accidents? There is another side to the argument: a driverless car shouldn't have questions of ethics put to it at all. Why? Because human drivers don't face these questions during a license test either.

Perhaps the solution is for every self-driving car to have a manual override. Either that, or no driverless car should be allowed onto the street until every single car is driverless, and therefore predictable. Or maybe the solution is to stop giving anyone more excuses and scrap the idea of the driverless car altogether.