"The biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book."

The problem with Google's self-driving car is human drivers

#Driving

Thu, Sep 3rd, 2015 11:00 by capnasty NEWS

The problem with Google's self-driving car is that it's too safe: in an environment of human drivers who don't quite follow the rules by the book, not only does it not know what to do, but its caution can lead to human-caused accidents. As Dmitri Dolgov, head of software for Google's Self-Driving Car Project, puts it, "human drivers needed to be less idiotic."

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book.
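The four-way-stop stalemate above boils down to a right-of-way predicate that can never become true. The sketch below is purely hypothetical Python (the Car class, the speed readings, and the 0.5 m/s creep threshold are illustrative assumptions, not anything from Google's actual software): a letter-of-the-law rule waits for every other car to reach exactly zero speed, while a creep-tolerant variant treats slow inching as yielding and breaks the deadlock.

```python
from dataclasses import dataclass

@dataclass
class Car:
    speed: float  # current speed in m/s, as reported by sensors (hypothetical)

def strict_right_of_way(others):
    """Letter-of-the-law rule: proceed only once every other car
    has come to a complete stop."""
    return all(car.speed == 0.0 for car in others)

def creep_tolerant_right_of_way(others, creep_threshold=0.5):
    """Relaxed rule: treat cars crawling below a small speed threshold
    (0.5 m/s here, an arbitrary assumption) as having yielded."""
    return all(car.speed < creep_threshold for car in others)

# Human drivers inching forward at the stop sign: never exactly 0 m/s.
inching_humans = [Car(speed=0.2), Car(speed=0.3), Car(speed=0.1)]

print(strict_right_of_way(inching_humans))          # False: the robot waits forever
print(creep_tolerant_right_of_way(inching_humans))  # True: the stalemate breaks
```

The trade-off is the one the researchers hint at: loosening the predicate makes the car compatible with human habits, but each loosening step moves it further from driving strictly by the book.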

