"One of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book."

The problem with Google's self-driving car is human drivers

#Driving

Thu, Sep 3rd, 2015 11:00 by capnasty NEWS

The problem with Google's self-driving car is that it's too safe: when placed in an environment of human drivers who don't quite follow the rules by the book, not only does it not know what to do, but its caution can lead to human-caused accidents. As Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, explains, “human drivers needed to be less idiotic.”

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.
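The four-way-stop stalemate can be sketched as a simple decision rule; this is a toy illustration of the deadlock described above, not Google's actual control logic, and all names and thresholds (`may_proceed`, `FULL_STOP`, the speed values) are invented for the example:

```python
# Toy model of the four-way-stop deadlock: a letter-of-the-law car
# proceeds only once every other car has come to a complete stop.
# Human drivers who keep inching forward never satisfy that rule,
# so the robot waits indefinitely.

FULL_STOP = 0.0  # m/s: what "complete stop" means to a strict rule-follower

def may_proceed(other_car_speeds, threshold=FULL_STOP):
    """Go only if every other car's speed is at or below the threshold."""
    return all(speed <= threshold for speed in other_car_speeds)

# Human drivers inching for the advantage: never exactly zero
inching_humans = [0.3, 0.1, 0.2]  # speeds in m/s

strict = may_proceed(inching_humans)                  # False: paralyzed
relaxed = may_proceed(inching_humans, threshold=0.5)  # True: treats
                                                      # "inching" as stopped
```

A threshold that tolerates a little creep is one way such a deadlock could be broken, at the cost of bending the strict reading of the rule.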

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book.

