"The biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book."

The problem with Google's self-driving car is human drivers

#Driving

Thu, Sep 3rd, 2015 11:00 by capnasty NEWS

The problem with Google's self-driving car is that it's too safe: faced with human drivers who don't quite follow the rules by the book, not only does it not know what to do, but its caution can lead to human-caused accidents. As Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, explains, “human drivers needed to be less idiotic.”

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book.
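The four-way-stop deadlock described above is easy to reproduce in miniature. Below is a minimal sketch, assuming a hypothetical strict yield rule — an illustration of the failure mode, not Google's actual planning code: a robot that waits for every other car's speed to reach exactly zero is starved by drivers who keep inching forward.

```python
import random


def robot_may_proceed(other_speeds):
    """Hypothetical letter-of-the-law rule: enter the intersection only
    once every other car has come to a complete stop (speed == 0)."""
    return all(speed == 0.0 for speed in other_speeds)


def human_creep_speeds(n_humans):
    """Humans at the stop sign never hit exactly zero; they keep
    inching forward, looking for the advantage."""
    return [random.uniform(0.1, 0.5) for _ in range(n_humans)]


def simulate(steps=1000):
    """Run the stand-off for a fixed number of time steps.

    Returns the step at which the robot finally proceeds, or None
    if the strict rule starves it for the whole simulation.
    """
    for t in range(steps):
        if robot_may_proceed(human_creep_speeds(3)):
            return t
    return None


if __name__ == "__main__":
    # Prints "Robot entered at step: None" — the robot never gets a turn.
    print("Robot entered at step:", simulate())
```

Purely as a design note: a planner could soften the rule by treating any speed below a small threshold as a stop, or by inching forward itself to assert its turn after a timeout — trading strict rule-following for the informal negotiation human drivers actually use.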

