"The team's goal is to build an A.I. system with the capability of a toddler."

The magic behind the Atari-playing AI


Sun, Mar 1st, 2015 11:00 by capnasty NEWS

You've probably seen the video by now showcasing a self-learning artificial intelligence capable of mastering old Atari video games. At The New Yorker, Nicola Twilley explores the recent article in Nature describing the continued development of this brain-inspired software, and the incredible distance between it and us.

DeepMind’s A.I. starts each game like an unhousebroken puppy. It is programmed to find a score rewarding, but is given no instruction in how to obtain that reward. Its first moves are random, made in ignorance of the game’s underlying logic. Some are rewarded with a treat—a score—and some are not. Buried in the DeepMind code, however, is an algorithm that allows the juvenile A.I. to analyze its previous performance, decipher which actions led to better scores, and change its future behavior accordingly. Combined with the deep neural network, this gives the program more or less the qualities of a good human gamer: the ability to interpret the screen, a knack for learning from past mistakes, and an overwhelming drive to win.
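The trial-and-error loop Twilley describes is, at its core, reinforcement learning. Here is a minimal tabular Q-learning sketch of that idea, using a hypothetical toy environment (a five-cell corridor with a single reward at the end) rather than Atari screens; DeepMind's actual system pairs this kind of update rule with a deep neural network reading raw pixels.

```python
import random

# A minimal tabular Q-learning sketch of the trial-and-error loop the
# article describes. This is a toy stand-in: the hypothetical environment
# is a 5-cell corridor with a single "treat" (+1 score) at the far right,
# not an Atari game.

N_STATES = 5
ACTIONS = (-1, +1)                   # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply a move; reward 1.0 only on reaching the final cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    for _ in range(1000):            # cap steps so an episode always ends
        if done:
            break
        # First moves are effectively random; with experience, the agent
        # exploits whichever actions led to better scores.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: (q[(s, act)], random.random()))
        nxt, r, done = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        # Q-learning update: nudge the estimate toward the observed
        # reward plus the discounted value of the best follow-up move.
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy: head right from every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
```

After a couple hundred episodes the value of moving toward the reward propagates backward through the table, and the greedy policy heads right from every cell, mirroring how the puppy-like agent's random flailing hardens into purposeful play.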

Whipping humanity’s ass at Fishing Derby may not seem like a particularly noteworthy achievement for artificial intelligence—nearly two decades ago, after all, I.B.M.’s Deep Blue computer beat Garry Kasparov, a chess grandmaster, at his own more intellectually aspirational game—but according to Zachary Mason, a novelist and computer scientist, it actually is.


