"The team's goal is to build an A.I. system with the capability of a toddler."

The magic behind the Atari-playing AI

#Technology

Sun, Mar 1st, 2015 11:00 by capnasty NEWS

You've probably seen the video by now showcasing a self-learning artificial intelligence capable of mastering old Atari video games. In The New Yorker, Nicola Twilley explores the recent Nature paper describing the continued development of this brain-inspired software, and the great distance that still separates it from us.

DeepMind’s A.I. starts each game like an unhousebroken puppy. It is programmed to find a score rewarding, but is given no instruction in how to obtain that reward. Its first moves are random, made in ignorance of the game’s underlying logic. Some are rewarded with a treat—a score—and some are not. Buried in the DeepMind code, however, is an algorithm that allows the juvenile A.I. to analyze its previous performance, decipher which actions led to better scores, and change its future behavior accordingly. Combined with the deep neural network, this gives the program more or less the qualities of a good human gamer: the ability to interpret the screen, a knack for learning from past mistakes, and an overwhelming drive to win.
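The trial-and-error loop described above (random moves, occasional rewards, and an algorithm that adjusts future behavior based on past scores) is the core idea of reinforcement learning. Below is a minimal sketch of that loop using tabular Q-learning on a toy two-action "game"; the environment, reward values, and hyperparameters are illustrative assumptions, not details from the Nature paper, which pairs this idea with a deep neural network reading raw screen pixels.

```python
import random

class TinyGame:
    """A toy stand-in for an Atari game: action 1 scores, action 0 does not."""
    ACTIONS = [0, 1]

    def step(self, action):
        # The reward is the "treat": a point for the right action.
        return 1.0 if action == 1 else 0.0

def train(episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    random.seed(seed)
    game = TinyGame()
    # Value estimate for each action, learned from experience.
    q = {a: 0.0 for a in TinyGame.ACTIONS}
    for _ in range(episodes):
        # Early moves are effectively random (epsilon-greedy exploration).
        if random.random() < epsilon:
            action = random.choice(TinyGame.ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = game.step(action)
        # Analyze the outcome and nudge future behavior toward what scored.
        q[action] += alpha * (reward - q[action])
    return q

q = train()
```

After training, the value the agent assigns to the rewarded action dominates, so its greedy play converges on the scoring move, which is the same mechanism, at toy scale, that lets the puppy-like A.I. grow into a competent gamer.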

Whipping humanity’s ass at Fishing Derby may not seem like a particularly noteworthy achievement for artificial intelligence—nearly two decades ago, after all, I.B.M.’s Deep Blue computer beat Garry Kasparov, a chess grandmaster, at his own more intellectually aspirational game—but according to Zachary Mason, a novelist and computer scientist, it actually is.

