"We’ve never before built machines that operate in ways their creators don’t understand."

It is not entirely clear how AI makes its decisions

#Technology

Wed, Apr 12th, 2017 11:00 by capnasty NEWS

Technology Review draws attention to a fascinating concern "at the heart of AI": as impressive as deep learning has been at solving problems and figuring out solutions, nobody, not even the engineers who built these systems, is entirely sure how it all works. And if one of these very complicated systems ever fails, "it might be difficult to find out why."

[...] Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
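The opacity the article describes is easy to demonstrate. Below is a minimal, self-contained sketch, with synthetic data and a tiny feed-forward network in plain NumPy; it is not Deep Patient's actual code or data, just an illustration. The model learns to predict an outcome from record-like feature vectors, yet the only "explanation" it can offer afterwards is a pair of raw weight matrices:

```python
# A hedged illustration (synthetic data, NOT Deep Patient): a small neural
# network predicts well, but its learned parameters carry no human-readable
# rationale for any individual prediction.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 "patients", 10 record features, one binary outcome.
# The hidden ground truth depends mostly on features 0 and 3.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200) > 0).astype(float)

# One hidden layer of 16 units, sigmoid output.
W1 = rng.normal(scale=0.1, size=(10, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the cross-entropy loss.
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)           # hidden representation
    p = sigmoid(h @ W2 + b2).ravel()   # predicted risk
    grad_p = (p - y) / len(y)          # gradient at the output
    grad_h = np.outer(grad_p, W2.ravel()) * (1 - h ** 2)  # backprop to hidden
    W2 -= 0.5 * (h.T @ grad_p[:, None])
    b2 -= 0.5 * grad_p.sum()
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

# The network now predicts accurately...
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
print("accuracy:", ((p > 0.5) == y).mean())

# ...but asking it "why?" yields only unstructured numbers.
print("the 'explanation':", W1.shape, "and", W2.shape, "matrices of floats")
```

Even at this toy scale, interpretability has to be bolted on afterwards; the trained network itself stores nothing a physician could read as a reason, which is exactly Dudley's complaint.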

