While some argue that AI itself isn't the danger, but rather how people use it (sounds familiar), Kenan Malik of The Guardian believes that human panic may be the trigger that creates the dystopian technological future we dread.
The new report is different. It looks at technologies that are already available, or will be in the next five years, and identifies three kinds of threats: “digital” (sophisticated forms of phishing or hacking); “physical” (the repurposing of drones or robots for harmful ends); and “political” (new forms of surveillance or the use of fake videos to “manipulate public opinion on previously unimaginable scales”).
What we are faced with, this list suggests, is not an existential threat to humanity but sharper forms of the problems with which we are already grappling. AI should be seen not in terms of super-intelligent machines but as clever bits of software that, depending on the humans wielding them, can be used either for good or ill.