The Guardian draws attention to Face2Face, software developed at Stanford University that can create realistic-looking video of people speaking from existing footage. Combined with voice impersonation, such as that developed by the Canadian startup Lyrebird, this technology could further erode trust in the media.
On its own, Face2Face is a fun plaything for creating memes and entertaining late-night talk show hosts. Add a synthesized voice, however, and it becomes far more convincing: the digital puppet not only looks like the politician, it sounds like the politician too.
A research team at the University of Alabama at Birmingham has been working on voice impersonation. With three to five minutes of audio of a victim's voice, captured live or pulled from YouTube videos or radio shows, an attacker can create a synthesized voice capable of fooling both humans and the voice biometric security systems used by some banks and smartphones. The attacker can then speak into a microphone and the software will convert the speech so the words sound as if they are being spoken by the victim, whether over the phone or on a radio show.