Amazon’s new pitch: let Alexa speak as your relatives from beyond the grave
The prospective feature would clone a voice from as little as a minute of training audio.
At Amazon’s re:MARS conference, Rohit Prasad, senior vice president and head scientist for Alexa, demonstrated a startling new voice assistant capability: the ability to mimic voices. So far, Amazon has given no timeline for when, or whether, the feature will be released to the public.
Stranger still, Amazon framed this mimicry as a way to commemorate lost loved ones. It played a demonstration video in which Alexa read to a child in the voice of his recently deceased grandmother. Prasad stressed that the company was seeking ways to make AI as personal as possible. “While AI can’t eliminate that pain of loss,” he said, “it can definitely make the memories last.” An Amazon spokesperson told Engadget that the new skill can create a synthetic voiceprint after being trained on as little as a minute of audio from the person it is meant to replicate.
Security experts have long warned that deepfake audio tools, which use text-to-speech technology to create synthetic voices, would pave the way for a flood of new scams. Voice cloning software has already enabled a number of crimes, including a 2020 incident in the United Arab Emirates in which fraudsters fooled a bank manager into transferring $35 million by impersonating a company director. But deepfake audio crimes remain relatively rare, and the tools available to scammers are, for now, relatively primitive.