“Instead of Alexa’s voice reading the book, it’s the baby’s grandmother’s voice,” Rohit Prasad, Alexa’s senior vice president and chief AI scientist, explained enthusiastically on Wednesday during a keynote address in Las Vegas. (Amazon founder Jeff Bezos owns The Washington Post.)
The demo was the first glimpse of Alexa’s newest feature, which, although still in development, would allow the voice assistant to replicate people’s voices from short audio clips. The goal, Prasad said, is to build greater trust with users by infusing artificial intelligence with “the human attributes of empathy and affect.”
The new capability can “make [loved ones’] memories last,” Prasad said. But while the prospect of hearing a dead relative’s voice may be deeply moving, it also raises a host of security and ethical concerns, experts said.
“I don’t feel our world is ready for easy-to-use voice-cloning technology,” Rachel Tobac, CEO of San Francisco-based SocialProof Security, told The Washington Post. Such technology, she added, could be used to manipulate the public through fake audio or video clips.
“If a cybercriminal can easily and credibly clone another person’s voice using a small voice sample, they can use that sample to impersonate other individuals,” added Tobac, a cybersecurity expert. “That bad actor can then trick others into believing they are the person they are impersonating, which can lead to fraud, data loss, account takeover and more.”
There is a risk of blurring the line between what is human and what is mechanical, said Tama Leaver, a professor of internet studies at Curtin University in Australia.
“You won’t remember that you’re talking to the depths of Amazon … and its data-collection services if it’s speaking with the voice of your grandmother, your grandfather or a lost loved one.”
“In some ways, it’s like an episode of Black Mirror,” Leaver said, referring to the science fiction series that imagines a tech-themed future.
Leaver added that the new Alexa feature also raises questions about consent — especially for people who never imagined their voice would be spoken by an automated personal assistant after they die.
“There is a real slippery slope in using deceased people’s data in a way that is creepy on the one hand and deeply unethical on the other, because they never considered those traces being used in this way,” Leaver said.
Having recently lost his grandfather, Leaver said he sympathized with the “temptation” of wanting to hear a loved one’s voice. But, he said, the possibility opens a door to consequences that society may not be prepared to bear. For example: Who has the rights to the small snippets people leave behind on the internet?
“If my grandfather sent me 100 messages, do I have the right to feed that into the system? And if so, who owns it? Does Amazon then own that recording?” he asked. “Have I given away the rights to my grandfather’s voice?”
Prasad did not address such details during Wednesday’s speech. He did, however, posit that the ability to imitate voices was a product of “undoubtedly living in the golden age of artificial intelligence, where our dreams and science fiction are becoming a reality.”
If Amazon’s demo becomes a real feature, Leaver said, people may need to start thinking about how their voice and likeness could be used after they die.
“Do I need to stipulate in my will that ‘my voice and my pictorial history on social media are the property of my children, and they can decide whether or not they want to revive that in a chat with me’?” Leaver asked.
“That’s a strange thing to say now. But it’s probably a question we should have an answer to before Alexa starts talking like me tomorrow.”