‘Inaudible’ Watermark Could Identify AI-Generated Voices – Slashdot

The growing ease with which anyone can create convincing audio in someone else’s voice has a lot of people on edge, and rightly so. Resemble AI’s proposal for watermarking generated speech may not fix it in one go, but it’s a step in the right direction. From a report: AI-generated speech is being used for all kinds of legitimate purposes, from screen readers to replacing voice actors (with their permission, of course). But as with nearly any technology, speech generation can be turned to malicious ends as well, producing fake quotes by politicians or celebrities. It’s highly desirable to find a way to tell real from fake that doesn’t rely on a publicist or close listening.

[…] Resemble AI is among a new cohort of generative AI startups aiming to use finely tuned speech models to produce dubs, audiobooks, and other media ordinarily produced by human voices. But if such models, perhaps trained on hours of audio provided by actors, were to fall into malicious hands, these companies could find themselves at the center of a PR disaster and perhaps facing serious liability. So it's very much in their interest to find a way to make their recordings both as realistic as possible and easily verifiable as AI-generated.
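Resemble AI has not published the details of its watermarking scheme, but the general idea of an inaudible watermark can be illustrated with a toy example: hiding a bit pattern in the least significant bits of 16-bit PCM samples, where a change of one quantization step is far below the threshold of hearing. This sketch is purely illustrative; a production scheme would need to survive compression, resampling, and editing, which naive LSB embedding does not.

```python
# Toy LSB watermark on 16-bit PCM samples (illustration only, NOT
# Resemble AI's actual method, which is unpublished).

def embed_watermark(samples, bits):
    """Hide each watermark bit in the LSB of one sample per bit."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract_watermark(samples, n_bits):
    """Read the watermark back from the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

# A flipped LSB shifts a 16-bit sample by at most 1/32768 of full
# scale, well below what a listener can perceive.
audio = [1000, -2000, 3000, -4000, 5000, -6000, 7000, -8000]
mark = [1, 0, 1, 1, 0, 1, 0, 0]
tagged = embed_watermark(audio, mark)
assert extract_watermark(tagged, len(mark)) == mark
assert all(abs(a - b) <= 1 for a, b in zip(audio, tagged))
```

The fragility of this approach is exactly why real audio watermarks embed the signal redundantly in perceptually masked frequency regions rather than in raw sample bits.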
