this post was submitted on 02 Oct 2022

Science

top 2 comments
[–] Helix@feddit.de 3 points 2 years ago

I bet this only works reliably on undistorted, uncompressed audio. Just throw some distortion over an HQ deepfake and train an adversarial network against the method these researchers describe. At some point we might have to switch to fully digitally signed communication with specially trusted devices.
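
To make that concern concrete, here's a minimal sketch of the robustness check being suggested: degrade a fake clip with soft clipping and crude re-quantization, then compare a detector's scores on the clean and degraded versions. The `score_is_fake` function, the toy signal, and the degradation steps are placeholders I'm assuming for illustration, not anything from the paper.

```python
import numpy as np

def score_is_fake(audio: np.ndarray, sample_rate: int) -> float:
    # Hypothetical detector stub: swap in a real model's "probability of fake".
    return 0.5

sample_rate = 16000
t = np.arange(sample_rate) / sample_rate
audio = 0.5 * np.sin(2 * np.pi * 220 * t)        # toy stand-in for an HQ fake clip

# Mild nonlinear distortion (soft clipping), like a cheap speaker/mic chain.
distorted = np.tanh(3.0 * audio)

# Crude lossy degradation: requantize to roughly 8 bits, mimicking aggressive re-encoding.
degraded = np.round(distorted * 127) / 127

print("clean score:   ", score_is_fake(audio, sample_rate))
print("degraded score:", score_is_fake(degraded, sample_rate))
```

If the degraded score drops much faster than a human's ability to hear the fake, the detector is the weak link.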

[–] pizza_is_yum 3 points 2 years ago* (last edited 2 years ago)

Cool. Btw, the authors tested two adversaries of their own. The first failed to breach the defense, and the second was deemed "impractical" because of how long it took to train.

I appreciate their positive outlook, but I'm not so sure. They say their defense holds because their equations are non-differentiable. That's true, but reinforcement learning (RL) doesn't need gradients and can get around that. I'm also curious whether attention-based adversaries would fare any better; those seem to work magic given enough training time.
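
For example, here's a toy illustration (not the authors' setup) of why non-differentiability alone isn't a shield: a black-box attacker only queries the detector's output score and never touches gradients. `score_is_fake` is an assumed stub; an RL or evolutionary attacker would run the same query-only loop, just with a smarter search.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_is_fake(audio: np.ndarray) -> float:
    # Placeholder for a non-differentiable detection pipeline returning P(fake).
    return float(np.clip(np.abs(audio).mean() * 10, 0.0, 1.0))

audio = rng.normal(0, 0.1, size=16000).astype(np.float32)   # 1 s of toy "audio"
best = audio.copy()
best_score = score_is_fake(best)

for _ in range(2000):
    candidate = best + rng.normal(0, 1e-3, size=best.shape)  # tiny random perturbation
    candidate_score = score_is_fake(candidate)
    if candidate_score < best_score:                          # keep it if the detector is fooled more
        best, best_score = candidate, candidate_score

print(f"fake-score dropped from {score_is_fake(audio):.3f} to {best_score:.3f}")
```

Random search is the dumbest possible attacker and it still only needs score queries, which is the whole point.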

Great work though. I love this "explainable" and "generalizable" approach they've taken. It's awesome to see research in the ML space that doesn't just throw a black box at the problem and call it a day. We need more like this.