A Cross-Verification Approach for Protecting World Leaders from Fake and Tampered Audio
This paper tackles the problem of verifying the authenticity of speech recordings from world leaders. Whereas previous work on detecting deepfake or tampered audio focuses on scrutinizing an audio recording in isolation, we reframe the problem and focus on cross-verifying a questionable recording against trusted references. We present a method for cross-verifying a speech recording against a reference in two steps: aligning the two recordings and then classifying each query frame as matching or non-matching. We propose a subsequence alignment method based on the Needleman-Wunsch algorithm and show that it significantly outperforms dynamic time warping in handling common tampering operations. We also explore several binary classification models based on LSTM and Transformer architectures to verify content at the frame level. Through extensive experiments on tampered speech recordings of Donald Trump, we show that our system can reliably detect audio tampering operations of different types and durations. Our best model achieves 99.7% and 0.43% on matching and non-matching frames, respectively.
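The abstract describes the first step as a Needleman-Wunsch-based subsequence alignment between a query recording and a trusted reference. Below is a minimal sketch of that general idea, assuming frame-level feature vectors (e.g., MFCC frames) for both recordings; the function name, cosine-similarity scoring, and gap penalty are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def subsequence_align(query, reference, gap_penalty=-1.0):
    """Align a query (Q x D) against a reference (R x D), letting the query
    start and end anywhere inside the reference (subsequence variant of
    Needleman-Wunsch). Parameters here are illustrative, not from the paper."""
    Q, R = len(query), len(reference)

    # Frame-to-frame similarity via cosine similarity, roughly in [-1, 1].
    qn = query / (np.linalg.norm(query, axis=1, keepdims=True) + 1e-8)
    rn = reference / (np.linalg.norm(reference, axis=1, keepdims=True) + 1e-8)
    sim = qn @ rn.T

    # D[i, j]: best score aligning query[:i] against a subsequence ending at reference[:j].
    D = np.full((Q + 1, R + 1), -np.inf)
    D[0, :] = 0.0                                   # free to start anywhere in the reference
    D[1:, 0] = gap_penalty * np.arange(1, Q + 1)    # skipping query frames is penalized

    back = np.zeros((Q + 1, R + 1), dtype=np.int8)  # 0 = diagonal, 1 = up, 2 = left
    for i in range(1, Q + 1):
        for j in range(1, R + 1):
            choices = (D[i - 1, j - 1] + sim[i - 1, j - 1],  # match/mismatch
                       D[i - 1, j] + gap_penalty,            # gap: query frame unmatched
                       D[i, j - 1] + gap_penalty)            # gap: reference frame skipped
            back[i, j] = int(np.argmax(choices))
            D[i, j] = choices[back[i, j]]

    # Free end: pick the best-scoring column in the last row, then backtrack.
    j = int(np.argmax(D[Q, 1:])) + 1
    score = float(D[Q, j])
    i, path = Q, []
    while i > 0 and j > 0:
        move = back[i, j]
        if move == 0:                   # matched pair of frames
            path.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif move == 1:                 # query frame aligned to a gap
            i -= 1
        else:                           # reference frame skipped
            j -= 1
    return score, path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(200, 20))
    qry = ref[50:120] + 0.05 * rng.normal(size=(70, 20))   # noisy excerpt of the reference
    score, path = subsequence_align(qry, ref)
    print(f"score: {score:.2f}, first pair: {path[0]}, last pair: {path[-1]}")
```

The frame pairs returned by such an alignment would then feed the second step described in the abstract, where each query frame is classified as matching or non-matching; the paper explores LSTM- and Transformer-based classifiers for that stage.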