Adversarial Attacks on GMM i-vector based Speaker Verification Systems
This work investigates the vulnerability of Gaussian Mixture Model (GMM) i-vector based speaker verification (SV) systems to adversarial attacks, and the transferability of adversarial samples crafted from GMM i-vector based systems to x-vector based systems. Specifically, we formulate the GMM i-vector based system as a scoring function and leverage the fast gradient sign method (FGSM) to generate adversarial samples through this function. These adversarial samples are used to attack both GMM i-vector and x-vector based systems. We measure the vulnerability of the systems by the degradation of equal error rate and false acceptance rate. Experimental results show that GMM i-vector based systems are seriously vulnerable to adversarial attacks, and the generated adversarial samples prove transferable, posing threats to neural network speaker embedding based systems (e.g. x-vector systems).
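To illustrate the FGSM step described above, the following is a minimal sketch with a hypothetical linear scoring function standing in for the GMM i-vector back-end (the real system's score and its gradient are far more complex); the names `score`, `score_grad`, `eps`, and the dimensionality are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Hypothetical linear stand-in for the SV scoring function:
# score(x) = w . x, where higher means "more likely the target speaker".
rng = np.random.default_rng(0)
w = rng.standard_normal(64)   # stand-in model parameters
x = rng.standard_normal(64)   # stand-in input features for one trial

def score(x):
    return w @ x

def score_grad(x):
    # Gradient of the linear score w.r.t. the input is simply w.
    return w

# FGSM: perturb the input by eps in the signed-gradient direction.
# Subtracting lowers the score (rejects a genuine trial); adding the
# term instead would raise an impostor's score (false acceptance).
eps = 0.1
x_adv = x - eps * np.sign(score_grad(x))
```

Because the perturbation is bounded by `eps` in the infinity norm, it stays small per dimension while moving the score by `eps` times the L1 norm of the gradient, which is what makes FGSM an efficient single-step attack.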