On the robustness of non-intrusive speech quality model by adversarial examples

11/11/2022
by Hsin-Yi Lin, et al.

It has been shown recently that deep-learning-based models are effective at speech quality prediction and can outperform traditional metrics in various respects. Although such network models have the potential to serve as surrogates for complex human hearing perception, their predictions may contain instabilities. This work shows that deep speech quality predictors can be vulnerable to adversarial perturbations: a prediction can be changed drastically by an unnoticeable perturbation as small as -30 dB relative to the speech input. Beyond exposing this vulnerability of deep speech quality predictors, we further explore and confirm the viability of adversarial training for strengthening the robustness of these models.
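To illustrate what a "-30 dB" perturbation means, the sketch below (our illustration, not the paper's attack; the perturbation direction is a placeholder random vector rather than a gradient-based adversarial direction) rescales a perturbation so that its energy sits exactly 30 dB below that of the clean signal:

```python
import math
import random

def l2(v):
    """Euclidean (L2) norm of a sequence of samples."""
    return math.sqrt(sum(s * s for s in v))

def scale_to_db(x, direction, rel_db=-30.0):
    """Rescale `direction` so 20*log10(||delta|| / ||x||) == rel_db."""
    target = l2(x) * 10.0 ** (rel_db / 20.0)
    k = target / l2(direction)
    return [k * d for d in direction]

random.seed(0)
# 1 second of placeholder "speech" at 16 kHz (random samples for the sketch).
x = [random.gauss(0.0, 1.0) for _ in range(16000)]
# Placeholder perturbation direction; a real attack would use model gradients.
delta = scale_to_db(x, [random.gauss(0.0, 1.0) for _ in range(16000)])
x_adv = [s + d for s, d in zip(x, delta)]

rel = 20.0 * math.log10(l2(delta) / l2(x))
print(round(rel, 1))  # -30.0
```

At this level the perturbation carries one-thousandth of the signal's energy, which is why it is inaudible while still able to swing the predictor's output.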
