Personalized Audio Quality Preference Prediction
This paper proposes using both audio input and subject information to predict a listener's personalized preference between two audio segments that share the same content but differ in quality. A siamese network compares the two inputs and predicts the preference. Several structures for each side of the siamese network are investigated; an LDNet with PANNs' CNN6 as the encoder and a multi-layer perceptron block as the decoder gives the largest improvement over a baseline model that uses only audio input, raising the overall accuracy from 77.56% to 78.04%. It is also shown that using all of the subject information, including age, gender, and the specifications of headphones or earphones, is more effective than using only a part of it.
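The sketch below illustrates the kind of architecture the abstract describes: a siamese model whose shared branch encodes each audio segment, concatenates an embedding of the subject information, and scores the segment with an MLP decoder; the two scores are compared to predict the preferred segment. All layer sizes, the small CNN standing in for PANNs' CNN6, and the way subject information is fused are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a siamese preference model (assumed dimensions and layers).
import torch
import torch.nn as nn


class AudioEncoder(nn.Module):
    """Small CNN over log-mel spectrograms (placeholder for PANNs' CNN6)."""

    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AvgPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, mel_bins, frames)
        x = self.conv(spec).flatten(1)
        return self.proj(x)


class PreferenceBranch(nn.Module):
    """One side of the siamese network: audio encoder + subject info + MLP decoder."""

    def __init__(self, emb_dim: int = 128, subject_dim: int = 8):
        super().__init__()
        self.encoder = AudioEncoder(emb_dim)
        self.decoder = nn.Sequential(
            nn.Linear(emb_dim + subject_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, spec: torch.Tensor, subject: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.encoder(spec), subject], dim=1)
        return self.decoder(z)  # (batch, 1) quality score


class SiamesePreferenceModel(nn.Module):
    """Shares one branch across both segments and compares their scores."""

    def __init__(self):
        super().__init__()
        self.branch = PreferenceBranch()

    def forward(self, spec_a, spec_b, subject):
        score_a = self.branch(spec_a, subject)
        score_b = self.branch(spec_b, subject)
        # Probability that segment A is preferred over segment B.
        return torch.sigmoid(score_a - score_b)


if __name__ == "__main__":
    model = SiamesePreferenceModel()
    spec_a = torch.randn(4, 1, 64, 100)   # two versions of the same content
    spec_b = torch.randn(4, 1, 64, 100)
    subject = torch.randn(4, 8)           # encoded age, gender, headphone specs
    print(model(spec_a, spec_b, subject).shape)  # torch.Size([4, 1])
```

Sharing a single branch for both segments keeps the comparison symmetric: swapping the two inputs flips the predicted preference rather than requiring a separately trained head for each side.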