Asymmetric Proxy Loss for Multi-View Acoustic Word Embeddings

03/30/2022
by   Myunghun Jung, et al.

Acoustic word embeddings (AWEs) are discriminative representations of speech segments, and the learned embedding space reflects the phonetic similarity between words. With multi-view learning, where text labels are treated as supplementary input, AWEs are jointly trained with acoustically grounded word embeddings (AGWEs). In this paper, we extend the multi-view approach into a proxy-based framework for deep metric learning by equating AGWEs with proxies. A simple modification in computing the similarity matrix allows general pair weighting to formulate the data-to-proxy relationship. Under this systematized framework, we propose an asymmetric-proxy loss that combines different parts of loss functions asymmetrically while preserving their merits. It follows the assumptions that the optimal function for anchor-positive pairs may differ from that for anchor-negative pairs, and that a proxy may have a different impact when it substitutes for different positions in a triplet. We present comparative experiments with various proxy-based losses, including our asymmetric-proxy loss, and evaluate the resulting AWEs and AGWEs on word discrimination tasks on the WSJ corpus. The results demonstrate the effectiveness of the proposed method.
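The core idea of the abstract is that proxies (here, AGWEs) stand in for triplet members, and that anchor-positive and anchor-negative similarities can be weighted by different functions. As a rough illustration only, not the authors' exact formulation, the NumPy sketch below computes a data-to-proxy similarity matrix and applies a Proxy-Anchor-style log-sum-exp pull to positive pairs and a separately scaled push to negative pairs; the function name and hyperparameters (`alpha_pos`, `alpha_neg`, `margin`) are hypothetical.

```python
import numpy as np

def asymmetric_proxy_loss(embeddings, labels, proxies,
                          alpha_pos=32.0, alpha_neg=32.0, margin=0.1):
    """Illustrative asymmetric proxy loss (hypothetical sketch).

    Positive (embedding, own-proxy) similarities are pulled up with one
    scaling; negative (embedding, other-proxy) similarities are pushed
    down with another, so the two sides of the loss are asymmetric.
    """
    # L2-normalize so dot products are cosine similarities.
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    P = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    S = E @ P.T  # data-to-proxy similarity matrix, shape (N, num_classes)

    # One-hot mask marking each embedding's own proxy.
    pos_mask = np.eye(P.shape[0])[labels].astype(bool)

    # Pull term over anchor-positive pairs (log-sum-exp, Proxy-Anchor style).
    pos_term = np.log(1.0 + np.sum(np.exp(-alpha_pos * (S[pos_mask] - margin))))
    # Push term over anchor-negative pairs, with its own scaling and margin.
    neg_term = np.log(1.0 + np.sum(np.exp(alpha_neg * (S[~pos_mask] + margin))))
    return pos_term + neg_term
```

As a sanity check, embeddings that coincide with their own proxies should incur a lower loss than embeddings matched to the wrong proxies, since the positive term vanishes while the negative similarities stay small.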


Related research

10/01/2019 · Additional Shared Decoder on Siamese Multi-view Encoders for Learning Acoustic Word Embeddings
Acoustic word embeddings — fixed-dimensional vector representations of a...

04/06/2023 · Affect as a proxy for literary mood
We propose to use affect as a proxy for mood in literary texts. In this ...

11/14/2016 · Multi-view Recurrent Neural Acoustic Word Embeddings
Recent work has begun exploring neural acoustic word embeddings---fixed-...

10/05/2015 · Deep convolutional acoustic word embeddings using word-pair side information
Recent studies have been revisiting whole words as the basic modelling u...

06/22/2023 · Deep Metric Learning with Soft Orthogonal Proxies
Deep Metric Learning (DML) models rely on strong representations and sim...

09/05/2020 · A multi-view approach for Mandarin non-native mispronunciation verification
Traditionally, the performance of non-native mispronunciation verificati...

11/09/2020 · Mask Proxy Loss for Text-Independent Speaker Recognition
Open-set speaker recognition can be regarded as a metric learning proble...
