ConceptBeam: Concept Driven Target Speech Extraction

07/25/2022
by Yasunori Ohishi, et al.

We propose ConceptBeam, a novel framework for target speech extraction based on semantic information. Target speech extraction means extracting the speech of a target speaker from a mixture. Typical approaches exploit properties of the audio signal, such as harmonic structure and direction of arrival. In contrast, ConceptBeam tackles the problem with semantic clues: we extract the speech of speakers talking about a concept, i.e., a topic of interest, using a concept specifier such as an image or a speech signal. Solving this novel problem would open the door to innovative applications such as listening systems that focus on a particular topic discussed in a conversation. Unlike keywords, concepts are abstract notions, making it challenging to represent a target concept directly. In our scheme, a concept is encoded as a semantic embedding by mapping the concept specifier into a shared embedding space. This modality-independent space can be built by deep metric learning on paired data consisting of images and their spoken captions. We use it to bridge modality-dependent information, i.e., the speech segments in the mixture, and the specified, modality-independent concept. To validate our scheme, we performed experiments on a set of images associated with spoken captions: we generated speech mixtures from the spoken captions and used the corresponding images or speech signals as concept specifiers. We then identified the mixture segments matching the specified concept in the shared embedding space and extracted the target speech using the acoustic characteristics of those segments. We compare ConceptBeam with two baselines: one based on keywords obtained from a speech recognition system and another based on sound source separation. ConceptBeam clearly outperforms both baselines and effectively extracts speech based on the semantic representation.
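
To make the scheme concrete, the sketch below illustrates the two core computations in PyTorch: training the shared image-speech embedding space with a metric-learning objective, and scoring mixture segments against a concept embedding to identify the target speaker's segments. The encoders are omitted, and the specific loss formulation and similarity threshold are illustrative assumptions on our part, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def shared_space_loss(img_emb, cap_emb, margin=1.0):
    """Illustrative deep-metric-learning objective for the shared space:
    pull each image embedding toward its paired spoken-caption embedding
    and push it away from the other captions in the batch. The paper's
    exact loss may differ."""
    img_emb = F.normalize(img_emb, dim=-1)          # (B, d)
    cap_emb = F.normalize(cap_emb, dim=-1)          # (B, d)
    sim = img_emb @ cap_emb.T                       # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                   # matched image-caption pairs
    off_diag = 1.0 - torch.eye(sim.size(0), device=sim.device)
    hinge = F.relu(margin - pos + sim) * off_diag   # penalize mismatched pairs
    return hinge.sum() / off_diag.sum()

def identify_target_segments(segment_emb, concept_emb, threshold=0.5):
    """Score each mixture segment against the concept embedding in the
    shared space; segments above the (illustrative) threshold are
    attributed to the speaker talking about the specified concept."""
    segment_emb = F.normalize(segment_emb, dim=-1)  # (n_segments, d)
    concept_emb = F.normalize(concept_emb, dim=-1)  # (d,)
    scores = segment_emb @ concept_emb              # (n_segments,) cosine scores
    return scores > threshold                       # boolean segment mask
```

The extraction stage itself, which uses the acoustic characteristics of the flagged segments as a cue for extracting the target speaker's speech, is described in the paper and omitted here.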


Related research

09/19/2023
Improving Speaker Diarization using Semantic Information: Joint Pairwise Constraints Propagation
Speaker diarization has gained considerable attention within speech proc...

07/26/2017
SPEECH-COCO: 600k Visually Grounded Spoken Captions Aligned to MSCOCO Data Set
This paper presents an augmentation of MSCOCO dataset where speech is ad...

07/12/2017
Speaker-independent Speech Separation with Deep Attractor Network
Despite the recent success of deep learning for many speech processing t...

11/11/2015
Deep Multimodal Semantic Embeddings for Speech and Images
In this paper, we present a model which takes as input a corpus of image...

10/03/2022
SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model
Data-driven speech processing models usually perform well with a large a...

05/28/2023
CAPTDURE: Captioned Sound Dataset of Single Sources
In conventional studies on environmental sound separation and synthesis ...

12/11/2017
Learning Modality-Invariant Representations for Speech and Images
In this paper, we explore the unsupervised learning of a semantic embedd...
