
Towards Modelling Coherence in Spoken Discourse

by   Rajaswa Patil, et al.

While there has been significant progress in modelling coherence in written discourse, work on modelling coherence in spoken discourse has been quite limited. Unlike coherence in text, coherence in spoken discourse also depends on the prosodic and acoustic patterns in speech. In this paper, we model coherence in spoken discourse with audio-based coherence models. We perform experiments on four coherence-related tasks with spoken discourses. In our experiments, we evaluate machine-generated speech against speech delivered by expert human speakers. We also compare spoken discourses produced by human language learners of varying proficiency levels. Our results show that incorporating the audio modality along with the text benefits coherence models on downstream coherence-related tasks with spoken discourses.



