Detecting and Explaining Causes From Text For a Time Series Event

07/27/2017
by Dongyeop Kang, et al.

Explaining the underlying causes or effects of events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching for cause and effect relationships of the time series in textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text, such as N-grams, topics, sentiments, and their compositions. Generating the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage, we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analyses show empirical evidence that our method successfully extracts meaningful causal relationships between time series and textual features and generates appropriate explanations between them.
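As a rough illustration of the causal-feature detection step described above, the sketch below tests whether a candidate textual feature series (e.g., the daily frequency of an N-gram or a topic weight) Granger-causes a target time series. The synthetic data, feature construction, and significance threshold are illustrative assumptions, not the authors' implementation or dataset.

```python
# Minimal Granger-causality sketch with statsmodels (hypothetical data).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
target = rng.normal(size=200).cumsum()                           # e.g., a stock-price series
feature = np.roll(target, 3) + rng.normal(scale=0.5, size=200)   # toy lagged "textual feature" series

# grangercausalitytests expects a 2-column array: [effect, candidate cause];
# it tests whether the second column Granger-causes the first.
data = np.column_stack([target, feature])
results = grangercausalitytests(data, maxlag=5, verbose=False)

# Keep the feature as a causal candidate if any lag passes the F-test.
p_values = [round(results[lag][0]["ssr_ftest"][1], 4) for lag in results]
print("p-values per lag:", p_values)
print("Granger-causal candidate:", min(p_values) < 0.05)
```

In a text-driven setting, the same test would be run per extracted feature and per lag, and the surviving features would feed the downstream step of chaining causal entities into an explanation.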


research · 05/31/2023
Causal discovery for time series with constraint-based model and PMIME measure
Causality defines the relationship between cause and effect. In multivar...

research · 12/20/2021
Detection of causality in time series using extreme values
Consider two stationary time series with heavy-tailed marginal distribut...

research · 08/08/2016
Revisiting Causality Inference in Memory-less Transition Networks
Several methods exist to infer causal networks from massive volumes of o...

research · 07/28/2023
Reasoning before Responding: Integrating Commonsense-based Causality Explanation for Empathetic Response Generation
Recent approaches to empathetic response generation try to incorporate c...

research · 06/25/2021
iReason: Multimodal Commonsense Reasoning using Videos and Natural Language with Interpretability
Causality knowledge is vital to building robust AI systems. Deep learnin...

research · 03/17/2020
Construe: a software solution for the explanation-based interpretation of time series
This paper presents a software implementation of a general framework for...

research · 11/20/2022
TSEXPLAIN: Explaining Aggregated Time Series by Surfacing Evolving Contributors
Aggregated time series are generated effortlessly everywhere, e.g., "tot...
