Time Series Learning using Monotonic Logical Properties
We propose a new paradigm for time-series learning in which users implicitly specify families of signal shapes by choosing monotonic parameterized signal predicates. These families of predicates (also called specifications) can be viewed as infinite Boolean feature vectors that leverage a user's domain expertise and have the property that as the parameter values increase, the specification becomes easier to satisfy. In the presence of multiple parameters, monotonic specifications admit trade-off curves in the parameter space, akin to Pareto fronts in multi-objective optimization, that separate the parameter valuations under which the specification is satisfied from those under which it is not. Viewing monotonic specifications (and their trade-off curves) as "features" for time-series data, we develop a principled way to define a distance measure between signals through the lens of a monotonic specification. A unique feature of this approach is that a simple Boolean predicate based on the monotonic specification can explain why any two traces (or sets of traces) have a given distance. Given a simple enough specification, this makes it possible to convey at a high level "why" two signals are a certain distance apart and what kinds of signals lie between them. We conclude by demonstrating our technique with two case studies that illustrate how simple monotonic specifications can be used to craft desirable distance measures.
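The idea above can be illustrated with a minimal sketch, not taken from the paper: a single-parameter monotonic predicate phi_h = "the signal stays below h at all times". As h increases, phi_h becomes easier to satisfy, so each signal has a unique satisfaction threshold (here simply its maximum), and comparing thresholds induces a distance between signals. All function names below are our own illustrative choices.

```python
def satisfies(signal, h):
    """phi_h: the signal stays at or below threshold h at every sample.

    Monotonic: if satisfies(s, h) holds and h' >= h, then satisfies(s, h') holds.
    """
    return all(x <= h for x in signal)

def threshold(signal):
    """Smallest h for which phi_h holds (boundary of the validity domain)."""
    return max(signal)

def spec_distance(sig_a, sig_b):
    """Distance between two signals through the lens of phi_h:
    how far apart their satisfaction thresholds are."""
    return abs(threshold(sig_a) - threshold(sig_b))

sig_a = [0.1, 0.5, 0.3]
sig_b = [0.2, 0.9, 0.4]
d = spec_distance(sig_a, sig_b)  # |0.5 - 0.9| = 0.4
```

The distance is explainable by construction: phi_h with h between the two thresholds (e.g. h = 0.7) is a Boolean predicate that sig_a satisfies and sig_b does not, witnessing why the signals are 0.4 apart.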