Similarity Function Tracking using Pairwise Comparisons

01/07/2017
by Kristjan Greenewald, et al.
University of Michigan
MIT

Recent work in distance metric learning has focused on learning transformations of data that best align with specified pairwise similarity and dissimilarity constraints, often supplied by a human observer. The learned transformations lead to improved retrieval, classification, and clustering algorithms due to the better adapted distance or similarity measures. Here, we address the problem of learning these transformations when the underlying constraint generation process is nonstationary. This nonstationarity can be due to changes in either the ground-truth clustering used to generate constraints or changes in the feature subspaces in which the class structure is apparent. We propose Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD), a general adaptive, online approach for learning and tracking optimal metrics as they change over time that is highly robust to a variety of nonstationary behaviors in the changing metric. We apply the OCELAD framework to an ensemble of online learners. Specifically, we create a retro-initialized composite objective mirror descent (COMID) ensemble (RICE) consisting of a set of parallel COMID learners with different learning rates, and demonstrate parameter-free RICE-OCELAD metric learning on both synthetic data and a highly nonstationary Twitter dataset. We show significant performance improvements and increased robustness to nonstationary effects relative to previously proposed batch and online distance metric learning algorithms.




I Introduction

The effectiveness of many machine learning and data mining algorithms depends on an appropriate measure of pairwise distance between data points that accurately reflects the learning task, e.g., prediction, clustering, or classification. The kNN classifier, K-means clustering, and the Laplacian-SVM semi-supervised classifier are examples of such distance-based machine learning algorithms. In settings where there is clean, appropriately scaled, spherical Gaussian data, standard Euclidean distance can be utilized. However, when the data is heavy-tailed, multimodal, or contaminated by outliers, observation noise, or irrelevant or replicated features, use of Euclidean inter-point distance can be problematic, leading to bias or loss of discriminative power.

To reduce bias and loss of discriminative power of distance-based machine learning algorithms, data-driven approaches for optimizing the distance metric have been proposed. These methodologies, generally taking the form of dimensionality reduction or data “whitening,” aim to utilize the data itself to learn a transformation of the data that embeds it into a space where Euclidean distance is appropriate. Examples of such techniques include Principal Component Analysis [1], Multidimensional Scaling [2], covariance estimation [2, 1], and manifold learning [3]. Such unsupervised methods do not exploit human input on the distance metric, and they overly rely on prior assumptions, e.g., local linearity or smoothness.

In distance metric learning one seeks to learn transformations of the data associated with a distance metric that is well matched to a particular task specified by the user. Pairwise labels or “edges” indicating point similarity or dissimilarity are used to learn a transformation of the data such that similar points are “close” to one another and dissimilar points are distant in the transformed space. Learning distance metrics in this manner allows a more precise notion of distance or similarity to be defined that is better related to the task at hand.

Figure 1 illustrates this notion. Data points, or nodes, have underlying similarities or distances between them. Absent an exhaustive label set, given an attribute distance function it is possible to infer similarities between nodes as the distance between their attribute vectors. As an example, the kNN algorithm uses the Euclidean distance to infer similarity. However, the distance function must be specified a priori, and may not match the distance relevant to the task. Distance metric learning proposes a hybrid approach: one is given a small number of pairwise labels, uses these to learn a distance function on the attribute space, and then uses this learned function to infer relationships between the rest of the nodes.

Many supervised and semi-supervised distance metric learning approaches have been developed for machine learning and data mining [4]. This includes online algorithms [5] with regret guarantees for situations where similarity constraints are received sequentially.

This paper proposes a new distance metric tracking method that is applicable to the non-stationary time varying case of distance metric drift and has provably strongly adaptive tracking performance.

Fig. 1: Similarity functions on networks, with different clusters indicated by different colored nodes. Attributes of nodes denoted as a 5-element column vector with an unknown similarity function between attributes. Learn and track similarity function implied by observed edges, use result to infer similarities between other nodes.

Specifically, we suppose the underlying ground-truth (or optimal) distance metric from which constraints are generated is evolving over time, in an unknown and potentially nonstationary way. In Figure 1, this corresponds to having the relationships between nodes change over time. This can, for example, be caused by changes in the set of features indicative of relations (e.g. polarizing buzzwords in collective discourse), changes in the underlying relationship structure (e.g. evolving communities), and/or changes in the nature of the relationships relevant to the problem or to the user. When any of these changes occur, it is imperative to be able to detect and adapt to them without casting aside previous knowledge.

We propose a strongly adaptive, online approach to track the underlying metric as the constraints are received. We introduce a framework called Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD), which at every time step evaluates the recent performance of and optimally combines the outputs of an ensemble of online learners, each operating under a different drift-rate assumption. We prove strong bounds on the dynamic regret of every subinterval, guaranteeing strong adaptivity and robustness to nonstationary metric drift such as discrete shifts, slow drift with a widely-varying drift rate, and all combinations thereof. Applying OCELAD to the problem of nonstationary metric learning, we find that it gives excellent robustness and low regret when subjected to all forms of nonstationarity.

Social media provides some of the most dynamic, rapidly changing data sources available. Constant changes in world events, popular culture, memes, and other items of discussion mean that the words and concepts characteristic of subcultures, communities, and political persuasions are rapidly evolving in a highly nonstationary way. As this is exactly the situation our dynamic metric learning approach is designed to address, we will consider modeling political tweets in November 2015, during the early days of the United States presidential primary.

I-A Related Work

Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are classic examples of the use of linear transformations for projecting data into more interpretable low dimensional spaces. Unsupervised PCA seeks to identify a set of axes that best explain the variance contained in the data. LDA takes a supervised approach, minimizing the intra-class variance and maximizing the inter-class variance given class labeled data points.

Much of the recent work in Distance Metric Learning has focused on learning Mahalanobis distances on the basis of pairwise similarity/dissimilarity constraints. These methods have the same goals as LDA; pairs of points labeled “similar” should be close to one another while pairs labeled “dissimilar” should be distant. MMC [6], a method for identifying a Mahalanobis metric for clustering with side information, uses semidefinite programming to identify a metric that maximizes the sum of distances between points labeled with different classes subject to the constraint that the sum of distances between all points with similar labels be less than or equal to some constant.

Large Margin Nearest Neighbor (LMNN) [7] similarly uses semidefinite programming to identify a Mahalanobis distance. In this setting, the algorithm minimizes the sum of distances between a given point and its similarly labeled neighbors while forcing differently labeled neighbors outside of its neighborhood. This method has been shown to be computationally efficient [8] and, in contrast to the similarly motivated Neighborhood Component Analysis [9], is guaranteed to converge to a globally optimal solution. Information Theoretic Metric Learning (ITML) [10] is another popular Distance Metric Learning technique. ITML minimizes the Kullback–Leibler divergence between an initial guess of the matrix that parameterizes the Mahalanobis distance and a solution that satisfies a set of constraints. For surveys of the metric learning literature, see [4, 11, 12].

In a dynamic environment, it is necessary to track the changing metric at different times, computing a sequence of estimates of the metric, and to be able to compute those estimates online. Online learning [13] meets these criteria by efficiently updating the estimate every time a new data point is obtained instead of minimizing an objective function formed from the entire dataset. Many online learning methods have regret guarantees, that is, the loss in performance relative to a batch method is provably small [13, 14]. In practice, however, the performance of an online learning method is strongly influenced by the learning rate, which may need to vary over time in a dynamic environment [15, 16, 17], especially one with changing drift rates.

Adaptive online learning methods attempt to address the learning rate problem by continuously updating the learning rate as new observations become available. For learning static parameters, AdaGrad-style methods [16, 17] perform gradient descent steps with the step size adapted based on the magnitude of recent gradients. Follow the regularized leader (FTRL) type algorithms adapt the regularization to the observations [18]. Recently, a method called Strongly Adaptive Online Learning (SAOL) has been proposed for learning parameters undergoing discrete changes when the loss function is bounded between 0 and 1. SAOL maintains several learners with different learning rates and randomly selects the best one based on recent performance [15]. Several of these adaptive methods have provable regret bounds [18, 19, 20]. These typically guarantee low total regret (i.e. regret from time 0 to time t) at every time t [18]. SAOL, on the other hand, attempts to have low static regret on every subinterval, as well as low regret overall [15]. This allows tracking of discrete changes, but not slow drift. Our work improves upon the capabilities of SAOL by allowing for unbounded loss functions, using a convex combination of the ensemble instead of simple random selection, and providing guaranteed low regret when all forms of nonstationarity occur, not just discrete shifts. All of these additional capabilities are shown in Section VI to be critical for good metric learning performance.

The remainder of this paper is structured as follows. In Section II we formalize the time-varying distance metric tracking problem, and Section III presents the basic COMID online learner and our Retro-Initialized COMID Ensemble (RICE) of learners with dyadically scaled learning rates. Section IV presents our OCELAD algorithm, a method of adaptively combining learners with different learning rates. Strongly adaptive bounds on the dynamic regret of OCELAD and RICE-OCELAD are presented in Section V, and results on both synthetic data and the Twitter dataset are presented in Section VI. Section VII concludes the paper.

II Nonstationary Metric Learning

Metric learning seeks to learn a metric that encourages data points marked as similar to be close and data points marked as different to be far apart. The time-varying Mahalanobis distance at time t is parameterized by M_t as

d_{M_t}^2(x, z) = (x - z)^T M_t (x - z),   (1)

where M_t ⪰ 0 (positive semidefinite).

Suppose a temporal sequence of similarity constraints is given, where each constraint is the triplet (x_t, z_t, y_t), x_t and z_t are data points in R^n, and the label y_t = +1 if the points are similar at time t and y_t = -1 if they are dissimilar.

Following [5], we introduce the following margin-based constraints for all time points t:

d_{M_t}^2(x_t, z_t) ≤ μ - 1   for all y_t = 1,
d_{M_t}^2(x_t, z_t) ≥ μ + 1   for all y_t = -1,   (2)

where μ is a threshold that controls the margin between similar and dissimilar points. A diagram illustrating these constraints and their effect is shown in Figure 2. These constraints are softened by penalizing violation of the constraints with a convex loss function ℓ. This gives a combined loss function

L_t(M) = ℓ( y_t ( μ - d_M^2(x_t, z_t) ) ) + ρ r(M),   (3)

where M ⪰ 0, r is the regularizer, and ρ the regularization parameter. Kunapuli and Shavlik [5] propose using nuclear norm regularization (r(M) = ‖M‖_*) to encourage projection of the data onto a low dimensional subspace (feature selection/dimensionality reduction), and we have also had success with the elementwise L1 norm (r(M) = Σ_{i,j} |M_{ij}|). In what follows, we develop an adaptive online method to minimize the loss subject to nonstationary smoothness constraints on the sequence of metric estimates M_t.
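To make the loss concrete, the following sketch implements the squared Mahalanobis distance (1) and a hinge-softened version of the combined loss (3) with nuclear norm regularization; the hinge surrogate and the default parameter values here are illustrative assumptions, not the paper's prescribed choices.

```python
import numpy as np

def mahalanobis_sq(M, x, z):
    """Squared Mahalanobis distance d_M^2(x, z) = (x - z)^T M (x - z)."""
    d = x - z
    return float(d @ M @ d)

def margin_loss(M, x, z, y, mu=1.0, rho=0.1):
    """Hinge-softened margin loss: hinge(y * (mu - d_M^2)) + rho * ||M||_*.
    The hinge surrogate and default mu, rho are illustrative choices."""
    margin = y * (mu - mahalanobis_sq(M, x, z))
    hinge = max(0.0, 1.0 - margin)                 # zero iff constraint (2) is met
    nuclear = np.abs(np.linalg.eigvalsh(M)).sum()  # nuclear norm of symmetric M
    return hinge + rho * nuclear
```

For a similar pair (y = 1), the loss vanishes exactly when the squared distance is at least one unit below the threshold μ, mirroring (2).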

Fig. 2: Visualization of the margin based constraints (2), with colors indicating class. The goal of the metric learning constraints is to move target neighbors towards the point of interest (POI), while moving points from other classes away from the target neighborhood.

III Retro-Initialized COMID Ensemble (RICE)

Viewing the acquisition of new data points as stochastic realizations of the underlying distribution [5] suggests the use of composite objective stochastic mirror descent techniques (COMID). For convenience, we denote the combined loss (3) at time t by f_t(M).

For the loss (3) and learning rate η_t, application of COMID [14] gives the online learning update

M_{t+1} = argmin_{M ⪰ 0}  B_ψ(M, M_t) + η_t ⟨∇_M f_t(M_t), M - M_t⟩ + η_t ρ r(M),   (4)

where B_ψ is any Bregman divergence. As this is an online framework, the indexing t directly corresponds to the received time series of pairwise constraints (x_t, z_t, y_t). In [5] a closed-form algorithm for solving the minimization in (4) with nuclear norm regularization is developed for a variety of common losses and Bregman divergences, involving rank one updates and eigenvalue shrinkage.
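A minimal sketch of one such update: assuming the Frobenius-norm Bregman divergence, the step (4) with nuclear norm regularization reduces to a gradient step followed by eigenvalue shrinkage. This is an illustrative simplification in the spirit of the closed-form algorithm in [5], not a reproduction of it.

```python
import numpy as np

def comid_step(M, grad, eta, rho):
    """One COMID-style update: gradient step on the loss, then eigenvalue
    soft-thresholding, which handles the nuclear-norm term and keeps the
    iterate positive semidefinite. Assumes a Frobenius Bregman divergence."""
    A = M - eta * grad
    A = (A + A.T) / 2                       # symmetrize for numerical stability
    w, V = np.linalg.eigh(A)
    w = np.maximum(w - eta * rho, 0.0)      # shrink eigenvalues, clip at zero
    return (V * w) @ V.T
```

The eigenvalue clipping simultaneously applies the nuclear-norm proximal operator and the projection onto the PSD cone.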

The output of COMID depends strongly on the choice of the learning rate η_t. Critically, the optimal learning rate depends on the rate of change of the underlying metric [21], and thus will need to change with time to adapt to nonstationary drift. Choosing an optimal sequence for η_t is clearly not practical in an online setting with nonstationary drift, since the drift rate is changing. We thus propose to maintain an ensemble of learners with a range of learning rates, whose output we will adaptively combine for optimal nonstationary performance. If the range of learning rates is diverse enough, one of the learners in the ensemble should have good performance on every interval. Critically, the optimal learner in the ensemble may vary widely with time, since the drift rate and hence the optimal learning rate changes in time. For example, if a large discrete change occurs, the fast learners are optimal at first, followed by increasingly slow learners as the estimate of the new value improves. In other words, the optimal approach is fast reaction followed by increasing refinement, in a manner consistent with the decay of the learning rate of optimal nonadaptive algorithms [21].

Fig. 3: Retro-initialized COMID ensemble (RICE). COMID learners at multiple scales run in parallel, with the interval learners learning on the dyadic set of intervals. Recent observed losses for each learner are used to create weights used to select the appropriate scale at each time. Each yellow and red learner is initialized by the output of the previous learner of the same color, that is, the learner of the next shorter scale.

Define a set of intervals such that the lengths of the intervals are proportional to powers of two, i.e. |I| = 2^j for level j, with an arrangement that is a dyadic partition of the temporal axis, as in [15]. The first interval of length 2^j starts at t = 2^j (see Figure 3), and additional intervals of length 2^j follow so that the rest of time is covered.

Every interval I is associated with a base COMID learner that operates on that interval. Each learner (4) has a constant learning rate proportional to the inverse square root of the length of the interval, i.e. η_I ∝ 1/√|I|. Each learner (besides the coarsest) is initialized to the last estimate of the next coarsest learner (see Figure 3). This strategy is equivalent to “backdating” the interval learners so as to ensure appropriate convergence has occurred before the interval of interest is reached, and is effectively a quantized square root decay of the learning rate. We call our method of forming an ensemble of COMID learners on dyadically nested intervals the Retro-Initialized COMID Ensemble, or RICE, and summarize it in Figure 3.
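The dyadic interval structure can be sketched as follows; the exact start-time convention (blocks [k·2^j, (k+1)·2^j - 1]) is an assumption taken from the construction in [15].

```python
def active_intervals(t, max_level):
    """Dyadic intervals containing time t, one per level j = 0..max_level,
    each of length 2^j, paired with learning rate eta_I = 1 / sqrt(|I|)."""
    out = []
    for j in range(max_level + 1):
        length = 2 ** j
        k = t // length                       # which dyadic block t falls in
        interval = (k * length, (k + 1) * length - 1)
        out.append((interval, 1.0 / length ** 0.5))
    return out
```

At any time t there is exactly one active interval per level, so the number of parallel learners grows only logarithmically in the horizon.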

At a given time t, a set of intervals/COMID learners is active, running in parallel. Because the metric being learned is changing with time, learners designed for low regret at different scales (drift rates) will have different performance (analogous to the classical bias/variance tradeoff). In other words, there is a scale optimal at a given time.

To adaptively select and fuse the outputs of the ensemble, we introduce Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD), a method that accepts an ensemble of black-box learners and uses recent history to adaptively form an optimal weighted combination at each time.

IV OCELAD

To maintain generality, in this section we assume the series of random loss functions is of the form f_t(θ), where θ_t denotes the time-varying unknown parameters. We assume that an ensemble of online learners is provided on the dyadic interval set, each optimized for the appropriate scale. To select the appropriate scale, we compute weights w_t(I) that are updated based on each learner’s recent estimated regret. The weight update we use is inspired by the multiplicative weight (MW) literature [22], modified to allow for unbounded loss functions. At each step, we rescale the observed losses so they lie between -1 and 1, allowing for maximal weight differentiation while preventing negative weights:

w_{t+1}(I) = w_t(I) ( 1 + η_I r_t(I) ).   (5)

These hold for all t ∈ I, where η_I = min{1/2, 1/√|I|}, θ_t(I) and θ̄_t are the outputs at time t of the learner on interval I and of the ensemble respectively, and r_t(I), the rescaled difference f_t(θ̄_t) - f_t(θ_t(I)), is called the estimated regret of the learner on interval I at time t. The initial value of w_t(I) is η_I. Essentially, (5) highly weights low loss learners and lowly weights high loss learners.

For any given time t, the outputs of the learners on the active intervals I ∋ t are combined to form the weighted ensemble estimate

θ̄_t = ( Σ_{I ∋ t} w_t(I) θ_t(I) ) / ( Σ_{I ∋ t} w_t(I) ).   (6)

The weighted average of the ensemble is justified by our use of a convex loss function (proven in the next section), as opposed to the possibly non-convex losses of [22], which necessitate a randomized selection approach. OCELAD is summarized in Algorithm 1, and the joint RICE-OCELAD approach as applied to metric learning is shown in Algorithm 2.
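A schematic of the combination and weight-update steps for scalar learner outputs; the rescaling constant and the exact form of η_I used here are assumptions in the spirit of (5) and the constructions of [15, 22].

```python
import math

def ocelad_combine(estimates, weights):
    """Weighted ensemble estimate in the spirit of (6): a convex combination
    of the active learners' outputs (scalars here, for illustration)."""
    total = sum(weights.values())
    return sum(w * estimates[I] for I, w in weights.items()) / total

def ocelad_update_weights(weights, learner_losses, ensemble_loss, loss_scale):
    """Multiplicative-weights update in the spirit of (5): losses are rescaled
    by loss_scale so the estimated regret lies in [-1, 1], and low-loss
    learners are up-weighted. eta_I = min(1/2, 1/sqrt(|I|)) is assumed."""
    new = {}
    for I, w in weights.items():
        lo, hi = I
        eta = min(0.5, 1.0 / math.sqrt(hi - lo + 1))
        r = (ensemble_loss - learner_losses[I]) / loss_scale  # estimated regret
        new[I] = w * (1.0 + eta * r)
    return new
```

Learners whose loss falls below the ensemble's loss receive positive estimated regret and thus a multiplicative boost, while high-loss learners are damped, never zeroed.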

1:  Provide dyadic ensemble of online learners B.
2:  Initialize weights w_1(I).
3:  for t = 1 to T do
4:     Observe loss function f_t and update ensemble.
5:     Obtain estimates θ_t(I) from the ensemble.
6:     Compute weighted ensemble average θ̄_t via (6) and set as estimate.
7:     Update weights via (5).
8:  end for
9:  Return θ̄_1, …, θ̄_T.
Algorithm 1 Online Convex Ensemble Strongly Adaptive Dynamic Learning (OCELAD)
1:  Initialize weights w_1(I).
2:  for t = 1 to T do
3:     Obtain constraint (x_t, z_t, y_t), compute loss function f_t(M).
4:     Initialize new learners in RICE if needed. New learner at a given scale: initialize to the last estimate of the next coarsest learner.
5:     COMID update using (4) for all active learners in RICE ensemble.
6:     Compute weighted ensemble average M̄_t via (6) and set as estimate.
7:     for all active intervals I do
8:        Compute estimated regret and update weights according to (5).
9:     end for
10:  end for
11:  Return M̄_1, …, M̄_T.
Algorithm 2 RICE-OCELAD for Nonstationary Metric Learning

V Strongly Adaptive Dynamic Regret

The standard static regret of an online learning algorithm generating an estimate sequence θ̂_t is defined as

R_static(I) = Σ_{t ∈ I} f_t(θ̂_t) - min_θ Σ_{t ∈ I} f_t(θ),   (7)

where f_t is the loss at time t with parameter θ. Since in our case the optimal parameter value is changing, the static regret of an algorithm on an interval is not useful. Instead, let θ_t, t ∈ I, be an arbitrary sequence of parameters. Then, the dynamic regret of an algorithm A relative to any comparator sequence {θ_t} on the interval I is defined as

R_A({θ_t}_{t ∈ I}) = Σ_{t ∈ I} [ f_t(θ̂_t) - f_t(θ_t) ],   (8)

where the θ̂_t are generated by A. This allows for comparison to any possible dynamically changing batch estimate.
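The dynamic regret (8) is straightforward to compute for any comparator sequence; a minimal sketch with scalar losses:

```python
def dynamic_regret(losses, estimates, comparators):
    """Dynamic regret (8): cumulative excess loss of the estimate sequence
    over an arbitrary time-varying comparator sequence."""
    return sum(f(est) - f(cmp)
               for f, est, cmp in zip(losses, estimates, comparators))
```

For instance, with quadratic losses f_t(θ) = (θ - c_t)^2 and the comparator set to c_t itself, the dynamic regret is the total squared tracking error of the estimates.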

In [21] the authors derive dynamic regret bounds that hold over all possible comparator sequences such that Σ_t ‖θ_{t+1} - θ_t‖ ≤ γ, i.e. bounding the total amount of variation in the estimated parameter. Without this temporal regularization, minimizing the loss would cause the comparator to grossly overfit. In this sense, setting the comparator sequence to the “ground truth sequence” or “batch optimal sequence” both provide meaningful intuitive bounds.

Fig. 4: 25-dimensional synthetic dataset used for metric learning in Figure 5. Datapoints exist in R^25, with two natural 3-way clusterings existing simultaneously in orthogonal 3-D subspaces A and B. The remaining 19 dimensions are isotropic Gaussian noise. Shown are the projections of the dataset onto subspaces A and B, as well as a projection onto a portion of the 19-dimensional isotropic noise subspace, with color codings corresponding to the cluster labelings associated with subspaces A and B. Observe that the data points in the left and right columns are identical; the only change is the cluster labels.
Fig. 5: Tracking of a changing metric. All results are averaged over 3000 random trials. Top: Rate of change (scaled Frobenius norm per tick) of the data-generating random-walk drift matrix as a function of time. Two discrete changes in clustering labels are marked, causing all methods to have a sudden decrease in performance. The metric learners must track the random-walk drift as well as the discrete changes to have good performance. Metric tracking performance is computed for RICE-OCELAD (adaptive), nonadaptive COMID [5] (high learning rate), nonadaptive COMID (low learning rate), the batch solution (LMNN) [7], SAOL [15], and online ITML [10]. Shown as a function of time is the mean k-NN error rate (middle) and the probability that the k-means normalized mutual information (NMI) exceeds 0.8 (bottom). Note that RICE-OCELAD alone is able to effectively adapt to the variety of discrete changes and changes in drift rate, and that the NMI of ITML and SAOL fails completely.

Strongly adaptive regret bounds [15] can provide guarantees that static regret is low on every subinterval, instead of only low in the aggregate. We use the notion of dynamic regret to introduce strongly adaptive dynamic regret bounds, proving that dynamic regret is low on every subinterval simultaneously. The following result is proved in the appendix. Suppose there is a sequence of random loss functions f_t. The goal is to estimate a sequence θ̂_t that minimizes the dynamic regret.

Theorem 1 (General OCELAD Regret Framework).

Let θ_t, t ∈ I, be an arbitrary sequence of parameters and define γ_I = Σ_{t ∈ I} ‖θ_{t+1} - θ_t‖ as a function of {θ_t} and an interval I. Choose an ensemble of learners B such that, given an interval I, the learner B_I creates an output sequence θ_t(I) satisfying the dynamic regret bound

R_{B_I}({θ_t}_{t ∈ I}) ≤ C (1 + γ_I) √|I|   (9)

for some constant C. Then the strongly adaptive dynamic learner OCELAD using B as the ensemble creates an estimation sequence θ̄_t satisfying

R_OCELAD({θ_t}_{t ∈ I}) = O( (1 + γ_I) √|I| + √|I| log s )

on every interval I = [q, s].

In other words, the regret of OCELAD on any finite interval I is sublinear in the length of that interval (O(√|I| log s)), and scales with the amount of variation in the true/optimal batch parameter estimates. The logarithmic term in s exists because of the logarithmically increasing number of learners active at time s, required to achieve guaranteed regret on intervals whose length |I| can be up to the order of s.

In a dynamic setting, bounds of this type are particularly desirable because they allow for changing drift rate and guarantee quick recovery from discrete changes. For instance, suppose a number of discrete switches (large parameter changes or changes in drift rate) occur at times t_i. Then, since the bound holds on every subinterval [t_i, t_{i+1} - 1] simultaneously, the total expected dynamic regret over the whole horizon remains low, while simultaneously an appropriate learning rate is achieved on each subinterval.

Now, reconsider the dynamic metric learning problem of Section II. It is reasonable to assume that the transformed distance between any two points is bounded, implying that the metrics M_t and the data are bounded and hence that the loss and its gradient are bounded. We can then show the COMID learners in the RICE ensemble have low dynamic regret. The proof of the following result is given in the appendix.

Corollary 1 (Dynamic Regret: Metric Learning COMID).

Let the sequence {M̂_t} be generated by (4), and let {M_t} be an arbitrary comparator sequence with γ = Σ_t ‖M_{t+1} - M_t‖_F. Then using a constant learning rate η_t = η gives

R_COMID({M_t}_{t=1}^T) ≤ (1/η)( D_max + 2 φ_max γ ) + (η/2) G² T,   (10)

where D_max bounds the Bregman divergence, φ_max bounds ‖∇ψ‖, and G bounds the gradient of the loss. Setting η = η_0/√T,

R_COMID({M_t}_{t=1}^T) ≤ √T ( (D_max + 2 φ_max γ)/η_0 + η_0 G²/2 )   (11)
                       = O( (1 + γ) √T ).   (12)

Since the COMID learners have low dynamic regret on the metric learning problem, we can apply the OCELAD framework to the RICE ensemble.

Theorem 2 (Strongly Adaptive Dynamic Regret of RICE-OCELAD applied to metric learning).

Let {M_t} be any sequence of metrics, and on each interval I define γ_I = Σ_{t ∈ I} ‖M_{t+1} - M_t‖_F. Let B be the RICE ensemble with η_I ∝ 1/√|I|. Then the RICE-OCELAD metric learning algorithm (Algorithm 2) satisfies

R_{RICE-OCELAD}({M_t}_{t ∈ I}) ≤ C (1 + γ_I) √|I| + C' √|I| log s   (13)

for every subinterval I = [q, s] simultaneously, where C and C' are constants.

VI Results

VI-A Synthetic Data

We run our metric learning algorithms on a synthetic dataset undergoing different types of simulated metric drift. We create a synthetic 2000-point dataset with 2 independent three-way clusterings (denoted as clusterings A and B) of the points when projected onto orthogonal 3-dimensional subspaces of R^25. The clusterings are formed as 3-D Gaussian blobs with cluster assignment probabilities .5, .3, and .2. The remaining 19 coordinates are filled with isotropic Gaussian noise. Specifically, datapoints are generated as

x_i = [ a_i^T, b_i^T, n_i^T ]^T,   a_i ~ N(μ_{k_A(i)}, Σ_{k_A(i)}),   b_i ~ N(μ_{k_B(i)}, Σ_{k_B(i)}),   n_i ~ N(0, σ² I_19),

where a_i, b_i, n_i are independent, σ is the standard deviation of the noise dimensions, and the μ_k, Σ_k are the means and covariances associated with each blob. The label of x_i under clustering A is k_A(i), and the label of x_i under clustering B is k_B(i).
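A sketch of this construction; the blob centers and unit covariances used here are illustrative assumptions, since their exact values are not specified in this excerpt.

```python
import numpy as np

def make_dataset(n=2000, noise_std=1.0, seed=0):
    """n points in R^25: two independent 3-way Gaussian-blob clusterings in
    orthogonal 3-D subspaces (A and B) plus 19 isotropic noise dimensions.
    Cluster probabilities follow the text; blob centers are assumed."""
    rng = np.random.default_rng(seed)
    centers = 4.0 * np.eye(3)                         # assumed blob centers
    labels_a = rng.choice(3, size=n, p=[0.5, 0.3, 0.2])
    labels_b = rng.choice(3, size=n, p=[0.5, 0.3, 0.2])
    a = centers[labels_a] + rng.normal(size=(n, 3))   # clustering A subspace
    b = centers[labels_b] + rng.normal(size=(n, 3))   # clustering B subspace
    noise = noise_std * rng.normal(size=(n, 19))      # isotropic noise dims
    return np.hstack([a, b, noise]), labels_a, labels_b
```

Because the two label sets are drawn independently, a metric adapted to clustering A carries no information about clustering B, which is what makes the partition switches below challenging.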

We create a scenario exhibiting nonstationary drift, combining continuous drifts and shifts between the two clusterings (A and B). To simulate continuous drift, at each time step we perform a random rotation of the dataset, i.e.

x_t^{(i)} = R_t x_{t-1}^{(i)},

where R_t is a random walk (analogous to Brownian motion) on the group of 25 × 25 rotation matrices, with the initial rotation chosen uniformly at random. The time-varying rate of change (random walk stepsize) chosen for R_t is shown in Figure 5, with the small changes at each time step accumulating to major changes over longer intervals. For the first interval, partition A is used and the dataset is static; no drift occurs (R_t = I). Then, the partition is changed to B, followed by an interval of first moderate, then fast, and then moderate drift. Finally, the partition reverts back to A, followed by slow drift. The similarity labels y_t are dictated by the partition active at time t. In order to achieve good performance, the online metric learners must be able to track both large discrete changes (change in clustering) as well as the nonstationary gradual drift.
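One common way to realize such a random walk on the rotation group is to multiply by the exponential of a small random skew-symmetric matrix at each step; this particular parameterization is an assumption, as the paper's exact construction is not given in this excerpt.

```python
import numpy as np

def _expm_taylor(S, terms=30):
    """Matrix exponential by truncated Taylor series (adequate for small-norm S)."""
    out = np.eye(S.shape[0])
    term = np.eye(S.shape[0])
    for k in range(1, terms + 1):
        term = term @ S / k
        out = out + term
    return out

def rotation_walk_step(R, step_size, rng):
    """One random-walk step on the rotation group: left-multiply by
    exp(step_size * S), with S a random skew-symmetric direction, so the
    result remains a rotation matrix (a Brownian-motion analogue)."""
    n = R.shape[0]
    A = rng.normal(size=(n, n))
    S = (A - A.T) / 2          # skew-symmetric tangent direction
    return _expm_taylor(step_size * S) @ R
```

Since the exponential of a skew-symmetric matrix is orthogonal with determinant one, the iterates stay on the rotation group, and step_size plays the role of the drift rate plotted in Figure 5.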

We generate a series of constraints from random pairs of points (x_t, z_t) in the dataset, running each experiment with 3000 random trials. For each experiment conducted in this section, we evaluate performance using two metrics. We plot the k-nearest neighbor error rate, using the learned embedding at each time point, averaged over all trials. We quantify the clustering performance by plotting the empirical probability that the normalized mutual information (NMI) of the K-means clustering of the unlabeled data points in the learned embedding at each time point exceeds 0.8 (out of a possible 1). Clustering NMI, rather than k-NN classification performance, is a more intuitive and realistic indicator of metric learning performance, particularly when finding a relevant embedding in which the clusters are well separated is the primary goal.

Fig. 6: Number of tweets per day over the month of November 2015 for four of the US presidential candidates’ political hashtags specified in the legend.
(a) OCELAD Metric Learning
(b) Time-windowed PCA
Fig. 7: Embeddings of political tweets during the last week of November 2015. Shown are the 2-D embeddings using the OCELAD learned metric from the midpoint of the week (a), and using PCA (b). Note the much more distinct groupings by candidate in the OCELAD metric embedding. Using 3-D embeddings, the LOO k-NN error rate is 7.8% in the OCELAD metric embedding and 60.6% in the PCA embedding.
(a) Beginning of the month (Nov 2): Aftermath of Oct 28 Republican debate and revelations from sister of Benghazi victim. Uniteblue campaign to unite Democrats.
(b) End of the month (Nov 30): Continued Benghazi scandal discussion, conservative criticism of University of Missouri protests, Sen. Cruz IRS/tax proposals.
(c) Hours before Nov 10 Republican debate: Discussion of Clinton Benghazi scandal, media bias, Bernie Sanders.
(d) Day after Nov 10 Republican debate: Importance of term “debate”, Sen. Cruz’s proposals for a flat tax and the abolishing of the IRS, and references to Trump “yuge” and Ben Carson.
Fig. 8: Changing metrics on political tweets. Shown are scatter plots of the 60 largest contributions of words to the first two learned metric components. The greater the distance of a word from the origin (marked as a red dot), the larger its contribution to the metric. For readability, words with distance from the origin greater than a threshold have been moved inward. Note the changes in relevance and radial groupings of words before and after the Nov 10 Republican debate, and across the entire month.
(a) Sister of Benghazi victim spoke out Oct 23, leading to higher relevance early in November.
(b) Accusations of media bias during and after the CNBC Republican debate on Oct 28, but not at the FoxNews Republican debate on Nov 10. Increases in “debate”, “reaction”, loosely matching the aftermath of those debates, as well as the Nov 14 Democrat debate.
(c) The campaign known as Uniteblue attempted to unify the Democratic party, and ugly sweater promotions for Sanders occurred later in the month. “Uniteblue,” “feelthebern,” and “stophillary” uptick in relevance during Democratic debate.
(d) On Nov 9 a video of a University of Missouri professor blocking a journalist drew increased attention to liberal protests at that university, related to the rise of the “libcrib” and “mizzou” terms. Cruz policy proposals to limit gun control (“gunsense”) and abolish the IRS (“abolish”) become informative around and following the Nov 10 Republican debate.
Fig. 9: Alternate view of the Figure 8 experiment, showing as a function of time the relevance (distance from the origin in the embedding) of selected terms appearing in Figure 8. The rapid changes in several terms confirm the ability of OCELAD to rapidly adapt the metric to nonstationary changes in the data.

In our results, we consider RICE-OCELAD, SAOL with COMID [15], nonadaptive COMID [5], LMNN (batch) [7], and online ITML [10].

For RICE-OCELAD, we set the base interval length to one time step throughout, and set the base learning rate via cross-validation in a separate scenario with no drift, emphasizing that the parameters do not need to be tuned for different drift rates. All parameters for the other algorithms were set via cross-validation, so as to err on the side of optimism relative to a truly online scenario. For nonadaptive COMID, we set the high learning rate using cross-validation under moderate drift, and the low learning rate via cross-validation in the case of no drift. The results are shown in Figure 5. Online ITML fails due to its bias against low-rank solutions [10], and the batch method and the low-learning-rate COMID fail due to an inability to adapt. The high-learning-rate COMID does well at first, but because it is optimized for slow drift it can neither adapt to the changes in drift rate nor recover quickly from the two partition changes. SAOL, which is designed for mildly varying bounded loss functions without slow drift and does not use retro-initialized learners, fails completely in this setting (NMI is essentially zero throughout). RICE-OCELAD, on the other hand, adapts well throughout the entire interval, as predicted by the theory.

Vi-B Tracking Metrics on Twitter

As noted in the introduction, social media represents a type of highly nonstationary, high dimensional and richly clustered data. We consider political tweets in November 2015, during the early days of the United States presidential primary, and attempt to learn time-varying metrics on the TF-IDF features.

We first extracted all available tweets containing the hashtags #trump2016, #cruz2016, #bernie2016, and #hillary2016, representing the two most successful primary candidates from each of the two major parties. We then removed all hashtags from the tweets and extracted 194 term frequency–inverse document frequency (TF-IDF) stemmed-word features. TF-IDF features have been applied to various problems in Twitter data [23, 24, 25]. This provided us with a time series of hashtag-labeled 194-dimensional TF-IDF feature vectors. We generated pairwise comparisons by considering time-adjacent tweets, labeling a pair as similar if both tweets shared the same candidate hashtag and dissimilar if they had different candidate hashtags. This created a time series of 13,600 pairwise comparisons, with highly nonstationary time intervals between comparisons, strongly depending on time of day, day of the week, and various other factors.
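The pair-construction scheme just described can be sketched in a few lines. The function and variable names below are illustrative rather than taken from any released code; `tweets` is assumed to be a time-ordered list of (feature vector, candidate hashtag) pairs.

```python
def make_pairwise_comparisons(tweets):
    """Pair each tweet with its time-adjacent successor; label the pair
    +1 (similar) if the candidate hashtags match, -1 (dissimilar) otherwise."""
    comparisons = []
    for (x_t, tag_t), (x_next, tag_next) in zip(tweets, tweets[1:]):
        label = 1 if tag_t == tag_next else -1
        comparisons.append((x_t, x_next, label))
    return comparisons
```

Because only adjacent tweets are paired, the stream of comparisons inherits the nonstationary arrival pattern of the tweets themselves.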

We ran RICE-OCELAD metric learning on this time series of pairwise comparisons, with the base interval set at length 1 and base learning rate set at 1. This emphasizes RICE-OCELAD’s complete freedom from tuning parameters. To illustrate the learned embedding on the TF-IDF stems, Figure 7 shows the projection of tweets from the last week of the month onto the first two principal components of the learned metric from the midpoint of the last week. Note the clear separation into clusters by political hashtag as desired, with a LOO-kNN error rate of 7.8% in the learned embedding. The standard PCA embedding, on the other hand, is highly disorganized, and suffers a 60.6% LOO-kNN error rate in the same scenario.
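The LOO-kNN error rates above score an embedding by classifying each embedded tweet from its nearest neighbors, with the point itself held out. A minimal NumPy sketch of such an evaluator follows; the function name and the default k are our own choices, not specified in the paper.

```python
import numpy as np

def loo_knn_error(X, y, k=5):
    """Leave-one-out k-NN error: classify each embedded point by majority
    vote of its k nearest neighbors, excluding the point itself."""
    n = len(y)
    # pairwise squared Euclidean distances in the embedding space
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # a point may not vote for itself
    errors = 0
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]          # indices of k nearest neighbors
        pred = np.bincount(y[nbrs]).argmax()  # majority-vote label
        errors += int(pred != y[i])
    return errors / n
```

Here X would hold the tweets projected onto the learned metric's principal components (or, for the baseline, the standard principal components), and y the candidate-hashtag labels.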

Having confirmed that our approach successfully learns the relevant embedding, we illustrate how the learned metric evolves throughout the month in response to changing events. For each learned metric, we computed the first two principal component vectors. For each feature stem, we found its corresponding entries in these two vectors and used them as coordinates in a scatter plot, creating the word/stem scatter plots of Figure 8. By way of interpretation, the scatter-plot location of a word/stem is the point in the 2D embedding to which a tweet containing only that word would be mapped, and it quantifies the contribution of each word/stem to the metric.
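Concretely, if M denotes the learned Mahalanobis matrix at a given time, the scatter coordinates can be read off its eigendecomposition. This is a sketch; the square-root-of-eigenvalue scaling is our assumption about how to make "contribution to the metric" comparable across the two components.

```python
import numpy as np

def word_coordinates(M):
    """Map each feature (word stem) to 2D scatter coordinates: its entries
    in the top two principal component vectors of the learned metric M,
    scaled by the square roots of the corresponding eigenvalues."""
    vals, vecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    u1, u2 = vecs[:, -1], vecs[:, -2]  # top two principal components
    return np.column_stack((np.sqrt(vals[-1]) * u1,
                            np.sqrt(vals[-2]) * u2))
```

A word's distance from the origin in this plot then reflects how strongly it weights the dominant directions of the metric.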

Figure 8 shows word/stem scatter plots for the learned metrics at the beginning and end of the month, and on the day of and the day after the televised November 10 Republican debate. For clarity, only the 60 terms most relevant to the metric are shown. Observe the changing structure of the term embeddings, with new terms arising and receding as the discussion evolves. An alternate view of this experiment is shown in Figure 9, which plots the changing relevance of selected individual terms throughout the month. The captions note explanatory contextual information that can be found in news articles from the period. Time-varying structure is evident in both figures: Figure 8 emphasizes how similar embeddings of words indicate similar meaning/relevance to a candidate, and Figure 9 emphasizes the nonstationary emergence and recession of clustering-relevant terms as the discussion evolves in response to news events.

The ability of RICE-OCELAD metric learning, without parameter tuning or specialized feature extraction, to adapt the embedding and identify terms and their relevance to the discussion in this highly nonstationary environment confirms the power of the proposed methodology. By tracking a task-relevant, adaptive, time-varying metric and low-dimensional embedding of the data, RICE-OCELAD yields significant insight into complex, nonstationary data sources.

Vii Conclusion

Learning a metric on a complex dataset enables both unsupervised methods and human analysts to home in on the problem of interest while de-emphasizing extraneous information. When the problem of interest or the data distribution is nonstationary, however, the optimal metric can be time-varying. We considered the problem of tracking a nonstationary metric and presented an efficient, strongly adaptive online algorithm (OCELAD) that combines the outputs of any black-box learning ensemble (such as RICE) and has strong theoretical regret guarantees. The performance of our algorithm was evaluated on both synthetic and real datasets, demonstrating its ability to learn and adapt quickly in the presence of changes both in the clustering of interest and in the underlying data distribution.

Potential directions for future work include the learning of more expressive metrics beyond the Mahalanobis metric, the incorporation of unlabeled data points in a semi-supervised learning framework [26], and the incorporation of an active learning framework to select which pairs of data points to obtain labels for at any given time [27].

Appendix A OCELAD - Strongly Adaptive Dynamic Regret

We will prove Theorem 1, giving strongly adaptive dynamic regret bounds. The bound for RICE-OCELAD applied to metric learning follows by combining this general result with Corollary 1.

Define as a function of

(14)

and set

(15)

Note that where is the indicator function for the interval , and assume that , i.e. the estimated regret is bounded, where the bound need not be known.

Recall our definition of the set of intervals whose lengths are proportional to powers of two, arranged as a dyadic partition of the time axis. The first interval of each length starts as indicated in Figure 3, and additional intervals of the same length cover the rest of the time axis.
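For concreteness, one way to enumerate such a dyadic interval set over time steps 1..T can be sketched as follows; the exact starting index of each scale is a convention, here chosen in the spirit of the construction of Daniely et al. [15].

```python
def dyadic_intervals(T):
    """Enumerate dyadic intervals over time steps 1..T: at scale i, consecutive
    intervals of length 2**i, the first of which starts at t = 2**i."""
    intervals = []
    i = 0
    while 2 ** i <= T:
        length = 2 ** i
        start = length
        while start + length - 1 <= T:  # keep only fully contained intervals
            intervals.append((start, start + length - 1))
            start += length
        i += 1
    return intervals
```

With this construction, only logarithmically many intervals end at any given time step, which is the counting fact exploited in the proof of Lemma 1.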

We first prove a pair of lemmas.

Lemma 1.

for all .

Proof.

For all , by the definition of the set of dyadic intervals , we have that the number of intervals in with endpoint is given by , where indicates cardinality. Thus summing over all intervals in the dyadic set of intervals ,

Then

Suppose that . Furthermore, note that

since is convex. Thus

Since , the lemma follows by induction.

Lemma 2.

for every .

Proof.

Fix . Recall that

Since and for all ,

(16)

where we have used . By Lemma 1 we have

so

Combining with (16) and dividing by ,

since and . Since , this implies

Define the restriction of to an interval as . Note the following lemma from [15]:

Lemma 3.

Consider the arbitrary interval . Then, the interval can be partitioned into two finite sequences of disjoint and consecutive intervals, given by and , such that

This enables us to extend the bounds to every arbitrary interval and thus complete the proof.

Let be the partition described in Lemma 3. Then

(17)

By Lemma 2 and (9),

since by definition. By Lemma 3,

This bounds the first term of the right hand side of Equation (17). The bound for the second term can be found in the same way. Thus,

Since this holds for all , this completes the proof.

Appendix B Online DML Dynamic Regret

In this section, we derive the dynamic regret of the COMID metric learning algorithm. Recall that the COMID algorithm is given by

(18)

where is any Bregman divergence and is the learning rate parameter. From [21] we have:

Theorem 3.

Let the sequence , be generated via the COMID algorithm, and let be an arbitrary sequence in . Then using gives a dynamic regret

(19)
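Under the squared-Frobenius Bregman divergence, the COMID update (18) reduces to a subgradient step followed by projection onto the PSD cone. The sketch below uses a margin hinge loss on pairwise Mahalanobis distances in the spirit of [5]; the exact loss convention and the margin parameter mu are our assumptions, not taken from the paper.

```python
import numpy as np

def comid_step(M, x, z, y, eta, mu=1.0):
    """One COMID update for Mahalanobis metric learning, sketched with the
    squared-Frobenius Bregman divergence, under which composite mirror
    descent reduces to a subgradient step plus a PSD projection.
    y = +1 for a similar pair, -1 for a dissimilar pair."""
    u = (x - z).reshape(-1, 1)
    d = (u.T @ M @ u).item()        # squared Mahalanobis distance
    if 1 + y * (d - mu) > 0:        # hinge loss is active for this pair
        M = M - eta * y * (u @ u.T)  # subgradient step
    # project back onto the PSD cone by clipping negative eigenvalues
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.clip(vals, 0, None)) @ vecs.T
```

Iterating this step over the stream of comparisons, with a nonincreasing learning rate, yields one COMID learner of the RICE ensemble.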

Using a nonincreasing learning rate , we can then prove a bound on the dynamic regret for a quite general set of stochastic optimization problems.

Applying this to our problem, we have

For being the hinge loss and ,

The other two quantities are guaranteed to exist and depend on the choice of Bregman divergence and . Thus,

Corollary 2 (Dynamic Regret: Metric Learning COMID).

Let the sequence be generated by (18), and let be an arbitrary sequence with . Then using gives

(20)

and setting ,

(21)
(22)

Corollary 2 is a bound on the regret relative to the batch estimate of that minimizes the total batch loss subject to a bounded variation . Also note that setting gives the same bound as (22).

In other words, we pay a linear penalty on the total amount of variation in the underlying parameter sequence. From (22), it can be seen that the bound-minimizing increases with increasing , indicating the need for an adaptive learning rate.

For comparison, if the metric is in fact static, then standard stochastic mirror descent results [21] give:

Theorem 4 (Static Regret).

If and , then

(23)

References

  • [1] C. M. Bishop, Pattern Recognition and Machine Learning.   Springer, 2006.
  • [2] T. Hastie, R. Tibshirani, J. Friedman, and J. Franklin, “The elements of statistical learning: data mining, inference and prediction,” The Mathematical Intelligencer, vol. 27, no. 2, pp. 83–85, 2005.
  • [3] J. A. Lee and M. Verleysen, Nonlinear dimensionality reduction.   Springer Science & Business Media, 2007.
  • [4] B. Kulis, “Metric learning: A survey.” Foundations and Trends in Machine Learning, vol. 5, no. 4, pp. 287–364, 2012.
  • [5] G. Kunapuli and J. Shavlik, “Mirror descent for metric learning: a unified approach,” in Machine Learning and Knowledge Discovery in Databases.   Springer, 2012, pp. 859–874.
  • [6] E. P. Xing, M. I. Jordan, S. Russell, and A. Y. Ng, “Distance metric learning with application to clustering with side-information,” in Advances in Neural Information Processing Systems, 2002, pp. 505–512.
  • [7] K. Q. Weinberger, J. Blitzer, and L. K. Saul, “Distance metric learning for large margin nearest neighbor classification,” in Advances in Neural Information Processing Systems, 2005, pp. 1473–1480.
  • [8] K. Q. Weinberger and L. K. Saul, “Fast solvers and efficient implementations for distance metric learning,” in ICML, 2008, pp. 1160–1167.
  • [9] J. Goldberger, G. E. Hinton, S. T. Roweis, and R. Salakhutdinov, “Neighbourhood components analysis,” in Advances in Neural Information Processing Systems, 2004, pp. 513–520.
  • [10] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon, “Information-theoretic metric learning,” in ICML, 2007, pp. 209–216.
  • [11] A. Bellet, A. Habrard, and M. Sebban, “A survey on metric learning for feature vectors and structured data,” arXiv preprint arXiv:1306.6709, 2013.
  • [12] L. Yang and R. Jin, “Distance metric learning: A comprehensive survey,” Michigan State University, vol. 2, 2006.
  • [13] N. Cesa-Bianchi and G. Lugosi, Prediction, learning, and games.   Cambridge University Press, 2006.
  • [14] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari, “Composite objective mirror descent,” in COLT.   Citeseer, 2010, pp. 14–26.
  • [15] A. Daniely, A. Gonen, and S. Shalev-Shwartz, “Strongly adaptive online learning,” ICML, 2015.
  • [16] H. B. McMahan and M. Streeter, “Adaptive bound optimization for online convex optimization,” in COLT, 2010.
  • [17] J. C. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” in COLT, 2010.
  • [18] H. B. McMahan, “Analysis techniques for adaptive online learning,” arXiv preprint arXiv:1403.3465, 2014.
  • [19] M. Herbster and M. K. Warmuth, “Tracking the best expert,” Machine Learning, vol. 32, no. 2, pp. 151–178, 1998.
  • [20] E. Hazan and C. Seshadhri, “Adaptive algorithms for online decision problems,” in Electronic Colloquium on Computational Complexity (ECCC), vol. 14, no. 088, 2007.
  • [21] E. Hall and R. Willett, “Online convex optimization in dynamic environments,” Selected Topics in Signal Processing, IEEE Journal of, vol. 9, no. 4, pp. 647–662, June 2015.
  • [22] A. Blum and Y. Mansour, “From external to internal regret,” in Learning theory.   Springer, 2005, pp. 621–636.
  • [23] A. Signorini, A. M. Segre, and P. M. Polgreen, “The use of Twitter to track levels of disease activity and public concern in the U.S. during the Influenza A H1N1 pandemic,” PLOS ONE, vol. 6, no. 5, pp. 1–10, May 2011. [Online]. Available: http://dx.doi.org/10.1371%2Fjournal.pone.0019467
  • [24] E. Antoine, A. Jatowt, S. Wakamiya, Y. Kawai, and T. Akiyama, “Portraying collective spatial attention in Twitter,” in Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’15.   New York, NY, USA: ACM, 2015, pp. 39–48. [Online]. Available: http://doi.acm.org/10.1145/2783258.2783418
  • [25] S. Petrović, M. Osborne, and V. Lavrenko, “Streaming first story detection with application to Twitter,” in Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.   Association for Computational Linguistics, 2010, pp. 181–189.
  • [26] M. Bilenko, S. Basu, and R. J. Mooney, “Integrating constraints and metric learning in semi-supervised clustering,” in ICML, 2004, p. 11.
  • [27] B. Settles, “Active learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 6, no. 1, pp. 1–114, 2012.