1 Introduction
Multiple-Instance Learning (MIL) is a fundamental framework for supervised learning with a wide range of applications, such as prediction of molecule activity and image classification. Since the notion of MIL was first proposed by
Dietterich et al. (1997), MIL has been extensively studied in both theoretical and practical aspects (Gärtner et al., 2002; Andrews et al., 2003; Sabato and Tishby, 2012; Zhang et al., 2013; Doran and Ray, 2014; Carbonneau et al., 2018). A standard MIL setting is described as follows: a learner receives sets called bags, each of which contains multiple instances. In the training phase, each bag is labeled but instances are not labeled individually. The goal of the learner is to obtain a hypothesis that predicts the labels of unseen bags correctly. (Although there are settings where instance-label prediction is also considered, we focus only on bag-label prediction in this paper.) One of the most common hypotheses used in practice has the following form:
(1)  $h_u(B) = \max_{x \in B} \langle u, \Phi(x) \rangle,$
where $\Phi$ is a feature map into a Hilbert space and
$u$ is a feature vector which we call a
shapelet. In many applications, $u$ is interpreted as a particular "pattern" in the feature space and the inner product as the similarity of $\Phi(x)$ from $u$. Note that we use the term "shapelets" following the terminology of Shapelet Learning, a framework for time-series classification, although they are often called "concepts" in the MIL literature. Intuitively, this hypothesis evaluates a given bag by the maximum similarity between the shapelet and the instances in the bag. The Multiple-Instance Support Vector Machine (MI-SVM) proposed by
Andrews et al. (2003) is a widely used algorithm that uses this hypothesis class and learns $u$. It is well known that MIL algorithms using this hypothesis class perform well in practice on various multiple-instance datasets. Moreover, a generalization error bound for this hypothesis class is given by Sabato and Tishby (2012). However, in some domains such as image recognition and document classification, the hypothesis class of (1) is said to be not effective enough (see, e.g., Chen et al., 2006). To employ MIL in such domains more effectively, Chen et al. (2006) propose to use a convex combination over a finite set $U$ of shapelets, which is defined from all instances that appear in the training sample,
(2)  $g(B) = \sum_{u \in U} d_u \max_{x \in B} \langle u, \Phi(x) \rangle,$
where
$d = (d_u)_{u \in U}$ is a probability vector over
$U$. They demonstrate that this hypothesis with the Gaussian kernel performs well in image recognition. However, no theoretical justification is known for the hypothesis class of type (2) with the finite set $U$ made from the empirical bags. By contrast, for sets of infinitely many shapelets with bounded norm, the generalization bounds of Sabato and Tishby (2012) apply to the hypothesis class (2) as well, but their result does not provide a practical formulation such as MI-SVM.
1.1 Our Contributions
In this paper, we propose an MIL formulation with the hypothesis class (2) for sets of infinitely many shapelets. More precisely, we formulate a norm-regularized soft margin maximization problem to obtain linear combinations of shapelet-based hypotheses.
Then, we design an algorithm based on Linear Programming Boosting
(LPBoost, Demiriz et al., 2002) that solves the soft margin optimization problem via a column generation approach. Although the subproblems (the weak learning problems) become optimization problems over an infinite-dimensional space, we show that an analogue of the representer theorem holds for them, which allows us to reduce each subproblem to a non-convex optimization problem (a difference-of-convex program, DC program for short) over a finite-dimensional space. While it is difficult to solve the subproblems exactly due to the non-convexity, various techniques (e.g., Tao and Souad, 1988; Yu and Joachims, 2009) have been investigated for DC programs, and good approximate solutions can be found efficiently in many practical cases. Furthermore, we prove a generalization error bound for hypothesis class (2) with infinitely large sets of shapelets. In general, our bound is incomparable with those of Sabato and Tishby (2012), but ours has a better rate in terms of the sample size.
We introduce an important application of our result: shapelet learning for time-series classification (details are shown later). In fact, in the time-series domain, most shapelet learning algorithms have been designed heuristically. As a result, our proposed algorithm becomes the first algorithm for shapelet learning in time-series classification with a guarantee on theoretical generalization performance.
Finally, the experimental results show that our approach performs favorably against a baseline for shapelet-based time-series classification tasks and outperforms baselines for several MIL tasks.
1.2 Comparison to Related Work
There are many MIL algorithms with hypothesis classes different from (1) or (2) (e.g., Gärtner et al., 2002; Zhang et al., 2006; Chen et al., 2006). Many of them adopt approaches to the bag-labeling hypothesis that differ from shapelet-based classifiers (e.g., Zhang et al. (2006) used a Noisy-OR based hypothesis, and Gärtner et al. (2002) proposed a new kernel called the set kernel).
Sabato and Tishby (2012) proved generalization bounds of hypothesis classes for MIL, including those of (1) and (2) with infinitely large sets of shapelets. They also proved the PAC-learnability of the class (1) using a boosting approach under some technical assumptions. Their boosting approach differs from ours in that they assume the labels are consistent with some hypothesis of the form (1), while we consider arbitrary distributions over bags and labels.
1.3 Connection between MIL and Shapelet Learning for Time Series Classification
Here we briefly mention that MIL with type (2) hypotheses is closely related to Shapelet Learning (SL), a framework for time-series classification that has been extensively studied (Ye and Keogh, 2009; Keogh and Rakthanmanon, 2013; Hills et al., 2014; Grabocka et al., 2014)
in parallel to MIL. SL is a notion of learning with a particular method of feature extraction, which is defined by a finite set
of real-valued "short" sequences called shapelets and a similarity measure (not necessarily a Mercer kernel) in the following way. A time series can be identified with a bag consisting of all its subsequences of a given length. Then, the feature of the time series is a vector of a fixed dimension, regardless of the length of the time series. When we employ a linear classifier on top of the features, we obtain a hypothesis of the form
(3)  $g(T) = \sum_{j=1}^{n} w_j \max_{t \in B_T} \mathrm{sim}(z_j, t)$ (with shapelets $z_1, \dots, z_n$, weights $w_j$, and $B_T$ the bag of subsequences of $T$),
which is essentially the same form as (2), except that finding good shapelets is part of the learning task, as well as finding a good weight vector. This is one of the most successful approaches to SL (Hills et al., 2014; Grabocka et al., 2014, 2015; Renard et al., 2015; Hou et al., 2016), where a typical choice of the similarity is the negative Euclidean distance, $\mathrm{sim}(z, t) = -\|z - t\|_2$. However, almost all existing methods choose shapelets heuristically and have no theoretical guarantee on how good the choice is.
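The feature extraction just described can be sketched in a few lines; this is our own illustration (the Gaussian similarity and all names are assumptions, not code from the paper):

```python
import numpy as np

def time_series_to_bag(series, ell):
    # Identify a time series with the bag of all its length-ell subsequences.
    s = np.asarray(series, dtype=float)
    return [s[i:i + ell] for i in range(len(s) - ell + 1)]

def gaussian_sim(z, t, gamma=1.0):
    # A kernel similarity; behaves like the (negative squared) Euclidean distance.
    return float(np.exp(-gamma * np.sum((z - t) ** 2)))

def shapelet_features(series, shapelets, ell):
    # Feature j = maximal similarity of shapelet z_j over the bag, as in (3).
    bag = time_series_to_bag(series, ell)
    return [max(gaussian_sim(z, t) for t in bag) for z in shapelets]

bag = time_series_to_bag([1, 2, 3, 4], 2)
print([list(x) for x in bag])  # [[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]]
```

The resulting feature vector has one coordinate per shapelet, so its dimension is fixed regardless of the length of the input series.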
Note also that in the SL framework, each of the chosen sequences is called a shapelet, while in this paper we assume that the similarity measure is a kernel, and we call any element of the associated Hilbert space a shapelet (not necessarily the image of an instance under the feature map).
Curiously, although MIL and SL share similar motivations and hypotheses, the relationship between them has not been pointed out before. From the shapelet perspective in MIL, hypothesis (1) is regarded as a "single-shapelet" based hypothesis, and hypothesis (2) as a "multiple-shapelet" based hypothesis. We refer to a linear combination of maximum similarities based on shapelets, such as (2) and (3), as a shapelet-based classifier.
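As a toy illustration of the single-shapelet hypothesis (1) and the convex combination (2) (our own sketch under the linear kernel; function names and data are assumptions):

```python
import numpy as np

def h_u(bag, u):
    # Hypothesis (1): maximum similarity (inner product under the linear
    # kernel) between the shapelet u and the instances of the bag.
    return max(float(np.dot(u, x)) for x in bag)

def g(bag, shapelets, d):
    # Hypothesis (2): convex combination of shapelet-based classifiers,
    # with d a probability vector over the shapelets.
    return sum(dk * h_u(bag, uk) for dk, uk in zip(d, shapelets))

bag = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
print(h_u(bag, np.array([1.0, 0.0])))  # 1.0: best match with the first axis
print(g(bag, [np.array([1.0, 0.0]), np.array([0.0, 1.0])], [0.5, 0.5]))  # 1.5
```

With a nonlinear kernel, the inner product above would be replaced by kernel evaluations, as formalized in the preliminaries.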
2 Preliminaries
Let $X$ be an instance space. A bag $B$ is a finite set of instances chosen from $X$. The learner receives a sequence of labeled bags called a sample, where each labeled bag is independently drawn according to some unknown distribution over bags and labels. Let $X_S$ denote the set of all instances that appear in the sample $S$. Let $K$ be a kernel over $X$, which is used to measure the similarity between instances, and let $\Phi$ denote a feature map associated with the kernel $K$ into a Hilbert space $H$, that is, $K(x, x') = \langle \Phi(x), \Phi(x') \rangle$ for instances $x, x'$, where $\langle \cdot, \cdot \rangle$ denotes the inner product over $H$. The norm induced by the inner product is denoted by $\|\cdot\|$ and defined as $\|u\| = \sqrt{\langle u, u \rangle}$ for $u \in H$.
For each $u \in H$, which we call a shapelet, we define a shapelet-based classifier, denoted by $h_u$, as the function that maps a given bag to the maximum of the similarity scores between the shapelet $u$ and the instances in the bag. More specifically,
$h_u(B) = \max_{x \in B} \langle u, \Phi(x) \rangle.$
For a set $U \subseteq H$, we define the class of shapelet-based classifiers as $H_U = \{h_u : u \in U\}$,
and let $\mathrm{conv}(H_U)$ denote the set of convex combinations of shapelet-based classifiers in $H_U$.
The goal of the learner is to find a hypothesis $g \in \mathrm{conv}(H_U)$ whose generalization error is small. Note that since the final hypothesis is invariant to any scaling of the shapelets, we assume without loss of generality that the shapelets have norm at most one.
For $\rho > 0$, let $\widehat{E}_\rho(g)$ denote the empirical margin loss of $g$ over $S$, that is, the fraction of examples $(B_i, y_i)$ with $y_i g(B_i) < \rho$.
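Read concretely, the empirical margin loss is just the fraction of training bags whose signed margin falls below the threshold. A minimal sketch (the scoring function and data are our own toy choices):

```python
def empirical_margin_loss(g, sample, rho):
    # Fraction of labeled bags (B, y) whose margin y * g(B) is below rho.
    return sum(1 for B, y in sample if y * g(B) < rho) / len(sample)

# Toy example: score a bag by its largest element.
g = lambda B: max(B)
sample = [([0.9, 0.2], +1), ([0.1, 0.3], -1), ([0.4, 0.1], +1)]
print(empirical_margin_loss(g, sample, rho=0.5))  # 2/3: two bags miss the margin
```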
3 Optimization Problem Formulation
In this paper we formulate the problem as soft margin maximization with 1-norm regularization, which ensures a generalization bound for the final hypothesis (see, e.g., Demiriz et al., 2002). Specifically, the problem is formulated as a linear program (over infinitely many variables) as follows:
(4)  $\max_{\rho,\, d,\, \xi}\ \rho - \frac{1}{\nu m} \sum_{i=1}^{m} \xi_i$
sub. to  $y_i \int_{u \in U} h_u(B_i)\, d\, d(u) \ge \rho - \xi_i,\ \ \xi_i \ge 0 \quad (i = 1, \dots, m)$, with $d$ a distribution over $U$,
where $\nu \in (0, 1]$ is a parameter. To avoid the integral over the Hilbert space, it is convenient to consider the dual form:
(5)  $\min_{\gamma,\, w}\ \gamma$
sub. to  $\sum_{i=1}^{m} w_i y_i h_u(B_i) \le \gamma\ \ (\forall u \in U), \qquad \sum_{i=1}^{m} w_i = 1, \qquad 0 \le w_i \le \frac{1}{\nu m}\ \ (i = 1, \dots, m).$
The dual problem is categorized as a semi-infinite program (SIP) because it contains infinitely many constraints. Note that the duality gap is zero because problem (5) is linear and the optimum is finite (Theorem 2.2 of Shapiro, 2009). We employ column generation to solve the dual problem: solve (5) for a finite subset of the constraints, find a shapelet whose corresponding constraint is maximally violated by the current solution (the column generation part), and repeat with the new constraint added until a stopping criterion is met. In particular, we use LPBoost (Demiriz et al., 2002), a well-known and practically fast column generation algorithm. Since the solution is expected to be sparse due to the 1-norm regularization, the number of iterations is expected to be small.
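For concreteness, the master problem that LPBoost solves over the hypotheses generated so far can be sketched with `scipy.optimize.linprog`. This is our own illustrative implementation (the matrix `H` of hypothesis outputs, the toy data, and all names are assumptions), not the authors' code:

```python
import numpy as np
from scipy.optimize import linprog

def lpboost_master(H, y, nu):
    """Solve the restricted dual (5): minimize gamma subject to
    sum_i w_i * y_i * H[t, i] <= gamma for every generated hypothesis t,
    with w a distribution over bags capped by 1 / (nu * m)."""
    T, m = H.shape
    c = np.zeros(m + 1)
    c[-1] = 1.0                                   # objective: gamma
    A_ub = np.hstack([H * y, -np.ones((T, 1))])   # edge of each hypothesis - gamma <= 0
    b_ub = np.zeros(T)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i w_i = 1
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0 / (nu * m))] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Two generated hypotheses evaluated on four bags with labels y.
H = np.array([[0.9, -0.8, 0.7, -0.6],
              [-0.2, 0.9, 0.1, 0.8]])
y = np.array([1.0, -1.0, 1.0, -1.0])
w, gamma = lpboost_master(H, y, nu=0.5)
```

The weak learner is then asked for a new shapelet whose constraint is maximally violated under the returned weights `w`, and the loop repeats until no sufficiently violated constraint remains.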
Following the terminology of boosting, we refer to the column generation part as weak learning. In our case, weak learning is formulated as the following optimization problem:
(6)  $\max_{u :\, \|u\| \le 1}\ \sum_{i=1}^{m} w_i y_i \max_{x \in B_i} \langle u, \Phi(x) \rangle.$
Thus, we need to design a weak learner that solves (6) for a given sample weighted by $w$. It seems impossible to solve this directly because we can access the feature map only through the associated kernel. Fortunately, we prove a version of the representer theorem, given below, which makes (6) tractable.
Theorem 1 (Representer Theorem)
The solution of (6) can be written as $u = \sum_{j=1}^{n} \alpha_j \Phi(x_j)$ for some real numbers $\alpha_1, \dots, \alpha_n$, where $x_1, \dots, x_n$ are the instances appearing in the sample $S$.
Our theorem can be derived by an application of the standard representer theorem (see, e.g., Mohri et al., 2012). Intuitively, we prove the theorem by decomposing the optimization problem (6) into a number of subproblems, so that the standard representer theorem can be applied to each of them. The details of the proof are given in the supplementary materials. Note that Theorem 1 justifies a simple heuristic common in the literature: choosing shapelets from among the instances in the sample.
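A practical consequence of Theorem 1 is that the learned shapelet never needs to be represented explicitly: once $u = \sum_j \alpha_j \Phi(x_j)$, the hypothesis can be evaluated through kernel calls alone. A minimal sketch of ours (shown with the linear kernel; all names are assumptions):

```python
import numpy as np

def h_alpha(bag, alphas, support, kernel):
    # Evaluate h_u(B) = max_{x in B} <u, Phi(x)> for u = sum_j alphas[j] * Phi(support[j]),
    # using only kernel evaluations K(x_j, x) = <Phi(x_j), Phi(x)>.
    return max(sum(a * kernel(xj, x) for a, xj in zip(alphas, support))
               for x in bag)

linear = lambda a, b: float(np.dot(a, b))
support = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
bag = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
print(h_alpha(bag, [1.0, 1.0], support, linear))  # max(1.0, 2.0) = 2.0
```

Swapping `linear` for a Gaussian kernel changes nothing else in the evaluation, which is exactly what makes the kernelized formulation practical.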
Theorem 1 says that the weak learning problem can be rewritten in the following tractable form:
OP 1: Weak Learning Problem
sub.to 
Unlike the primal solution $d$, the dual solution $\alpha$ is not expected to be sparse. In order to obtain a more interpretable hypothesis, we propose another formulation of weak learning in which 1-norm regularization is imposed on $\alpha$, so that a sparse solution $\alpha$ will be obtained. In other words, we replace the feasible set with one defined by a bound on the 1-norm $\|\alpha\|_1$ of the coefficient vector.
OP 2: Sparse Weak Learning Problem
sub.to 
Note that when running LPBoost with a weak learner for OP 2, we obtain a final hypothesis that has the same form of generalization bound as the one stated in Theorem 2, which applies to a final hypothesis obtained with a weak learner for OP 1. To see this, take the 1-norm bound small enough that the resulting feasible set is contained in the unit ball; then a generalization bound for the unit-ball class also applies to the restricted class. On the other hand, since the final hypothesis for OP 2 is invariant to the scaling of the bound, the generalization ability is independent of its value.
4 Algorithms
For completeness, we present the pseudo code of LPBoost in Algorithm 1.
For the rest of this section, we describe our algorithms for the weak learners. For simplicity, we collect, for each instance, the relevant kernel values into a vector. Then, the objective function of OP 1 (and OP 2) can be rewritten as
which can be seen as a difference of two convex functions of the variables. Therefore, the weak learning problems are DC programs, and thus we can use a DC algorithm (Tao and Souad, 1988; Yu and Joachims, 2009) to find an approximation of a local optimum. We employ a standard DC algorithm: at each iteration, we linearize the concave term at the current solution and then update the solution by solving the resulting convex optimization problem.
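The DC iteration described above can be sketched generically. The toy objective x² − 2|x| below is our own choice, used only because its convex surrogate has a closed-form minimizer; in the actual weak learning problem the surrogate is the convex subproblem discussed next:

```python
def dc_algorithm(solve_surrogate, subgrad_h, x0, iters=30):
    # Minimize g(x) - h(x) with g, h convex: repeatedly linearize h at the
    # current point and minimize the convex surrogate g(x) - s * x.
    x = x0
    for _ in range(iters):
        s = subgrad_h(x)          # subgradient of h at the current solution
        x = solve_surrogate(s)    # exact minimizer of the convex surrogate
    return x

# Toy DC program: minimize x**2 - 2*abs(x), with local minima at x = +/- 1.
subgrad_h = lambda x: 2.0 if x >= 0 else -2.0   # subgradient of h(x) = 2|x|
solve_surrogate = lambda s: s / 2.0             # argmin_x of x**2 - s*x
print(dc_algorithm(solve_surrogate, subgrad_h, x0=0.3))   # converges to 1.0
```

Each iteration never increases the objective, so the scheme settles at a local optimum; which one it reaches depends on the starting point, as in the example above.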
In addition, the convex subproblems arising for OP 1 and OP 2 can be reformulated as a second-order cone programming (SOCP) problem and an LP problem, respectively, and thus both problems can be solved efficiently. To this end, we introduce a new variable for each negative bag that represents the corresponding maximum term. Then we obtain the following problem, equivalent to the subproblem for OP 1:
(7)  
sub.to  
It is well known that this is an SOCP problem. Moreover, it is clear that the corresponding subproblem for OP 2 can be formulated as an LP problem. We describe the algorithm for OP 2 in Algorithm 2.
5 Generalization Bound of the Hypothesis Class
In this section, we provide a generalization bound of the hypothesis class for various choices of the shapelet set and the feature map.
Let . Let . By viewing each instance
as a hyperplane
, we can naturally define a partition of the Hilbert space by the set of all such hyperplanes. Each cell of the partition is a polyhedron, defined by a minimal set of the hyperplanes. Let the complexity measure used below be the VC dimension of the set of linear classifiers over the finite set of instances.
Then we have the following generalization bound on the hypothesis class of (2).
Theorem 2
Let . Suppose that for any , . Then, for any , with high probability the following holds for any with :
(9) 
where (i) for any , , (ii) if and is the identity mapping (i.e., the associated kernel is the linear kernel), or (iii) if and satisfies the condition that is monotone decreasing with respect to (e.g., the mapping defined by the Gaussian kernel) and , then .
Due to space constraints, we omit the proof; it is given in the supplementary materials.
Comparison with the existing bounds
A similar generalization bound can be derived from a known bound of the Rademacher complexity of (Theorem 20 of Sabato and Tishby, 2012) and a generalization bound of for any hypothesis class (see Corollary 6.1 of Mohri et al., 2012):
Note that Sabato and Tishby (2012) fixed the margin parameter. Here, for simplicity, we omit some constants of Theorem 20 of Sabato and Tishby (2012). The bound above is incomparable to Theorem 2 in general, as ours uses a different complexity parameter and theirs has an extra term. However, our bound is better in terms of the sample size when the other parameters are regarded as constants.
6 SL by MIL
6.1 TimeSeries Classification with Shapelets
In the following, we introduce a framework for the time-series classification problem based on shapelets (i.e., the SL problem). As mentioned in the Introduction, a time series can be identified with a bag consisting of all its subsequences of a given length. The learner receives a labeled sample, where each labeled bag (i.e., labeled time series) is independently drawn according to some unknown distribution with finite support. The goal of the learner is to predict the labels of unseen time series correctly. In this way, the SL problem can be viewed as an MIL problem, and thus we can apply our algorithms and theory.
Note that, for time-series classification, various similarity measures can be represented by a kernel; for example, the Gaussian kernel (which behaves like the Euclidean distance) and the Dynamic Time Warping (DTW) kernel. Moreover, our framework applies generally to non-real-valued sequence data (e.g., text and discrete signals) through a string kernel.
6.2 Our Theory and Algorithms for SL
By Theorem 2, we can immediately obtain the generalization bound of our hypothesis class in SL as follows:
Corollary 3
Consider a time-series sample of a given size and time-series length. For any fixed shapelet length, the following generalization error bound holds for all hypotheses whose shapelets have that length:
To the best of our knowledge, this is the first result on the generalization performance of SL. Note that the bound also provides a theoretical justification for some existing shapelet-based methods: many of them find effective shapelets among all the subsequences in the training sample, and the convex combinations of hypotheses using such shapelets form a subset of the hypothesis class we consider.
For the time-series classification problem, shapelet-based classification has a greater advantage in interpretability and visibility than other time-series classification methods (see, e.g., Ye and Keogh, 2009). Although we use a nonlinear kernel function, we can still observe important subsequences that contribute to a shapelet by solving OP 2, thanks to the sparsity (see also the experimental results). Moreover, for unseen time-series data, we can observe which subsequences contribute to the predicted class by inspecting the maximizers.
7 Experiments
In the following experiments, we demonstrate that our methods are practically effective for time-series data and multiple-instance data. Note that we use some heuristics for improving the efficiency of our algorithm in practice (see the details in the supplementary materials). We use k-means clustering in these heuristics, and thus we report average accuracies and standard deviations over the randomness of k-means.
7.1 Results for Time-Series Data
We used several binary-labeled datasets from the UCR archive (Chen et al., 2015), which are often used as benchmarks for time-series classification methods. (Our method is applicable to multi-class classification tasks by standard extensions; see, e.g., Platt et al., 2000.) The detailed information of the datasets is given on the left side of Table 1. We used the sparse weak learning problem OP 2 because interpretability of the obtained classifier is required in shapelet-based time-series classification. We set the hyperparameters as follows: the length of the subsequences (which corresponds to the dimension of instances in MIL) was searched over fixed fractions of the length of each time series in the dataset, and we used the Gaussian kernel. We found good values of the remaining hyperparameters through a grid search with five runs of cross-validation. As the LP solver for WeakLearn and LPBoost we used CPLEX.
Accuracy and efficiency
The classification accuracy results are shown on the right-hand side of Table 1. As a baseline, we used the accuracy of the ST method (Hills et al., 2014) as reported by Bagnall et al. (2017), who fairly compared many kinds of time-series classification methods and reported that ST achieved higher accuracy than the other shapelet-based methods. Our method performed better than ST on five datasets but worse on the other six. Our conjecture is that one reason for some of the worse results is that ST considers all possible lengths of subsequences as shapelet candidates, without limiting the computational cost. The main scheme of ST is searching for effective shapelets, and its time complexity depends on the number of candidates (see also the actual computation times in Hills et al., 2014). We cannot directly compare the time complexity of our method with that of ST because ours mainly depends on the LP solver (boosting converged in several tens of iterations empirically). Thus, we report the computation time of a single training run with the best parameters in the rightmost column of Table 1. The experiments were done on a machine with an Intel Core i7 CPU at 4 GHz and 32 GB of memory. The results demonstrate that our method runs efficiently in practice. As a result, we can say that our method performed favorably against ST even though we limited the length of shapelets in the experiment.
Table 1: Dataset information (left) and classification accuracy and computation time per single training run (right).

dataset      | #train | #test | length | ST    | our method    | comp. time
-------------|--------|-------|--------|-------|---------------|-----------
Coffee       | 28     | 28    | 286    | 0.995 | 1.000 ± 0.000 | 3.6
ECG200       | 100    | 100   | 96     | 0.840 | 0.877 ± 0.009 | 15.9
ECGFiveDays  | 23     | 861   | 136    | 0.955 | 1.000 ± 0.000 | 12.2
GunPoint     | 50     | 150   | 150    | 0.999 | 0.976 ± 0.006 | 4.7
ItalyPower.  | 67     | 1029  | 24     | 0.953 | 0.932 ± 0.009 | 5.7
MoteStrain   | 20     | 1252  | 84     | 0.882 | 0.754 ± 0.019 | 9.7
ShapeletSim  | 67     | 1029  | 24     | 0.934 | 0.994 ± 0.000 | 11.0
SonyAIBO1    | 20     | 601   | 70     | 0.888 | 0.944 ± 0.032 | 3.8
SonyAIBO2    | 20     | 953   | 65     | 0.924 | 0.871 ± 0.022 | 6.2
ToeSeg.1     | 40     | 228   | 277    | 0.954 | 0.911 ± 0.025 | 20.2
ToeSeg.2     | 36     | 130   | 343    | 0.947 | 0.840 ± 0.017 | 33.3
Table 2: Dataset information (left) and classification accuracies for the multiple-instance datasets (right).

dataset  | sample size | #dim. | miSVM w/ best kernel | MISVM w/ best kernel | Ours w/ Gauss. kernel
---------|-------------|-------|----------------------|----------------------|----------------------
MUSK1    | 92          | 166   | 0.834 ± 0.043        | 0.8335 ± 0.041       | 0.8509 ± 0.037
MUSK2    | 102         | 166   | 0.736 ± 0.040        | 0.840 ± 0.037        | 0.8587 ± 0.038
elephant | 200         | 230   | 0.802 ± 0.028        | 0.822 ± 0.028        | 0.8210 ± 0.027
fox      | 200         | 230   | 0.618 ± 0.035        | 0.581 ± 0.045        | 0.6505 ± 0.037
tiger    | 200         | 230   | 0.765 ± 0.039        | 0.815 ± 0.029        | 0.8280 ± 0.024
Interpretability of our method
In order to show the interpretability of our method, we present two types of visualization of our results.
One is the visualization of the characteristic subsequences of an input time series. When we predict the label of a time series, we calculate, for each shapelet, the maximizer, that is, the subsequence of the input attaining the maximal similarity. In image recognition tasks, such maximizers are commonly used to observe the subimages that characterize the class of the input image (e.g., Chen et al., 2006). In time-series classification, the maximizers can also be used to observe characteristic subsequences. Figure 1(a) is an example of such a visualization. Each value in the legend indicates the weight of the corresponding shapelet: subsequences with positive values contribute to the positive class, and subsequences with negative values contribute to the negative class. Such visualization reveals the subsequences that characterize the class of the input time series.
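The maximizer computation behind this visualization amounts to one argmax per shapelet; a small sketch of ours (the similarity and names are assumptions):

```python
import numpy as np

def maximizers(bag, shapelets, sim):
    # For each shapelet, the instance (subsequence) of the bag that attains
    # the maximal similarity -- the subsequences highlighted in Figure 1(a).
    return [max(bag, key=lambda x: sim(u, x)) for u in shapelets]

sim = lambda u, x: -float(np.sum((u - x) ** 2))   # assumed similarity for the demo
bag = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]
out = maximizers(bag, [np.array([1.0, 1.0])], sim)
print(list(out[0]))  # [1.0, 1.0]: the closest subsequence to the shapelet
```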
The other is the visualization of the final hypothesis itself, built on the set of representative subsequences (see the details in the supplementary materials). Figure 1(b) is an example of the visualization of a final hypothesis obtained by our method. The colored lines are all the subsequences whose coefficients are nonzero. Each value in the legend shows the product of the weights corresponding to the subsequence: positive values on the colored lines indicate the contribution to the positive class, and negative values indicate the contribution to the negative class. Note that, because it is difficult to visualize shapelets in the Hilbert space associated with the Gaussian kernel, we plotted each of them aligned to the original time series based on the Euclidean distance. Unlike visualization analyses using existing shapelet-based methods (see, e.g., Ye and Keogh, 2009), our visualization (colored lines and plotted positions) does not strictly represent the final hypothesis because of the nonlinear feature map. However, we can say that the colored lines represent "important patterns" that certainly make important contributions to classification.
[Figure 1: (a) Visualization of the maximizer subsequences of an input time series. (b) Visualization of a final hypothesis obtained by our method.]
7.2 Results for MultipleInstance Data
We selected miSVM and MI-SVM (Andrews et al., 2003) as the baseline MIL algorithms. Both algorithms are now classical, but they still perform favorably compared with state-of-the-art methods on standard multiple-instance data (see, e.g., Doran, 2015). Moreover, generalization bounds for these algorithms are shown in (Sabato and Tishby, 2012) because both obtain a (single) shapelet-based classifier. Hence, the following comparative experiments pit a single shapelet with theoretical generalization ability against infinitely many shapelets with theoretical generalization ability. We combined linear, polynomial, and Gaussian kernels with miSVM and MI-SVM, respectively; the regularization parameter, the degree of the polynomial kernel, and the parameter of the Gaussian kernel were each chosen from grids. For our method, we only used the Gaussian kernel. Although we implemented both the non-sparse and the sparse weak learning problems, interestingly, the sparse version beat the non-sparse version on all datasets. Thus, we only show the results of the sparse version because of space limitations. For all these algorithms, we estimated the optimal parameter set via 5-fold cross-validation. We used well-known multiple-instance datasets, shown on the left-hand side of Table 2. The accuracies resulted from 10 runs of 10-fold cross-validation. The results are shown on the right-hand side of Table 2. Because of space limitations, for the baselines we only show the results of the kernel that achieved the best accuracy. Although the accuracy of our method on the fox dataset was slightly worse, our method significantly outperformed the baselines on the other four datasets.
References
 Andrews and Hofmann (2004) Stuart Andrews and Thomas Hofmann. Multiple instance learning via disjunctive programming boosting. In S. Thrun, L. K. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, pages 65–72. MIT Press, 2004.
 Andrews et al. (2003) Stuart Andrews, Ioannis Tsochantaridis, and Thomas Hofmann. Support vector machines for multipleinstance learning. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 577–584. MIT Press, 2003.
 Auer and Ortner (2004) Peter Auer and Ronald Ortner. A boosting approach to multiple instance learning. In Jean-François Boulicaut, Floriana Esposito, Fosca Giannotti, and Dino Pedreschi, editors, Machine Learning: ECML 2004, pages 63–74, Berlin, Heidelberg, 2004. Springer Berlin Heidelberg.
 Bagnall et al. (2017) Anthony Bagnall, Jason Lines, Aaron Bostrom, James Large, and Eamonn Keogh. The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery, 31(3):606–660, May 2017. ISSN 1573756X.
 Bartlett and Mendelson (2003) Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2003.
 Carbonneau et al. (2018) MarcAndré Carbonneau, Veronika Cheplygina, Eric Granger, and Ghyslain Gagnon. Multiple instance learning: A survey of problem characteristics and applications. Pattern Recognition, 77:329 – 353, 2018. ISSN 00313203.
 Chen et al. (2015) Yanping Chen, Eamonn Keogh, Bing Hu, Nurjahan Begum, Anthony Bagnall, Abdullah Mueen, and Gustavo Batista. The ucr time series classification archive, July 2015. www.cs.ucr.edu/~eamonn/time_series_data/.
 Chen et al. (2006) Yixin Chen, Jinbo Bi, and J. Z. Wang. Miles: Multipleinstance learning via embedded instance selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):1931–1947, Dec 2006. ISSN 01628828.
 Demiriz et al. (2002) A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. Linear programming boosting via column generation. Machine Learning, 46(1–3):225–254, 2002.
 Dietterich et al. (1997) Thomas G. Dietterich, Richard H. Lathrop, and Tomás LozanoPérez. Solving the multiple instance problem with axisparallel rectangles. Artificial Intelligence, 89(12):31–71, January 1997. ISSN 00043702.
 Doran (2015) Gary Doran. Multiple Instance Learning from Distributions. PhD thesis, Case WesternReserve University, 2015.
 Doran and Ray (2014) Gary Doran and Soumya Ray. A theoretical and empirical analysis of support vector machine methods for multipleinstance classification. Machine Learning, 97(12):79–102, October 2014. ISSN 08856125.
 Gärtner et al. (2002) Thomas Gärtner, Peter A. Flach, Adam Kowalczyk, and Alex J. Smola. Multiinstance kernels. In Proceedings 19th International Conference. on Machine Learning, pages 179–186. Morgan Kaufmann, 2002.
 Grabocka et al. (2014) Josif Grabocka, Nicolas Schilling, Martin Wistuba, and Lars SchmidtThieme. Learning timeseries shapelets. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 392–401, 2014.
 Grabocka et al. (2015) Josif Grabocka, Martin Wistuba, and Lars SchmidtThieme. Scalable discovery of timeseries shapelets. CoRR, abs/1503.03238, 2015.
 Hills et al. (2014) Jon Hills, Jason Lines, Edgaras Baranauskas, James Mapp, and Anthony Bagnall. Classification of time series by shapelet transformation. Data Mining and Knowledge Discovery, 28(4):851–881, July 2014. ISSN 13845810.
 Hou et al. (2016) Lu Hou, James T. Kwok, and Jacek M. Zurada. Efficient learning of timeseries shapelets. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence,, pages 1209–1215, 2016.
 Keogh and Rakthanmanon (2013) Eamonn J. Keogh and Thanawin Rakthanmanon. Fast shapelets: A scalable algorithm for discovering time series shapelets. In Proceedings of the 13th SIAM International Conference on Data Mining, pages 668–676, 2013.
 Mohri et al. (2012) Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
 Platt et al. (2000) John C. Platt, Nello Cristianini, and John Shawe-Taylor. Large margin DAGs for multiclass classification. In S. A. Solla, T. K. Leen, and K. Müller, editors, Advances in Neural Information Processing Systems 12, pages 547–553. MIT Press, 2000.

 Renard et al. (2015) Xavier Renard, Maria Rifqi, Walid Erray, and Marcin Detyniecki. Random-shapelet: an algorithm for fast shapelet discovery. In 2015 IEEE International Conference on Data Science and Advanced Analytics (IEEE DSAA'2015), pages 1–10. IEEE, 2015.
 Sabato and Tishby (2012) Sivan Sabato and Naftali Tishby. Multi-instance learning with any hypothesis class. Journal of Machine Learning Research, 13(1):2999–3039, October 2012. ISSN 1532-4435.
 Schölkopf and Smola (2002) B. Schölkopf and AJ. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, USA, December 2002.
 Shapiro (2009) Alexander Shapiro. Semiinfinite programming, duality, discretization and optimality conditions. Optimization, 58(2):133–161, 2009.
 Suehiro et al. (2017) Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, and Akiko Takeda. Boosting the kernelized shapelets: Theory and algorithms for local features. CoRR, abs/1709.01300, 2017.
 Tao and Souad (1988) Pham Dinh Tao and El Bernoussi Souad. Duality in D.C. (Difference of Convex functions) Optimization. Subgradient Methods, pages 277–293. Birkhäuser Basel, Basel, 1988.
 Ye and Keogh (2009) Lexiang Ye and Eamonn Keogh. Time series shapelets: A new primitive for data mining. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’09, pages 947–956. ACM, 2009.
 Yu and Joachims (2009) ChunNam John Yu and Thorsten Joachims. Learning structural svms with latent variables. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, pages 1169–1176, New York, NY, USA, 2009. ACM. ISBN 9781605585161.
 Zhang et al. (2006) Cha Zhang, John C. Platt, and Paul A. Viola. Multiple instance boosting for object detection. In Y. Weiss, B. Schölkopf, and J. C. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1417–1424. MIT Press, 2006.
 Zhang et al. (2013) Dan Zhang, Jingrui He, Luo Si, and Richard Lawrence. Mileage: Multiple instance learning with global embedding. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 82–90, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR.
Appendix A Supplementary materials
a.1 Proof of Theorem 1
Definition 1
[The set of mappings from a bag to an instance]
Given a sample .
For any , let be a
mapping defined by
and we define the set of all for as . For the sake of brevity, and will be abbreviated as and , respectively.
We can rewrite the optimization problem (6) using these mappings as follows:
(10)  
sub.to 
Thus, if we fix the bag-to-instance mapping, we obtain a subproblem. Since the constraint can be written as a number of linear constraints, each subproblem is equivalent to a convex optimization problem. Indeed, each subproblem can be written as the following equivalent minimization (neglecting constants in the objective)
sub.to 
where the coefficients are the corresponding positive constants. Now, for each subproblem, we can apply the standard representer theorem argument (see, e.g., Mohri et al. (2012)). Consider the subspace spanned by the feature vectors of the instances in the sample. Any candidate solution decomposes into its orthogonal projection onto this subspace plus a component orthogonal to it. The orthogonal component does not change any inner product with a feature vector, while removing it can only decrease the norm; therefore the optimal solution of each subproblem must be contained in the subspace. This implies that the overall optimal solution, which attains the maximum over all subproblems, is contained in the subspace as well.
a.2 Proof of Theorem 2
We use and of Definition 1.
Definition 2
[The Rademacher and the Gaussian complexity Bartlett and Mendelson (2003)]
Given a sample ,
the empirical Rademacher complexity of a class w.r.t.
is defined as
,
where and each
is an independent uniform random variable in
. The empirical Gaussian complexity of w.r.t. is defined similarly but each is drawn independently from the standard normal distribution.
The following bounds are wellknown.
Lemma 1
[Lemma 4 of Bartlett and Mendelson (2003)] .
Lemma 2
[Corollary 6.1 of Mohri et al. (2012)] For fixed , , the following bound holds with probability at least : for all ,
Deriving generalization bounds based on the Rademacher or the Gaussian complexity is quite standard in the statistical learning theory literature and is applicable to our classes of interest as well. However, the standard analysis gives us suboptimal bounds.
Lemma 3
Suppose that for any , . Then, the empirical Gaussian complexity of with respect to for is bounded as follows:
Since can be partitioned into ,
(11) 
The first inequality is derived from the relaxation of , the second inequality is due to the Cauchy–Schwarz inequality and the fact , and the last inequality is due to Jensen's inequality. We denote by the kernel matrix such that . Then, we have
(12) 
We now derive an upper bound of the r.h.s. as follows.
For any ,
The first inequality is due to Jensen’s inequality, and the second inequality is due to the fact that the supremum is bounded by the sum. By using the symmetry property of , we have , which is rewritten as
where
are the eigenvalues of
and is the orthonormal matrix whose columns are the corresponding eigenvectors
. By the reproductive property of Gaussian distribution,
obeys the same Gaussian distribution as well. So,