1 Introduction
Support vector machines (SVMs) [5, 26] are powerful machine-learning techniques for classification. Since 1995, experts in the machine-learning community have shown significant interest in SVMs.
Like many predictive modeling algorithms, SVM consists of training, validation, and testing stages. The training stage involves solving a dense quadratic programming or dense convex optimization problem. Because the size of the quadratic problem depends on the total number of observations, general-purpose quadratic programming solvers are not competitive without specialization, especially when the training data set is relatively large.
In order to solve large data set problems, several algorithms have been proposed in the literature. Among them are active-set algorithms [12, 21] and sequential minimal optimization (SMO) [22]. The idea of the active-set algorithms is to decompose the big problem into a series of small tasks. The decomposition splits the training data set into inactive and active parts. The active part is called the "working set" and is normally small. The solver focuses on the working set and keeps the support vectors in the subsequent working set. In fact, SMO is a special case of the active-set algorithm, where the size of the working set is two. Active-set algorithms have their own limitations. Because the training for active-set algorithms is sequential and the next iteration depends on the previous results, the problem cannot be easily separated for parallel processing. Moreover, memory and CPU usage increase rapidly as the number of support vectors grows during the training.
In 2003, Ferris and Munson [8] proposed an interior-point method that applies the Sherman-Morrison-Woodbury formula [19], which makes it possible to solve very large problems. In [8], the computation on the large matrix is transformed into computation on a small core matrix through this linear algebra technique. With this technique, the memory required for the quadratic problem is reduced from $O(n^2)$ to $O(nd)$, where $n$ is the number of observations and $d$ is the number of features.
Since 2008, several parallel or distributed computation algorithms have been proposed in the literature. Gertz and Griffin [9] proposed an interior-point algorithm and implemented it with the object-oriented package OOQP [20]. Chang et al. [4] applied the interior-point method in a distributed computing environment. They handled the kernel matrix by a low-rank approximation that uses partial Cholesky decomposition with pivoting, so that the major computation of SVM training is performed on the distributed processors. Woodsend and Gondzio [27] proposed a hybrid MPI/OpenMP parallel algorithm, which uses the interior-point algorithm, avoids the dense Hessian matrix, and computes a distributed Cholesky decomposition. The authors report that their approach performed much better than others in the PASCAL Challenge competition [6]. Unfortunately, one important issue has not been fully discussed in the literature: the use of distributed vectors and distributed vector algebra. Another issue is that inter-machine communication makes the implementation of distributed SVM difficult.
In this paper, we propose a distributed algorithm to solve the large primal-dual SVM problem. The distributed SVM algorithm is called high-performance support vector machines (HPSVM). We consider a few important issues in the design of this algorithm. The first concern is the model training time and memory usage. For a large data set, we need an algorithm that can train the model in a reasonable amount of time and use a limited amount of run-time memory. The second concern is the model storage and an easy scoring process. The model itself should not be too big to store even if the number of support vectors is large, and the scoring process should be simple. Therefore, finding an algorithm that can train a good model on a large data set in a reasonable amount of time and provide an easy scoring mechanism is critical in the SVM application field. At the same time, it is essential that a good algorithm be able to take advantage of cloud computing and the distributed Hadoop file system.
In our implementation, we adopted the message passing interface (MPI) for communication between the master node and the worker nodes. We had two principles in mind when we designed the HPSVM algorithm: first, data shuffling in a distributed environment can be very expensive; second, inter-machine communication can significantly slow down the entire process. This paper offers two major contributions. First, we propose a new way to distribute the computations to the machines in the cloud. Second, we minimize the communication among the machines in the cloud to maximize performance. We carefully designed the algorithm so that data shuffling is not required and inter-machine communication is minimized. In other words, all data that are saved on a worker node are loaded locally.
The rest of the paper is organized as follows. Section 2 briefly introduces support vector machines. Section 3 talks about the interiorpoint method. Section 4 presents the distributed SVM algorithm. Section 5 provides a complexity analysis of the algorithm. Experiments and their results are shown in Section 6. We draw conclusions in Section 7.
2 Support Vector Machines
In this section, we provide the basic notation used in this paper and describe the SVM classification concept and some formulas.
First we describe the basic notation. Let $n$ and $d$ be positive integers. In this paper, $n$ is the number of observations and $d$ is the number of features. We assume that $n \gg d$. For each $i = 1, \dots, n$, $x_i \in \mathbb{R}^d$ is an observation and $y_i \in \{-1, +1\}$ is its target label. We have $X = [x_1, \dots, x_n]^T \in \mathbb{R}^{n \times d}$ and $y = (y_1, \dots, y_n)^T$. Let $e = (1, \dots, 1)^T$. Let $\xi$, $s$, $u$, and $\alpha$ denote vectors in $\mathbb{R}^n$ that are introduced later. Let $Y$ denote the diagonal matrix $\mathrm{diag}(y)$, and similarly denote the matrices $\Xi = \mathrm{diag}(\xi)$, $S = \mathrm{diag}(s)$, $U = \mathrm{diag}(u)$, and $\Lambda = \mathrm{diag}(\alpha)$. The training data set is $\{(x_i, y_i)\}_{i=1}^n$. Each row of the training data represents a single observation. The size of the matrices $Y$, $\Xi$, $S$, $U$, and $\Lambda$ is $n \times n$.
A support vector machine (SVM) is a classification algorithm that provides a mapping between the feature space and the target labels. The hyperplane
$$w^T x + b = 0$$
is used to define the mapping. The training of the SVM is to find $w$ and $b$ such that the maximum margin between the two hyperplanes $w^T x + b = 1$ and $w^T x + b = -1$ is reached. The decision function
(1) $$f(x) = \mathrm{sign}(w^T x + b)$$
defines the classifier, where the values $f(x) = +1$ and $f(x) = -1$ are mapped to the target labels. The optimization problem is to find $w$, $b$, and $\xi$ that satisfy
(2) $$\min_{w,\, b,\, \xi} \; \tfrac{1}{2} w^T w + C e^T \xi \quad \text{subject to} \quad Y(Xw + be) \ge e - \xi, \quad \xi \ge 0,$$
where $C > 0$ is a penalty parameter and $\xi$ is a vector of slack variables. We call equation (2) the primal problem.
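To make the mapping concrete, here is a minimal sketch of how the decision function (1) assigns labels; the weights and bias are made-up values for illustration, not a trained model:

```python
import numpy as np

# Hypothetical separating hyperplane w^T x + b = 0 (illustrative values only)
w = np.array([1.0, -1.0])
b = 0.5

def decision(X):
    # sign(Xw + b) maps each observation to a target label in {-1, +1}
    return np.sign(X @ w + b)

X = np.array([[2.0, 0.0], [0.0, 2.0]])
print(decision(X))  # first point falls on the +1 side, second on the -1 side
```

Training chooses $w$ and $b$ so that this sign function separates the two classes with maximum margin.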
The dual problem of the primal problem (2) is
(3) $$\min_{\alpha} \; \tfrac{1}{2} \alpha^T Y X X^T Y \alpha - e^T \alpha \quad \text{subject to} \quad y^T \alpha = 0, \quad 0 \le \alpha \le C e.$$
If we replace the matrix $X X^T$ with $Q$, the generalized problem becomes
(4) $$\min_{\alpha} \; \tfrac{1}{2} \alpha^T Y Q Y \alpha - e^T \alpha \quad \text{subject to} \quad y^T \alpha = 0, \quad 0 \le \alpha \le C e.$$
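As a quick numerical sanity check on (3), the sketch below (random data, illustrative only) builds the dual Hessian $Y X X^T Y$ and confirms that it is positive semidefinite, which is what makes the dual QP convex:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))          # n = 6 observations, d = 2 features
y = np.array([1, -1, 1, 1, -1, -1.0])
Y = np.diag(y)

H = Y @ X @ X.T @ Y                      # Hessian of the dual objective (3)
# H = (YX)(YX)^T is a Gram matrix, hence symmetric positive semidefinite
eigs = np.linalg.eigvalsh(H)
print(eigs.min() >= -1e-10)              # True
```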
A nonlinear kernel can also be introduced when we solve the quadratic program (4). Let $\phi$ be a function $\phi: \mathbb{R}^d \to \mathbb{R}^m$, and let $Q_{ij} = \phi(x_i)^T \phi(x_j)$ for $i, j = 1, \dots, n$. Then $Q$ is a dense $n \times n$ matrix. We call
$$K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$$
the kernel function. Frequently used kernel functions include the polynomial function, the radial basis function (RBF), the sigmoid function, and so on. If the kernel function satisfies Mercer's condition
[18], then the resulting kernel matrix is symmetric positive semidefinite. Thus the quadratic problem (QP) is convex and a global solution exists.
3 Interior-Point Method
Many research publications focus on solving the dual problem (3) or the nonlinear kernel problem (4). The reason is that the optimization process then solves a simple quadratic problem with basic linear constraints, which is easier than solving the primal problem. Implementation issues arise as the training data become large. Since the matrix is $n \times n$ and dense, the memory requirement makes it very difficult for a regular solver to handle. To resolve this problem, Ferris and Munson [8] apply the Sherman-Morrison-Woodbury formula [19] and transform the large $n \times n$ matrix into a small core matrix of size $d \times d$. With this technique, the memory required for the solver is reduced from $O(n^2)$ to $O(nd)$, or reduced even further to $O(d^2)$ if the data are not loaded into memory.
The development of the interior-point method involves the primal-dual Lagrangian function associated with equations (2) and (3),
(5) $$L(w, b, \xi; \alpha, u) = \tfrac{1}{2} w^T w + C e^T \xi - \alpha^T \left( Y(Xw + be) - e + \xi \right) - u^T \xi,$$
where $\alpha \ge 0$ and $u \ge 0$ are Lagrangian multipliers and $s \ge 0$ is the vector of slacks for the inequality constraints. The primal-dual problem is to solve the following system. (For details, see Nocedal and Wright [19], chapter 19, and [8].)
(6) $$w - X^T Y \alpha = 0$$
(7) $$y^T \alpha = 0$$
(8) $$C e - \alpha - u = 0$$
(9) $$Y(Xw + be) - e + \xi - s = 0$$
(10) $$S \alpha = 0$$
(11) $$\Xi u = 0$$
where $X$ is an $n \times d$ matrix and $Y$, $S$, $\Xi$, and $U$ are diagonal matrices as defined in the previous section. The interior-point algorithm perturbs equations (10) and (11) (replacing their right-hand sides by $\mu e$ with $\mu > 0$) so that the variables $s$, $\alpha$, $\xi$, and $u$ remain positive and approach zero only in the limit. For brevity, we perform the block reduction steps on the unperturbed equations, but the steps in the perturbed case are analogous. The Newton system of equations (6) to (11) is
(12) $$\Delta w - X^T Y \Delta\alpha = -r_1, \qquad r_1 := w - X^T Y \alpha$$
(13) $$y^T \Delta\alpha = -r_2, \qquad r_2 := y^T \alpha$$
(14) $$Y X \Delta w + y \Delta b + \Delta\xi - \Delta s = -r_3, \qquad r_3 := Y(Xw + be) - e + \xi - s$$
(15) $$\Delta\alpha + \Delta u = -r_4, \qquad r_4 := \alpha + u - C e$$
(16) $$S \Delta\alpha + \Lambda \Delta s = -S\alpha, \qquad \Lambda := \mathrm{diag}(\alpha)$$
(17) $$U \Delta\xi + \Xi \Delta u = -\Xi u$$
From the Newton system, combining equations (14) and (16) to eliminate $\Delta s$, we obtain
(18) $$Y X \Delta w + y \Delta b + \Delta\xi + \Lambda^{-1} S \, \Delta\alpha = r_5,$$
where $r_5 := -r_3 - s$ and $\Lambda = \mathrm{diag}(\alpha)$. Combining equations (15) and (17) to eliminate $\Delta u$, we obtain
(19) $$\Delta\xi - U^{-1} \Xi \, \Delta\alpha = r_6,$$
where $r_6 := U^{-1} \Xi \, r_4 - \xi$. Furthermore, we remove $\Delta\xi$ from equations (18) and (19):
(20) $$Y X \Delta w + y \Delta b + \left( \Lambda^{-1} S + U^{-1} \Xi \right) \Delta\alpha = r_5 - r_6.$$
We denote $\Omega := \Lambda^{-1} S + U^{-1} \Xi$ and $r_7 := r_5 - r_6$. Thus equation (20) becomes
(21) $$Y X \Delta w + y \Delta b + \Omega \, \Delta\alpha = r_7.$$
Combining equation (21) with equations (13) and (12), we have
(22) $$\begin{pmatrix} I & 0 & -X^T Y \\ 0 & 0 & y^T \\ Y X & y & \Omega \end{pmatrix} \begin{pmatrix} \Delta w \\ \Delta b \\ \Delta\alpha \end{pmatrix} = \begin{pmatrix} -r_1 \\ -r_2 \\ r_7 \end{pmatrix}.$$
Applying simple Gaussian elimination (solving the last block row of (22) for $\Delta\alpha = \Omega^{-1}(r_7 - YX\Delta w - y\Delta b)$ and substituting), we have
(23) $$\begin{pmatrix} I + X^T Y \Omega^{-1} Y X & X^T Y \Omega^{-1} y \\ y^T \Omega^{-1} Y X & y^T \Omega^{-1} y \end{pmatrix} \begin{pmatrix} \Delta w \\ \Delta b \end{pmatrix} = \begin{pmatrix} r_8 \\ r_9 \end{pmatrix},$$
where $r_8 := -r_1 + X^T Y \Omega^{-1} r_7$ and $r_9 := r_2 + y^T \Omega^{-1} r_7$. Eliminating $\Delta b$ from the last equation of (23), we obtain
(24) $$M \, \Delta w = r_M,$$
where
(25) $$M = I + X^T Y \Omega^{-1} Y X - \frac{1}{\sigma} \left( X^T Y \Omega^{-1} y \right) \left( y^T \Omega^{-1} Y X \right),$$
(26) $$r_M = r_8 - \frac{1}{\sigma} \left( X^T Y \Omega^{-1} y \right) r_9,$$
and $\sigma := y^T \Omega^{-1} y$.
After obtaining $\Delta w$ from (24), we substitute it back into the remaining equations, and we have
(27) $$\Delta b = \frac{1}{\sigma} \left( r_9 - y^T \Omega^{-1} Y X \, \Delta w \right),$$
(28) $$\Delta\alpha = \Omega^{-1} \left( r_7 - Y X \Delta w - y \Delta b \right).$$
Thus we complete one iteration of the Newton system.
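The elimination chain above can be exercised numerically. The sketch below uses random stand-ins for $\Omega$ and the residuals (illustrative values, not quantities from an actual interior-point run) and checks that the recovered step $(\Delta w, \Delta b, \Delta\alpha)$ satisfies the block equations (12), (13), and (21):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 3
X = rng.standard_normal((n, d))
y = np.where(rng.standard_normal(n) > 0, 1.0, -1.0)
Y = np.diag(y)
omega = rng.uniform(0.5, 2.0, n)           # Omega is diagonal and positive
Oi = np.diag(1.0 / omega)                  # Omega^{-1}
r7 = rng.standard_normal(n)                # right-hand side of (21)
r1 = rng.standard_normal(d)                # residual of (12)
r2 = 0.1                                   # residual of (13)

v = X.T @ Y @ Oi @ y                       # vector in the third term of (25)
sigma = y @ Oi @ y                         # the scalar sigma
M = np.eye(d) + X.T @ Y @ Oi @ Y @ X - np.outer(v, v) / sigma
rhs = -r1 + X.T @ Y @ Oi @ r7 - v * (y @ Oi @ r7 + r2) / sigma
dw = np.linalg.solve(M, rhs)               # equation (24)
db = (y @ Oi @ r7 + r2 - y @ Oi @ Y @ X @ dw) / sigma   # equation (27)
da = Oi @ (r7 - Y @ X @ dw - y * db)       # equation (28)

# The step must satisfy the original block equations (12), (13), and (21)
print(np.allclose(dw - X.T @ Y @ da, -r1),
      np.isclose(y @ da, -r2),
      np.allclose(Y @ X @ dw + y * db + omega * da, r7))  # True True True
```

Only the $d \times d$ matrix $M$ is factorized; everything of size $n$ is diagonal or matrix-vector work, which is the key to the distributed algorithm in the next section.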
4 Distributed SVM Algorithm
Assuming that the data are separated into $p$ blocks and distributed to $p$ worker nodes, we have
(29) $$X = \begin{pmatrix} X_1 \\ \vdots \\ X_p \end{pmatrix}, \qquad y = \begin{pmatrix} y_1 \\ \vdots \\ y_p \end{pmatrix}.$$
The matrix $\Omega$ is also diagonal and can be written in the form
(30) $$\Omega = \mathrm{diag}(\Omega_1, \Omega_2, \dots, \Omega_p).$$
For the second term of matrix $M$ in (25), we have
(31) $$X^T Y \Omega^{-1} Y X = \sum_{k=1}^{p} X_k^T Y_k \Omega_k^{-1} Y_k X_k.$$
Similarly, for the third term of matrix $M$, we have
(32) $$X^T Y \Omega^{-1} y = \sum_{k=1}^{p} v_k,$$
where
(33) $$v_k = X_k^T Y_k \Omega_k^{-1} y_k.$$
The calculation of $\sigma$ is the same:
(34) $$\sigma = y^T \Omega^{-1} y = \sum_{k=1}^{p} y_k^T \Omega_k^{-1} y_k.$$
We see that for each worker node $k$, the data block $(X_k, y_k)$ stays on that local node and never moves to other nodes. Each worker node performs its computation with the corresponding parts $X_k$, $Y_k$, and $\Omega_k$, and then the results are gathered to the master node through the allreduce actions of the MPI interface. The communication traffic from a worker node to the master node is $O(d^2)$. We see that only the master node holds the matrix $M$ and the residual $r_M$ of equation (26).
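The block sums in equations (31) to (34) can be sketched as follows. In the actual algorithm an MPI allreduce combines the per-worker contributions; this single-process sketch simulates that by summing over the blocks and checking the result against the full-data expressions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, p = 12, 3, 3                      # p worker nodes, n/p rows each
X = rng.standard_normal((n, d))
y = np.where(rng.standard_normal(n) > 0, 1.0, -1.0)
omega = rng.uniform(0.5, 2.0, n)

M2 = np.zeros((d, d))                   # second term: X^T Y Omega^{-1} Y X
v = np.zeros(d)                         # third-term vector: X^T Y Omega^{-1} y
sigma = 0.0                             # scalar: y^T Omega^{-1} y
for Xk, yk, ok in zip(np.split(X, p), np.split(y, p), np.split(omega, p)):
    G = Xk.T * (yk / ok)                # local X_k^T Y_k Omega_k^{-1}
    M2 += G @ (yk[:, None] * Xk)        # local X_k^T Y_k Omega_k^{-1} Y_k X_k
    v += G @ yk                         # local v_k
    sigma += yk @ (yk / ok)             # local y_k^T Omega_k^{-1} y_k

# The block sums agree with the full-data expressions
Oi = np.diag(1.0 / omega); Y = np.diag(y)
print(np.allclose(M2, X.T @ Y @ Oi @ Y @ X),
      np.allclose(v, X.T @ Y @ Oi @ y),
      np.isclose(sigma, y @ Oi @ y))    # True True True
```

Each worker only ships its $d \times d$ matrix, $d$-vector, and scalar, which is why the per-iteration traffic is independent of $n$.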
Once $M$ and $r_M$ are ready, the master node computes $\Delta w$ from equation (24) and then computes $\Delta b$ from equation (27), after which the master node broadcasts $\Delta w$ and $\Delta b$ to each worker node. The network traffic for this action is only $O(d)$.
Let us look at $\Delta\alpha$. From equation (28), we have
(35) $$\Delta\alpha = \Omega^{-1} \left( r_7 - Y X \Delta w - y \Delta b \right),$$
(36) $$\Delta\alpha = \begin{pmatrix} \Delta\alpha_1 \\ \vdots \\ \Delta\alpha_p \end{pmatrix}.$$
Then each worker node calculates its own portion of $\Delta\alpha$,
(37) $$\Delta\alpha_k = \Omega_k^{-1} \left( r_{7,k} - Y_k X_k \Delta w - y_k \Delta b \right),$$
and calculates $\Delta s_k$, $\Delta u_k$, and $\Delta\xi_k$. After that, it updates its local variables and the corresponding residuals.
The iteration finishes when the residuals and the duality gap are small and meet the stopping criteria. The support vectors are held in each worker node, and their summary information is reported to the master node. Therefore, the master node holds all model information and parameters.
Now let us summarize the algorithm for the distributed Newton method. Here the computing system consists of one master node and multiple worker nodes. The communication between the master node and the worker nodes occurs through the modified MPI. The distributed SVM algorithm is specified in Algorithm 1.
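In outline (an informal sketch of the per-iteration flow; the step numbering is inferred from the references to Algorithm 1 in Section 5), the algorithm proceeds roughly as follows:

```
Algorithm 1 (sketch): distributed interior-point SVM training
 1.  Each worker node k loads its local data block (X_k, y_k).
 2.  Initialize (w, b, xi) and (alpha, s, u) with strictly positive entries.
 3.  Each worker forms Omega_k and computes its local terms
       X_k^T Y_k Omega_k^{-1} Y_k X_k,  v_k,  y_k^T Omega_k^{-1} y_k.
 4.  Each worker computes its part of the residuals.
 5.  repeat
 6.      Allreduce the local terms to form M, r_M, and sigma on the master.
 7.      (master) Assemble the reduced system (24).
 8.      (master) Solve M * dw = r_M for dw; compute db from (27).
 9.      (master) Broadcast dw and db to the workers.
10.      Each worker computes its d(alpha_k) from (37), then ds_k, du_k, dxi_k.
11.      Each worker updates its local variables and residuals.
12.  until the residuals and duality gap meet the stopping criteria.
```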
It is worth mentioning that our algorithm also applies in single-machine mode. Actually, the approach of [9] is a special case of our algorithm in which both the master node and the worker node exist on the same machine. You can easily see that the bottleneck is then the calculation in equation (31): for big data applications, even vector addition becomes prohibitive if storage and calculation occur on a single node. For example, suppose one data set has a billion observations. Each vector of length $n$ in double precision occupies 8 GB, so merely storing the iterate vectors and work vectors on a single machine would require 64 GB of RAM. Thus, even if the calculation in equation (31) is parallelized, the mere calculations of equations (12) and (13), if performed on a single node in serial, would bottleneck the approach for big data problems. In short, any entity whose size is on the order of the number of observations must be stored and updated in distributed fashion. With recent cloud computing technology and the distributed Hadoop file system, the importance of our distributed algorithm is obvious.
5 Complexity Analysis of the Algorithm
We now take a look at the detailed complexity of the algorithm, including memory usage and CPU time.
5.1 Memory Usage
Here we have $n$, the number of total observations; $d$, the number of features; and $p$, the number of worker nodes used. Assume the data are evenly distributed among the worker nodes. From step 1 of Algorithm 1 in the previous section, we see that the data size in a worker node is $O(nd/p)$. Assume that all the data are loaded into memory during the training. From step 3, the memory that is needed to hold the matrices and residuals is $O(d^2 + n/p)$. The memory required for step 6 is $O(d^2)$. Thus the total memory size for the training in each worker node is
(38) $$O\!\left( \frac{nd}{p} + d^2 + \frac{n}{p} \right).$$
When $n \gg d$, the total memory used for each worker node is $O(nd/p)$. On the other hand, from step 6, the memory needed for the master node is $O(d^2)$.
For example, if the total number of observations $n$ is 1 billion, the number of features $d$ is 100, and the number of worker nodes $p$ is 100, then the memory that is needed for the training in each worker node is approximately
(39) $$\frac{nd}{p} \times 8 \text{ bytes} = 8 \text{ GB}.$$
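As a back-of-the-envelope check, with illustrative values $n = 10^9$, $d = 100$, and $p = 100$ (assumed here purely for arithmetic, with 8-byte double precision), the per-worker data footprint $nd/p \times 8$ bytes works out as:

```python
# Hypothetical sizes: n observations, d features, p workers, 8-byte doubles
n, d, p = 10**9, 100, 100
bytes_per_worker = n * d // p * 8
print(bytes_per_worker / 10**9)   # 8.0 (GB per worker)
```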
In our implementation, the whole data set is loaded into memory to improve speed and performance. Note that this paradigm could easily be revised to read data in pages of memory if necessary.
5.2 CPU Time Analysis
For step 3, the time to compute the local terms is $O(nd^2/p)$. For step 6, the time to perform the allreduce step is $O(d^2 \log p)$. For step 8, the time to solve the equation $M \Delta w = r_M$ is $O(d^3)$. And for step 11, the time needed is $O(nd/p)$. Therefore the total time needed for each Newton iteration is
(40) $$O\!\left( \frac{nd^2}{p} + d^2 \log p + d^3 \right).$$
When $n \gg d$, from equation (40) we see that for each Newton iteration, the total CPU time is $O(nd^2/p)$.
Actually, for step 8 of Algorithm 1, we can apply different techniques to solve the equation $M \Delta w = r_M$. The time needed to solve the equation can be reduced from $O(d^3)$ to $O(c\,d^2)$ for some constant value $c$.
Suppose the number of Newton iterations is $N$. Then the total CPU time needed is $O(N n d^2 / p)$. In the actual implementation, multithreaded programming techniques can be applied. In this case, if there are $t$ processors in each worker node, the total time can be further reduced to $O(N n d^2 / (pt))$.
5.3 Data Integration and Data Access
The training data can be saved on a local disk; in a distributed Hadoop file system; in a distributed database system such as Teradata [25], Greenplum [10], or Aster [1]; and so on. For commercial distributed database systems (such as Teradata, Greenplum, and Aster), the data can be saved on the same worker nodes and can be loaded locally on the fly during the training. This is very important for distributed algorithms: each worker node first computes on its own data, and data movement between worker nodes should not happen unless it is necessary. Thus both the data access time and the network communication time can be reduced. In fact, this is one of the most commonly used data access methods in commercial environments. Our HPSVM implementation currently runs on a wide range of platforms, including Hadoop, Teradata, Greenplum, Aster, and many others. In this section, we discuss the data integration and data access strategy that allows HPSVM to run on those platforms successfully.
Our distributed SVM algorithm can run in two modes: symmetric multiple processing (SMP) mode and massively parallel processing (MPP) mode. The following paragraphs briefly introduce these two computing modes.
In SMP mode, multiple CPUs (cores) are controlled by a single operating system, and the resources (such as disks and memory) are shared within the machine. Our algorithm uses multiple concurrent threads in SMP mode in order to take advantage of parallel execution. In SMP mode, you have the flexibility to run our algorithm with a single thread or multiple threads. By default, SMP uses the number of CPUs (cores) on the machine to determine the number of concurrent threads. You can also specify the number of threads to override the default.
In MPP mode, multiple machines in a distributed computing environment (cloud) participate in the calculations. Because we chose to use MPI, the assumption is that the resources (such as disks and memory) are shared only within each machine, not between the machines. One machine communicates with another machine through MPI. In MPP mode, you can run a single thread or multiple threads on a single machine or multiple machines. By default, all the available machines in the distributed computing environment are used, and the number of CPUs (cores) on each machine determines the number of concurrent threads. You can also specify the number of machines or threads to override the default.
We deploy a comprehensive data integration and data access strategy to support the two computing modes. In this strategy, a universal data feeder (UDF) is used between HPSVM and the platform. Our UDF supports a variety of platforms including Hadoop, Teradata, Greenplum, Aster, and many others. This UDF has two data access methods: the SMP data access method and the MPP data access method.
In SMP mode, the UDF supports the SMP data access method. The data can be stored in the local disk, or in a distributed Hadoop file system, or in a distributed database system. The UDF is responsible for bringing the data to the node where computation is performed. Once the computation is finished, the UDF can save the output data to local disk or to other platforms with proper formats.
In MPP mode, the UDF supports the MPP data access method. The MPP data access method enables (but discourages) data movement between the computing nodes in the cloud. Data movement between nodes can be expensive and slow. Therefore, the ideal situation is to have the computation happen in the worker node that has the data. The master node is responsible for job scheduling and for aggregating the results. However, data movement and reshuffling are sometimes required. Therefore, the UDF allows data movement and reshuffling between worker nodes. In addition, the UDF allows the client to upload data to the cloud and perform computation in the cloud.
In summary, our universal data feeder (UDF) allows HPSVM to run on a wide range of platforms successfully.
6 Experiments
In this section, we test our HPSVM algorithm and apply it to a number of applications. First, we apply HPSVM to some real-world classification problems and compare it with the LIBSVM package in R [3] on several public data sets. The results demonstrate that our HPSVM implementation yields accuracies similar to or better than those of the R implementation, but HPSVM runs much faster on large data sets. Then, we show that HPSVM scales very well on a very big data set as the number of nodes in a distributed environment increases. Finally, we briefly compare HPSVM with the Spark [16] implementation.
6.1 Applications of HPSVM and Comparison with LIBSVM Package in R
We apply HPSVM to some real-world classification problems and compare it with the LIBSVM package in R on several public data sets. These experiments were conducted on a nondistributed system running Windows 7 with 16 GB of RAM and an Intel i7-4770 processor. Our HPSVM implementation yields accuracies similar to those of the R implementation, but HPSVM runs several times faster on large data sets. In the following paragraphs, we briefly introduce the data sets that we used.
The Mushroom data set is from the UCI Machine Learning Repository [15] and consists of 8,124 total observations. We partitioned this data set with an 80/20 split, giving us 6,499 training observations and 1,625 test observations. The target is whether a mushroom is edible, ‘e’, or poisonous, ‘p’.
The Adult data set (also known as the Census Income data set) is also from the UCI Machine Learning Repository [15] and is already partitioned into training and testing sets. The training set size is 32,561 observations, and the testing set is 16,281 observations, for a total of 48,842 observations. The target is whether an adult has an income greater than 50,000 dollars.
The Face data set is from the CBCL face database [2]. This data set is already partitioned into a training set of 2,429 faces and 4,548 nonfaces, and a testing set of 472 faces and 23,573 nonfaces. The target is whether or not an image is a face.
We present the overall timing and accuracy (correct classification) in Table 1.
Data Set Name  Features  Nobs  R Time (sec)  R Accuracy  HPSVM Time (sec)  HPSVM Accuracy 
Mushroom Train  22  6,499  1.23  100.0  0.96  100.00 
Mushroom Test  22  1,625  0.05  100.0  0.09  100.00 
Face Train  361  6,977  6.47  99.89  9.53  99.36 
Face Test  361  24,045  5.37  97.33  1.11  97.42 
Adult Train  14  32,561  77.30  85.17  9.88  85.26 
Adult Test  14  16,281  5.65  85.25  0.31  85.27 
You can see that as the number of observations increases, the relative speed of our interior-point SVM becomes much faster than LIBSVM. It is worthwhile to note that on the Face data set, the LIBSVM training was faster than our interior-point HPSVM implementation. Recalling our CPU time analysis for a single machine, we see that the run time is $O(nd^2)$. This scales linearly with $n$, which allows for quick computations as the data size increases in observations. However, our implementation scales with the square of the number of features $d$, and thus for this small data set with many features, LIBSVM trains faster than our implementation.
6.2 Scalability of HPSVM
In this experiment, we demonstrate that our HPSVM algorithm scales well as the number of computing nodes increases. This is very important for training on large data sets. We apply our HPSVM algorithm to a data set that has approximately 84 million observations and 715 features. The computation runs in a distributed environment, and we show how the timing changes with the number of nodes used to run our HPSVM algorithm. The result is presented in Table 2.
Number of Nodes  Training Time (sec) 

20  631.67 
60  378 
100  247.33 
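The scaling in Table 2 can be summarized as speedup and parallel efficiency relative to the 20-node run, computed directly from the table's timings:

```python
# Speedup and parallel efficiency implied by Table 2 (20 nodes as baseline)
times = {20: 631.67, 60: 378.0, 100: 247.33}
base_nodes, base_time = 20, times[20]
for nodes, t in times.items():
    speedup = base_time / t
    efficiency = speedup * base_nodes / nodes
    print(nodes, round(speedup, 2), round(efficiency, 2))
```

Going from 20 to 100 nodes yields roughly a 2.55x speedup, so the training time continues to drop as nodes are added, albeit with diminishing per-node efficiency, as expected from the $O(d^2 \log p)$ communication and $O(d^3)$ serial terms in the complexity analysis.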
6.3 Comparison of HPSVM to Spark
Spark [16] is a popular open-source machine learning library, which includes an implementation of SVM that uses the stochastic gradient descent (SGD) algorithm [24]. We run a test case to compare it with our HPSVM. We set up the testing environment with five nodes (machines); each node has 32 CPUs and 256 GB of memory. The testing data set is called the Glass data set. Here is a brief description of the data.
The Glass data set recorded numerous measures in a semiconductor manufacturing stream. The data came from the engineers who work on developing the optimal semiconductor production environment. They developed a sophisticated system that controlled a large number of variables, such as temperature, air pressure, air humidity, and so on. In the experiment, the engineers adjusted the environmental conditions and then checked to see whether the semiconductors produced under such an environment could satisfy certain requirements. There are 1001 continuous predictors and 1 million observations in the Glass data set. The response variable is binary (whether a semiconductor product passes the test or not), and all the predictors are standardized to the same scale before training.
We list the overall timing and accuracy (correct classification) in Table 3. From this table, you can see that HPSVM runs faster than Spark and achieves better results.
  Training Time (sec)  Accuracy (%) 

Spark  247  90.50 
HPSVM  69  99.78 
7 Conclusion
In this paper, we present a high-performance support vector machines (HPSVM) algorithm that scales well on large data sets. We implemented this algorithm with MPI. The implemented algorithm is now running on various systems, including the distributed Hadoop file system and distributed database systems (such as Teradata, Greenplum, and Aster). We compare the accuracy of our implementation with the state-of-the-art SVM technique implemented in R on some public data sets. Experiments show that when the data set is large, our algorithm scales very well and generates better models.
References
 [1] Aster Data Systems, http://en.wikipedia.org/wiki/Aster_Data_Systems, retrieved in March 2018.
 [2] CBCL Face Database #1, MIT Center for Biological and Computational Learning, http://www.ai.mit.edu/projects/cbcl.old, retrieved in March 2018.
 [3] C. C. Chang and C. J. Lin, LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3), Article 27, April 2011, 27 pages.
 [4] E. Chang, K. Zhu, H. Wang, H. Bai, J. Li, Z. Qiu, and H. Cui, Parallelizing support vector machines on distributed computers. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 257-264. MIT Press, Cambridge, MA, 2008.
 [5] C. Cortes and V. Vapnik, Support-vector networks. Machine Learning, 20:273-297, 1995.
 [6] A. Bordes, L. Bottou, and P. Gallinari, SGD-QN: Careful quasi-Newton stochastic gradient descent. Journal of Machine Learning Research, 10:1737-1754, 2009.
 [7] R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin, LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874, 2008.
 [8] M. Ferris and T. Munson, Interior-point methods for massive support vector machines. SIAM Journal on Optimization, 13(3):783-804, 2003.
 [9] E. M. Gertz and J. D. Griffin, Using an iterative linear solver in an interior-point method for generating support vector machines. Computational Optimization and Applications, DOI 10.1007/s10589-008-9228-z, 2008.
 [10] Greenplum Database System, http://en.wikipedia.org/wiki/Greenplum, retrieved in March 2018.
 [11] C. J. Hsieh, K. W. Chang, C. J. Lin, S. S. Keerthi, and S. Sundararajan, A dual coordinate descent method for large-scale linear SVM. In ICML, 2008.
 [12] T. Joachims, Making large-scale support vector machine learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods: Support Vector Learning, pages 169-184. MIT Press, Cambridge, MA, 1998.
 [13] T. Joachims, Training linear SVMs in linear time. In Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.
 [14] S. Keerthi, S. Shevade, C. Bhattacharyya, and K. Murthy, Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation, 13:637-649, 2001.
 [15] D. Dua and E. Karra Taniskidou, UCI Machine Learning Repository, http://archive.ics.uci.edu/ml. Irvine, CA: University of California, School of Information and Computer Science, 2017.
 [16] Apache Spark, https://spark.apache.org, retrieved in March 2018.
 [17] C. Y. Lin, C. H. Tsai, C. P. Lee, and C. J. Lin, Large-scale logistic regression and linear support vector machines using Spark. In IEEE International Conference on Big Data, 2014.
 [18] J. Mercer, Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society A, Vol. 209, issue 441-458, 1909.
 [19] J. Nocedal and S. Wright, Numerical Optimization, 2nd edition. Springer, 2006.
 [20] OOQP Package, http://pages.cs.wisc.edu/~swright/ooqp, retrieved in March 2018.
 [21] E. Osuna, R. Freund, and F. Girosi, An improved training algorithm for support vector machines. In J. Principe, L. Gile, N. Morgan, and E. Wilson, editors, Neural Networks for Signal Processing VII: Proceedings of the IEEE Workshop, pages 276-285. IEEE, 1997.
 [22] J. Platt, Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods: Support Vector Learning, pages 185-208. MIT Press, 1999.
 [23] R. Rabenseifner, G. Hager, and G. Jost, Hybrid MPI and OpenMP parallel programming. Supercomputing, Denver, CO, 2013.
 [24] S. Shalev-Shwartz, Y. Singer, and N. Srebro, Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, 2007.
 [25] Teradata Database System, http://en.wikipedia.org/wiki/Teradata, retrieved in March 2018.
 [26] V. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
 [27] K. Woodsend and J. Gondzio, Hybrid MPI/OpenMP parallel linear support vector machine training. Journal of Machine Learning Research, 10:1937-1953, 2009.