1 Introduction
The concept of federated learning was proposed by Google in [14]. The main idea is to conduct model training using data sets hosted by distributed agents while preventing data leakage. Each agent independently generates local model updates from the data instances it hosts. The distributed agents share only the local model updates with the central server, where the local updates are averaged to estimate the global drift of the model parameters. The federated optimization process is a double-edged sword. On the one hand, as no explicit data transfer is conducted, federated learning provides a strong barrier protecting the privacy of the training data. On the other hand, federated optimization can easily be biased by local noise corruption on any of the agents. Such local noise corruption can appear as outliers, which are relatively easy to identify, or as systematic biases. The latter usually corrupt the majority of the training data, which has a much more severe adverse effect on learning
[5]. They are much harder to detect, because systematically corrupted data instances appear self-consistent. We propose a novel algorithm to mitigate the adverse impact of systematic noise corruption and achieve robust federated training via Collaborative Machine Teaching, thus named CoMT. At the core of CoMT, the local agents act as distributed teachers, while the central server is the student co-taught by the distributed teachers. The teachers are organized to jointly select the most informative subset of the hosted distributed data. The data corruption is debugged with a set of “trusted data instances” owned by each teacher and verified by domain experts. To minimize the demand for expert verification, the trusted datasets are usually small in size: insufficient for learning on their own, but important for guiding the selection of trustable instances. The collaborative teaching activity and the federated model training are unified within a joint optimization process. It defines an explicit interaction between teaching and learning: the distributed teachers collaborate to generate an appropriate training subset based on the trusted instances, while the federated learners train the model on the carefully tuned subset and force the learned model to agree with the trusted data instances throughout the optimization process. The federated teaching-learning process aims to achieve three goals simultaneously. First, the trusted instances guide the model training process despite the systematically corrupted training data. Classic machine teaching methods focus on tuning the training data to force the learned model parameters to be as close as possible to a given target value. This is far from practically useful for model training, as we barely know the parameter values of the target model before training. In contrast to previous machine teaching methods, the proposed CoMT algorithm directly produces an applicable model as the output of robust federated training.
Second, the joint optimization of the collaborative teaching and learning process requires no explicit data transfer; the proposed CoMT is therefore privacy-preserving by design. Third, the proposed method provides highly scalable computing over large-scale data sets, whereas previous machine teaching methods suffer from notoriously expensive computational cost. In this sense, the proposed method is more suitable for real-world applications.
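The aggregation scheme described above (local updates computed on private data, then averaged on a central server) can be sketched as follows. This is a minimal NumPy illustration of the federated-averaging idea only; the least-squares learner and all function and variable names are our own assumptions, not the paper's.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    # One agent: a few gradient steps on its private least-squares objective.
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5 * mean squared error
        w -= lr * grad
    return w - w_global                      # only the local model update leaves the agent

def federated_round(w_global, agents):
    # Server: average the local updates; raw data never moves.
    updates = [local_update(w_global, X, y) for X, y in agents]
    return w_global + np.mean(updates, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
agents = [(X, X @ w_true) for X in (rng.normal(size=(100, 2)) for _ in range(5))]
w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, agents)
```

After enough rounds the averaged model approaches the parameter that fits all agents' data, even though the server never sees any data instance.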
2 Related Work Discussion and Our Contributions
2.1 Machine Teaching
Machine teaching was originally proposed in [11]. Most works focus on a key quantity called the teaching dimension, i.e., the size of the minimal training set that is guaranteed to teach a target model to the student. For example, [11] provides a discussion of the teaching dimension of version space learners, [15]
analyzes the teaching dimension of linear classifiers, and
[23] studies the optimal teaching problem for Bayesian learners. In standard machine teaching, the student is assumed to passively receive the optimal training set generated by the teacher. Later works consider other variants of the teaching setting; e.g., in [24, 2], the student and the teacher are allowed to cooperate in order to achieve better teaching performance. More theoretical studies of machine teaching can be found in [7, 6, 12, 17, 16]. As another popular application, machine teaching can also be used to perform data poisoning attacks on real-world machine learning systems. In such cases, the teacher is viewed as a nefarious attacker with a specific attack goal in mind, while the student is a machine learner; the teaching procedure then corresponds to minimally tampering with the clean data set so that the machine learner learns some suboptimal target model on the corrupted training set. Some adversarial attack applications can be found in [19, 1]. Instead of artificially designing the training set, Super Teaching [18] conducts subset selection over an i.i.d. training set. The identified informative subset is then used for model training. Mathematically, Super Teaching is defined as:
Definition 1 (Super Teaching)
Let D = {z_1, …, z_n} be an n-item i.i.d. training set, and let T be a teacher who selects a subset D_T ⊆ D as the training subset for learner L. Let θ_D and θ_T be the models learned from D and D_T, respectively. Then T is a super teacher for learner L if there exists N such that, for all n ≥ N,

P( R(θ_T) ≤ c_n R(θ_D) ) ≥ 1 − ε,    (1)

where R is a teaching risk function, the probability is taken with respect to the randomness of D, and {c_n} is a sequence called the super teaching ratio. The idea of selecting an informative training subset is also explored in [8]. In the proposed Learning-to-Teach framework, the teacher conducting subset selection is modeled with a Deep Neural Network (DNN). The goal of teaching is to select training samples that make the DNN-based learner converge faster. The teacher network is tuned via reinforcement learning, with reward signals encouraging fast descent of the learner's classification loss. In contrast to Learning-to-Teach, super teaching in [18] focuses on a more general teaching goal, which drives the student to learn the expected model. Another machine teaching method closely related to our work is Debugging Using Trusted Items (DUTI) [22]. DUTI assumes that only the labels of the training data are corrupted. Noise over the labels, such as class-label flipping noise for classification and continuous-valued noise for regression, is considered the bug of the training data. DUTI formulates a two-level optimization problem to conduct the teaching task. It learns to inject the smallest crafting into the potentially corrupted labels, guided by a small portion of trusted items. The injected changes are then provided to a domain expert as suggested bug fixes and used to identify candidate outliers in the training data. The mathematical definition of DUTI is given as follows:
Definition 2 (DUTI)
Let (X, Y) denote the features and labels of the training data, where Y can be class labels or regression targets. The outliers of the training data corrupt Y. Let (X_t, Y_t) denote the trusted data instances, and let A be the learning paradigm of the learner, e.g., a regularized empirical risk minimizer with a strongly convex and twice-differentiable objective function. Conceptually, DUTI solves the two-level optimization problem:

min_{Y'} d(Y', Y)  s.t.  θ' = A(X, Y'),  θ'(X_t) = Y_t,  θ'(X) = Y',    (2)

where d(Y', Y) denotes the distance between the potentially buggy labels Y and the fixed labels Y', and θ' denotes the model learned from the fixed training data. In general, DUTI finds an alternative labeling Y' for the training data that stays as close as possible to Y. The model trained with the original feature vectors X and the alternative labels Y' correctly predicts the labels of the trusted items, and the alternative labeling is self-consistent. Our work is aligned with Super Teaching [18] and Learning-to-Teach [8] in selecting training instances without artificial crafting, but involves multiple distributed teachers rather than the single teacher in [18, 8]. We also face the challenge of systematically corrupted training data, where both X and Y may be noisy, while DUTI [22] only addresses corruption of the labels Y. Moreover, all of these machine teaching studies, including DUTI, work in a single-teacher setting, while ours focuses on collaborative teaching by a set of distributed agents. We thus discuss the related work on learning in a distributed environment, Federated Learning, in the next subsection, and then summarize our contributions in subsection 2.3.
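To make the trusted-item debugging idea concrete, the toy sketch below runs gradient steps on an alternative labeling for a one-dimensional ridge learner. The closed-form learner, the sign-flip corruption, and the plain gradient-descent heuristic are our simplifications for illustration, not the original bilevel DUTI solver.

```python
import numpy as np

# Toy illustration of the DUTI idea (our simplification): find an alternative
# labeling y_alt that stays close to the corrupted labels y while the model
# fitted on y_alt agrees with a few trusted items.
rng = np.random.default_rng(1)
n, lam = 200, 1e-3
x = rng.normal(size=n)
y = 2.0 * x                      # ground-truth relation y = 2x
y[:50] *= -1.0                   # systematic corruption: a quarter of labels sign-flipped
x_tr = np.array([1.0, -1.0])     # two trusted items, assumed verified by experts
y_tr = np.array([2.0, -2.0])

def fit(labels):                 # lower level: closed-form 1-D ridge learner
    return x @ labels / (x @ x + lam)

y_alt, c, lr = y.copy(), 100.0, 0.5
for _ in range(500):
    th = fit(y_alt)
    # gradient of ||y_alt - y||^2 / n + c * mean((th * x_tr - y_tr)^2) w.r.t. y_alt
    dth = 2.0 * c * np.mean((th * x_tr - y_tr) * x_tr)   # d objective / d th
    y_alt -= lr * (2.0 * (y_alt - y) / n + dth * x / (x @ x + lam))
```

The model fitted on the corrupted labels is strongly biased, while the model fitted on the debugged labeling recovers the true slope; the gap between y_alt and y also points at the corrupted entries.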
2.2 Federated Learning
Federated learning [14] is a communication-efficient and privacy-preserving distributed model training method over distributed agents. Each agent hosts its own data instances and is capable of computing local model updates. In each round of model training, the training process is first conducted on each node in parallel, without inter-node communication. Only the local model updates are aggregated on a centralized parameter server to derive the global model update. The aggregation is agnostic to the data distribution of the different agents. Neither the centralized server nor the local agents have visibility of the data owned by any specific agent. In [13, 21], a communication-efficient distributed optimization method named CoCoA is proposed for training models in a privacy-preserving way. CoCoA applies block-coordinate descent over the dual form of the joint convex learning objective and guarantees sublinear convergence of the federated optimization. Furthermore, the optimization process does not require access to the data instances hosted by each node; only local dual variable updates are transferred from the local nodes to the central server. This property makes CoCoA appropriate for federated training.
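A minimal single-process sketch of the CoCoA idea may help: each agent improves its own block of dual coordinates and shares only aggregated increments with the server. The ridge-regression dual, the coordinate update, and the conservative averaging rule below are a textbook-style simplification under our own assumptions, not the exact CoCoA algorithm.

```python
import numpy as np

# Block-separable dual coordinate ascent in the CoCoA spirit (our sketch):
# each "agent" owns a disjoint block of dual coordinates for ridge regression
# and shares only its primal/dual increments.
rng = np.random.default_rng(2)
n, d, K, lam = 300, 5, 3, 0.1
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)
blocks = np.array_split(np.arange(n), K)      # one disjoint block per agent

alpha, w = np.zeros(n), np.zeros(d)
for _ in range(300):                          # communication rounds
    deltas = []
    for blk in blocks:                        # would run on the agents in parallel
        w_loc, dalpha = w.copy(), np.zeros(n)
        for i in blk:                         # one local pass of coordinate ascent
            step = (y[i] - X[i] @ w_loc - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
            dalpha[i] = step
            w_loc += step * X[i] / (lam * n)  # keep the dual-to-primal mapping in sync
        deltas.append((dalpha, w_loc - w))
    alpha += sum(da for da, _ in deltas) / K  # server: conservative averaging
    w += sum(dw for _, dw in deltas) / K

# centralized ridge solution: min_w mean(0.5*(Xw - y)^2) + (lam/2)*||w||^2
w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
```

Only `dalpha` and the primal increment cross the (simulated) network; the feature rows stay inside their block, which is the property that makes this style of optimization attractive for federated training.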
A federated data poisoning attack was recently proposed in [3]. This work assumes that only one malicious agent conducts non-colluding adversarial data poisoning over the data instances it hosts. Our method is distinct from this work, since we study consensus collaboration of multiple teachers in tuning the training data. In addition, our method can be used as a data cleaning process to mitigate the effects of malicious noise injection.
2.3 Our contributions
Our work extends the horizon of machine teaching to deliver a robust federated learning scenario. The major contributions of our work can be summarized in the following four aspects.
First, unlike previous machine teaching methods with a single teacher, we organize multiple teachers as collaborative players in both teaching and learning. Data instances hosted by one teacher cannot be accessed by the others, which prevents the risk of data leakage in the collaboration but also defines a challenging teaching task: teaching agents can only access their own data, yet they are expected to reach consensus in the joint teaching collaboration.
Second, we assume a more challenging scenario in which the majority of the training data instances are corrupted by systematic noise, and the trusted instances are only a small fraction of the potentially buggy data. Furthermore, we assume that both features and labels can be noisy. The proposed method can handle a mixture of both cases with a computationally efficient optimization process. In contrast, DUTI only copes with noisy labels and suffers from scalability issues on large-scale data sets.
Third, the distributed teachers also learn to change the features of potentially corrupted data using the trusted instances. Enabling more teaching flexibility helps to deliver better teaching performance.
Finally, the federated teaching and learning in the proposed method are incorporated into a joint distributed optimization problem. The model learned with the tuned training data is produced directly as the output. Coupling teaching and learning guides the teaching activity with a learning-performance-oriented objective; it helps to minimize the changes injected into the training data while still producing satisfactory learning performance, reducing the risk of introducing unexpected artefacts into the data.
3 Collaborative Machine Teaching using Trusted Instances (CoMT)
3.1 Problem definition
Assume that there are K local agents, denoted a_k (k = 1, 2, …, K); each of them hosts a buggy training set D_k composed of feature-label pairs. We use X and Y to denote all the features and labels of one training set. In the setting of this work, both X and Y can be contaminated by noise. In addition, we assume that each agent hosts a small set of trusted data instances T_k, where |T_k| is much smaller than |D_k|. These trusted instances are verified by domain experts at considerable expense. The labels of D_k and T_k can be continuous for regression or discrete for classification. The mathematical definition of the proposed collaborative teaching process is given in Eq. (3):
(3) 
In Eq. (3), ||x'_i − x_i|| measures the Euclidean distance between the changed feature x'_i and the original feature x_i. s_i ∈ {0, 1} is a binary indicator denoting whether the corresponding instance is selected for training, and ||s||_1 denotes the L1 norm of the indicator vector s. A denotes the joint learning paradigm used to train the model parameterized by w. Minimizing Eq. (3) simultaneously selects an informative subset of the training data and injects minimal changes into the features of the selected data instances. The tuned subset of training data is used to conduct federated training via the learning paradigm A. According to the constraint, the learnt model should predict consistent labels on both the trusted instances and the tuned training data.
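Since the objective is only described in words here, a schematic form in our own notation (s for the selection indicators, \hat{x} for tuned features, \mathcal{A} for the learning paradigm; this is a reconstruction of the structure, not the paper's exact equation) reads:

```latex
\min_{s \in \{0,1\}^{N},\, \hat{X}} \;
\sum_{k=1}^{K} \sum_{i \in D_k} s_i^k \,\bigl\| \hat{x}_i^k - x_i^k \bigr\|_2^2
\;+\; \lambda \,\| s \|_1
\quad \text{s.t.} \quad
w = \mathcal{A}\bigl(\{ (\hat{x}_i^k, y_i^k) : s_i^k = 1 \}\bigr),\;
w \ \text{consistent on all } T_k .
```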
3.2 Dual form of the collaborative teaching
The dual objective of the learning paradigm of the learner is given by:
D(α) = −(1/N) Σ_{i=1}^{N} ℓ*_i(−α_i) − (λ/2) ||(1/(λN)) A α||²    (4)
where ℓ* is the Fenchel dual of the loss function ℓ. Let N denote the total number of training instances owned by the teachers, and let A denote the aggregated data matrix, with each column corresponding to a data instance. The duality comes with a mapping from the dual variable to the primal variable, w = (1/(λN)) A α, as given by the KKT optimality condition. α is the N-dimensional dual variable, where each entry α_i^k denotes the dual variable corresponding to the i-th data instance hosted by teacher k. If α_i^k vanishes, the corresponding data instance has no impact on the dual objective in Eq. (4). Thus, only the data instances with nonzero α_i^k dominate the training process. Motivated by this observation, we formulate the objective of the proposed collaborative teaching using the dual form of the learning paradigm in Eq. (5):

(5)
where γ balances the impact of the trusted data instances in the joint optimization process. A larger γ puts more weight on the learning loss of the trusted data instances, and thus sets a stricter constraint on the consistency between the learned model and the trusted data. A regularization weight on the adaptive norm penalization enforces sparsity of α to perform the subset selection, and a further penalty term controls the magnitudes of the changes injected by the teachers into the buggy training features of the selected subset. Appropriately chosen weights help to prevent excessive artefacts from being introduced into the tuned training instances, while still allowing enough tuning flexibility to deliver efficient teaching. The teaching objective in Eq. (5) is convex by the properties of the Legendre-Fenchel transform, so solving Eq. (5) with gradient descent guarantees fast convergence. As enforced by the norm regularization on α, the nonzero entries of α in Eq. (5) correspond to the data instances selected for the learner to calculate the model parameters. In practice, the learned α has a small fraction of entries with dominant magnitudes, and the rest are negligible. We next demonstrate how to apply the proposed CoMT
method to two prevalent learners, L2-regularized Logistic Regression (LR) and Ridge Regression (RR). It is worth noting that
CoMT is not constrained to these two linear models. It is extendable to federated kernelized learners by introducing random Fourier features [20]. We leave this extension for future study.
3.3 CoMT for Ridge Regression and Logistic Regression
We instantiate Eq. (5) for Ridge Regression by inserting the dual form of Ridge Regression, which gives:
(6) 
where δ_i denotes the teaching crafting applied to the buggy training instance x_i, and α is the dual variable vector. The magnitude of each element of α measures the contribution of the corresponding training data instance hosted by one agent in forming the linear regression parameter w. The larger |α_i| is, the more the corresponding x_i and the crafting variable δ_i contribute to recovering the regression parameter w. As shown by Eq. (6), the first three terms enforce consistency between the learned regression parameter and the changed training data instances; they are derived from the dual definition of Ridge Regression. The fourth term enforces consistency of the learnt model with the trusted instances. Similarly, we can define CoMT for Logistic Regression in Eq. (7) by combining the dual form of L2-regularized Logistic Regression with Eq. (5).
(7) 
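The dual-to-primal mapping underlying these instantiations can be checked numerically for Ridge Regression. The sketch below is our own illustration for a standalone primal objective 0.5·||Xw − y||² + (lam/2)·||w||², not the paper's exact federated objective; it recovers the primal solution from the dual variables.

```python
import numpy as np

# Numerical check (our illustration) of the dual-to-primal KKT mapping for
# ridge regression, min_w 0.5*||Xw - y||^2 + 0.5*lam*||w||^2:
# the primal solution is recovered as w = (1/lam) * X^T alpha.
rng = np.random.default_rng(3)
n, d, lam = 50, 4, 0.5
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w_primal = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
alpha = lam * np.linalg.solve(X @ X.T + lam * np.eye(n), y)  # dual solution
w_dual = X.T @ alpha / lam
```

The two solutions coincide, which is the sense in which a sparse dual vector directly determines which instances shape the primal model.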
4 CoMT optimization
We propose to combine Block-Coordinate Descent (BCD), as proposed in [21], with the Alternating Direction Method of Multipliers (ADMM) to solve the optimization problems in Eq. (6) and Eq. (7). In each round of the descent process, we minimize the objective w.r.t. the variables belonging to the k-th local agent, while fixing all the others. We take the optimization process of CoMT for RR as an example; similar steps apply to the case of LR.
We first reformulate Eq. (6) into the equivalent form shown in Eq. (8):
(8) 
Following scaled ADMM, we can express Eq. (8) as:
(9) 
where ρ is the augmented Lagrangian parameter and u is the scaled dual variable of ADMM. The pseudo code of the optimization procedure is given in Algorithm 1.
We use α_k^t and δ_k^t to denote the values of the disjoint block estimated at the t-th iteration; they correspond to the data instances hosted by the k-th local agent. In each iteration, we update the dual variables α_k and δ_k for each of the agents in parallel. Incremental updates Δα_k and Δδ_k are calculated based on the values of α_k^t and δ_k^t. The incremental updates indicate the descent direction minimizing the objective with respect to the block α_k and δ_k. They are estimated by minimizing a local approximation to Eq. (8), in which the primal variable is represented as the additive combination of the local contributions. A learning rate adjusts the descent step length for the block α_k and δ_k.
In Algorithm 1, updating each block α_k and δ_k does not require knowledge of the values of the other blocks. All the local updates require only the values of the variables derived in the last round and the globally aggregated variable broadcast from the central server. Similarly, updating δ can be conducted using federated optimization. As such, the optimization w.r.t. α and δ in Algorithm 1 can be conducted in parallel, without inter-agent communication.
It is easy to see that: i) the private data hosted by any local agent are kept on the agent's own device during the collaboration stage; in other words, no training data is transferred directly between agents. Furthermore, the update step only needs to aggregate the local updates on the central server and broadcast the aggregate back to the agents. It is difficult to infer any statistical profile of the training data hosted by a local agent solely from the aggregate, which prevents the risk of unveiling the private data of one local agent to the others in the teaching and learning collaboration. ii) Information sharing between local agents is conducted by updating the global variables and broadcasting the updated values to all agents in Algorithm 1. Communication for the teaching and learning collaboration is thus efficient in each round of iteration. Moreover, according to [13], the updates of local teachers can be triggered with asynchronous parallelism, which allows organizing efficient collaboration of teaching and learning with a large number of agents and a tight communication budget. Note that α is tuned to be a sparse vector: most entries of α are driven to zero. We identify the indices of the data instances corresponding to the entries of α with the largest nonzero magnitudes; only these selected data instances, i.e., the identified data subsets hosted by the local agents, are used to calculate the model parameter. Once α and δ are derived as the output of Algorithm 1, we aggregate them to obtain the model parameter w. The parameter is calculated by globally ranking the entries of α on the central server and then aggregating the local estimates shared by each agent.
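Algorithm 1 itself is not reproduced here, but the consensus structure it relies on (parallel local solves, server-side aggregation of a global variable, scaled dual updates) can be sketched with a generic consensus-ADMM loop. The local least-squares subproblem and all names below are our assumptions for illustration, not the paper's exact subproblems.

```python
import numpy as np

# Generic consensus-ADMM sketch: agents solve local subproblems in parallel;
# the server only aggregates (w_k + u_k) into the global variable z.
rng = np.random.default_rng(4)
K, d, rho = 5, 3, 1.0
w_true = rng.normal(size=d)
data = [(X, X @ w_true + 0.01 * rng.normal(size=40))
        for X in (rng.normal(size=(40, d)) for _ in range(K))]

z = np.zeros(d)                               # global consensus variable
W = [np.zeros(d) for _ in range(K)]           # local primal variables
U = [np.zeros(d) for _ in range(K)]           # scaled dual variables
for _ in range(100):
    for k, (X, y) in enumerate(data):         # local, parallelizable solves:
        # argmin_w 0.5*||Xw - y||^2 + (rho/2)*||w - z + u_k||^2
        W[k] = np.linalg.solve(X.T @ X + rho * np.eye(d),
                               X.T @ y + rho * (z - U[k]))
    z = np.mean([W[k] + U[k] for k in range(K)], axis=0)  # server aggregation
    for k in range(K):
        U[k] += W[k] - z                      # scaled dual update
```

Only the d-dimensional vectors w_k + u_k and z cross the network per round, which matches the communication pattern discussed above.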
5 Experimental Results
5.1 Experimental setup
We test the proposed CoMT algorithm on both a synthetic data set and real-world benchmark data sets (summarized in Table 1). To construct the synthetic dataset, we first create clusters of random data instances following a normal distribution, and then assign one half of the clusters as positive and the other half as negative, to construct a balanced binary-class data set. The regression dataset is obtained by applying a random linear regressor to the created features to get the regression targets. The dimensionality of each data instance is fixed. Without loss of generality, we set the number of agents to 5 in the experimental study. To generate i.i.d. data instances, the mean and variance of the normal distribution for data generation are kept the same for different agents. A summary of the real-world data sets is given in Table 1; they are used to evaluate the practical performance of the proposed method on large-scale real-world data samples. In the empirical study on both the synthetic and the real-world data, each local agent is assumed to host the same amount of training instances. The real-world datasets are randomly shuffled and assigned to the local agents. In the experimental study on the synthetic data, we first randomly extract 40% of the whole data set as training data. These training data are then corrupted to generate buggy training sets and assigned to the local agents. We choose a small fraction of the whole data set as the trusted instances hosted by the local agents; these are considered to be free from noise. The rest of the data are used as testing instances to evaluate the performance of the learned classification or regression model. To construct buggy training sets for the local agents in the case of Ridge Regression, we add random Gaussian-distributed noise to both the features and the regression targets, as in Eq. (10):

x' = x + ε_x · m_x · ν_x,   y' = y + ε_y · m_y · ν_y,    (10)

where x and y are the original feature and target of a regression training instance before noise injection, and ν_x and ν_y are two independently generated random variables following the standardized normal distribution.
m_x and m_y denote the averaged magnitudes of the features and targets, and ε_x and ε_y are the magnitudes of the noise corruption injected into each feature vector and target variable. In the case of Logistic Regression, we define two scenarios of buggy training data. First, we fix the class labels of the training data, while following the same protocol as in Eq. (10) to add Gaussian-distributed noise to the feature vectors. Second, we fix the training features and randomly flip a fraction of the class labels to generate buggy training data for Logistic Regression. To measure the collaborative teaching performance, we repeat the process of randomly sampling training data and injecting noise 20 times. The mean and variance of the R-squared and AUC scores derived on the testing instances are used as the performance metrics of the regression and classification models.

Table 1: Summary of the real-world data sets.

Dataset | No. of Instances | No. of Features
IJCNN | 49,990 | 22
CPUSMALL | 8,192 | 12
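The corruption protocol of Eq. (10) can be sketched as follows. This is our transcription: `eps_x` and `eps_y` stand for the noise magnitudes, and the `m` factors for the averaged data magnitudes; the function name is ours.

```python
import numpy as np

# Sketch of the corruption protocol of Eq. (10): additive Gaussian noise
# scaled by the averaged data magnitude and by eps_x / eps_y.
def corrupt(X, y, eps_x, eps_y, rng):
    m_x = np.mean(np.abs(X))                  # averaged feature magnitude
    m_y = np.mean(np.abs(y))                  # averaged target magnitude
    return (X + eps_x * m_x * rng.standard_normal(X.shape),
            y + eps_y * m_y * rng.standard_normal(y.shape))

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 10))
y = X @ rng.normal(size=10)
X_noisy, y_noisy = corrupt(X, y, 0.6, 0.6, rng)
```

Scaling by the averaged magnitude keeps the corruption level comparable across features and targets of different scales.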
We involve the following baselines to conduct a comparative study:

In both the regression and classification cases, we use only the trusted instances of each agent to conduct federated model training; the derived models are evaluated on the testing instances. We name this baseline TI-only. The purpose of involving TI-only is to confirm that learning jointly with both the buggy data and the trusted items helps to achieve better performance.

We simplify the proposed CoMT method shown in Eq. (5) by dropping the data-change operation. The simplified method, denoted CoMT-subset, only selects subsets for model training, and is compared against to evaluate the effectiveness of buggy-data correction. Mathematically, it is defined as:
(11) 
DUTI [22] is adapted to the distributed computing scenario by first running standalone on each local agent to estimate the correction of the buggy training data. The training data crafted by DUTI are then sent for federated model training.

REWLSE is a robust linear regression method proposed in [10]. It is a type of weighted least squares estimator with weights adaptively calculated from an initial robust estimator. We follow the recommended parameter settings and apply REWLSE on each local agent independently to derive local model updates. The resulting local model updates are aggregated to calculate the model parameters globally, as in federated learning.

RloR [9] and rLR [4] are robust logistic regression methods against outliers in feature space and against label-flipping noise, respectively. We use them as baselines in the two scenarios of learning Logistic Regression with corrupted training data. Since neither method is designed for distributed computing, we follow the same federated learning protocol as used for adapting DUTI and REWLSE.
We believe that the keys to the success of the proposed CoMT method are twofold. First, it conducts collaborative machine teaching, which allows information about trusted instances and noise patterns to be shared among the agents. This helps to jointly tune the buggy training data to deliver robust model training. Second, it seamlessly combines data-tuning-based machine teaching with model training. The combination explicitly defines a learning-performance-driven machine teaching, which helps to optimally tune the data for better learning. Compared to the proposed method, none of the baseline methods except CoMT-subset incorporates both of these key designs in a collaborative architecture. We therefore expect the proposed method to deliver superior model learning performance over the baselines. All the methods are implemented in Python 2.7 with the NumPy and SciPy packages on a 5-core AWS EC2 public cloud server, with one core per local agent.
5.2 Results on synthetic data
We generate 50,000 synthetic data samples and assign them to the local agents, from which we extract and generate the buggy training data, trusted instances, and testing instances. We vary the fraction of trusted instances on each local agent from 0.25% and 0.5% to 1%. The noise magnitudes ε_x and ε_y are varied from 0.3 and 0.6 to 1.2 to simulate increasingly large noise corruption. Table 2 shows the results. The proposed CoMT method only needs a subset of the distributed training instances to estimate the final model parameters; therefore, in the results produced by CoMT, we also record the fraction of selected training data instances with respect to the whole training data set. The running time of each method is recorded to evaluate computational efficiency on the large-scale data set.
According to the experimental results, the proposed CoMT method and its simplified version produce more accurate models than the baseline methods, given very small portions of trusted instances. Especially with the smallest fraction of trusted instances, the proposed CoMT and CoMT-subset achieve over 1.7 times the R-squared scores and 1.1 times the AUC scores of the baselines. In both the classification and regression scenarios, the baseline methods also improve with increasingly large fractions of trusted instances, since more trusted instances are available for training. Nevertheless, the models learnt via CoMT and CoMT-subset provide consistently superior and stable performance. Furthermore, benefiting from the subset selection module embedded in the objective function of CoMT, the proposed method needs only 70% of the whole training set on average to conduct training. The capability of subset selection is helpful in applications where local computing power is limited, as local agents are not obliged to load all the training data instances to compute the model parameters. Compared to CoMT-subset, CoMT obtains a more accurate model with an equal or even smaller number of selected training instances. This observation is consistent with our expectation: CoMT conducts subset selection and crafting simultaneously, which allows more flexibility in tuning the training set and leads to better overall model performance.
Table 2: Regression results on the synthetic data. Each cell reports the mean R-squared score / its variance / the selected training fraction (CoMT and CoMT-subset only) / the running time.

Noise | Trusted (%) | CoMT | CoMT-subset | TI-only | DUTI | REWLSE
0.3 | 0.10 | 0.98 / 1.73e-5 / 0.70 / 35.16s | 0.95 / 2.25e-5 / 0.70 / 33.50s | 0.56 / 3.47e-4 / 12.15s | 0.57 / 9.95e-5 / 223s | 0.55 / 4.30e-4 / 20.50s
0.3 | 0.50 | 0.98 / 8.00e-6 / 0.70 / 32.20s | 0.96 / 7.43e-6 / 0.75 / 33.50s | 0.89 / 1.00e-4 / 11.50s | 0.85 / 8.37e-4 / 230s | 0.82 / 7.93e-4 / 22.20s
0.3 | 1.00 | 0.99 / 1.12e-6 / 0.65 / 36.00s | 0.96 / 1.17e-6 / 0.70 / 37.00s | 0.95 / 7.56e-6 / 12.50s | 0.88 / 8.02e-6 / 255s | 0.84 / 1.26e-6 / 22.10s
0.6 | 0.10 | 0.99 / 1.03e-6 / 0.70 / 39.50s | 0.95 / 1.43e-6 / 0.75 / 36.34s | 0.55 / 4.88e-5 / 15.00s | 0.54 / 5.00e-5 / 289s | 0.57 / 1.25e-4 / 24.30s
0.6 | 0.50 | 0.99 / 1.57e-5 / 0.70 / 38.70s | 0.95 / 1.35e-5 / 0.70 / 40.20s | 0.89 / 1.46e-4 / 15.20s | 0.85 / 5.32e-5 / 302s | 0.83 / 8.07e-5 / 23.30s
0.6 | 1.00 | 0.99 / 4.70e-6 / 0.65 / 40.55s | 0.95 / 3.50e-6 / 0.70 / 39.35s | 0.96 / 9.82e-6 / 17.00s | 0.89 / 2.04e-5 / 320s | 0.89 / 8.98e-6 / 23.50s
1.2 | 0.10 | 0.99 / 7.93e-6 / 0.70 / 44.50s | 0.96 / 8.20e-6 / 0.70 / 44.00s | 0.54 / 6.30e-5 / 18.45s | 0.56 / 7.55e-5 / 335s | 0.54 / 4.35e-5 / 25.37s
1.2 | 0.50 | 0.99 / 1.92e-6 / 0.65 / 45.70s | 0.96 / 1.55e-6 / 0.70 / 46.40s | 0.91 / 6.87e-5 / 18.20s | 0.86 / 2.54e-5 / 368s | 0.85 / 3.05e-5 / 25.04s
1.2 | 1.00 | 0.99 / 5.83e-6 / 0.70 / 47.45s | 0.96 / 7.86e-6 / 0.70 / 52.50s | 0.96 / 8.81e-6 / 19.20s | 0.89 / 1.02e-5 / 370s | 0.90 / 2.21e-5 / 26.30s
Table 3: Classification results (AUC) on the synthetic data, with TI-only and rLR as baselines. Each cell reports the mean AUC score / its variance / the selected training fraction (CoMT and CoMT-subset only) / the running time.

Noise | Trusted (%) | CoMT | CoMT-subset | TI-only | rLR
0.3 | 0.10 | 0.73 / 6.06e-4 / 0.80 / 67.60s | 0.72 / 4.65e-4 / 0.80 / 67.00s | 0.67 / 7.3e-4 / 32.10s | 0.68 / 4.83e-4 / 48.20s
0.3 | 0.50 | 0.88 / 4.07e-6 / 0.65 / 58.50s | 0.86 / 6.16e-6 / 0.70 / 59.20s | 0.83 / 6.19e-6 / 29.84s | 0.73 / 4.65e-6 / 43.00s
0.3 | 1.00 | 0.87 / 6.20e-6 / 0.65 / 62.10s | 0.86 / 8.75e-6 / 0.70 / 64.00s | 0.83 / 8.75e-6 / 25.34s | 0.73 / 6.43e-6 / 51.13s
0.6 | 0.10 | 0.86 / 1.06e-4 / 0.70 / 69.20s | 0.87 / 5.53e-5 / 0.70 / 72.30s | 0.82 / 1.30e-3 / 26.21s | 0.79 / 5.53e-4 / 36.10s
0.6 | 0.50 | 0.93 / 2.00e-4 / 0.70 / 62.35s | 0.93 / 3.14e-4 / 0.80 / 63.05s | 0.88 / 2.19e-4 / 25.35s | 0.75 / 2.35e-4 / 41.21s
0.6 | 1.00 | 0.90 / 1.57e-4 / 0.75 / 59.20s | 0.88 / 1.80e-4 / 0.75 / 60.30s | 0.85 / 5.57e-5 / 25.35s | 0.72 / 3.70e-5 / 51.00s
1.2 | 0.10 | 0.89 / 4.99e-4 / 0.70 / 63.70s | 0.88 / 4.23e-4 / 0.75 / 61.25s | 0.86 / 1.24e-3 / 29.20s | 0.79 / 3.27e-3 / 32.20s
1.2 | 0.50 | 0.91 / 1.53e-5 / 0.70 / 62.00s | 0.88 / 1.53e-5 / 0.70 / 62.00s | 0.86 / 2.55e-6 / 30.50s | 0.82 / 2.32e-5 / 38.30s
1.2 | 1.00 | 0.85 / 1.51e-5 / 0.75 / 67.10s | 0.83 / 2.54e-5 / 0.75 / 72.00s | 0.81 / 2.12e-5 / 26.44s | 0.77 / 2.61e-6 / 53.00s
Table 4: Classification results (AUC) on the synthetic data, with TI-only, DUTI and RloR as baselines. Each cell reports the mean AUC score / its variance / the selected training fraction (CoMT and CoMT-subset only) / the running time.

Noise | Trusted (%) | CoMT | CoMT-subset | TI-only | DUTI | RloR
0.3 | 0.10 | 0.77 / 5.16e-4 / 0.70 / 63.20s | 0.75 / 8.68e-4 / 0.70 / 62.80s | 0.72 / 7.30e-4 / 12.00s | 0.63 / 4.58e-5 / 197.32s | 0.69 / 4.52e-5 / 25.30s
0.3 | 0.50 | 0.82 / 3.94e-5 / 0.65 / 60.20s | 0.83 / 3.40e-5 / 0.70 / 58.35s | 0.68 / 5.98e-5 / 12.10s | 0.67 / 6.96e-5 / 154.10s | 0.68 / 1.31e-5 / 32.40s
0.3 | 1.00 | 0.82 / 1.30e-4 / 0.70 / 58.00s | 0.80 / 1.12e-4 / 0.70 / 58.00s | 0.76 / 2.54e-4 / 15.10s | 0.72 / 2.12e-4 / 198.20s | 0.68 / 4.02e-4 / 27.44s
0.6 | 0.10 | 0.79 / 5.16e-4 / 0.70 / 63.20s | 0.79 / 3.05e-4 / 0.70 / 61.60s | 0.74 / 7.30e-4 / 12.00s | 0.72 / 4.58e-5 / 197.32s | 0.72 / 4.52e-5 / 25.30s
0.6 | 0.50 | 0.86 / 3.94e-5 / 0.75 / 60.20s | 0.83 / 5.09e-5 / 0.70 / 62.30s | 0.80 / 5.98e-5 / 12.10s | 0.77 / 6.96e-5 / 154.10s | 0.76 / 1.31e-5 / 32.40s
0.6 | 1.00 | 0.85 / 1.30e-4 / 0.70 / 58.00s | 0.83 / 1.93e-4 / 0.70 / 60.00s | 0.79 / 2.54e-4 / 15.10s | 0.77 / 2.12e-4 / 198.20s | 0.78 / 4.02e-4 / 27.44s
1.2 | 0.10 | 0.87 / 5.16e-4 / 0.70 / 63.20s | 0.87 / 5.16e-4 / 0.75 / 63.00s | 0.84 / 7.30e-4 / 12.00s | 0.80 / 4.58e-5 / 197.32s | 0.80 / 4.52e-5 / 25.30s
1.2 | 0.50 | 0.89 / 3.94e-5 / 0.70 / 60.20s | 0.87 / 2.55e-5 / 0.70 / 62.45s | 0.83 / 5.98e-5 / 12.10s | 0.80 / 6.96e-5 / 154.10s | 0.82 / 1.31e-5 / 32.40s
1.2 | 1.00 | 0.87 / 1.30e-4 / 0.70 / 58.00s | 0.84 / 4.57e-4 / 0.70 / 57.20s | 0.83 / 2.54e-4 / 15.10s | 0.82 / 2.12e-4 / 198.20s | 0.82 / 4.02e-4 / 27.44s
Running DUTI with multiple local agents incurs a significantly higher running time. The major bottleneck of DUTI is the inverse operation over the pairwise inner-product matrix of the data instances hosted on each local agent. The computational complexity of matrix inversion is O(n^3), which can be prohibitively expensive for local devices with limited computing capability. Furthermore, since each local device runs DUTI independently on its own data, the model training step cannot be launched until the last device finishes running DUTI and provides the crafted training data, while the other devices idle; this leads to extra time cost. Compared to DUTI, the proposed method costs only a small fraction of the running time, according to Table 2, Table 3 and Table 4. In the regression scenario, conducting CoMT on 5 local agents requires on average 1500 iterations before converging to stable estimates of α and δ; the classification scenario requires on average 2000 iterations before reaching convergence. The convergence behavior is similar to that reported in [13, 21]. We can expect faster convergence with smarter settings of the learning rates used to estimate the optimal incremental updates, e.g., Nesterov accelerated gradient.
In addition, we evaluate the scalability of the proposed CoMT method by increasing the number of synthetic data instances from 50,000 to 100,000 and 500,000. We fix all other settings as described in Section 5.1. Since scalability is the main focus, we only record the averaged running time at the three levels of data volume. In the regression scenario, the averaged running times at the three levels are 36.23s, 79.45s and 384.32s respectively. In the classification scenario, the averaged running times are 65.60s, 141.88s and 684.33s respectively. The results demonstrate an approximately linear increase in computational cost, which verifies the computational efficiency of the consensus optimization in Algorithm 1.
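The approximately linear scaling can be checked directly from the reported running times: the ratio of time growth to data-volume growth should stay close to 1.0 under linear cost. A small sketch using the numbers above:

```python
# Averaged running times reported in the scalability study (settings of Section 5.1).
volumes = [50_000, 100_000, 500_000]
regression_times = [36.23, 79.45, 384.32]
classification_times = [65.60, 141.88, 684.33]

def growth_ratios(times, volumes):
    """Ratio of running-time growth to data-volume growth, relative to the
    smallest data volume; values near 1.0 indicate linear scaling."""
    base_t, base_n = times[0], volumes[0]
    return [(t / base_t) / (n / base_n) for t, n in zip(times[1:], volumes[1:])]

print(growth_ratios(regression_times, volumes))      # roughly [1.10, 1.06]
print(growth_ratios(classification_times, volumes))  # roughly [1.08, 1.04]
```

All ratios stay within about 10% of 1.0, consistent with the claimed linear growth.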
5.3 Results on real-world data sets
Two real-world data sets, CPUSMALL and IJCNN, are employed to evaluate the practical usability of CoMT for Ridge Regression and regularized Logistic Regression respectively. One hyperparameter is fixed globally as 1.5; the remaining hyperparameters are set separately for the regression and classification scenarios. We repeat the sampling process 20 times and calculate the mean and variance of the derived R-squared and AUC scores to measure the performance of CoMT. On the regression data set CPUSMALL, CoMT selects only a subset of the training instances hosted by all local agents to achieve its highest R-squared score of 0.71. In contrast, the averaged R-squared scores of TIonly, DUTI and rLR are 0.38, 0.56 and 0.58. On the classification data set IJCNN, the AUC score of CoMT reaches 0.72 in the scenario where only the features are corrupted. With the same setting, the AUC scores of TIonly, DUTI and RloR are 0.70, 0.68 and 0.69. In the scenario where only label-flipping noise is present, CoMT obtains an AUC score of 0.73, while the AUC scores of all the other baseline methods are below 0.70. In the classification scenario, CoMT achieves its best AUC with only a subset of the whole training data. It is interesting to observe that, after reaching the highest R-squared and AUC scores, the performance of the learnt regression and classification models does not increase consistently, and even declines, as more training instances are selected and crafted. This observation empirically confirms the existence of an optimal subset for the collaborative teaching activity.
Figure (a) shows that the objective function value of the proposed CoMT method converges after 2500 iterations of the collaborative optimization; this experiment costs 45.48s on the given computing platform. Similarly, Figure (b) illustrates the decline of the objective function values on IJCNN. In the classification scenario, CoMT requires 1200 iterations to reach a stable estimate of the model parameters, which costs 195.48s.
6 Conclusion
In this paper, we explore how to conduct robust federated model training with systematically corrupted data sets distributed over multiple local agents. To attack this problem, we propose a consensus, privacy-preserving optimization process which unifies collaborative machine teaching and model training. Our main contributions lie in two aspects. Firstly, the tuning of training data and learning with the tuned data are unified into a joint optimization problem, which helps to better tune buggy training data so that the learnt model stays consistent with the underlying true correlation between features and labels. Secondly, collaboration between local agents shares information about data tuning, which is used to jointly generate the crafted training data that achieve the teaching goal. Our empirical results on both synthetic and real-world data sets confirm the superior performance of the proposed method over the non-collaborative solutions. In future work, we will study the practical use of the proposed robust federated training framework with more complex machine learning models; more concretely, we plan to extend the teaching paradigm to diverse types of models, such as deep neural networks.
References
[1] Alfeld, S., Zhu, X., Barford, P.: Data poisoning attacks against autoregressive models. In: AAAI. pp. 1452–1458 (2016)
[2] Balbach, F.J.: Measuring teachability using variants of the teaching dimension. Theoretical Computer Science 397(1–3), 94–113 (2008)
[3] Bhagoji, A.N., Chakraborty, S., Mittal, P., Calo, S.: Analyzing federated learning through an adversarial lens. arXiv preprint arXiv:1811.12470 (2018)
[4] Bootkrajang, J., Kaban, A.: Label-noise robust logistic regression and its applications. In: ECML PKDD 2012. pp. 143–158 (2012)
[5] Brodley, C., Friedl, M.: Identifying mislabeled training data. Journal of Artificial Intelligence Research 11, 131–167 (1999)
[6] Chen, Y., Singla, A., Mac Aodha, O., Perona, P., Yue, Y.: Understanding the role of adaptivity in machine teaching: The case of version space learners. arXiv preprint arXiv:1802.05190 (2018)
[7] Doliwa, T., Fan, G., Simon, H.U., Zilles, S.: Recursive teaching dimension, VC-dimension and sample compression. JMLR 15(1), 3107–3131 (2014)
 [8] Fan, Y., Tian, F., Qin, T., Li, X.Y., Liu, T.Y.: Learning to teach. In: International Conference on Learning Representations (2018)
[9] Feng, J., Xu, H., Mannor, S., Yan, S.: Robust logistic regression and classification. In: Proceedings of the 27th NIPS. pp. 253–261 (2014)
[10] Gervini, D., Yohai, V.J.: A class of robust and fully efficient regression estimators. Annals of Statistics 30(2), 583–616 (2002)
 [11] Goldman, S.A., Kearns, M.J.: On the complexity of teaching. Journal of Computer and System Sciences 50(1), 20–31 (1995)
 [12] Haug, L., Tschiatschek, S., Singla, A.: Teaching inverse reinforcement learners via features and demonstrations. In: NIPS. pp. 8472–8481 (2018)
[13] Jaggi, M., Smith, V., Takac, M., Terhorst, J., Krishnan, S., Hofmann, T., Jordan, M.I.: Communication-efficient distributed dual coordinate ascent. In: NIPS. pp. 3068–3076 (2014)
[14] Konecny, J., McMahan, H., Yu, F.X., Richtarik, P., Suresh, A.T., Bacon, D.: Federated learning: Strategies for improving communication efficiency. In: NIPS Workshop on Private Multi-Party Machine Learning (2016)
 [15] Liu, J., Zhu, X., Ohannessian, H.: The teaching dimension of linear learners. In: ICML. pp. 117–126 (2016)
 [16] Liu, W., Dai, B., Humayun, A., Tay, C., Yu, C., Smith, L.B., Rehg, J.M., Song, L.: Iterative machine teaching. In: ICML. pp. 2149–2158 (2017)
[17] Liu, W., Dai, B., Li, X., Liu, Z., Rehg, J.M., Song, L.: Towards black-box iterative machine teaching. arXiv preprint arXiv:1710.07742 (2017)
 [18] Ma, Y., Nowak, R., Rigollet, P., Zhang, X., Zhu, X.: Teacher improves learning by selecting a training subset. In: International Conference on Artificial Intelligence and Statistics. pp. 1366–1375 (2018)
[19] Mei, S., Zhu, X.: Using machine teaching to identify optimal training-set attacks on machine learners. In: AAAI. pp. 2871–2877 (2015)
[20] Rahimi, A., Recht, B.: Random features for large-scale kernel machines. In: NIPS. pp. 1177–1184 (2007)
[21] Smith, V., Forte, S., Ma, C., Takáč, M., Jordan, M.I., Jaggi, M.: CoCoA: A general framework for communication-efficient distributed optimization. JMLR 19, 1–49 (2018)
 [22] Zhang, X., Zhu, X., Wright, S.: Training set debugging using trusted items. In: AAAI (2018)
[23] Zhu, X.: Machine teaching for Bayesian learners in the exponential family. In: NIPS. pp. 1905–1913 (2013)
 [24] Zilles, S., Lange, S., Holte, R., Zinkevich, M.: Models of cooperative teaching and learning. JMLR 12(Feb), 349–384 (2011)