Privacy-preserving Incremental ADMM for Decentralized Consensus Optimization
The alternating direction method of multipliers (ADMM) has recently been recognized as a promising optimizer for large-scale machine learning models. However, few results study ADMM from the perspective of communication cost, let alone jointly with privacy preservation, both of which are critical for distributed learning. We investigate the communication efficiency and privacy preservation of ADMM for solving the consensus optimization problem over decentralized networks. Since walk algorithms can reduce the communication load, we first propose incremental ADMM (I-ADMM), a walk-based algorithm whose updating order follows a Hamiltonian cycle. However, I-ADMM cannot guarantee privacy for agents against external eavesdroppers, even when randomized initialization is applied. To protect agents' privacy, we then propose two privacy-preserving incremental ADMM algorithms, PI-ADMM1 and PI-ADMM2, which perturb the step sizes and the primal variables, respectively. We theoretically prove the convergence and privacy preservation of PI-ADMM1, and these results are further supported by numerical experiments. Moreover, simulations demonstrate that the proposed PI-ADMM1 and PI-ADMM2 algorithms are communication efficient compared with state-of-the-art methods.
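To make the idea concrete, below is a minimal sketch of one incremental consensus-ADMM pass with multiplicatively perturbed step sizes, in the spirit of PI-ADMM1. It is not the paper's exact algorithm: the local least-squares objectives, the running-average update of the consensus variable, the perturbation magnitude, and all names (rho, cycle, etc.) are illustrative assumptions.

```python
# Sketch: incremental consensus ADMM over a Hamiltonian cycle with
# perturbed step sizes (PI-ADMM1-style). Assumed setup: each agent i
# holds a local least-squares objective f_i(x) = 0.5*||A_i x - b_i||^2,
# and all agents should agree on a common x. Not the authors' exact updates.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5, 3, 20                      # agents, dimension, samples per agent
A = [rng.standard_normal((m, d)) for _ in range(n)]
b = [rng.standard_normal(m) for _ in range(n)]

rho = 1.0                               # nominal ADMM penalty / step size
x = [np.zeros(d) for _ in range(n)]     # local primal variables
u = [np.zeros(d) for _ in range(n)]     # scaled dual variables
s = sum(xi + ui for xi, ui in zip(x, u))
z = s / n                               # consensus variable

cycle = list(range(n))                  # a Hamiltonian cycle over the agents
for k in range(50):
    for i in cycle:                     # the token walks the cycle
        # Perturbed step size: multiplicative noise keeps rho_i positive,
        # so an eavesdropper cannot invert the update without knowing it.
        rho_i = rho * (1.0 + 0.1 * rng.uniform(-1.0, 1.0))
        old = x[i] + u[i]
        # Closed-form x-update for the local least-squares objective:
        # argmin_x f_i(x) + (rho_i/2)*||x - z + u_i||^2.
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho_i * np.eye(d),
                               A[i].T @ b[i] + rho_i * (z - u[i]))
        u[i] = u[i] + x[i] - z          # scaled dual ascent step
        # Incremental (running-sum) update of the consensus variable,
        # touching only agent i's contribution.
        s += (x[i] + u[i]) - old
        z = s / n
```

The multiplicative form of the perturbation is one plausible design choice: it preserves the sign and rough scale of the step size, so the iterates still contract toward consensus, while the realized step sizes remain hidden from an eavesdropper observing only the transmitted variables.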