Boosting the Certified Robustness of L-infinity Distance Nets
Recently, Zhang et al. (2021) developed a new neural network architecture based on ℓ_∞-distance functions, which naturally possesses certified robustness by construction. Despite its excellent theoretical properties, the model has so far only achieved performance comparable to that of conventional networks. In this paper, we significantly boost the certified robustness of ℓ_∞-distance nets through a careful analysis of their training process. In particular, we show that the ℓ_p-relaxation, a crucial technique for overcoming the non-smoothness of the model, leads to an unexpectedly large Lipschitz constant at the early training stage. This makes optimization with the hinge loss insufficient and produces sub-optimal solutions. Given these findings, we propose a simple approach that addresses the issues above using a novel objective function combining a scaled cross-entropy loss with a clipped hinge loss. Our experiments show that, using the proposed training strategy, the certified accuracy of the ℓ_∞-distance net can be dramatically improved from 33.30% to 40.06%, significantly outperforming other approaches in this area. This result clearly demonstrates the effectiveness and potential of the ℓ_∞-distance net for certified robustness.
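The abstract names two concrete ingredients: the ℓ_p-relaxation of the ℓ_∞-distance neuron and an objective that combines a scaled cross-entropy loss with a clipped hinge loss. The PyTorch sketch below illustrates both under stated assumptions; the exact functional forms and all hyperparameter values (the scale `scale`, hinge threshold `theta`, clip value `clip`, and mixing weight `lam`) are illustrative placeholders rather than the paper's specification.

```python
import torch
import torch.nn.functional as F

def lp_distance_layer(x, w, b, p):
    """l_p relaxation of an l_inf-distance layer:
    u_j(x) = ||x - w_j||_p + b_j, which approaches ||x - w_j||_inf + b_j
    as p -> infinity (the relaxation discussed in the abstract).
    Shapes: x (batch, in), w (out, in), b (out,).
    """
    diff = (x.unsqueeze(1) - w.unsqueeze(0)).abs()        # (batch, out, in)
    m = diff.amax(dim=-1, keepdim=True).clamp_min(1e-12)  # factor out the max
    # Stable p-norm: m * (sum_i (d_i / m)^p)^(1/p) avoids overflow at large p.
    return m.squeeze(-1) * (diff / m).pow(p).sum(dim=-1).pow(1.0 / p) + b

def combined_loss(logits, labels, scale=8.0, theta=0.3, clip=0.3, lam=0.5):
    """Hypothetical combination of a scaled cross-entropy term with a
    clipped hinge term on the classification margin; placeholder values.
    """
    ce = F.cross_entropy(scale * logits, labels)          # scaled cross-entropy
    true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.masked_fill(
        F.one_hot(labels, logits.size(1)).bool(), float("-inf"))
    margin = true - masked.amax(dim=1)                    # true-class margin
    hinge = (theta - margin).clamp(min=0.0, max=clip).mean()  # clipped hinge
    return lam * ce + (1.0 - lam) * hinge

# Usage on random data: 32-dim inputs, 10 distance neurons as class scores.
x = torch.randn(4, 32)
w = torch.randn(10, 32, requires_grad=True)
b = torch.zeros(10, requires_grad=True)
logits = -lp_distance_layer(x, w, b, p=8.0)  # smaller distance => larger logit
loss = combined_loss(logits, torch.randint(0, 10, (4,)))
loss.backward()
```

In the training scheme of Zhang et al. (2021), p is gradually increased during training so that the relaxed layer converges to the true ℓ_∞ distance; the finding reported here is that this relaxation inflates the Lipschitz constant early in training, which motivates the combined loss above.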