Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training
Neural networks achieve outstanding accuracy in classification and regression tasks. However, understanding their behavior remains an open challenge and raises questions concerning the robustness, explainability, and reliability of their predictions. We address these questions by computing reachable sets of neural networks, i.e. the sets of outputs resulting from continuous sets of inputs. We provide two efficient approaches that yield over- and under-approximations of the reachable set. As we show, this principle is highly versatile. First, we analyze and enhance the robustness properties of both classifiers and regression models, in contrast to existing works, which handle classification only. Specifically, we verify (non-)robustness, propose a robust training procedure, and show that our approach outperforms adversarial attacks as well as state-of-the-art methods for verifying classifiers under perturbations that are not norm-bounded. Second, we provide a technique for distinguishing reliable from non-reliable predictions on unlabeled inputs, for quantifying the influence of each feature on a prediction, and for computing a feature ranking.
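To illustrate the general idea of an output reachable set, the sketch below propagates an axis-aligned input box through a small ReLU network using interval arithmetic, which gives a simple over-approximation of the reachable output set. This is only an illustrative assumption of one possible over-approximation scheme, not the approaches proposed in the paper; the network, the function interval_forward, and all parameter values are hypothetical.

import numpy as np

def interval_forward(lower, upper, weights, biases):
    """Propagate the input box [lower, upper] through a fully connected
    ReLU network, returning a box enclosing (over-approximating) the
    true reachable output set."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lower = np.maximum(new_lower, 0.0)
            new_upper = np.maximum(new_upper, 0.0)
        lower, upper = new_lower, new_upper
    return lower, upper

# Hypothetical example: a tiny classifier and an L_inf-perturbed input x.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(3)]
x, eps = rng.standard_normal(4), 0.05
lo, hi = interval_forward(x - eps, x + eps, weights, biases)
# If the lower bound of the predicted class's logit exceeds the upper
# bounds of all other logits, every input in the box yields the same
# prediction, so the classifier is verified robust on this box.

Because the resulting box contains the exact reachable set, it can certify robustness but may be loose; an under-approximation (e.g. the image of sampled inputs) can conversely certify non-robustness when it crosses a decision boundary.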