Multi-Task Variational Information Bottleneck
In this paper we propose a multi-task deep learning model called the multi-task variational information bottleneck (MTVIB). The structure of the variational information bottleneck (VIB) is used to obtain the latent representation of the input data; task-dependent uncertainties are used to learn the relative weights of the task loss functions; and multi-task learning is formulated as a constrained multi-objective optimization problem. Our model enhances the latent representations and accounts for the trade-offs among the learning tasks. We evaluate it on publicly available datasets under different adversarial attacks. The overall classification performance is promising: the model achieves classification accuracies comparable to those of the benchmarked models, and shows better robustness against adversarial attacks than other multi-task deep learning models.
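The objective sketched in the abstract can be illustrated numerically. The snippet below is a minimal sketch, not the authors' implementation: it assumes the common homoscedastic-uncertainty weighting (0.5 * exp(-2 * log_sigma) * L_i + log_sigma per task) for the task-dependent weights, and a beta-weighted KL term standing in for the VIB bottleneck constraint; the function name and parameters are hypothetical.

```python
import math

def mtvib_objective(task_losses, log_sigmas, kl, beta=1e-3):
    """Hypothetical MTVIB-style objective (a sketch, not the paper's exact form).

    Each task loss is scaled by a learned task-dependent uncertainty
    (via log_sigma), and a beta-weighted KL term approximates the
    VIB information constraint on the latent representation.
    """
    total = 0.0
    for loss, log_sigma in zip(task_losses, log_sigmas):
        precision = math.exp(-2.0 * log_sigma)  # 1 / sigma^2
        total += 0.5 * precision * loss + log_sigma  # weighted loss + regularizer
    return total + beta * kl  # bottleneck penalty on the latent code

# With equal (unit) uncertainties the objective reduces to half the
# plain sum of task losses plus the KL penalty.
print(mtvib_objective([1.0, 2.0], [0.0, 0.0], kl=0.0))
```

In practice the `log_sigmas` would be trainable parameters updated jointly with the network, so tasks whose losses are noisier are automatically down-weighted.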