Multi-objective Neural Architecture Search via Non-stationary Policy Gradient
Multi-objective Neural Architecture Search (NAS) aims to discover novel architectures in the presence of multiple conflicting objectives. Recent approaches based on scalarization and evolution have yielded promising results, but approximating the full Pareto front accurately and efficiently remains challenging. To this end, we explore the novel reinforcement-learning-based paradigm of non-stationary policy gradient (NPG). NPG utilizes a non-stationary reward function and encourages continuous adaptation of the policy to capture the entire Pareto front efficiently. We introduce two novel reward functions with elements from scalarization and evolution. To handle non-stationarity, we propose a new exploration scheme using cosine temperature decay with warm restarts. For fast and accurate architecture evaluation, we introduce a novel pre-trained shared model that we continuously fine-tune throughout training. Our extensive experimental study on CIFAR-10, CIFAR-100, and ImageNet shows that our framework can uncover a representative Pareto front quickly, while achieving predictive performance superior to other multi-objective NAS methods and to many state-of-the-art NAS methods at comparable network sizes. Our work demonstrates the potential of NPG as a simple, fast, and effective paradigm for multi-objective NAS.
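To make the exploration scheme concrete, below is a minimal sketch of a cosine temperature schedule with warm restarts (an SGDR-style restart schedule applied to the policy's sampling temperature); the function name, cycle length, and temperature bounds are illustrative assumptions, not values taken from the paper.

```python
import math

def cosine_temperature_with_restarts(step, cycle_length, t_max=5.0, t_min=0.5):
    """Sampling temperature that decays along a cosine curve and warm-restarts
    to t_max every `cycle_length` steps.

    Note: parameter names and defaults are illustrative assumptions,
    not values from the paper.
    """
    t_cur = step % cycle_length  # position within the current decay cycle
    return t_min + 0.5 * (t_max - t_min) * (1 + math.cos(math.pi * t_cur / cycle_length))

# Example: a high temperature right after each restart re-encourages exploration
# as the non-stationary reward shifts, then decays so the policy sharpens again.
for step in range(0, 300, 50):
    print(step, round(cosine_temperature_with_restarts(step, cycle_length=100), 3))
```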