Multi-Objective Reinforced Evolution in Mobile Neural Architecture Search

01/04/2019 · by Xiangxiang Chu, et al.

Fabricating neural models for a wide range of mobile devices demands specific network designs due to highly constrained resources. Both evolutionary algorithms (EA) and reinforcement learning (RL) methods have been introduced to address Neural Architecture Search, and distinct efforts to integrate the two categories have also been proposed. However, these combinations usually concentrate on a single objective, such as the error rate of image classification, and they fail to harness the full benefits of both sides. In this paper, we present a new multi-objective algorithm called MoreMNAS (Multi-Objective Reinforced Evolution in Mobile Neural Architecture Search) that leverages the virtues of both EA and RL. In particular, we incorporate a variant of the multi-objective genetic algorithm NSGA-II, in which the search space is composed of various cells so that crossovers and mutations can be performed at the cell level. Moreover, reinforced control is mixed with a random process to regulate arbitrary mutation, maintaining a delicate balance between exploration and exploitation. Therefore, not only does our method prevent the searched models from degrading during the evolution process, but it also makes better use of learned knowledge. Our preliminary experiments in the Super Resolution (SR) domain deliver models that rival some state-of-the-art methods with far fewer FLOPS. More results will be disclosed soon.
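The abstract's three core mechanisms — cell-level crossover, mutation that mixes a random process with a reinforced controller, and multi-objective comparison — can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the integer cell encoding, the `CELL_CHOICES` search space, the `controller_probs` distribution, and the `epsilon` mixing parameter are all hypothetical names introduced here for clarity.

```python
import random

# Hypothetical search space: each architecture is a fixed-length
# sequence of cells, each cell drawn from 8 candidate cell types.
CELL_CHOICES = list(range(8))
NUM_CELLS = 6

def crossover(parent_a, parent_b, point):
    # Cell-level single-point crossover: the child takes the first
    # `point` cells from parent A and the rest from parent B.
    return parent_a[:point] + parent_b[point:]

def mutate(arch, controller_probs, epsilon, rng=random):
    # Mixed mutation: with probability `epsilon`, replace one cell at
    # random (exploration); otherwise sample the replacement from the
    # reinforced controller's learned distribution (exploitation).
    child = list(arch)
    idx = rng.randrange(len(child))
    if rng.random() < epsilon:
        child[idx] = rng.choice(CELL_CHOICES)
    else:
        child[idx] = rng.choices(CELL_CHOICES, weights=controller_probs)[0]
    return child

def dominates(obj_a, obj_b):
    # Pareto dominance for minimisation objectives, e.g. (error, FLOPS):
    # A dominates B if it is no worse in every objective and strictly
    # better in at least one. NSGA-II builds its fronts from this test.
    return (all(a <= b for a, b in zip(obj_a, obj_b))
            and any(a < b for a, b in zip(obj_a, obj_b)))
```

Decoupling mutation into an epsilon-mixed random/learned choice is what keeps the evolution from collapsing onto the controller's current beliefs, while the dominance test lets the population trade off accuracy against FLOPS instead of optimising a single scalar.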





