Learning to Control Random Boolean Networks: A Deep Reinforcement Learning Approach
In this paper we describe the application of a Deep Reinforcement Learning agent to the problem of controlling Gene Regulatory Networks (GRNs). The proposed approach is applied to Random Boolean Networks (RBNs), which have been used extensively as a computational model for GRNs. The ability to control GRNs is central to therapeutic interventions for diseases such as cancer. The control task is to learn interventions that direct the GRN from some initial state towards a desired attractor, with at most one intervention allowed per time step. Our agent interacts directly with the environment, an RBN, without any knowledge of the underlying dynamics, structure, or connectivity of the network. We have implemented a Deep Q-Network with Double Q-Learning that is trained by sampling experiences from the environment using Prioritized Experience Replay. We show that the proposed approach learns a policy that successfully controls RBNs significantly larger than those handled by previous learning-based implementations. We also discuss why learning to control an RBN with zero knowledge of its underlying dynamics is important, and argue that the agent is encouraged to discover and perform control interventions that are optimal with respect to cost and number of interventions.
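To make the setting concrete, below is a minimal sketch of how an RBN can be exposed to a reinforcement learning agent as a black-box environment: the agent observes only the Boolean state vector, while the wiring and the Boolean update functions stay hidden inside the class. All names, sizes, reward values, and the flip-one-node-or-do-nothing action encoding are illustrative assumptions, not the paper's actual implementation; the agent side would be a standard Double DQN trained with prioritized replay against this interface.

```python
import numpy as np

class RBNEnv:
    """Toy synchronous Random Boolean Network as a black-box RL environment.

    Action a in {0, ..., n-1} flips node a before the synchronous update;
    action n means "no intervention" (at most one intervention per step).
    """

    def __init__(self, n=20, k=2, target=None, seed=0):
        rng = np.random.default_rng(seed)
        self.n = n
        # Each node reads k randomly chosen nodes (hidden connectivity).
        self.inputs = rng.integers(0, n, size=(n, k))
        # Random Boolean update rule per node: a lookup table over 2**k input patterns.
        self.tables = rng.integers(0, 2, size=(n, 2 ** k)).astype(bool)
        # Hypothetical desired attractor state supplied by the user.
        self.target = (np.zeros(n, dtype=bool) if target is None
                       else np.asarray(target, dtype=bool))
        self.rng = rng
        self.state = rng.integers(0, 2, size=n).astype(bool)

    def reset(self):
        self.state = self.rng.integers(0, 2, size=self.n).astype(bool)
        return self.state.astype(np.float32)

    def step(self, action):
        # Optional single intervention: flip one node's value.
        if action < self.n:
            self.state[action] = ~self.state[action]
        # Synchronous update: each node applies its Boolean function
        # to the current values of its (hidden) input nodes.
        idx = np.zeros(self.n, dtype=int)
        for j in range(self.inputs.shape[1]):
            idx = (idx << 1) | self.state[self.inputs[:, j]].astype(int)
        self.state = self.tables[np.arange(self.n), idx]
        done = bool(np.array_equal(self.state, self.target))
        # Small per-intervention cost discourages unnecessary interventions.
        reward = 1.0 if done else (-0.01 if action < self.n else 0.0)
        return self.state.astype(np.float32), reward, done, {}
```

Under these assumptions, the per-intervention penalty is one simple way to encourage the agent to reach the target attractor with as few and as cheap interventions as possible, in line with the cost-aware control objective described above.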