A reinforcement learning approach to improve communication performance and energy utilization in fog-based IoT
Recent research has shown the potential of using available mobile fog devices (such as smartphones, drones, and domestic and industrial robots) as relays to minimize communication outages between sensors and destination devices, where localized Internet-of-Things services (e.g., manufacturing process control, health and security monitoring) are delivered. However, these mobile relays deplete their energy as they move and transmit to distant destinations. Power-control mechanisms and intelligent mobility of the relay devices are therefore critical to improving communication performance and energy utilization. In this paper, we propose a Q-learning-based decentralized approach in which each mobile fog relay is controlled by an autonomous mobile fog relay agent (MFRA) that uses reinforcement learning to simultaneously improve communication performance and energy utilization. Each MFRA learns, based on feedback from the destination and its own energy level, whether to remain active and forward the message or to become passive for that transmission phase. We evaluate the approach by comparing it with a centralized approach and observe that, with fewer MFRAs, our approach ensures reliable data delivery while reducing overall energy cost by 56.76%–88.03%.
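The per-agent decision process described in the abstract can be sketched as a standard one-step Q-learning loop. The sketch below is illustrative only: the state encoding (discretized energy level plus last delivery feedback), the action set, the reward weights, and the toy environment are all assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of one MFRA's Q-learning loop. The state, action,
# and reward definitions are assumed for illustration, not taken from the paper.
import random

ACTIONS = ("ACTIVE", "PASSIVE")          # forward the message, or sit out this phase
ENERGY_LEVELS = 5                        # discretized battery levels 0..4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

# Q-table indexed by (energy_level, last_delivery_ok) -> value per action
Q = {(e, ok): {a: 0.0 for a in ACTIONS}
     for e in range(ENERGY_LEVELS) for ok in (False, True)}

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def reward(action, delivered, energy_spent):
    """Assumed reward: credit successful delivery, penalize energy use."""
    r = 0.0
    if action == "ACTIVE":
        r += 1.0 if delivered else -1.0   # destination feedback
        r -= 0.2 * energy_spent           # cost of moving and transmitting
    return r

def update(state, action, r, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])

# Toy transmission phases: delivery succeeds more often at higher energy,
# and the relay occasionally recharges. Purely a stand-in environment.
state = (ENERGY_LEVELS - 1, True)
for _ in range(10_000):
    energy, _ = state
    action = choose_action(state)
    delivered = action == "ACTIVE" and random.random() < 0.5 + 0.1 * energy
    spent = 1 if action == "ACTIVE" else 0
    r = reward(action, delivered, spent)
    next_energy = ENERGY_LEVELS - 1 if random.random() < 0.1 else max(energy - spent, 0)
    next_state = (next_energy, delivered)
    update(state, action, r, next_state)
    state = next_state
```

Because each MFRA maintains its own Q-table and observes only the destination's feedback and its local energy level, this loop runs independently on every relay, matching the decentralized character of the proposed approach.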