Reinforcement Learning for Minimizing Age of Information in Real-time Internet of Things Systems with Realistic Physical Dynamics
In this paper, the problem of minimizing the weighted sum of the age of information (AoI) and the total energy consumption of Internet of Things (IoT) devices is studied. In the considered model, each IoT device monitors a physical process that follows nonlinear dynamics. As the dynamics of the physical process vary over time, each device must find an optimal sampling frequency with which to sample the real-time dynamics of the physical system and send the sampled information to a base station (BS). Due to limited wireless resources, the BS can only select a subset of devices to transmit their sampled information. Meanwhile, changing the sampling frequency also affects the energy each device uses for sampling and information transmission. Thus, it is necessary to jointly optimize the sampling policy of each device and the device selection scheme of the BS so as to accurately monitor the dynamics of the physical process using minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of the AoI cost and the energy consumption. To solve this problem, a distributed reinforcement learning approach is proposed to optimize the sampling policy. The proposed learning method enables the IoT devices to find the optimal sampling policy using their local observations. Given the sampling policy, the device selection scheme can then be optimized so as to minimize the weighted sum of AoI and energy consumption over all devices. Simulations using real PM 2.5 pollution data show that the proposed algorithm can reduce the sum of AoI by up to 17.8% and the total energy consumption by up to 13.2%, compared to a conventional deep Q-network method and a uniform sampling policy.
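The core trade-off described in the abstract can be illustrated with a minimal sketch: a toy tabular Q-learning agent that picks a sampling frequency to balance AoI against sampling energy. This is not the paper's algorithm; all constants, the Bernoulli scheduling model, and the linear energy model below are assumptions made purely for illustration.

```python
import random

# Illustrative sketch only (not the paper's method): a single device learns
# a sampling policy via tabular Q-learning, where the reward is the negative
# weighted sum of AoI and sampling energy.

FREQS = [0, 1, 2]            # candidate samples per slot (0 = do not sample)
SAMPLE_COST = 0.5            # assumed energy per sample
W_AOI, W_ENERGY = 1.0, 0.3   # assumed weights in the AoI + energy objective
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
MAX_AOI = 20                 # cap on the discretised AoI state

Q = {}                       # Q-table keyed by (AoI state, action index)

def step(aoi, freq, scheduled):
    """One time slot: AoI resets to 1 only if the device sampled AND the
    base station scheduled it; energy is spent on every sample regardless."""
    energy = freq * SAMPLE_COST
    new_aoi = 1 if (scheduled and freq > 0) else min(aoi + 1, MAX_AOI)
    reward = -(W_AOI * new_aoi + W_ENERGY * energy)
    return new_aoi, reward

def act(aoi):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPS:
        return random.randrange(len(FREQS))
    return max(range(len(FREQS)), key=lambda a: Q.get((aoi, a), 0.0))

def train(episodes=500, horizon=50):
    for _ in range(episodes):
        aoi = 1
        for _ in range(horizon):
            a = act(aoi)
            scheduled = random.random() < 0.7   # assumed BS scheduling prob.
            nxt, r = step(aoi, FREQS[a], scheduled)
            best = max(Q.get((nxt, b), 0.0) for b in range(len(FREQS)))
            q = Q.get((aoi, a), 0.0)
            Q[(aoi, a)] = q + ALPHA * (r + GAMMA * best - q)
            aoi = nxt
```

In the paper's setting, each device runs such a learner using only local observations, while the BS separately optimizes which subset of devices may transmit in each slot; this sketch collapses that device-selection step into a fixed scheduling probability.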