Hierarchical RNNs-Based Transformers MADDPG for Mixed Cooperative-Competitive Environments

05/11/2021
by Xiaolong Wei, et al.

At present, attention mechanisms are widely applied across deep learning models. Structural models based on attention can not only record the positional relationships between features but also measure the importance of different features through their weights. By dynamically weighting parameters that distinguish relevant from irrelevant features, key information is strengthened and irrelevant information is weakened, so the efficiency of deep learning algorithms can be significantly improved. Although transformers have performed very well in many fields, including reinforcement learning, many problems in this area remain open to transformer-based solutions. MARL (Multi-Agent Reinforcement Learning) can be viewed as a set of independent agents that try to adapt and learn their own way toward a goal. To emphasize the relationships between MDP decisions within a given time period, we apply a hierarchical coding method and validate its effectiveness. This paper proposes a hierarchical transformers MADDPG based on RNNs, which we call Hierarchical RNNs-Based Transformers MADDPG (HRTMADDPG). It consists of a lower-level encoder based on RNNs that encodes multiple time steps within each sub-sequence, and an upper, sequence-level encoder based on a transformer that learns the correlations between multiple sub-sequences, so that we can capture the causal relationships between sub-time sequences and make HRTMADDPG more efficient.
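To make the two-level structure concrete, here is a minimal PyTorch sketch of such a hierarchical encoder: a lower-level GRU encodes each sub-sequence of observations, and an upper-level transformer encoder models correlations between the resulting sub-sequence embeddings. This is an illustrative assumption, not the authors' implementation; the module name HierarchicalEncoder, the choice of GRU, all dimensions, and the fixed sub-sequence split are hypothetical.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Hypothetical sketch of the two-level encoder described in the abstract:
    a lower-level RNN (here a GRU) encodes the steps inside each sub-sequence,
    and an upper-level transformer relates the sub-sequence embeddings."""

    def __init__(self, obs_dim, hidden_dim=64, num_heads=4, num_layers=2):
        super().__init__()
        # Lower level: RNN over the steps within each sub-sequence.
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        # Upper level: transformer over the sequence of sub-sequence embeddings.
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, obs, sub_len):
        # obs: (batch, T, obs_dim); T is assumed divisible by sub_len.
        batch, T, obs_dim = obs.shape
        num_sub = T // sub_len
        # Split each trajectory into sub-sequences and encode each with the GRU.
        chunks = obs.reshape(batch * num_sub, sub_len, obs_dim)
        _, h = self.rnn(chunks)               # h: (1, batch*num_sub, hidden_dim)
        emb = h.squeeze(0).reshape(batch, num_sub, -1)
        # Let the transformer capture correlations between sub-sequences.
        return self.transformer(emb)          # (batch, num_sub, hidden_dim)

# Usage: a MADDPG actor or critic could consume these embeddings as state features.
enc = HierarchicalEncoder(obs_dim=16)
obs = torch.randn(8, 20, 16)    # batch of 8 trajectories, 20 steps each
out = enc(obs, sub_len=5)       # -> torch.Size([8, 4, 64])
print(out.shape)
```

In this sketch the GRU compresses local, step-level dynamics while the transformer attends across sub-sequences, which is one plausible way to realize the lower-level and sequence-level encoders the abstract describes.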

