Universal Transforming Geometric Network

08/02/2019
by Jin Li, et al.

The recurrent geometric network (RGN), the first end-to-end differentiable neural architecture for protein structure prediction, is a competitive alternative to existing models. However, the RGN's use of recurrent neural networks (RNNs) as its internal representation results in long training times and unstable gradients. Because of its sequential nature, it is also less effective at learning global dependencies among amino acids than existing transformer architectures. We propose the Universal Transforming Geometric Network (UTGN), an end-to-end differentiable model that uses the encoder portion of the Universal Transformer architecture as an alternative internal representation. Our experiments show that, compared to the RGN, the UTGN achieves a 1.7 improvement on the free modeling portion and a 0.7 improvement on the template-based modeling portion of the CASP12 competition.
