Deep learning languages: a key fundamental shift from probabilities to weights?

08/02/2019
by François Coste, et al.

Recent successes in language modeling, notably with deep learning methods, coincide with a shift from probabilistic to weighted representations. We raise here the question of how important this evolution is, in light of the practical limitations of a classical, simple probabilistic modeling approach to the classification of protein sequences, and in relation to the need for principled methods to learn non-probabilistic models.
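
To make the probabilistic-versus-weighted distinction concrete, here is a minimal sketch, not taken from the paper: a probabilistic bigram model, whose conditional next-symbol distributions sum to 1 by construction, contrasted with a weighted model that scores sequences with unconstrained real-valued weights. The toy alphabet, training sequences, and weight values are hypothetical, chosen only for illustration.

```python
# Minimal sketch (not from the paper): a probabilistic bigram model,
# whose conditional next-symbol distributions sum to 1, versus a
# weighted model assigning unconstrained real-valued scores.
import math
from collections import defaultdict

def train_bigram_probs(sequences):
    """Maximum-likelihood bigram probabilities: for each context a,
    the estimated distribution P(. | a) sums to 1 by construction."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: c / total for b, c in nxt.items()}
    return model

def log_probability(model, seq):
    """Log-probability of a sequence under the bigram model.
    Fails on bigrams never seen in training (zero probability)."""
    return sum(math.log(model[a][b]) for a, b in zip(seq, seq[1:]))

def weighted_score(weights, seq):
    """Weighted model: sums arbitrary learned bigram weights. The
    result is not a probability; only relative order is meaningful."""
    return sum(weights.get((a, b), 0.0) for a, b in zip(seq, seq[1:]))

# Toy sequences over a hypothetical 3-letter amino-acid alphabet.
train = ["ACD", "ACA", "DCA"]
bigram = train_bigram_probs(train)
print(log_probability(bigram, "ACA"))   # log(1.0) + log(2/3)
print(weighted_score({("A", "C"): 1.5, ("C", "A"): -0.3}, "ACA"))  # 1.2
```

The zero-probability failure on unseen bigrams illustrates one practical limitation of simple probabilistic models of the kind the abstract alludes to; the weighted scorer sidesteps normalization entirely, which is precisely the representational freedom whose principled learning the paper puts in question.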
