Research on Patch Attentive Neural Process

01/29/2022
by Xiaohan Yu, et al.

The Attentive Neural Process (ANP) improves the fitting ability and prediction accuracy of the Neural Process (NP), but its higher time complexity limits the length of the input sequence. Inspired by models such as the Vision Transformer (ViT) and the Masked Auto-Encoder (MAE), we propose the Patch Attentive Neural Process (PANP), which takes image patches as input and improves the structure of ANP's deterministic path, allowing the model to extract image features more accurately and reconstruct images more efficiently.
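Since only the abstract is available here, one plausible reading of the idea is: split each image into non-overlapping patches (as in ViT/MAE), treat a subset of observed patches as the NP context set, and let a cross-attention deterministic path aggregate the context patches into a per-target representation. The sketch below illustrates that reading in PyTorch; all names (patchify, PatchDeterministicPath), layer sizes, and the exact attention layout are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical sketch of a patch-based deterministic path in the spirit of
# PANP; layer sizes and module names are assumptions, not the paper's.
import torch
import torch.nn as nn

def patchify(images, patch_size):
    """Split a (B, C, H, W) batch into (B, N, C*patch_size**2) flat patches."""
    B, C, H, W = images.shape
    p = patch_size
    x = images.unfold(2, p, p).unfold(3, p, p)        # (B, C, H/p, W/p, p, p)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

class PatchDeterministicPath(nn.Module):
    """Cross-attention from target patch positions to observed context patches."""
    def __init__(self, patch_dim, d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d_model)    # encode patch content
        self.pos = nn.Linear(2, d_model)              # encode (row, col) patch centers
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, ctx_patches, ctx_xy, tgt_xy):
        # Keys/values carry the content and position of the context patches;
        # queries are built from the target patch positions alone.
        kv = self.embed(ctx_patches) + self.pos(ctx_xy)
        q = self.pos(tgt_xy)
        r, _ = self.attn(q, kv, kv)                   # (B, N_tgt, d_model)
        return r

# Toy usage: 32x32 RGB images, 8x8 patches, half the patches observed as context.
imgs = torch.randn(4, 3, 32, 32)
patches = patchify(imgs, 8)                           # (4, 16, 192)
xy = torch.stack(torch.meshgrid(torch.arange(4.), torch.arange(4.),
                                indexing="ij"), -1).reshape(1, 16, 2).repeat(4, 1, 1)
path = PatchDeterministicPath(patch_dim=192)
r = path(patches[:, :8], xy[:, :8], xy[:, 8:])        # (4, 8, 128) target representations
```

Under this reading, the per-target representation r would be combined with a latent path, as in ANP, and fed to a decoder that reconstructs the unobserved patches.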

Related research

09/19/2022 - Attentive Symmetric Autoencoder for Brain MRI Segmentation
Self-supervised learning methods based on image patch reconstruction hav...

06/09/2018 - Sparse Over-complete Patch Matching
Image patch matching, which is the process of identifying corresponding ...

07/29/2021 - PPT Fusion: Pyramid Patch Transformer for a Case Study in Image Fusion
The Transformer architecture has achieved rapid development in recent yea...

10/17/2019 - Recurrent Attentive Neural Process for Sequential Data
Neural processes (NPs) learn stochastic processes and predict the distri...

12/11/2022 - Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition
Facial Expression Recognition (FER) in the wild is an extremely challeng...

02/10/2021 - Last Query Transformer RNN for knowledge tracing
This paper presents an efficient model to predict a student's answer cor...

04/03/2021 - Deepfake Detection Scheme Based on Vision Transformer and Distillation
Deepfake is the manipulated video made with a generative deep learning t...
