Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data

06/18/2020
by Daniele Castellana et al.

The paper introduces two new aggregation functions for encoding structural knowledge from tree-structured data. Both leverage the Canonical and Tensor-Train decompositions to yield expressive context aggregation while limiting the number of model parameters. Building on these aggregation functions, the authors define two novel recursive neural models for trees and test them on two tree-classification tasks, showing the advantage of the proposed models as tree outdegree increases.
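To see why these decompositions limit parameter count, consider a trilinear aggregation of two children's hidden states. A minimal NumPy sketch (illustrative only; the hidden size, ranks, and exact contraction order are assumptions, not the paper's formulation):

```python
import numpy as np

h, R = 16, 8  # hidden size and CP rank (illustrative values)
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(h), rng.standard_normal(h)

# Full trilinear aggregation: a dense (h, h, h) tensor, i.e. h**3 parameters.
T = rng.standard_normal((h, h, h))
full = np.einsum('ijk,j,k->i', T, x1, x2)

# Canonical (CP) decomposition: three (h, R) factor matrices,
# 3*h*R parameters instead of h**3.
A, B, C = (rng.standard_normal((h, R)) for _ in range(3))
cp = C @ ((A.T @ x1) * (B.T @ x2))

# Tensor-Train decomposition of the same map: a chain of cores
# G1 (1, h, r), G2 (r, h, r), G3 (r, h, 1); the core count grows
# linearly with the number of children rather than exponentially.
r = 4
G1 = rng.standard_normal((1, h, r))
G2 = rng.standard_normal((r, h, r))
G3 = rng.standard_normal((r, h, 1))
m = np.einsum('j,ajb->ab', x1, G1) @ np.einsum('k,akb->ab', x2, G2)  # (1, r)
tt = np.einsum('ar,rib->i', m, G3)  # (h,)

print(h**3, 3 * h * R)  # parameter counts: 4096 vs 384
```

All three contractions map two h-dimensional child states to one h-dimensional parent state; the decomposed forms simply constrain the dense tensor, which is what keeps the models tractable as tree outdegree grows.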


Related research

06/17/2020
Generalising Recursive Neural Models by Tensor Decomposition
Most machine learning models for structured data encode the structural k...

06/24/2021
The condition number of many tensor decompositions is invariant under Tucker compression
We characterise the sensitivity of several additive tensor decomposition...

05/31/2019
Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models
Bottom-Up Hidden Tree Markov Model is a highly expressive model for tree...

11/02/2020
Learning from Non-Binary Constituency Trees via Tensor Decomposition
Processing sentence constituency trees in binarised form is a common and...

01/01/2023
Image To Tree with Recursive Prompting
Extracting complex structures from grid-based data is a common key step ...

04/27/2021
SuperVoxHenry: Tucker-Enhanced and FFT-Accelerated Inductance Extraction for Voxelized Superconducting Structures
This paper introduces SuperVoxHenry, an inductance extraction simulator ...

03/18/2019
On Deep Set Learning and the Choice of Aggregations
Recently, it has been shown that many functions on sets can be represent...
