Learning Aggregation Functions
Learning on sets is gaining increasing attention in the machine learning community due to its widespread applicability. Typically, representations over sets are computed with fixed aggregation functions such as sum or maximum. However, recent results have shown that universal function representation by sum (or max) decomposition requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. To mitigate this problem, we introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality. LAF can approximate several widely used aggregators (such as average, sum, and maximum) as well as more complex functions (e.g. variance and skewness). We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum (max) decomposition architectures such as DeepSets and library-based architectures like Principal Neighborhood Aggregation.
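As a rough intuition for how a single learnable primitive can span several standard aggregators, consider a parameterized power aggregation of the form (Σᵢ xᵢᵇ)ᵃ. This is only an illustrative sketch, not the exact LAF parameterization from the paper: the function `power_agg` and the specific parameter choices below are assumptions made for demonstration.

```python
import numpy as np

def power_agg(x, a, b):
    """Illustrative learnable aggregation primitive: (sum_i x_i**b) ** a.

    For different settings of the (learnable) parameters a and b, the same
    formula recovers different fixed aggregators on positive inputs:
      a=1, b=1        -> sum
      a=1/b, large b  -> soft approximation of the maximum
    """
    x = np.asarray(x, dtype=float)
    return np.power(np.power(x, b).sum(), a)

x = [1.0, 2.0, 3.0]
sum_like = power_agg(x, a=1.0, b=1.0)     # 6.0, the plain sum
max_like = power_agg(x, a=1 / 8, b=8.0)   # ~3.0, close to max(x)
```

Because `a` and `b` are continuous parameters, they can be trained jointly with the rest of a network by backpropagation, interpolating smoothly between sum-like and max-like behavior; ratios of two such terms can additionally express normalized statistics such as the mean.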