Ensemble Mask Networks

09/12/2023
by Jonny Luntzel et al.

Can an ℝ^n→ℝ^n feedforward network learn matrix-vector multiplication? This study introduces two mechanisms: flexible masking to accept matrix inputs, and a network pruning scheme that respects the mask's dependency structure. Such networks can approximate fixed operations such as matrix-vector multiplication ϕ(A, x) → Ax, motivating the mechanisms with applications to litmus-testing dependencies or interaction order in graph-based models.
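As a minimal sketch of the question the abstract poses, one can check that even the simplest ℝ^n→ℝ^n feedforward network (a single linear layer, trained by gradient descent on random inputs) recovers x → Ax for a fixed matrix A. All names, sizes, and hyperparameters below are illustrative assumptions, not the paper's actual architecture or masking mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))  # fixed target matrix defining x -> A x
W = np.zeros((n, n))         # learnable weights of a single linear layer

lr = 0.1
for _ in range(2000):
    x = rng.normal(size=(n, 32))        # batch of random input vectors
    err = W @ x - A @ x                 # prediction error against A x
    W -= lr * (err @ x.T) / x.shape[1]  # gradient step on mean squared loss

# after training, W has converged to A, so the network computes A x exactly
print(np.max(np.abs(W - A)))
```

Because the target map is linear, gradient descent on the squared loss drives W to A; the paper's contribution lies in handling A as an *input* via masking rather than baking it into the weights, which this toy sketch does not attempt.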


Related research:

research · 12/31/2021 · Fast ultrametric matrix-vector multiplication
We study the properties of ultrametric matrices aiming to design methods...

research · 05/27/2023 · Pruning at Initialization – A Sketching Perspective
The lottery ticket hypothesis (LTH) has increased attention to pruning n...

research · 09/26/2017 · PMV: Pre-partitioned Generalized Matrix-Vector Multiplication for Scalable Graph Mining
How can we analyze enormous networks including the Web and social networ...

research · 10/30/2019 · Effect of Mixed Precision Computing on H-Matrix Vector Multiplication in BEM Analysis
Hierarchical Matrix (H-matrix) is an approximation technique which split...

research · 10/22/2021 · Multiplication-Avoiding Variant of Power Iteration with Applications
Power iteration is a fundamental algorithm in data analysis. It extracts...

research · 11/27/2022 · Dynamic Kernel Sparsifiers
A geometric graph associated with a set of points P= {x_1, x_2, ⋯, x_n }...

research · 02/22/2022 · Distilled Neural Networks for Efficient Learning to Rank
Recent studies in Learning to Rank have shown the possibility to effecti...
