Observe Locally, Classify Globally: Using GNNs to Identify Sparse Matrix Structure

07/26/2023
by Khaled Abdelaal, et al.

The performance of sparse matrix computation depends heavily on matching the matrix format to the underlying structure of the data being computed on. Different sparse matrix formats suit different data structures, so the first challenge is to identify the matrix structure before the computation in order to pair it with an appropriate format. The second challenge is to avoid reading the entire dataset before classifying it; this can be done by identifying the matrix structure from samples and their features. Yet it is possible that global features cannot be determined from a sampling set and must instead be inferred from local features. To address these challenges, we develop a framework that generates sparse matrix structure classifiers using graph convolutional networks. The framework can also be extended to other matrix structures using user-provided generators. The approach achieves 97% classification accuracy on a set of representative sparse matrix shapes.
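As a rough illustration of the pipeline the abstract implies (this is our sketch, not the authors' code), the example below draws small submatrix samples from a large sparse matrix, treats each sample's sparsity pattern as a graph, and classifies it with a small graph convolutional network. The helper names, the 32x32 sample size, the hidden width, and the four structure classes are all illustrative assumptions; the GCN layer is hand-rolled as relu(A_hat @ H @ W), with A_hat the self-loop-augmented, symmetrically normalized adjacency, rather than taken from any particular library.

    # Minimal sketch: sample local submatrices, classify global structure with a GCN.
    # All names and hyperparameters below are illustrative assumptions.
    import numpy as np
    import scipy.sparse as sp
    import torch
    import torch.nn as nn

    def sample_submatrix(A, size=32, rng=np.random):
        """Extract a random size x size dense block from sparse CSR matrix A."""
        i = rng.randint(0, A.shape[0] - size)
        j = rng.randint(0, A.shape[1] - size)
        return A[i:i + size, j:j + size].toarray()

    def normalized_adjacency(block):
        """Treat the block's sparsity pattern as an adjacency matrix and
        return the symmetrically normalized D^{-1/2} (A + I) D^{-1/2}."""
        A = (block != 0).astype(np.float32)
        A = np.maximum(A, A.T) + np.eye(A.shape[0], dtype=np.float32)  # symmetrize, add self-loops
        d_inv_sqrt = A.sum(axis=1) ** -0.5  # degrees are >= 1 thanks to self-loops
        return torch.from_numpy(d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])

    class GCNClassifier(nn.Module):
        """Two GCN layers, mean pooling over nodes, then a linear classifier."""
        def __init__(self, n_nodes, hidden=64, n_classes=4):
            super().__init__()
            self.w1 = nn.Linear(n_nodes, hidden)  # node features: one-hot row identity
            self.w2 = nn.Linear(hidden, hidden)
            self.out = nn.Linear(hidden, n_classes)

        def forward(self, A_hat):
            H = torch.eye(A_hat.shape[0])          # structure only, no matrix values
            H = torch.relu(A_hat @ self.w1(H))     # H' = relu(A_hat H W)
            H = torch.relu(A_hat @ self.w2(H))
            return self.out(H.mean(dim=0))         # pool node embeddings -> global label

    # Usage: label samples drawn from synthetic structure generators (e.g.,
    # diagonal, banded, block-diagonal, random) and train with cross-entropy.
    A = sp.random(4096, 4096, density=0.01, format="csr")
    model = GCNClassifier(n_nodes=32)
    logits = model(normalized_adjacency(sample_submatrix(A)))

Training such a classifier on samples from user-provided structure generators, as the abstract suggests, lets the model infer a global structure label from purely local observations.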

Related research

- Optimizing Sparse Matrix Multiplications for Graph Neural Networks (10/30/2021): Graph neural networks (GNNs) are emerging as a powerful technique for mo...
- Batched Sparse Matrix Multiplication for Accelerating Graph Convolutional Networks (03/27/2019): Graph Convolutional Networks (GCNs) are recently getting much attention ...
- A unified sparse matrix data format for efficient general sparse matrix-vector multiply on modern processors with wide SIMD units (07/23/2013): Sparse matrix-vector multiplication (spMVM) is the most time-consuming k...
- RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations (10/19/2022): The training of graph neural networks (GNNs) is extremely time consuming...
- Adaptive Graph Convolutional Subspace Clustering (05/05/2023): Spectral-type subspace clustering algorithms have shown excellent perfor...
- Towards Dynamic Computation Graphs via Sparse Latent Structure (09/03/2018): Deep NLP models benefit from underlying structures in the data, e.g., p...
