- CTD: Fast, Accurate, and Interpretable Method for Static and Dynamic Tensor Decompositions
- Parallel Algorithms for Tensor Train Arithmetic
- Anomaly Detection with Tensor Networks
- Inductive Framework for Multi-Aspect Streaming Tensor Completion with Side Information
- The Ocean Tensor Package
- Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization
Dynamic Tensor Rematerialization
Checkpointing enables training deep learning models under restricted memory budgets by freeing intermediate activations from memory and recomputing them on demand. Previous checkpointing techniques statically plan these recomputations offline and assume static computation graphs. We demonstrate that a simple online algorithm can achieve comparable performance by introducing Dynamic Tensor Rematerialization (DTR), a greedy online algorithm for checkpointing that is extensible and general, is parameterized by eviction policy, and supports dynamic models. We prove that DTR can train an N-layer linear feedforward network on an Ω(√(N)) memory budget with only 𝒪(N) tensor operations. DTR closely matches the performance of optimal static checkpointing in simulated experiments. We incorporate a DTR prototype into PyTorch just by interposing on tensor allocations and operator calls and collecting lightweight metadata on tensors.
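The abstract characterizes DTR as a greedy online checkpointing algorithm that is parameterized by an eviction policy and works by interposing on tensor allocations and operator calls while tracking lightweight per-tensor metadata. As a rough illustration only, the following is a minimal, self-contained Python sketch of that loop over a simulated tensor graph. The class names, the abstract size/cost units, and the cost / (size x staleness) eviction score are hypothetical stand-ins inspired by the heuristics the paper discusses; this is not the authors' PyTorch prototype.

```python
import time


class TrackedTensor:
    """Metadata the runtime keeps per tensor (no real data in this sketch)."""
    def __init__(self, name, size, compute_cost, parents):
        self.name = name
        self.size = size                      # memory footprint, abstract units
        self.compute_cost = compute_cost      # cost to recompute from parents
        self.parents = list(parents)          # inputs needed to rematerialize
        self.materialized = True
        self.last_access = time.monotonic()


class GreedyCheckpointer:
    """Toy online checkpointing loop: evict under pressure, recompute on use."""

    def __init__(self, budget):
        self.budget = budget
        self.tensors = []

    def _used(self):
        return sum(t.size for t in self.tensors if t.materialized)

    def _eviction_score(self, t):
        # Hypothetical heuristic in the spirit of DTR's cost/(size*staleness):
        # cheap-to-recompute, large, stale tensors are evicted first.
        staleness = time.monotonic() - t.last_access + 1e-9
        return t.compute_cost / (t.size * staleness)

    def _make_room(self, needed):
        while self._used() + needed > self.budget:
            victims = [t for t in self.tensors if t.materialized]
            if not victims:
                raise MemoryError("memory budget smaller than a single tensor")
            min(victims, key=self._eviction_score).materialized = False

    def _rematerialize(self, t):
        # Recursively bring parents back into memory, then "recompute" t.
        for p in t.parents:
            if not p.materialized:
                self._rematerialize(p)
            p.last_access = time.monotonic()
        self._make_room(t.size)
        t.materialized = True

    def apply_op(self, name, size, compute_cost, parents=()):
        # Interpose on an operator call: touch inputs, allocate the output.
        for p in parents:
            if not p.materialized:
                self._rematerialize(p)
            p.last_access = time.monotonic()
        self._make_room(size)
        out = TrackedTensor(name, size, compute_cost, parents)
        self.tensors.append(out)
        return out


# Example: a short chain of ops under a tight budget forces an eviction.
rt = GreedyCheckpointer(budget=6)
x = rt.apply_op("x", size=2, compute_cost=1)
h1 = rt.apply_op("h1", size=2, compute_cost=3, parents=[x])
h2 = rt.apply_op("h2", size=2, compute_cost=3, parents=[h1])
h3 = rt.apply_op("h3", size=2, compute_cost=3, parents=[h2])  # triggers eviction
print([t.name for t in rt.tensors if t.materialized])
```

A real runtime would also pin an operator's inputs while it executes so they cannot be evicted mid-recomputation; the sketch omits that bookkeeping for brevity.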