-
Train Where the Data is: A Case for Bandwidth Efficient Coded Training
Training a machine learning model is both compute and data-intensive. Mo...
-
Coded Merkle Tree: Solving Data Availability Attacks in Blockchains
In this paper, we propose coded Merkle tree (CMT), a novel hash accumula...
-
Coded State Machine -- Scaling State Machine Execution under Byzantine Faults
We introduce an information-theoretic framework, named Coded State Machi...
-
Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training
Distributed training of deep nets is an important technique to address s...
-
GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training
Data parallelism can boost the training speed of convolutional neural ne...
-
PolyShard: Coded Sharding Achieves Linearly Scaling Efficiency and Security Simultaneously
Today's blockchains do not scale in a meaningful sense. As more nodes jo...
-
On the Packet Decoding Delay of Linear Network Coded Wireless Broadcast
We apply linear network coding (LNC) to broadcast a block of data packet...

Mingchao Yu