We present a novel distributed computing framework that is robust to slo...
In this work, we propose methods for speeding up linear regression distr...
We generalize the leverage score sampling sketch for ℓ_2-subspace embedd...
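As a hedged illustration of the background this entry builds on: classical ℓ_2 leverage score sampling sketches a tall least-squares problem by sampling rows with probability proportional to their leverage scores. The sketch below uses illustrative problem sizes and the plain (non-generalized) construction, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall least-squares instance: A is n x d with n >> d (sizes are illustrative).
n, d = 2000, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Leverage scores are the squared row norms of an orthonormal basis Q of range(A);
# they sum to d and measure each row's influence on the solution.
Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)
p = lev / lev.sum()

# Sample m rows with replacement and rescale by 1/sqrt(m * p_i), so that
# the sketched Gram matrix is unbiased: E[(SA)^T (SA)] = A^T A.
m = 200
idx = rng.choice(n, size=m, p=p)
scale = 1.0 / np.sqrt(m * p[idx])
SA = scale[:, None] * A[idx]
Sb = scale * b[idx]

x_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
```

The sketched solve works on an m x d system instead of n x d, trading a small approximation error for a large reduction in problem size.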
We develop an analytical framework to characterize the set of optimal Re...
Threshold activation functions are highly preferable in neural networks ...
For many machine learning applications, a common input representation is...
Federated learning (FL) is a decentralized model for training data distr...
The practice of deep learning has shown that neural networks generalize ...
Unrolled neural networks have recently achieved state-of-the-art acceler...
A cumbersome operation in many scientific fields is inverting large ful...
Vision transformers using self-attention or its proposed alternatives ha...
Unrolled neural networks have enabled state-of-the-art reconstruction pe...
We study the problem of estimating the value of a known smooth function ...
We consider distributed optimization methods for problems where forming ...
We consider the problem of quantizing a linear model learned from measur...
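For context on this entry, a minimal post-training illustration: fit a linear model from noisy measurements, then uniformly quantize its weights to a fixed bit width and measure the prediction degradation. The uniform rounding scheme and all sizes below are illustrative assumptions, not the paper's quantization method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit a linear model from noisy measurements y = X w + noise (illustrative sizes).
n, d = 500, 8
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.05 * rng.standard_normal(n)
w_hat = np.linalg.lstsq(X, y, rcond=None)[0]

def uniform_quantize(w, bits):
    """Round each weight to the nearest of 2**bits uniformly spaced levels."""
    levels = 2**bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + step * np.round((w - lo) / step)

w_q = uniform_quantize(w_hat, bits=6)

# Relative prediction error introduced by quantizing the weights.
err = np.linalg.norm(X @ (w_hat - w_q)) / np.linalg.norm(X @ w_hat)
```

At 6 bits the per-weight rounding error is at most half a quantization step, so the prediction error stays small on a well-conditioned design.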
We develop fast algorithms and robust software for convex optimization o...
In this work, we propose a method for speeding up linear regression dist...
The COVID-19 pandemic has been a scourge upon humanity, claiming the liv...
Despite several attempts, the fundamental mechanisms behind the success ...
We study non-convex subgradient flows for training two-layer ReLU neural...
Training deep neural networks is a well-known highly non-convex problem....
Understanding the fundamental mechanism behind the success of deep neura...
We introduce an error resilient distributed computing method based on an...
In second-order optimization, a potential bottleneck can be computing th...
Generative Adversarial Networks (GANs) are commonly used for modeling co...
We propose a randomized algorithm with quadratic convergence rate for co...
Neural networks (NNs) have been extremely successful across many tasks i...
We consider least-squares problems with quadratic regularization and pro...
In this work, we consider the distributed optimization setting where inf...
Batch Normalization (BN) is a commonly used technique to accelerate and ...
The training of two-layer neural networks with nonlinear activation func...
We describe the convex semi-infinite dual of the two-layer vector-output...
We propose novel randomized optimization methods for high-dimensional co...
Neural networks have shown tremendous potential for reconstructing high-...
One of the most common, but at the same time expensive, operations in lin...
In distributed second order optimization, a standard strategy is to aver...
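To illustrate the strategy this entry refers to: each worker solves a Newton system against its local Hessian estimate, and the coordinator averages the resulting steps. The least-squares objective and sizes below are illustrative assumptions; note the averaged step is biased, since the mean of local inverse Hessians is not the inverse of the full Hessian.

```python
import numpy as np

rng = np.random.default_rng(2)

# Quadratic objective f(x) = 0.5 * ||A x - b||^2, rows split across workers.
n, d, workers = 4000, 5, 8
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
x = np.zeros(d)

grad = A.T @ (A @ x - b)  # full gradient, assumed available to all workers

steps = []
for Ai in np.array_split(A, workers):
    Hi = workers * (Ai.T @ Ai)          # local Hessian, scaled so E[Hi] = A^T A
    steps.append(np.linalg.solve(Hi, grad))
avg_step = np.mean(steps, axis=0)       # note: E[Hi^{-1}] != (A^T A)^{-1} (bias)

x_new = x - avg_step
# Exact Newton step; from x = 0 this lands on the least-squares solution.
x_exact_newton = x - np.linalg.solve(A.T @ A, grad)
```

Averaging reduces the variance of the local steps; the remaining gap to the exact Newton step is the inversion bias that distributed second-order methods aim to correct.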
We study training of Convolutional Neural Networks (CNNs) with ReLU acti...
In this work, we consider the deterministic optimization using random pr...
We are interested in two-layer ReLU neural networks from an optimization...
We propose a new randomized algorithm for solving L2-regularized least-s...
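As background for this entry, a plain sketch-and-solve baseline for L2-regularized least squares: compress the tall system with a Gaussian sketch and solve the small ridge problem. This is the simplest randomized baseline, not the proposed algorithm, and all sizes and the regularization level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n, d, lam = 5000, 20, 1.0
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)

def ridge(M, y, lam):
    """Solve min_x ||M x - y||^2 + lam * ||x||^2 via the normal equations."""
    dd = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(dd), M.T @ y)

x_exact = ridge(A, b, lam)

# Gaussian sketch: S has m x n i.i.d. N(0, 1/m) entries, so E[(SA)^T (SA)] = A^T A.
m = 1000
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch = ridge(S @ A, S @ b, lam)
```

Sketch-and-solve gives a modest-accuracy answer from a much smaller system; higher accuracy typically requires iterative refinement or preconditioning on top of the sketch.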
Multiclass classification problems are most often solved by either train...
A cumbersome operation in numerical analysis and linear algebra, optimiz...
We develop a convex analytic framework for ReLU neural networks which el...
Batch Normalization (BatchNorm) is commonly used in Convolutional Neural...
We develop exact representations of two layer neural networks with recti...
We study regularized deep neural networks and introduce an analytic fram...
We provide an exact analysis of a class of randomized algorithms for sol...
We consider distributed optimization problems where forming the Hessian ...
In this work, we study distributed sketching methods for large scale reg...
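A hedged sketch of the basic pattern behind distributed sketching for regression: each worker draws an independent random sketch, solves its small compressed problem, and the coordinator averages the solutions to reduce variance. The Gaussian sketches and sizes below are illustrative assumptions, not the paper's specific scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

n, d = 5000, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

def sketched_solution(m):
    """One worker: draw an independent Gaussian sketch of size m, solve the small problem."""
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    return np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

workers, m = 10, 300
x_single = sketched_solution(m)
x_avg = np.mean([sketched_solution(m) for _ in range(workers)], axis=0)
```

Because each worker's sketch is independent, the errors of the local solutions are roughly uncorrelated, and averaging across workers shrinks the random part of the error.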