Nearly sharp structured sketching for constrained optimization
In this work, we study a tensor-structured random sketching matrix that projects a large-scale constrained convex optimization problem onto a much lower-dimensional counterpart, yielding substantial savings in memory and computation. We show that, while the prediction error between the randomized estimator and the true solution is preserved with high probability, the dimension of the projected problem achieves optimal dependence on the geometry of the constraint set. Moreover, the tensor structure and sparsity pattern of the sketching matrix yield an additional computational advantage. Our analysis is based on chaining arguments from probability theory, which allow us to obtain a nearly sharp estimate of the sketching dimension for convex optimization problems. Consequences of our main result are demonstrated in several concrete examples, including unconstrained linear regression and sparse recovery problems.
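To make the sketch-and-solve paradigm described above concrete, the following is a minimal Python illustration for unconstrained least squares. The sparse sign-based sketch used here (a few nonzeros per column) is a hypothetical stand-in, since the abstract does not specify the paper's exact tensor-structured construction; the sketch dimension `m` and sparsity `s` are illustrative choices, not values from the paper.

```python
import numpy as np

def sparse_sketch(m, n, s=8, rng=None):
    """Sparse random sketch with s scaled +/-1 entries per column.
    A generic stand-in for the paper's tensor-structured matrix."""
    rng = np.random.default_rng(rng)
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

rng = np.random.default_rng(0)
n, d, m = 10_000, 50, 400            # tall problem, much smaller sketch dimension
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Solve the projected problem min_x ||S(Ax - b)||_2 instead of min_x ||Ax - b||_2.
S = sparse_sketch(m, n, rng=rng)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative error:", np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

For constrained problems, the same idea applies with the projected objective minimized over the constraint set; per the main result, the required sketch dimension `m` then scales with the geometry of that set rather than with `n`.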