Document Image Cleaning using Budget-Aware Black-Box Approximation
Recent work has shown that by approximating the behaviour of a non-differentiable black-box function with a neural network, the black box can be integrated into a differentiable training pipeline for end-to-end training. This methodology is termed "differentiable bypass", and a successful application of it is training a document preprocessor to improve the performance of a black-box OCR engine. However, a good approximation of an OCR engine requires querying it on every sample throughout the training process, which can be computationally and financially expensive. Several zeroth-order optimization (ZO) algorithms have been proposed in the black-box attack literature to find adversarial examples for a black-box model by estimating its gradient in a query-efficient manner. However, the query complexity and convergence rate of such algorithms make them infeasible for our problem. In this work, we propose two sample selection algorithms to train an OCR preprocessor with less than 10% of the queries, resulting in savings of more than 60% without a significant loss of accuracy. We also show an improvement of 4% in the word-level accuracy of a commercial OCR engine with only 2.5% of the queries and a 32x reduction in monetary cost. Further, we propose a simple ranking technique to prune 30% of the samples without affecting the system's performance.
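To make the "differentiable bypass" idea concrete, the following is a minimal sketch of a budget-aware training loop: a differentiable surrogate network is fitted to the black-box OCR's score on a small queried subset of each batch, and the preprocessor is then trained end-to-end through that surrogate. All names here (Preprocessor, Approximator, black_box_ocr_score, the random subset selection, and the query fraction) are hypothetical stand-ins for illustration, not the paper's actual architectures or selection algorithms.

import torch
import torch.nn as nn

class Preprocessor(nn.Module):
    """Tiny image-to-image network that cleans a document image (stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Approximator(nn.Module):
    """Differentiable surrogate predicting the black-box OCR error on an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )
    def forward(self, x):
        return self.net(x).squeeze(-1)

def black_box_ocr_score(images):
    """Placeholder for the expensive, non-differentiable OCR query (e.g. a character error rate)."""
    with torch.no_grad():
        return images.mean(dim=(1, 2, 3))  # dummy value standing in for the real engine

preprocessor, approximator = Preprocessor(), Approximator()
opt_pre = torch.optim.Adam(preprocessor.parameters(), lr=1e-4)
opt_app = torch.optim.Adam(approximator.parameters(), lr=1e-4)

query_fraction = 0.1  # budget: query the OCR engine on only ~10% of each batch

for step in range(100):
    batch = torch.rand(32, 1, 64, 64)            # stand-in document images
    cleaned = preprocessor(batch)

    # 1) Spend the query budget: pick a small subset and query the real OCR engine.
    n_query = max(1, int(query_fraction * batch.size(0)))
    idx = torch.randperm(batch.size(0))[:n_query]  # the paper's selection is more informed than random
    true_err = black_box_ocr_score(cleaned[idx].detach())

    # 2) Fit the surrogate to the queried OCR outputs.
    app_loss = nn.functional.mse_loss(approximator(cleaned[idx].detach()), true_err)
    opt_app.zero_grad(); app_loss.backward(); opt_app.step()

    # 3) Train the preprocessor end-to-end through the differentiable surrogate.
    pre_loss = approximator(cleaned).mean()
    opt_pre.zero_grad(); pre_loss.backward(); opt_pre.step()

The key design point this sketch illustrates is that gradients for the preprocessor flow only through the learned surrogate, so the non-differentiable engine is queried on just a budgeted fraction of samples rather than on every sample at every step.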