Programming by Rewards

by Nagarajan Natarajan et al.

We formalize and study “programming by rewards” (PBR), a new approach for specifying and synthesizing subroutines that optimize a quantitative metric such as performance, resource utilization, or correctness over a benchmark. A PBR specification consists of (1) input features x, and (2) a reward function r, modeled as a black-box component (which we can only run), that assigns a reward to each execution. The goal of the synthesizer is to synthesize a "decision function" f that maps the features to a decision value for the black-box component so as to maximize the expected reward E[r(f(x))] of executing decisions f(x) over various values of x. We consider a space of decision functions in a DSL of loop-free if-then-else programs, which branch on linear functions of the input features in a tree structure and compute a linear function of the inputs at the leaves of the tree. We find that this DSL captures decision functions that programmers manually write in practice. Our technical contribution is the use of continuous-optimization techniques to synthesize such decision functions as if-then-else programs. We also show that the framework is theoretically founded: when the rewards satisfy certain niceness properties, the synthesized code is optimal in a precise sense. We have leveraged PBR to synthesize non-trivial decision functions related to search and ranking heuristics in the PROSE codebase (an industrial-strength program synthesis framework) and achieve results competitive with manually written procedures tuned over multiple man-years. We present an empirical evaluation against other baseline techniques over real-world case studies (including PROSE) as well as on simple synthetic benchmarks.
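To make the setup concrete, here is a minimal sketch of the kind of decision function the DSL describes: a loop-free if-then-else program that branches on a linear predicate over the input features and returns a linear function of the features at each leaf, evaluated against a black-box reward. All names, parameter shapes, and the toy reward below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def decision_function(x, w_branch, b_branch, w_left, w_right):
    # One-level if-then-else tree from the DSL sketch: branch on a
    # linear predicate, compute a linear function of x at each leaf.
    if np.dot(w_branch, x) + b_branch > 0:
        return np.dot(w_left, x)
    return np.dot(w_right, x)

def expected_reward(reward, params, xs):
    # Monte Carlo estimate of E[r(f(x))] over sampled feature vectors;
    # the reward is treated as a black box we can only call.
    w_branch, b_branch, w_left, w_right = params
    return np.mean([
        reward(decision_function(x, w_branch, b_branch, w_left, w_right))
        for x in xs
    ])

# Toy black-box reward (stands in for running the real component):
# decisions closer to 1.0 score higher, with a maximum reward of 0.
reward = lambda d: -(d - 1.0) ** 2

rng = np.random.default_rng(0)
xs = rng.normal(size=(100, 2))          # sampled input features
params = (
    np.array([1.0, -1.0]), 0.0,         # branch predicate w, b
    np.array([0.5, 0.5]),               # left-leaf linear function
    np.array([-0.5, 0.5]),              # right-leaf linear function
)
print(expected_reward(reward, params, xs))
```

A synthesizer in this framework would search over the tree structure and the linear coefficients (here, `params`) to maximize the estimated expected reward; the paper's contribution is doing that search with continuous-optimization techniques despite the discontinuous branching.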

