Bayesian Optimization with Uncertain Preferences over Attributes
We consider black-box global optimization of time-consuming-to-evaluate functions on behalf of a decision-maker whose preferences must be learned. Each feasible design is associated with a time-consuming-to-evaluate vector of attributes, each vector of attributes is assigned a utility by the decision-maker's utility function, and this utility function may be learned approximately from preferences expressed by the decision-maker over pairs of attribute vectors. Past work has used this estimated utility function as if it were error-free within single-objective optimization. However, errors in utility estimation may yield a poor suggested decision. Furthermore, this approach produces a single suggested "best" design, whereas decision-makers often prefer to choose from a menu of designs. We propose a novel Bayesian optimization algorithm that acknowledges the uncertainty in preference estimation and implicitly chooses designs to evaluate with the time-consuming function that are good not just for a single estimated utility function but for a range of likely utility functions. Our algorithm then shows a menu of designs and their evaluated attributes to the decision-maker, who makes a final selection. We demonstrate the value of our algorithm in a variety of numerical experiments.
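To make the idea concrete, below is a minimal sketch, not the paper's algorithm: it assumes a linear utility over two attributes with uncertain weights, approximates the posterior over the weights by rejection sampling against elicited pairwise preferences, scores candidate designs by their expected utility averaged over weight samples, and finally presents a short menu of evaluated designs. The attribute function, preference data, and all names below are hypothetical stand-ins chosen for illustration; a real implementation would replace the toy attribute function with a Gaussian-process surrogate of the expensive attributes.

```python
# Minimal sketch (assumed setup, not the paper's method): Bayesian-style design
# selection under an uncertain linear utility u(y) = w . y over two attributes.
import numpy as np

rng = np.random.default_rng(0)

def attributes(x):
    """Hypothetical expensive attribute function: maps a design x in [0, 1]
    to two competing attributes (e.g., performance vs. cost)."""
    return np.array([np.sin(3.0 * x), 1.0 - x**2])

def sample_utility_weights(preferences, n_samples=200):
    """Crude posterior over linear utility weights: rejection-sample weight
    vectors on the simplex that agree with all observed pairwise preferences
    (y_preferred, y_other)."""
    samples = []
    while len(samples) < n_samples:
        w = rng.dirichlet([1.0, 1.0])
        if all(w @ y_win >= w @ y_lose for y_win, y_lose in preferences):
            samples.append(w)
    return np.array(samples)

# Toy preference data: the decision-maker preferred the first attribute vector
# in each pair. In practice these come from elicited comparisons.
preferences = [
    (np.array([0.8, 0.3]), np.array([0.4, 0.5])),
    (np.array([0.6, 0.7]), np.array([0.2, 0.9])),
]
w_samples = sample_utility_weights(preferences)

# Candidate designs, plus the designs and attributes evaluated so far.
candidates = np.linspace(0.0, 1.0, 201)
evaluated_x, evaluated_y = [], []

for _ in range(5):  # a few rounds of (nominally expensive) evaluations
    # Score each candidate by expected utility under the weight samples,
    # so designs that are good for many plausible utilities score highly.
    scores = np.array([np.mean(w_samples @ attributes(x)) for x in candidates])
    x_next = candidates[int(np.argmax(scores))]
    evaluated_x.append(x_next)
    evaluated_y.append(attributes(x_next))
    candidates = candidates[candidates != x_next]  # avoid re-evaluating

# Present a small menu of evaluated designs, ranked by expected utility,
# for the decision-maker to make the final selection.
menu_order = np.argsort([-np.mean(w_samples @ y) for y in evaluated_y])
for rank in menu_order[:3]:
    print(f"design x={evaluated_x[rank]:.3f}, attributes={evaluated_y[rank]}")
```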