Michael McCourt



  • Bayesian Optimization over Sets

    We propose a Bayesian optimization method over sets, to minimize a black-box function that takes a set as its single input. Because set inputs are permutation-invariant and variable-length, traditional Gaussian process-based Bayesian optimization strategies, which assume vector inputs, can fall short. To address this, we develop a Bayesian optimization method with a set kernel that is used to build surrogate functions. This kernel accumulates similarity over set elements to enforce permutation invariance and permit sets of variable size, but at a greater computational cost. To reduce this burden, we propose a more efficient probabilistic approximation which we prove is still positive definite and is an unbiased estimator of the true set kernel. Finally, we present several numerical experiments demonstrating that our method outperforms other methods in various applications.

    05/23/2019 ∙ by Jungtaek Kim, et al.

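A minimal sketch of a set kernel in the spirit of the abstract: averaging a base kernel over all element pairs makes the similarity permutation-invariant and well-defined for sets of different sizes. The averaged-pairwise form and the pair-subsampling estimator below are illustrative assumptions based on the abstract's description, not necessarily the paper's exact construction.

```python
import numpy as np

def rbf(x, y, lengthscale=1.0):
    """Base RBF kernel between two element vectors."""
    d = np.asarray(x) - np.asarray(y)
    return np.exp(-np.dot(d, d) / (2.0 * lengthscale ** 2))

def set_kernel(A, B, base=rbf):
    """Average pairwise base-kernel similarity between two sets.

    Averaging over all element pairs makes the result invariant to
    element order and defined for sets of different cardinalities,
    at O(|A||B|) cost per kernel evaluation.
    """
    return sum(base(a, b) for a in A for b in B) / (len(A) * len(B))

def approx_set_kernel(A, B, n_samples, base=rbf, rng=None):
    """Unbiased Monte Carlo estimate via uniformly subsampled pairs."""
    rng = np.random.default_rng(rng)
    ia = rng.integers(len(A), size=n_samples)
    ib = rng.integers(len(B), size=n_samples)
    return float(np.mean([base(A[i], B[j]) for i, j in zip(ia, ib)]))
```

The exact kernel's cost grows with the product of the set sizes, which is what motivates the cheaper subsampled estimator.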

  • Robust Bayesian Optimization with Student-t Likelihood

    Bayesian optimization (BO) has recently attracted the attention of the automatic machine learning community for its excellent results in hyperparameter tuning. BO is characterized by the sample efficiency with which it can optimize expensive black-box functions. This efficiency is achieved in a fashion similar to learning-to-learn methods: surrogate models (typically Gaussian processes) learn the target function and perform intelligent sampling. The surrogate model can be applied even in the presence of noise; however, as with most regression methods, it is very sensitive to outliers, which can result in erroneous predictions and, in the case of BO, biased and inefficient exploration. In this work, we present a GP model that uses a Student-t likelihood to segregate outliers and robustly conduct Bayesian optimization. We present numerical results evaluating the proposed method on both artificial functions and real problems.

    07/18/2017 ∙ by Ruben Martinez-Cantin, et al.

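A small numerical illustration of why a Student-t likelihood is robust: a Gaussian negative log-likelihood grows quadratically with the residual, so a single outlier dominates the fit, while the heavy-tailed Student-t penalizes large residuals only logarithmically. This demonstrates the robustness intuition only; it is not the paper's inference scheme.

```python
from scipy.stats import norm, t

def gauss_nll(residual, scale=1.0):
    # Gaussian negative log-likelihood: grows quadratically in the residual,
    # so outliers exert enormous pull on the fitted model.
    return -norm.logpdf(residual, scale=scale)

def student_t_nll(residual, df=3.0, scale=1.0):
    # Student-t negative log-likelihood: heavy tails mean an outlier's
    # penalty grows only logarithmically, bounding its influence.
    return -t.logpdf(residual, df=df, scale=scale)
```

For a residual of 10 standard deviations, the Gaussian penalty is roughly 50 nats while the Student-t penalty is under 10, so the robust model can effectively "give up" on the outlier instead of distorting its predictions.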

  • A Stratified Analysis of Bayesian Optimization Methods

    Empirical analysis serves as an important complement to theoretical analysis for studying practical Bayesian optimization. Often, empirical insights expose strengths and weaknesses inaccessible to theoretical analysis. We define two metrics for comparing the performance of Bayesian optimization methods and propose a ranking mechanism for summarizing performance within various genres, or strata, of test functions. These test functions serve to mimic the complexity of hyperparameter optimization problems, the most prominent application of Bayesian optimization, but with a closed form which allows for rapid evaluation and more predictable behavior. This offers a flexible and efficient way to investigate functions with specific properties of interest, such as oscillatory behavior or an optimum on the domain boundary.

    03/31/2016 ∙ by Ian Dewancker, et al.

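One simple way such a within-stratum ranking could be computed: rank the competing methods on each test function by best value found, then order methods by mean rank across the stratum. The mean-rank aggregation here is a placeholder assumption, not the paper's specific metrics.

```python
from collections import defaultdict

def rank_methods(results):
    """Summarize optimizer performance across one stratum of test functions.

    results: {function_name: {method_name: best_value_found}}, minimization.
    Returns method names ordered best-first by mean per-function rank.
    (Mean rank is an illustrative aggregation; the paper defines its own
    metrics and ranking mechanism.)
    """
    totals = defaultdict(float)
    n_functions = 0
    for scores in results.values():
        # Rank 1 goes to the method with the smallest best-found value.
        ordered = sorted(scores, key=scores.get)
        for rank, method in enumerate(ordered, start=1):
            totals[method] += rank
        n_functions += 1
    return sorted(totals, key=lambda m: totals[m] / n_functions)
```

Rank-based aggregation is a common choice here because raw objective values are not comparable across test functions with different scales.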

  • Practical Bayesian optimization in the presence of outliers

    Inference in the presence of outliers is an important field of research, as outliers are ubiquitous and may arise across a variety of problems and domains. Bayesian optimization is a method that heavily relies on probabilistic inference. This allows outstanding sample efficiency because the probabilistic machinery provides a memory of the whole optimization process. However, that virtue becomes a disadvantage when the memory is populated with outliers, inducing bias in the estimation. In this paper, we present an empirical evaluation of Bayesian optimization methods in the presence of outliers. The empirical evidence shows that Bayesian optimization with robust regression often produces suboptimal results. We then propose a new algorithm which combines robust regression (a Gaussian process with Student-t likelihood) with outlier diagnostics to classify data points as outliers or inliers. By using a scheduler for the classification of outliers, our method is more efficient and converges better than standard robust regression. Furthermore, we show that even in controlled situations with no expected outliers, our method is able to produce better results.

    12/12/2017 ∙ by Ruben Martinez-Cantin, et al.

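A sketch of the two ingredients the abstract combines: a diagnostic that flags points whose standardized residual under the surrogate's posterior is implausibly large, and a scheduler that runs that (relatively expensive) classification only periodically. Both the z-score threshold and the fixed-period schedule are simple stand-ins for the paper's actual diagnostic and scheduling rules.

```python
import numpy as np

def classify_outliers(y, post_mean, post_std, threshold=3.0):
    """Flag observations as outliers when their standardized residual
    under the surrogate posterior exceeds a threshold.

    A simple z-score rule standing in for the paper's diagnostic.
    Returns a boolean array, True where the point is classified an outlier.
    """
    z = np.abs((np.asarray(y) - np.asarray(post_mean)) / np.asarray(post_std))
    return z > threshold

def should_run_diagnostics(iteration, period=5):
    """Scheduler: only re-classify outliers every `period` BO iterations,
    so most iterations pay only for the cheap standard GP update."""
    return iteration % period == 0
```

Points flagged as outliers could then be down-weighted or excluded when refitting the surrogate, restoring the sample efficiency that the contaminated memory would otherwise destroy.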

  • Active Preference Learning for Personalized Portfolio Construction

    In financial asset management, choosing a portfolio requires balancing returns, risk, exposure, liquidity, volatility and other factors. These concerns are difficult to compare explicitly, with many asset managers using an intuitive or implicit sense of their interaction. We propose a mechanism for learning someone's sense of distinctness between portfolios with the goal of being able to identify portfolios which are predicted to perform well but are distinct from the perspective of the user. This identification occurs, e.g., in the context of Bayesian optimization of a backtested performance metric. Numerical experiments are presented which show the impact of personal beliefs in informing the development of a diverse and high-performing portfolio.

    08/24/2017 ∙ by Kevin Tee, et al.

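Preference learning of this kind is often built on a probit pairwise-comparison likelihood over latent utilities (in the style of Chu and Ghahramani's preference-learning GP). The sketch below shows that standard building block; whether the paper uses exactly this likelihood for its distinctness model is an assumption.

```python
import math

def pref_probability(f_a, f_b, noise=1.0):
    """Probit likelihood that the user prefers item A over item B,
    given latent utility values f_a and f_b.

    A common pairwise-preference model (illustrative, not necessarily
    the exact likelihood used in the paper): the comparison is the
    sign of f_a - f_b corrupted by Gaussian noise.
    """
    z = (f_a - f_b) / (math.sqrt(2.0) * noise)
    return 0.5 * (1.0 + math.erf(z))
```

Collecting such pairwise judgments about portfolios lets a model infer the user's latent sense of distinctness without ever asking for explicit numerical trade-offs among returns, risk, liquidity, and the rest.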

  • Sequential Preference-Based Optimization

    Many real-world engineering problems rely on human preferences to guide their design and optimization. We present PrefOpt, an open source package to simplify sequential optimization tasks that incorporate human preference feedback. Our approach extends an existing latent variable model for binary preferences to allow for observations of equivalent preference from users.

    01/09/2018 ∙ by Ian Dewancker, et al.

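One standard way to extend a binary probit preference model to "equivalent preference" observations is an ordinal-style likelihood with a tie region around zero utility difference. The tie-margin construction below is a common modeling pattern, offered as an assumption; PrefOpt's actual latent variable model may differ.

```python
import math

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def preference_likelihoods(f_a, f_b, tie_margin=0.5, noise=1.0):
    """Three-outcome preference likelihood: 'A preferred', 'equivalent',
    'B preferred', given latent utilities f_a and f_b.

    The noisy utility difference falls in (-tie_margin, tie_margin) for a
    tie (the tie_margin parameter is hypothetical). Returns the tuple
    (p_a, p_tie, p_b), which sums to 1.
    """
    d = (f_a - f_b) / noise
    p_a = 1.0 - _phi(tie_margin - d)   # difference exceeds the margin
    p_b = _phi(-tie_margin - d)        # difference below minus the margin
    p_tie = 1.0 - p_a - p_b            # difference lands inside the margin
    return p_a, p_tie, p_b
```

Setting `tie_margin` to zero recovers the ordinary binary preference model, so ties are a strict generalization rather than a separate mechanism.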

  • Orchestrate: Infrastructure for Enabling Parallelism during Hyperparameter Optimization

    Two key requirements dominate the development of effective production-grade machine learning models: first, a local software implementation and iteration process; second, distributed infrastructure to efficiently conduct training and hyperparameter optimization. While modern machine learning frameworks are very effective at the former, practitioners are often left building ad hoc frameworks for the latter. We present SigOpt Orchestrate, a library for such simultaneous training in a cloud environment. We describe the motivating factors and resulting design of this library, feedback from initial testing, and future goals.

    12/19/2018 ∙ by Alexandra Johnson, et al.


  • A Nonstationary Designer Space-Time Kernel

    In spatial statistics, kriging models are often designed using a stationary covariance structure; this translation-invariance produces models which have numerous favorable properties. This assumption can be limiting, though, in circumstances where the dynamics of the model have a fundamental asymmetry, such as in modeling phenomena that evolve over time from a fixed initial profile. We propose a new nonstationary kernel which is only defined over the half-line to incorporate time more naturally in the modeling process.

    12/01/2018 ∙ by Michael McCourt, et al.

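A classical example of a nonstationary covariance defined only on the half-line is the Brownian motion kernel k(s, t) = min(s, t): it depends on the actual locations rather than their difference, and its sample paths are pinned to zero at t = 0, echoing the "fixed initial profile" setting. This is a textbook illustration of the half-line idea, not the designer kernel the paper proposes.

```python
import numpy as np

def brownian_kernel(s, t):
    """Brownian motion covariance min(s, t) on the half-line [0, inf).

    Nonstationary: the covariance depends on s and t themselves, not on
    s - t, and the process variance k(t, t) = t grows from exactly zero
    at t = 0, pinning every sample path to a fixed initial value.
    """
    return np.minimum(s, t)

# Covariance matrix over a time grid; broadcasting builds the full matrix.
grid = np.linspace(0.0, 2.0, 20)
K = brownian_kernel(grid[:, None], grid[None, :])
```

Unlike a stationary kernel, translating the grid changes this covariance matrix, which is precisely the asymmetry in time that the abstract argues stationary kriging models cannot express.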

  • Sampling Humans for Optimizing Preferences in Coloring Artwork

    Many circumstances of practical importance have performance or success metrics which exist only implicitly (in the eye of the beholder, so to speak). Tuning aspects of such problems requires working without defined metrics and only considering pairwise comparisons or rankings. In this paper, we review an existing Bayesian optimization strategy for determining most-preferred outcomes, and identify an adaptation to allow it to handle ties. We then discuss some of the issues we have encountered when humans use this optimization strategy to optimize the coloring of a piece of abstract artwork. We hope that, by participating in this workshop, we can learn how other researchers encounter difficulties unique to working with humans in the loop.

    06/10/2019 ∙ by Michael McCourt, et al.
