
Bayesian Optimization in AlphaGo

12/17/2018
by   Yutian Chen, et al.
Google

During the development of AlphaGo, its many hyper-parameters were tuned with Bayesian optimization multiple times. This automatic tuning process resulted in substantial improvements in playing strength. For example, prior to the match with Lee Sedol, we tuned the latest AlphaGo agent, and this improved its win-rate from 50% to 66.5% in self-play games; this tuned version was deployed in the final match. Of course, since we tuned AlphaGo many times during its development cycle, the compounded contribution was even higher than this percentage. It is our hope that this brief case study will be of interest to Go fans, and also provide Bayesian optimization practitioners with some insights and inspiration.
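The abstract describes the general recipe: model an expensive, noisy objective (win-rate as a function of hyper-parameters) with a Gaussian process, and pick the next setting to evaluate by maximizing an acquisition function such as expected improvement. The sketch below is a minimal, self-contained illustration of that loop, not AlphaGo's actual tuning setup: the one-dimensional hyper-parameter, the synthetic `win_rate` objective, and all numeric constants are hypothetical stand-ins.

```python
import numpy as np
from math import erf

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel with length-scale ls."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and variance at test points Xs given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """EI for maximization: E[max(f - best, 0)] under the GP posterior."""
    s = np.sqrt(var)
    z = (mu - best) / s
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (mu - best) * cdf + s * pdf

# Hypothetical objective: win-rate peaking at hyper-parameter value 0.65.
def win_rate(x):
    return 0.5 + 0.17 * np.exp(-((x - 0.65) ** 2) / 0.02)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 3)          # a few random initial evaluations
y = win_rate(X)
grid = np.linspace(0.0, 1.0, 201)     # candidate hyper-parameter settings

for _ in range(15):                   # Bayesian optimization loop
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, win_rate(x_next))

print(f"best hyper-parameter {X[np.argmax(y)]:.3f}, win-rate {y.max():.3f}")
```

In the real system, each `win_rate` evaluation would be many self-play games, so the GP's sample efficiency is the whole point: a handful of evaluations locates a near-optimal setting that grid search would need far more games to find.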


Code Repositories

hyper_parameter_optimization

Tutorial on hyper-parameter example with a toy problem

