A Confidence-Calibrated MOBA Game Winner Predictor

06/28/2020
by Dong-Hee Kim, et al.

In this paper, we propose a confidence-calibration method for predicting the winner of a famous multiplayer online battle arena (MOBA) game, League of Legends. In MOBA games, the dataset may contain a large amount of input-dependent noise, and not all of this noise is observable; hence, a confidence-calibrated prediction is desirable. Unfortunately, most existing confidence-calibration methods pertain to image and document classification tasks, where consideration of data uncertainty is not crucial. To address this, we propose a novel calibration method that takes data uncertainty into consideration. The proposed method achieves an outstanding expected calibration error (ECE) of 0.57, compared to a conventional temperature-scaling method whose ECE is 1.11.

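The abstract compares the proposed method against temperature scaling using ECE, so a minimal sketch of those two baseline quantities may help. This is not the authors' code: the binary win/loss setup, the 15-bin convention, and the synthetic logits below are assumptions for illustration only.

```python
# Minimal sketch (assumed, not from the paper): temperature scaling as the
# baseline calibrator and expected calibration error (ECE) as the metric.
import numpy as np

def temperature_scale(logits: np.ndarray, T: float) -> np.ndarray:
    """Divide logits by a temperature T and return softmax probabilities."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 15) -> float:
    """ECE: bin predictions by confidence, then average |accuracy - confidence| weighted by bin size."""
    confidences = probs.max(axis=1)       # predicted-class confidence
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
    return ece

# Hypothetical validation logits for a win/loss predictor; T = 1.0 is the uncalibrated model.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 2)) * 3.0
labels = rng.integers(0, 2, size=1000)
for T in (1.0, 2.0):
    p = temperature_scale(logits, T)
    print(f"T={T}: ECE={expected_calibration_error(p, labels):.4f}")
```

In practice the temperature T would be fit on a held-out validation set; the paper's contribution is to go beyond this single-parameter rescaling by modeling input-dependent (data) uncertainty.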