Gibbs Sampling with People

08/06/2020
by Peter M. C. Harrison, et al.

A core problem in cognitive science and machine learning is to understand how humans derive semantic representations from perceptual objects, such as color from an apple, pleasantness from a musical chord, or trustworthiness from a face. Markov Chain Monte Carlo with People (MCMCP) is a prominent method for studying such representations, in which participants are presented with binary choice trials constructed such that the decisions follow a Markov Chain Monte Carlo acceptance rule. However, MCMCP's binary choice paradigm generates relatively little information per trial, and its local proposal function makes it slow to explore the parameter space and find the modes of the distribution. Here we therefore generalize MCMCP to a continuous-sampling paradigm, where in each iteration the participant uses a slider to continuously manipulate a single stimulus dimension to optimize a given criterion such as 'pleasantness'. We formulate both methods from a utility-theory perspective, and show that the new method can be interpreted as 'Gibbs Sampling with People' (GSP). Further, we introduce an aggregation parameter to the transition step, and show that this parameter can be manipulated to flexibly shift between Gibbs sampling and deterministic optimization. In an initial study, we show GSP clearly outperforming MCMCP; we then show that GSP provides novel and interpretable results in three other domains, namely musical chords, vocal emotions, and faces. We validate these results through large-scale perceptual rating experiments. The final experiments combine GSP with a state-of-the-art image synthesis network (StyleGAN) and a recent network interpretability technique (GANSpace), enabling GSP to efficiently explore high-dimensional perceptual spaces, and demonstrating how GSP can be a powerful tool for jointly characterizing semantic representations in humans and machines.
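The procedure described above — fixing all but one stimulus dimension and letting the participant set that dimension with a slider, with choices concentrating on high-utility values — is exactly a coordinate-wise Gibbs update under a Boltzmann/Luce choice rule. A minimal simulation sketch (not the authors' code): a hypothetical 2D "pleasantness" utility plays the role of the participant's internal representation, a `slider_response` function simulates one slider trial, and a temperature-like parameter `beta` stands in for the paper's aggregation parameter, interpolating between Gibbs sampling (`beta = 1`) and near-deterministic optimization (large `beta`).

```python
import math
import random

def utility(x, y):
    # Hypothetical internal utility ("pleasantness"), peaked at (2, -1).
    # Corresponds to the log-density of a 2D Gaussian target.
    return -0.5 * ((x - 2.0) ** 2 + (y + 1.0) ** 2)

def slider_response(fixed_other, dim, grid, beta, rng):
    # Simulated participant: one slider trial along dimension `dim`,
    # holding the other dimension fixed. The response is sampled from
    # the grid with probability proportional to exp(beta * utility),
    # i.e. a Boltzmann/Luce choice rule over slider positions.
    if dim == 0:
        utils = [utility(v, fixed_other) for v in grid]
    else:
        utils = [utility(fixed_other, v) for v in grid]
    m = max(utils)  # subtract max for numerical stability
    weights = [math.exp(beta * (u - m)) for u in utils]
    r = rng.random() * sum(weights)
    acc = 0.0
    for v, w in zip(grid, weights):
        acc += w
        if r <= acc:
            return v
    return grid[-1]

def gsp_chain(n_iter=2000, beta=1.0, seed=0):
    # Alternate slider trials over the two dimensions: a Gibbs sampler
    # whose conditional draws are the simulated participants' responses.
    rng = random.Random(seed)
    grid = [i * 0.1 - 5.0 for i in range(101)]  # slider range [-5, 5]
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n_iter):
        x = slider_response(y, 0, grid, beta, rng)
        y = slider_response(x, 1, grid, beta, rng)
        samples.append((x, y))
    return samples
```

With `beta = 1.0` the chain's samples approximate the distribution implied by the utility (sample means near the mode at (2, -1)); with a large `beta` each "participant" response collapses onto the conditional maximum, so the chain behaves like coordinate-ascent optimization, illustrating the sampling-versus-optimization shift the abstract attributes to the aggregation parameter.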


