Modeling Human-like Concept Learning with Bayesian Inference over Natural Language

06/05/2023
by Kevin Ellis, et al.

We model the learning of abstract symbolic concepts by performing Bayesian inference over utterances in natural language. For efficient inference, we use a large language model as a proposal distribution. To better model human learners, we fit a prior to human data, and we evaluate on both generative and logical concepts.
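To make this style of inference concrete, below is a minimal Python sketch: a language-model proposal suggests candidate concept descriptions in natural language, each candidate is scored by a prior and a likelihood, and the posterior predictive is approximated by renormalizing over the proposed candidates. Everything here is an illustrative assumption rather than the paper's implementation: the toy number-concept domain, the hard-coded hypothesis pool standing in for LLM samples, the description-length prior, and all function names are invented for this sketch.

```python
import math
import random

def propose_from_llm(examples, n=4):
    """Stub for an LLM proposal distribution q(hypothesis | examples).

    In the method the abstract describes, a large language model would be
    prompted with the observed examples and asked for candidate concept
    descriptions; here we just sample from a fixed pool of rules.
    """
    pool = ["even numbers", "multiples of four",
            "numbers below ten", "powers of two"]
    return random.choices(pool, k=n)

def consistent(hypothesis, x):
    """Toy interpreter mapping a natural-language rule to a membership test."""
    rules = {
        "even numbers": lambda v: v % 2 == 0,
        "multiples of four": lambda v: v % 4 == 0,
        "numbers below ten": lambda v: v < 10,
        "powers of two": lambda v: v > 0 and (v & (v - 1)) == 0,
    }
    return rules[hypothesis](x)

def log_prior(hypothesis):
    """Toy description-length prior; the paper instead fits a prior to human data."""
    return -len(hypothesis.split())

def log_likelihood(hypothesis, examples):
    """All-or-nothing likelihood: zero mass if any example violates the rule."""
    return 0.0 if all(consistent(hypothesis, x) for x in examples) else -math.inf

def posterior_predictive(examples, query, n_proposals=100):
    """P(query belongs to the concept | examples), renormalized over proposals."""
    hypotheses = propose_from_llm(examples, n=n_proposals)
    log_w = [log_prior(h) + log_likelihood(h, examples) for h in hypotheses]
    z = max(log_w)
    if z == -math.inf:
        return 0.5  # no consistent hypothesis proposed; fall back to chance
    weights = [math.exp(lw - z) for lw in log_w]  # stable exponentiation
    total = sum(weights)
    hit = sum(w for h, w in zip(hypotheses, weights) if consistent(h, query))
    return hit / total

print(posterior_predictive(examples=[4, 8, 16], query=32))  # near 1.0
print(posterior_predictive(examples=[4, 8, 16], query=6))   # split across rules
```

Renormalizing over the sampled candidate set sidesteps the need for the proposal density itself; a full importance sampler would additionally divide each weight by the language model's probability of proposing that hypothesis.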
