
Graph Element Networks: adaptive, structured computation and memory
We explore the use of graph neural networks (GNNs) to model spatial proc...

Every Local Minimum is a Global Minimum of an Induced Model
For nonconvex optimization in machine learning, this paper proves that ...

Meta-learning curiosity algorithms
We hypothesize that curiosity is a mechanism found by evolution that enc...

Learning to guide task and motion planning using score-space representation
In this paper, we propose a learning algorithm that speeds up the search...

Online Replanning in Belief Space for Partially Observable Task and Motion Problems
To solve multi-step manipulation tasks in the real world, an autonomous ...

GLIB: Exploration via Goal-Literal Babbling for Lifted Operator Learning
We address the problem of efficient exploration for learning lifted oper...

Few-Shot Bayesian Imitation Learning with Logic over Programs
We describe an expressive class of policies that can be efficiently lear...

Learning to select examples for program synthesis
Program synthesis is a class of regression problems where one seeks a so...

STRIPS Planning in Infinite Domains
Many robotic planning applications involve continuous actions with highl...

Learning to Rank for Synthesizing Planning Heuristics
We investigate learning heuristics for domain-specific planning. Prior w...

Focused Model-Learning and Planning for Non-Gaussian Continuous State-Action Systems
We introduce a framework for model learning and planning in stochastic d...

Backward-Forward Search for Manipulation Planning
In this paper we address planning problems in high-dimensional hybrid co...

Object-based World Modeling in Semi-Static Environments with Dependent Dirichlet-Process Mixtures
To accomplish tasks in human-centric indoor environments, robots need to...

Generalization in Deep Learning
This paper explains why deep learning can generalize well, despite large...

Bayesian Optimization with Exponential Convergence
This paper presents a Bayesian optimization method with exponential conv...

Learning to Cooperate via Policy Search
Cooperative games are those in which both agents share the same payoff s...

Deliberation Scheduling for Time-Critical Sequential Decision Making
We describe a method for time-critical decision making involving sequent...

Hierarchical Solution of Markov Decision Processes using Macro-actions
We investigate the use of temporally abstract actions, or macro-actions,...

Learning Finite-State Controllers for Partially Observable Environments
Reactive (memoryless) policies are sufficient in completely observable M...

Solving POMDPs by Searching the Space of Finite Policies
Solving partially observable Markov decision processes (POMDPs) is highl...

The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning
Most reinforcement learning methods operate on propositional representat...

Accelerating EM: An Empirical Study
Many applications require that we learn the parameters of a model from d...

Adaptive Importance Sampling for Estimation in Structured Domains
Sampling is an important tool for estimating large, complex sums and int...

Guiding the search in continuous state-action spaces by learning an action sampling distribution from off-target samples
In robotics, it is essential to be able to plan efficiently in high-dime...

Sampling-Based Methods for Factored Task and Motion Planning
This paper presents a general-purpose formulation of a large class of di...

STRIPStream: Integrating Symbolic Planners and Blackbox Samplers
Many planning applications involve complex relationships defined on high...

Integrating Human-Provided Information Into Belief State Representation Using Dynamic Factorization
In partially observed environments, it can be useful for a human to prov...

Active model learning and diverse action sampling for task and motion planning
The objective of this work is to augment the basic abilities of a robot ...

Planning to Give Information in Partially Observed Domains with a Learned Weighted Entropy Model
In many real-world robotic applications, an autonomous agent must act wi...

Learning Quickly to Plan Quickly Using Modular Meta-Learning
Multi-object manipulation problems in continuous state and action spaces...

Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior
Bayesian optimization usually assumes that a Bayesian prior is given. Ho...

Learning sparse relational transition models
We present a representation for describing transition models in complex ...

Effect of Depth and Width on Local Minima in Deep Learning
In this paper, we analyze the effects of depth and width on the quality ...

Look before you sweep: Visibility-aware motion planning
This paper addresses the problem of planning for a robot with a directio...

Elimination of All Bad Local Minima in Deep Learning
In this paper, we theoretically prove that we can eliminate all suboptim...

Differentiable Algorithm Networks for Composable Robot Learning
This paper introduces the Differentiable Algorithm Network (DAN), a comp...

Learning compositional models of robot skills for task and motion planning
The objective of this work is to augment the basic abilities of a robot ...

Visual Prediction of Priors for Articulated Object Interaction
Exploration in novel settings can be challenging without prior experienc...
Leslie Pack Kaelbling
The American roboticist Leslie Pack Kaelbling is the Panasonic Professor of Computer Science and Engineering at the Massachusetts Institute of Technology. She is widely known for adapting the partially observable Markov decision process (POMDP) framework from operations research for use in artificial intelligence and robotics. In 1997, Kaelbling received the IJCAI Computers and Thought Award for her work on reinforcement learning in embedded control systems and on programming tools for robot navigation. In 2000, she was elected a Fellow of the American Association for Artificial Intelligence (AAAI).
Kaelbling received an A.B. in Philosophy in 1983 and a Ph.D. in Computer Science in 1990, both from Stanford University. During this time she was also a member of the Center for the Study of Language and Information. She then worked at SRI International and its associated robotics spin-off, Teleos Research, before joining the faculty at Brown University. In 1999, she left Brown for a professorship at MIT. Her research focuses on decision making, machine learning, and sensing, with applications in robotics.
In 2001, she and two-thirds of the editorial board of the Kluwer-owned journal Machine Learning resigned in protest over the journal's pay-to-access archives and the limited compensation offered to authors. Kaelbling co-founded and served as the first editor-in-chief of the Journal of Machine Learning Research (JMLR), an open-access, peer-reviewed journal covering the same topics, which allows researchers to publish freely online and retain copyright over their papers. In response to the mass resignation, Kluwer amended its publication policy to let authors archive their papers online after peer review. Kaelbling replied that this policy was reasonable and would have made creating an alternative journal unnecessary, but noted that although the editorial board had made clear that it wanted such a policy, the change came only after the threat of resignation and the founding of JMLR.