Learning Predictive Representations for Deformable Objects Using Contrastive Estimation

by Wilson Yan, et al.
UC Berkeley

Using visual model-based learning for deformable object manipulation is challenging due to the difficulty of learning plannable visual representations along with complex dynamics models. In this work, we propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model using contrastive estimation. Using simulation data collected by randomly perturbing deformable objects on a table, we learn latent dynamics models for these objects in an offline fashion. Then, using the learned models, we use simple model-based planning to solve challenging deformable object manipulation tasks such as spreading ropes and cloths. Experimentally, we show substantial improvements in performance over standard model-based learning techniques across our rope and cloth manipulation suite. Finally, we transfer our visual manipulation policies trained on data purely collected in simulation to a real PR2 robot through domain randomization.




I Introduction

Robotic manipulation of rigid objects has received significant interest over the last few decades, from grasping novel objects in clutter [28, 25, 47, 39, 12] to dexterous in-hand manipulation [22, 2, 59]. However, the objects we interact with in our daily lives are not always rigid. From putting on clothes to packing a shopping bag, we constantly need to manipulate objects that deform; even seemingly rigid objects like metal wires deform significantly during everyday interactions. As a result, there has been growing interest in algorithms that can tackle deformable object manipulation [54, 15, 42, 43, 46, 58, 45, 29, 50].

Deformable object manipulation presents two key challenges for robots. First, unlike rigid objects, deformable objects have no direct state representation. Consider a manipulation problem in which the robot needs to straighten a rope from a start configuration to a goal configuration: how does one track the shape of the rope? This lack of a canonical state often limits representations to discrete approximations [3]. Second, the dynamics of deformable objects are complex and non-linear [9]. Due to microscopic interactions within the object, even simple objects can exhibit complex and unpredictable behavior [38], which makes modeling and traditional task and motion planning difficult for such objects.

One class of techniques that circumvents the challenges in state estimation and dynamics modeling is image-based model-free learning [13, 44, 26]. For instance, Matas et al. [30], Seita et al. [45], and Wu et al. [58] use model-free methods in simulation for several difficult cloth manipulation tasks. However, without expert demonstrations, model-free learning is notoriously sample-inefficient [7], often needing millions of samples. This challenge is further exacerbated in the multi-task setting, where the robot needs to learn to reach multiple goals.

Model-based techniques, on the other hand, have shown promise in sample-efficient learning [57, 4, 35]. However, using model-based learning for deformable objects requires tackling the challenges of state representation and dynamics modeling head-on. So how does one learn models given high-dimensional observations and complex underlying dynamics? Some methods learn complex dynamics models directly in pixel space [19, 8]. Others, such as Agrawal et al. [1] and Nair et al. [36], learn forward dynamics models in conjunction with inverse dynamics models for manipulating deformable objects; however, during robotic execution, only the inverse model is used. Other model-based approaches, such as Wang et al. [56], train Causal InfoGANs [23, 6] to extract both visual representations and forward models and use the learned forward models for planning; however, these techniques are not robust due to the training instabilities associated with GANs [49].

In this paper, we introduce a new visual model-based framework that uses contrastive optimization to jointly learn both the underlying visual latent representations and the dynamics models for deformable objects. We hypothesize that contrastive methods for model-based learning achieve better generalization and latent space structure due to their inherent information-maximization objective. We re-frame the objective introduced in contrastive predictive coding [37] to allow for learning effective model dynamics and latent representations. Once the latent representation and dynamics models are learned from offline random interactions, we use standard model predictive control (MPC) with one-step predictions to manipulate deformable objects to desired visual goal configurations. Given this controller, we empirically demonstrate substantial improvements over standard model-based learning approaches across multi-goal rope and cloth spreading manipulation tasks. Videos of our real robot runs and reference code can be found on the project website: https://sites.google.com/view/contrastive-predictive-model.

In summary, we present three key contributions in this paper: (a) We propose a contrastive predictive modeling approach to model learning that is compatible with model predictive control. To our knowledge, this is the first use of contrastive estimation for model-based learning. (b) We demonstrate substantial improvements in multi-task deformable object manipulation over other model learning approaches. (c) We show the applicability of our method to real robot rope and cloth manipulation tasks by using sim-to-real transfer without additional real-world training data.

II Related Work

II-A Deformable Object Manipulation

There has been a substantial amount of prior work in the area of robotic manipulation of deformable objects. A detailed survey of past work can be found in Khalil and Payeur [20], Henrich and Wörn [15].

A standard approach to tackling deformable object manipulation is to use deformable object simulations with planning methods [18]. Past work in this domain has focused on simple linear deformable objects [41, 55, 34], creating better simulations [40], and faster planning [11]. However, the large number of states for deformable objects makes it difficult to plan correctly while being computationally efficient.

Instead of directly planning on the full dynamics, some prior research has focused on planning on simpler approximations, by using local controllers to handle the actual complex dynamics. One approach to using local controllers is model-based servoing [48, 54], where the end-effector is controlled to a goal location instead of explicit planning. However, since the controller is optimized over simple dynamics, it often gets stuck in local minima with more complex dynamics [32]. To solve this, several works [3, 31] have proposed Jacobian controllers that do not need explicit models, while [17, 16] have proposed learning-based techniques for servoing. We note that our proposed work on learning latent dynamics models is compatible with several of these model-based optimization techniques.

II-B Contrastive Prediction

Learning good representations remains a difficult challenge in deformable object manipulation. There has been a large amount of prior work on contrastive predictive methods for learning better representations of data. Word2Vec [33] optimizes a contrastive loss to obtain semantic and syntactic structure in the learned latent space for words. Oord et al. [37] show that it is possible to learn high-level representations of images, video, and speech data by employing a large number of negative samples. Tian et al. [52] learn high-level representations by encouraging different views of scenes to be embedded close to one another, and further from others, through a similarly framed contrastive loss. Recently, SimCLR [5], another contrastive learning framework, achieved state-of-the-art results in self-supervised representation learning, bridging the gap with supervised learning.

III Contrastive Forward Modeling (CFM)

In this section, we describe our proposed framework for learning deformable object manipulation: Contrastive Forward Modeling (CFM). We begin by discussing the formalism for predictive modeling and contrastive learning. Following that, we discuss our method for learning contrastive predictive models. See Figure 1 for an overview of our training scheme.

Fig. 1: Overview of our contrastive forward model. Training data consists of (image, next image, action) tuples, and we learn the encoder and forward model jointly. The contrastive loss objective brings the positive embedding pairs closer together and pushes the negative embeddings further away.

III-A Dynamic Predictive Models

For our problem setting, we consider a fully observable environment with observations $o_t$, actions $a_t$, and deterministic transition dynamics $o_{t+1} = f(o_t, a_t)$. We would like to learn a predictive model that approximates the observation at the next timestep. This can be done by directly learning a visual model in pixel space with regression over observation-action-observation tuples [10, 19]. Once we have learned a predictive model, it is natural to use it for planning to reach desired goal states, for example different configurations of a rope or cloth. However, planning directly in pixel space is difficult, as pixel-value comparisons between images do not necessarily correlate with their true distances. For example, consider an environment with a ball, where the task is to push the ball to the center. If the ball is far from the center, then all predicted next states from a visual forward model are equidistant from the goal ball-in-center image when comparing pixel values, since there is no image overlap. Therefore, we consider the framework of planning in a learned latent space. We learn an encoder $g_\theta$ that embeds an observation $o_t$ into a latent vector $z_t = g_\theta(o_t)$, coupled with a predictive model between latents, formulated as $\hat{z}_{t+1} = f_\theta(z_t, a_t)$. In this work, we propose to learn this latent space using a contrastive learning method.

III-B Contrastive Models

In our contrastive learning framework, we jointly learn an encoder $g_\theta$ and a forward model $f_\theta$. We use the InfoNCE contrastive loss described by Oord et al. [37]:

$$\mathcal{L} = -\mathbb{E}\left[\log\frac{h(\hat{z}_{t+1}, z_{t+1})}{h(\hat{z}_{t+1}, z_{t+1}) + \sum_{i=1}^{k} h(\hat{z}_{t+1}, \tilde{z}_i)}\right]$$

where $h$ is a similarity function between embeddings computed from the encoder, $\hat{z}_{t+1} = f_\theta(z_t, a_t)$ is the predicted next embedding, and the $\tilde{z}_i$ are $k$ negative samples, i.e., incorrect embeddings of the next state. The motivation behind this learning objective is to maximize mutual information between the predicted encodings and their respective positive samples. Within the embedding space, this pulls each positive sample pair together while pushing the negative samples further apart, as seen in Figure 1. Since we jointly learn a forward model that seeks to minimize $\|\hat{z}_{t+1} - z_{t+1}\|_2^2$, we use the similarity function:

$$h(z_1, z_2) = \exp\left(-\|z_1 - z_2\|_2^2\right)$$

where the norm is the $\ell_2$-norm. After learning the encoder and dynamics model, we plan using a simple version of Model Predictive Control (MPC): we sample several candidate actions, run them through the forward model from the current $z_t$, and choose the action whose predicted embedding is closest (in $\ell_2$-distance) to the goal embedding.
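The loss and one-step planner described above can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the authors' implementation: the encoder and forward model in the paper are learned neural networks, whereas here the toy forward model passed to the planner is a stand-in.

```python
import numpy as np

def similarity(z_pred, z_target):
    # h(z1, z2) = exp(-||z1 - z2||_2^2): large only when the predicted and
    # target embeddings are close, matching the forward model's objective.
    return np.exp(-np.sum((z_pred - z_target) ** 2, axis=-1))

def info_nce_loss(z_pred, z_pos, z_negs):
    # InfoNCE with one positive and k negatives:
    # -log h(pred, pos) / (h(pred, pos) + sum_i h(pred, neg_i)).
    pos = similarity(z_pred, z_pos)
    neg = similarity(z_pred[None, :], z_negs).sum()
    return float(-np.log(pos / (pos + neg)))

def plan_one_step(z_t, z_goal, actions, forward_model):
    # One-step MPC: score each sampled action by the l2 distance between the
    # predicted next embedding and the goal embedding; return the best action.
    preds = np.stack([forward_model(z_t, a) for a in actions])
    dists = np.linalg.norm(preds - z_goal[None, :], axis=-1)
    return actions[int(np.argmin(dists))]
```

For instance, with a toy additive forward model `lambda z, a: z + a` (latent and action dimensions matching for illustration), `plan_one_step` selects the sampled action whose predicted latent lands nearest the goal.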

IV Experimental Evaluations

In this section, we experimentally evaluate our method in various rope and cloth manipulation settings, both in simulation and in the real world. Our experiments seek to address the following questions:

  • Do contrastive learning methods learn better latent spaces and forward models for planning in deformable object manipulation tasks?

  • What aspects of our contrastive learning methods contribute the most to performance?

  • Can we successfully manipulate deformable objects on a real-world robot?

IV-A Environments and Tasks

To simulate deformable objects such as cloth and rope, we use the DeepMind Control [51] platform with MuJoCo 2.0 [53]. An overhead camera renders the RGB images used as input observations for training our method.

We design the following tasks in simulation:

1. Rope: The rope is represented by 25 geoms in simulation with a four-dimensional action space: the first two coordinates are the pixel pick point on the rope, and the last two are the delta direction in which to perturb the rope. At the start of each episode, the rope's state is randomly initialized by applying 120 random actions.

2. Cloth: The cloth is represented by a grid of geoms in simulation with a five-dimensional action space: the first two coordinates are the pixel pick point on the cloth, and the remaining three are the delta direction in which to perturb the cloth. At the start of each episode, the cloth's state is randomly initialized by applying random actions. In MuJoCo 2.0, the skin of the cloth can be changed using images taken of a real cloth.

For both rope and cloth environments, we evaluate our method by planning to a desired goal image and computing the sum of pairwise geom distances between the achieved and goal states. We find that averaging over 1000 trials suffices to maintain high-confidence evaluation estimates.
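The evaluation metric above can be expressed directly. A minimal sketch, assuming the achieved and goal geom positions are given as arrays of matching shape:

```python
import numpy as np

def sum_pairwise_geom_distance(achieved, goal):
    # Sum over geoms of the Euclidean distance between each achieved geom
    # position and the corresponding goal geom position.
    # achieved, goal: float arrays of shape (num_geoms, dim); lower is better.
    return float(np.linalg.norm(achieved - goal, axis=-1).sum())
```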

Fig. 2: Trajectories for each of the baselines within the simulator, all starting from the same start state and sharing the same end goal of a horizontal line. Each trajectory was run for 20 actions. Note that our method (CFM) reaches the goal state significantly faster than the baselines.
TABLE I: Quantitative comparisons between different model-based learning methods (Random Policy, Joint Dynamics Model, Visual Forward Model, and CFM (Ours)) on rope goals (horizontal, vertical, 45°, 135°, random) and cloth goals (flat, random), evaluated both without and with domain randomization (DR). The metric is the sum of pairwise geom distances between the final observation and the goal state; lower is better.

IV-B Data Collection

Since collecting real-world data on robots is expensive, we instead collect randomly perturbed rope and cloth data in simulation. Random perturbations provide a diverse set of deformable object configurations and interactions for learning the latent space and dynamics model. We collect 4000 trajectories of length 50 for rope (200k samples) and 8000 trajectories of length 50 for cloth (400k samples).
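The collection procedure is a plain random-policy rollout loop. A sketch under stated assumptions: `reset_fn` and `step_fn` are hypothetical stand-ins for the simulator interface, and the uniform action range is illustrative.

```python
import numpy as np

def collect_random_transitions(reset_fn, step_fn, num_traj, horizon, action_dim, rng):
    # Roll out a uniform random policy and record (obs, action, next_obs)
    # tuples for offline training of the encoder and dynamics model.
    data = []
    for _ in range(num_traj):
        obs = reset_fn()
        for _ in range(horizon):
            action = rng.uniform(-1.0, 1.0, size=action_dim)  # pick point + delta
            next_obs = step_fn(obs, action)
            data.append((obs, action, next_obs))
            obs = next_obs
    return data
```

With `num_traj=4000` and `horizon=50` this yields the 200k rope transitions described above.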

IV-C Baselines

To show the substantial improvements of our model over prior methods, we compare against several baselines: a random policy, a visual forward model, an autoencoder trained jointly with a latent dynamics model, PlaNet [14], and a joint dynamics model [1]. To ensure that pick points always lie on the rope or cloth, we constrain them using a binary segmentation of the observation image computed by RGB thresholding. During planning, all methods use MPC with one-step predictions.

  • Random Policy: We sample pick actions uniformly over the binary segmentation, and place actions uniformly at random in a unit square centered at the pick location.

  • Visual Forward Model: We train a forward model similar to Kaiser et al. [19] to perform modeling and planning purely through pixel space.

  • Autoencoder: We learn a simple latent space model by jointly training a classical autoencoder with a forward dynamics model. The autoencoder learns to minimize the $\ell_2$-distance between reconstructed and actual images [24].

  • PlaNet: We train PlaNet [14], a stochastic variant of an autoencoder, as another latent space model. PlaNet models a sequential VAE and optimizes a temporal variational lower bound.

  • Joint Dynamics Model: We jointly learn a forward and inverse model following Agrawal et al. [1].

For consistency across all latent space models, we use the same latent size for both the rope and cloth environments. For all methods, we sample a fixed set of candidate one-step actions when performing closed-loop planning. See Figure 2 for example trajectories from each baseline in comparison to our method.

IV-D Training Details

We use the same encoder architecture for all models: a series of six 2D convolutions with Leaky ReLU [27] activations in between, followed by a flattening step and a fully connected layer that produces the latent $z_t$. The forward model is a multi-layer perceptron (MLP) with two hidden layers that outputs the parameters of a linear transformation applied to $z_t$. For our method (CFM), we use the other elements of the training batch as negative samples for each positive pair. For PlaNet, following Hafner et al. [14], the decoder architecture is a dense layer followed by transposed convolutions that upscale to the size of the image. The visual forward models follow the same convolutional encoder and decoder architectures, with action conditioning implemented similarly to Kaiser et al. [19]: actions are processed by a separate dense layer at each resolution, multiplied channel-wise, and broadcast spatially. Images and actions are centered and scaled to the range $[-1, 1]$. We train all models with batch size 128 using the Adam optimizer [21]. Each model was trained on a single NVIDIA TitanX GPU, taking a few hours. All of our simulated environments, evaluation metrics, and training code will be publicly released.
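The forward model described above, an MLP that emits the parameters of a linear transformation applied to the latent, can be sketched as follows. The layer sizes and the `init_params` helper are illustrative assumptions, not the paper's exact hyperparameters:

```python
import numpy as np

def init_params(rng, latent_dim, action_dim, hidden=64):
    # Hypothetical parameter shapes for an MLP that outputs a (W, b) pair.
    in_dim = latent_dim + action_dim
    out_dim = latent_dim * latent_dim + latent_dim
    return {
        "W1": rng.normal(0, 0.1, (hidden, in_dim)),  "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.1, (hidden, hidden)),  "b2": np.zeros(hidden),
        "W3": rng.normal(0, 0.1, (out_dim, hidden)), "b3": np.zeros(out_dim),
    }

def forward_dynamics(z, a, params, latent_dim):
    # The MLP consumes [z, a] and emits a matrix W and bias b; the predicted
    # next latent is the action-conditioned linear map z_next = W @ z + b.
    x = np.concatenate([z, a])
    h = np.tanh(params["W1"] @ x + params["b1"])
    h = np.tanh(params["W2"] @ h + params["b2"])
    out = params["W3"] @ h + params["b3"]
    W = out[: latent_dim * latent_dim].reshape(latent_dim, latent_dim)
    b = out[latent_dim * latent_dim:]
    return W @ z + b
```

The design choice here is that the network predicts a state-dependent linear operator rather than the next latent directly, which the ablations below show matters for performance.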

IV-E Does Using Contrastive Models Improve Performance?

In this section, we compare our method with the baselines, analyzing the advantages that contrastive models bring over prior methods. Consider first a naive baseline in which we replace the InfoNCE loss with an MSE loss. This is equivalent to jointly fitting an encoder and dynamics model that minimize $\|f_\theta(g_\theta(o_t), a_t) - g_\theta(o_{t+1})\|_2^2$. The optimal solution is then for the encoder to map all observations to a constant vector, which achieves zero loss. Preventing this kind of degenerate solution requires regularizing the latent space in some way; both prior methods and contrastive learning do this differently, so we analyze which approaches perform better. Table I shows quantitative results comparing our method against the baselines in different rope and cloth environments, with and without domain randomization for robot transfer. Our method does best on all randomly sampled goals both with and without domain randomization, indicating stronger generalization of the latent space for planning. Figure 2 shows example simulator trajectories for each baseline; each trajectory has the same start state and goal image and was run for 20 actions.
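The collapse argument can be checked numerically: with a constant encoder, the latent MSE objective is zero on any data. A small sketch (the `constant_encoder` and `identity_forward` stand-ins are hypothetical, chosen to exhibit the degenerate optimum):

```python
import numpy as np

def latent_mse_loss(encode, forward, obs, actions, next_obs):
    # ||f(g(o_t), a_t) - g(o_{t+1})||^2 averaged over a batch of transitions.
    pred = forward(encode(obs), actions)
    return float(np.mean(np.sum((pred - encode(next_obs)) ** 2, axis=-1)))

# A constant encoder with an identity forward model achieves zero loss on any
# data -- the collapsed solution that contrastive negatives rule out.
constant_encoder = lambda obs: np.zeros((obs.shape[0], 8))
identity_forward = lambda z, a: z
```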

An autoencoder regularizes its latent space by additionally training a decoder to reconstruct $o_t$ from $z_t$. The model does well in some scenarios, such as the diagonal rope goals, but performs poorly once domain randomization is introduced to enable transfer to a real robot. This is most likely because the autoencoder is optimized for pixel-perfect reconstructions, so features such as lighting and color must be encoded in the latent space even when they are irrelevant to the task. PlaNet behaves similarly to the autoencoder, as it is also a form of stochastic autoencoder. It performs reasonably competitively with our method but again fails when domain randomization is introduced.

The joint dynamics model regularizes its latent space by jointly learning an inverse model with the forward model. The joint model performed the best across all the baselines when moving to domain randomized data. However, our method still outperforms the joint model for every task.

The visual forward model is the only baseline that plans in pixel space. It generally performs poorly on objects with low image-area coverage, such as the different rope goal orientations, but does better than our method on the cloth flattening task. However, since it operates purely in pixel space, it unsurprisingly suffers a sharp degradation in performance when domain randomization is introduced, and thus generalizes poorly to the real robot setting.

TABLE II: Ablation experiments on the forward model architecture (linear, MLP, and MLP outputting a linear transformation) and on the contrastive similarity function (log-bilinear vs. ours), using the same evaluation metric as Table I (lower is better).
Fig. 3: Each row represents one trajectory using our contrastive forward policy. The rope task uses 40 actions between the start and final states, while the cloth task uses 100 actions.
Robot Experiments (intersection in pixels)

Method                              Rope (Horizontal)  Rope (Vertical)  Rope (45°)  Rope (135°)  Rope (Squiggle)  Cloth (Flat)
Random Policy                                   6.880           14.727      13.662        4.266            0.049       462.513
Autoencoder                                     5.526            3.334       3.862        7.499            3.419       603.927
Joint Dynamics Model                           17.722           23.636      33.631       21.267           18.311       772.303
Contrastive Forward Model (Ours)               32.827           36.387      33.891       38.952           20.711      1001.082

TABLE III: The maximum intersection area in pixels between the goal image and observation images, averaged over all seeds.

IV-F Ablations on Contrastive Models

In this section, we perform an ablation study on our method, examining the impact of architectural design choices on performance. We ablate over two aspects: the forward model architecture and the contrastive similarity function. For the forward model, our method uses a multi-layer perceptron (MLP) that outputs the parameters of a linear transformation, which is then applied to $z_t$. For the contrastive similarity function, our method uses the negative squared $\ell_2$-distance similarity introduced in Section III-B. The quantitative results, measured as the sum of pairwise geom distances between the final and goal images, appear in Table II.

IV-F1 Contrastive Similarity Functions

We compare our similarity function with the original InfoNCE similarity function in Oord et al. [37], the log-bilinear similarity $h(z_1, z_2) = \exp(z_1^\top W z_2)$. We achieve the largest boost in performance when switching to our similarity function, as it is more in line with the minimization objective of learning a correct forward model, whereas the log-bilinear model only encourages alignment (as opposed to closeness) of embedding vectors.
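The distinction between alignment and closeness is easy to see numerically. A small sketch comparing the two similarity functions, with $W$ taken as the identity purely for illustration:

```python
import numpy as np

def neg_dist_similarity(z1, z2):
    # exp(-||z1 - z2||_2^2): large only when the embeddings are close.
    return float(np.exp(-np.sum((z1 - z2) ** 2)))

def log_bilinear_similarity(z1, z2, W):
    # exp(z1^T W z2): rewards alignment of directions, regardless of distance.
    return float(np.exp(z1 @ W @ z2))

z = np.array([1.0, 0.0])
aligned_but_far = 10.0 * z  # same direction, much larger norm
W = np.eye(2)
```

Here the log-bilinear score of `(z, aligned_but_far)` exceeds that of `(z, z)` even though the pair is far apart, while the negative-distance similarity correctly prefers the identical pair.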

IV-F2 Forward Model Architectures

We experiment with a few forward model architectures: linear, a small MLP, and a small MLP that outputs the parameters of a linear transformation. As expected, the biggest drop in performance occurs with the simpler linear dynamics model, and a slighter drop when using a plain MLP, for both rope and cloth tasks. This demonstrates the need for more expressive models in latent forward-dynamics learning.

Fig. 4: Rope trajectories for each baseline and our method applied on a real robot, all with the same start state and goal rope orientation.

IV-G Real Robot Experiments

IV-G1 Real Robot Setup

We use a PR2 robot to perform our experiments and an overhead camera looking down on the deformable objects to get the RGB image inputs. To ensure the policy learned in the simulator transfers over to the real world, we apply domain randomization by changing the lighting, texture, friction, damping, inertia, and mass of the object during every training step within the simulator. We also use a pick and place strategy to mimic the same four-dimensional actions within the simulator.

To compute the actions, we employ a model predictive control (MPC) approach, replanning the action at each time step based on the previous image. We segment the rope/cloth against the background to obtain the list of valid pick locations on the object. We then generate possible actions by uniformly sampling 100 random deltas combined with randomly chosen pick locations, and feed these into our forward model along with the encoding of the current image to obtain the latent encoding of each prospective next state. To pick the optimal action, we find the pick location and delta that minimize the Euclidean distance from the predicted next state to our goal state, and return this action to the robot. The delta from the policy is in normalized coordinates for both the x and y axes, and we rescale it to pixels. On the robot side, we use a learned linear mapping to transform the image's pixel coordinates into the Cartesian coordinates the robot uses. To emulate the simulator, the robot's left arm moves to the pick location, goes down and closes the gripper, moves up, moves to the new location, and moves down and opens the gripper, where the gripper height is hard-coded to a manually tuned value.
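The segmentation and action-sampling steps above can be sketched as follows. The RGB thresholds and helper names are illustrative assumptions, not the actual robot code:

```python
import numpy as np

def valid_pick_points(rgb, lower, upper):
    # Binary segmentation by RGB thresholding: a pixel is on the object when
    # every channel falls inside [lower, upper]. Returns (row, col) coords.
    mask = np.all((rgb >= lower) & (rgb <= upper), axis=-1)
    return np.argwhere(mask)

def sample_pick_place_actions(pick_points, num_samples, rng):
    # Pair randomly chosen pick pixels with uniformly sampled place deltas,
    # producing candidate actions to score with the forward model.
    idx = rng.integers(0, len(pick_points), size=num_samples)
    deltas = rng.uniform(-1.0, 1.0, size=(num_samples, 2))
    return pick_points[idx], deltas
```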

IV-G2 Evaluation Metrics

We use three baselines along with our contrastive method for real-world evaluation. The first is a random policy, and the other two are the policies that performed best with domain randomization: the autoencoder and the joint dynamics model [1]. For the rope, all models are evaluated on five goal states: horizontal, vertical, a straight line at 45°, a straight line at 135°, and a squiggly rope on the left. For the cloth, the models are evaluated on one goal state, a flat blue cloth with no rotation. The metric we use is the intersection in pixels between the segmented final image and the segmented goal image. We prefer this over intersection over union (IOU) since the objects have the same shape, making the union normalization unnecessary; the simpler intersection values also provide more insight for comparisons than IOU. The models are run for 40 actions on the rope or 100 actions on the cloth, and the image after each action is stored as an observation. Among all observations, the one with the highest intersection with the goal is chosen for each method. To account for different seeds, we use 4 starting locations for our contrastive method and 2 starting locations for the baselines, averaging the scores across start locations. For the cloth, the seeds also involve different cloth colors (blue, gold, white).
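The intersection metric is a single mask operation; a minimal sketch:

```python
import numpy as np

def pixel_intersection(seg_obs, seg_goal):
    # Number of pixels where both binary segmentation masks are on; this is
    # the raw intersection score (no union normalization, unlike IoU).
    return int(np.logical_and(seg_obs, seg_goal).sum())
```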

The evaluation results appear in Table III, which shows that our model performs best on all rope and cloth tasks. The joint dynamics model is the second best, with results close to ours on the 45° and squiggle rope tasks. Example trajectories from our model are shown from an overhead view in Figure 3, and visual comparisons between our method and the baselines on the real robot appear in Figure 4. We see that our method plans toward the correct goal states more accurately than the baselines.

V Conclusion

In this paper, we propose a contrastive learning approach for predictive modeling of deformable objects. We show that contrastive learning learns stronger and more plannable latent representations compared to existing methods. Since our method only requires collecting random data in an environment, it allows for easier transfer to real robots without the need for real-world training.


We thank AWS for computing resources. We also gratefully acknowledge the support from Berkeley DeepDrive, NSF, and the ONR Pecase award.


  • [1] P. Agrawal, A. Nair, P. Abbeel, J. Malik, and S. Levine (2016) Learning to poke by poking: experiential learning of intuitive physics. NIPS. Cited by: §I, 5th item, §IV-C, §IV-G2.
  • [2] M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, et al. (2018) Learning dexterous in-hand manipulation. arXiv preprint. Cited by: §I.
  • [3] D. Berenson (2013) Manipulation of deformable objects without modeling and simulating deformation. In IROS, Cited by: §I, §II-A.
  • [4] E. F. Camacho and C. B. Alba (2013) Model predictive control. Springer Science & Business Media. Cited by: §I.
  • [5] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. arXiv preprint. Cited by: §II-B.
  • [6] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel (2016) Infogan: interpretable representation learning by information maximizing generative adversarial nets. In NIPS, Cited by: §I.
  • [7] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel (2016) Benchmarking deep reinforcement learning for continuous control. In ICML, Cited by: §I.
  • [8] F. Ebert, C. Finn, S. Dasari, A. Xie, A. Lee, and S. Levine (2018) Visual foresight: model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568. Cited by: §I.
  • [9] N. Essahbi, B. C. Bouzgarrou, and G. Gogu (2012) Soft material modeling for robotic manipulation. In Applied Mechanics and Materials, Cited by: §I.
  • [10] C. Finn and S. Levine (2017) Deep visual foresight for planning robot motion. In ICRA, Cited by: §III-A.
  • [11] B. Frank, C. Stachniss, N. Abdo, and W. Burgard (2011) Efficient motion planning for manipulation robots in environments with deformable objects. In IROS, Cited by: §II-A.
  • [12] A. Gupta, A. Murali, D. P. Gandhi, and L. Pinto (2018) Robot learning in homes: improving generalization and reducing dataset bias. In NeurIPS, Cited by: §I.
  • [13] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, et al. (2018) Soft actor-critic algorithms and applications. arXiv preprint. Cited by: §I.
  • [14] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson (2018) Learning latent dynamics for planning from pixels. arXiv preprint. Cited by: 4th item, §IV-C, §IV-D.
  • [15] D. Henrich and H. Wörn (2012) Robot manipulation of deformable objects. Springer Science & Business Media. Cited by: §I, §II-A.
  • [16] Z. Hu, P. Sun, and J. Pan (2018) Three-dimensional deformable object manipulation using fast online gaussian process regression. RAL. Cited by: §II-A.
  • [17] B. Jia, Z. Hu, Z. Pan, D. Manocha, and J. Pan (2018) Learning-based feedback controller for deformable object manipulation. arXiv preprint. Cited by: §II-A.
  • [18] P. Jiménez (2012) Survey on model-based manipulation planning of deformable objects. Robotics and computer-integrated manufacturing. Cited by: §II-A.
  • [19] L. Kaiser, M. Babaeizadeh, P. Milos, B. Osinski, R. H. Campbell, K. Czechowski, D. Erhan, C. Finn, P. Kozakowski, S. Levine, et al. (2019) Model-based reinforcement learning for atari. arXiv preprint. Cited by: §I, §III-A, 2nd item, §IV-D.
  • [20] F. F. Khalil and P. Payeur (2010) Dexterous robotic manipulation of deformable objects with multi-sensory feedback-a review. In Robot Manipulators Trends and Development, Cited by: §II-A.
  • [21] D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV-D.
  • [22] V. Kumar, E. Todorov, and S. Levine (2016) Optimal control with learned local models: application to dexterous manipulation. In ICRA, Cited by: §I.
  • [23] T. Kurutach, A. Tamar, G. Yang, S. J. Russell, and P. Abbeel (2018) Learning plannable representations with Causal InfoGAN. In NeurIPS, Cited by: §I.
  • [24] S. Lange and M. Riedmiller (2010) Deep auto-encoder neural networks in reinforcement learning. In IJCNN, Cited by: 3rd item.
  • [25] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen (2016) Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. ISER. Cited by: §I.
  • [26] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv preprint. Cited by: §I.
  • [27] A. L. Maas, A. Y. Hannun, and A. Y. Ng (2013) Rectifier nonlinearities improve neural network acoustic models. In ICML, Cited by: §IV-D.
  • [28] J. Mahler, F. T. Pokorny, B. Hou, M. Roderick, M. Laskey, M. Aubry, K. Kohlhoff, T. Kröger, J. Kuffner, and K. Goldberg (2016) Dex-Net 1.0: a cloud-based network of 3D objects for robust grasp planning using a multi-armed bandit model with correlated rewards. In ICRA, Cited by: §I.
  • [29] J. Maitin-Shepard, M. Cusumano-Towner, J. Lei, and P. Abbeel (2010) Cloth grasp point detection based on multiple-view geometric cues with application to robotic towel folding. In ICRA, Cited by: §I.
  • [30] J. Matas, S. James, and A. J. Davison (2018) Sim-to-real reinforcement learning for deformable object manipulation. arXiv preprint. Cited by: §I.
  • [31] D. McConachie and D. Berenson (2018) Estimating model utility for deformable object manipulation using multiarmed bandit methods. IEEE Transactions on Automation Science and Engineering. Cited by: §II-A.
  • [32] D. McConachie, M. Ruan, and D. Berenson (2017) Interleaving planning and control for deformable object manipulation. In ISRR, Cited by: §II-A.
  • [33] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NIPS, Cited by: §II-B.
  • [34] M. Moll and L. E. Kavraki (2006) Path planning for deformable linear objects. T-RO. Cited by: §II-A.
  • [35] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine (2018) Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In ICRA, Cited by: §I.
  • [36] A. Nair, D. Chen, P. Agrawal, P. Isola, P. Abbeel, J. Malik, and S. Levine (2017) Combining self-supervised learning and imitation for vision-based rope manipulation. In ICRA, Cited by: §I.
  • [37] A. v. d. Oord, Y. Li, and O. Vinyals (2018) Representation learning with contrastive predictive coding. arXiv preprint. Cited by: §I, §II-B, §III-B, §IV-F1.
  • [38] P. Pierański, S. Przybył, and A. Stasiak (2001) Tight open knots. The European Physical Journal E. Cited by: §I.
  • [39] L. Pinto and A. Gupta (2016) Supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours. ICRA. Cited by: §I.
  • [40] S. Rodriguez, X. Tang, J. Lien, and N. M. Amato (2006) An obstacle-based rapidly-exploring random tree. In ICRA, Cited by: §II-A.
  • [41] M. Saha and P. Isto (2007) Manipulation planning for deformable linear objects. T-RO. Cited by: §II-A.
  • [42] J. Schulman, J. Ho, C. Lee, and P. Abbeel (2013) Generalization in robotic manipulation through the use of non-rigid registration. In ISRR, Cited by: §I.
  • [43] J. Schulman, A. Lee, J. Ho, and P. Abbeel (2013) Tracking deformable objects with point clouds. In ICRA, Cited by: §I.
  • [44] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz (2015) Trust region policy optimization. In ICML, Cited by: §I.
  • [45] D. Seita, A. Ganapathi, R. Hoque, M. Hwang, E. Cen, A. K. Tanwani, A. Balakrishna, B. Thananjeyan, J. Ichnowski, N. Jamali, K. Yamane, S. Iba, J. Canny, and K. Goldberg (2019) Deep imitation learning of sequential fabric smoothing policies. arXiv preprint. Cited by: §I.
  • [46] D. Seita, N. Jamali, M. Laskey, A. K. Tanwani, R. Berenstein, P. Baskaran, S. Iba, J. Canny, and K. Goldberg (2018) Deep transfer learning of pick points on fabric for robot bed-making. arXiv preprint. Cited by: §I.
  • [47] K. B. Shimoga (1996) Robot grasp synthesis algorithms: a survey. IJRR. Cited by: §I.
  • [48] J. Smolen and A. Patriciu (2009) Deformation planning for robotic soft tissue manipulation. In Advances in Computer-Human Interactions, Cited by: §II-A.
  • [49] A. Srivastava, L. Valkov, C. Russell, M. U. Gutmann, and C. Sutton (2017) VEEGAN: reducing mode collapse in GANs using implicit variational learning. In NIPS, Cited by: §I.
  • [50] J. Stria, D. Prusa, V. Hlavac, L. Wagner, V. Petrik, P. Krsek, and V. Smutny (2014) Garment perception and its folding using a dual-arm robot. In IROS, Cited by: §I.
  • [51] Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. d. L. Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, et al. (2018) Deepmind control suite. arXiv preprint. Cited by: §IV-A.
  • [52] Y. Tian, D. Krishnan, and P. Isola (2019) Contrastive multiview coding. arXiv preprint. Cited by: §II-B.
  • [53] E. Todorov, T. Erez, and Y. Tassa (2012) MuJoCo: a physics engine for model-based control. In IROS, Cited by: §IV-A.
  • [54] T. Wada, S. Hirai, S. Kawamura, and N. Kamiji (2001) Robust manipulation of deformable objects by simple PID feedback. In ICRA, Cited by: §I, §II-A.
  • [55] H. Wakamatsu, E. Arai, and S. Hirai (2006) Knotting/unknotting manipulation of deformable linear objects. IJRR. Cited by: §II-A.
  • [56] A. Wang, T. Kurutach, K. Liu, P. Abbeel, and A. Tamar (2019) Learning robotic manipulation through visual planning and acting. RSS. Cited by: §I.
  • [57] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou (2017) Information-theoretic MPC for model-based reinforcement learning. In ICRA, Cited by: §I.
  • [58] Y. Wu, W. Yan, T. Kurutach, L. Pinto, and P. Abbeel (2019) Learning to manipulate deformable objects without demonstrations. arXiv preprint. Cited by: §I, §I.
  • [59] H. Yousef, M. Boukallel, and K. Althoefer (2011) Tactile sensing for dexterous in-hand manipulation in robotics—a review. Sensors and Actuators A: physical. Cited by: §I.