Extended Version of GTGraffiti: Spray Painting Graffiti Art from Human Painting Motions with a Cable Driven Parallel Robot

by Gerry Chen, et al.
Georgia Institute of Technology

We present GTGraffiti, a graffiti painting system from Georgia Tech that tackles challenges in art, hardware, and human-robot collaboration. The problem of painting graffiti in a human style is particularly challenging and requires a system-level approach because the robotics and art must be designed around each other. The robot must be highly dynamic over a large workspace while the artist must work within the robot's limitations. Our approach consists of three stages: artwork capture, robot hardware, and planning & control. We use motion capture to record collaborating artists' painting motions, which are then composed and processed into a time-varying linear feedback controller for a cable-driven parallel robot (CDPR) to execute. In this work, we describe the capture process, the design and construction of a purpose-built CDPR, and the software for turning an artist's vision into control commands. Our work represents an important step towards faithfully recreating human graffiti artwork by demonstrating that we can reproduce artist motions up to 2m/s and 20m/s^2 within 9.3mm RMSE to paint artworks. Added material not in the original work is colored in red.






I Introduction and Related Work

Spray painting graffiti art in a human style is an important, open problem that requires a systems approach. In this paper, we take the first step towards creating a system that can capture human graffiti artwork and collaborate with artists to create new and copied artworks in the public settings that define graffiti. In addition to the well-established sociological motivations for reproducing graffiti art [1], robot art is intrinsically motivating for its marriage of art and technology. By possessing physical abilities beyond those of its collaborating artists, a graffiti robot could reveal new artistic avenues highlighting human-robot collaboration for disabled [2] and able-bodied artists alike. To act as the hand of an artist poses inherently interconnected problems in art, hardware, control, and human-robot interaction. Furthermore, the technology required to produce the large-scale, dynamic motions required for graffiti has applications in warehouse/industrial logistics [3], agriculture [4], construction [5, 6], and motion simulation [7]. Creating graffiti art with a robot requires (1) capturing the motions of artists painting, (2) creating a robot that can achieve comparable motions to human artists, and (3) implementing algorithms that would allow the robot to execute on the artists’ visions. Despite considerable progress in each of these tasks, to our knowledge, no system has been demonstrated to achieve all three.

Fig. 1: Our system captures artist painting motions of individual letter outlines which are composed and processed into controls for a cable robot to execute. Our system produced this painting of the letters “ATL” (Atlanta).

Prior work exists in capturing graffiti art, most notably the Graffiti Markup Language (GML) project [8]. Although the project has been successful in generating a large library of graffiti artwork, almost all the data was captured from digital interfaces (e.g. stylus) rather than full-body painting motions. This is problematic because an artist's creative process may differ between virtual and physical mediums and because ignoring the physical painting motions neglects the challenge of generating robot trajectories. An exception is the GML Recording Machine [9], though it captures only 2 degree-of-freedom (DoF) planar motion.

Robots developed for spray painting have seen considerably more attention. Serial arm manipulators and gantry-based systems are precise and mature, but arms do not scale well to large workspaces [10, 11] and gantry-based systems exhibit a tradeoff between size and portability [12, 13]. Mobile manipulators address these issues, but are currently not as dynamic or precise as human artists [14]. Aerial robots are popular for their ability to paint otherwise inaccessible walls, but have been cited as being difficult to accurately control due to susceptibility to disturbances and comparatively limited acceleration capabilities [15, 16, 17, 18]. Cable-based systems appear to be promising, but so far [19, 2] have only demonstrated raster- or stippling- style painting while [20] has not demonstrated the highly dynamic motions employed by human artists.

Finally, despite prior research in robot control and artistic composition, the software to enable graffiti painting does not currently exist. CDPR control (further discussed in Section IV-A3) is relatively well understood, but has not been demonstrated for dynamic graffiti trajectories. Research on industrial painting robots has thoroughly studied paint dispersion and trajectory generation, but is primarily concerned with uniform coats on curved surfaces in contrast to graffiti art’s non-uniform coats on flat surfaces [21, 22, 23, 24]. Berio is notable for his research in graffiti composition and stylization [25, 11, 26, 27], but focuses on digital rendering as opposed to producing trajectories.

We argue that the problem of creating graffiti artwork is sufficiently expansive and its components codependent that it requires a system-level approach. In this work, we propose a novel system towards creating graffiti artwork by improving and coordinating the capture, hardware, and software requirements. Figure 1 depicts an example result of the GTGraffiti system summarized in Figure 2. Our contributions include:

  • capturing a library of 6 DoF trajectories for creating graffiti artwork using motion capture (mocap),

  • designing, building, and testing the hardware for a purpose-made robot platform to paint graffiti,

  • proposing a planning and control pipeline to translate high-level artistic descriptions into motor torque commands, and

  • demonstrating a system that can paint human-style graffiti artwork.

Fig. 2: This system overview depicts the capture, hardware, and planning & control components of our system.

II Capture

The capture process is important for both learning the motions to produce graffiti art and establishing robot capability requirements. As such, our capture process is focused on obtaining the most artistically meaningful data while omitting less relevant data. In this work, we collect a library of simple, composable shape outlines.

II-A Design Considerations

We first capture artwork using an OptiTrack™ mocap system for its simplicity and accuracy. Mocap systems have the advantage of directly outputting positions and orientations of rigid bodies with sub-millimeter accuracy, which trivializes the process of obtaining the 6D trajectories of a spray paint can during painting. As we will discuss in Section IV, the processing and rendering components of our pipeline can optionally use other forms of captured artwork, such as Scalable Vector Graphics (SVG) and GML files, in addition to mocap data.

We opt to capture only the outlines of shapes and omit the infills because, according to an artist collaborator, the particular pattern used to fill-in a shape is largely arbitrary and algorithmically generating one does not significantly detract from artistic value. Furthermore, the easiest infill path for a human may not be the easiest for a robot.

II-B Approach

II-B1 Data Collection Procedure

We collected the full 6D trajectories of the spray cans and painting surfaces (plywood sheets) as two graffiti artist collaborators painted. Four mocap position markers each were affixed to the can (as shown in Figure 3) and to the painting surface to extract the 6DoF poses at each time step at 120Hz. For each art collaborator, the 26 letters of the English alphabet were captured along with special symbols such as punctuation marks and small doodles of the artists' choosing (e.g. a skull).

Fig. 3: Mocap setup for capturing painting motions of collaborating artists.

II-B2 Data Pre-processing

The motion capture data is given in arbitrary "world" coordinates, so we must convert the data into the painting surface's reference frame. For each timestep k, the top left (y-axis), bottom left (origin), and bottom right (x-axis) markers of the painting surface are used to obtain the coordinate frame of the surface in the world frame, T_WS[k], using Gram-Schmidt orthogonalization for x then y, then using the cross product to obtain the z-axis. Anecdotally, we found that the fourth mocap marker was never needed, though it would be useful in the event that one of the other three markers is missing data. A similar process is performed to obtain the can's frame in the world frame, T_WC[k]. The pose of the spray can's nozzle in the spray can's frame, T_CN, is obtained by manually measuring the position in the spray can's coordinate system and assigning the identity rotation. Finally, the nozzle's pose in the painting surface's frame at timestep k can be expressed as

T_SN[k] = (T_WS[k])^{-1} T_WC[k] T_CN,

where T_AB denotes the pose of frame B expressed in frame A.
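A minimal sketch of the frame-construction step, assuming three marker positions per timestep (helper names are ours, not the paper's):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def surface_frame(origin, x_marker, y_marker):
    """Axes of the painting-surface frame from three marker positions:
    Gram-Schmidt on the x- then y-directions, cross product for z."""
    x = normalize([a - b for a, b in zip(x_marker, origin)])
    y_raw = [a - b for a, b in zip(y_marker, origin)]
    proj = sum(a * b for a, b in zip(y_raw, x))        # component of y along x
    y = normalize([yr - proj * xc for yr, xc in zip(y_raw, x)])
    z = cross(x, y)                                     # right-handed z-axis
    return x, y, z   # columns of the rotation of T_WS; `origin` is its translation
```

The same routine applied to the can markers would yield T_WC, after which the frame composition above is a matter of matrix inverses and products.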

To determine when the spray nozzle is being depressed (painting vs traveling motions), we applied a number of heuristics for each candidate painting motion segment including distance between start and end points (assuming outlines are closed curves), maximum speed, arc length, non-maximum suppression, and manual annotations.
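These heuristics might be combined as in the following sketch; the thresholds and function names are illustrative, not the paper's:

```python
import math

def is_painting_stroke(points, times, close_tol=0.05, max_speed=6.0, min_arc_len=0.10):
    """Heuristic filter for a candidate painting segment.
    points: list of (x, y) nozzle positions; times: matching timestamps (s).
    Thresholds are assumed values for illustration."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    arc_len = sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    closed = dist(points[0], points[-1]) < close_tol   # outlines are closed curves
    speeds = [dist(points[i], points[i + 1]) / (times[i + 1] - times[i])
              for i in range(len(points) - 1)]
    plausible_speed = max(speeds) < max_speed          # reject tracking glitches
    long_enough = arc_len > min_arc_len                # reject jitter segments
    return closed and plausible_speed and long_enough
```

In practice such filters only narrow the candidate set; as the paper notes, manual annotation was still needed to correct misclassifications.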

II-C Results

Figure 6(a) shows an example of our data by plotting the spray can nozzle translations. Because the data was collected while the artists were physically painting, and given the accuracy and 6DoF of motion capture, our data can help in understanding the nuanced motions of human graffiti painting, e.g. biomechanically and with respect to can speeds, distances, and orientations.

Fig. 6: (a) Captured shape outlines for the letters “ABCDE” by our two collaborating artists. (b) Speeds and accelerations of a human graffiti artist during painting which help inform minimum requirements for the robot.

We also study the speeds and accelerations reached by our collaborating artists during painting and traveling to inform the requirements of the cable robot. From Figure 6(b), we estimate a maximum speed of 6m/s and acceleration of 50m/s^2 to be sufficient to paint human-style graffiti artwork.
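The speed and acceleration estimates follow from finite differences of the 120Hz position samples; a simplified sketch (central differences, our helper names):

```python
def speed_accel(positions, hz=120.0):
    """Speed and acceleration magnitudes from uniformly sampled positions
    (each position is a list of coordinates) via central finite differences."""
    dt = 1.0 / hz
    vel = [[(p2[i] - p0[i]) / (2 * dt) for i in range(len(p0))]
           for p0, p2 in zip(positions, positions[2:])]
    acc = [[(v2[i] - v0[i]) / (2 * dt) for i in range(len(v0))]
           for v0, v2 in zip(vel, vel[2:])]
    mag = lambda v: sum(c * c for c in v) ** 0.5
    return [mag(v) for v in vel], [mag(a) for a in acc]
```

Taking maxima (or high percentiles, to be robust to mocap noise) of the returned magnitudes gives requirement estimates like those above.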

II-D Discussion & Limitations

Although motion capture’s accuracy is unparalleled, there are several drawbacks. Most notably, cost and setup hinder the accessibility and mobility of capture systems. We were only able to capture motions in a controlled laboratory setting which limits the realism of the artwork. Additionally, human artists can move so fast that, even at 120Hz, our mocap system misses some detail.

The ability to capture nozzle actuation was also limited since we were unable to directly record the actuation force, which artists modulate for finer paint control. During data collection, an additional marker was placed on the tip of the artist's index finger to aid in identifying when the spray can nozzle was depressed. Upon analyzing the data, however, this did not prove to be a consistent method of annotating even binary nozzle actuation, let alone actuation force. Even with various heuristics, manual annotation was needed to correct misclassifications in nozzle actuation.

Additional tags, characters, and full murals with photographs will be added to the library in the future.

III Robot Hardware

Given the design requirements for painting graffiti based on human spray painting data, we believe that a CDPR is an ideal platform. In this section we detail our robot hardware.

III-A Design Considerations

Fig. 7: Our planar CDPR has a 4-cable, rectangular configuration with the end-effector in the center carrying the spray paint can.

The primary design requirements for a graffiti painting system involve workspace size, maximum end-effector velocity, and maximum end-effector acceleration. We seek a platform which can be scaled to a workspace of 20m x 20m or larger, though in this work we only seek a demonstration sized at a few meters. Based on the analysis presented in Section II-C, we determined that we require 6m/s of speed and 50m/s^2 of acceleration. Assuming the mass of the spray can and actuating accessories does not exceed 2kg, the robot should be capable of exerting 120N upward (including gravity) and comparable forces in other directions.

Secondary design requirements include portability, accuracy, and stiffness. It should be feasible to disassemble and reassemble the robot on-site at the wall of a building. Accuracy and stiffness are considered secondary constraints because, compared to art forms such as brush painting or sculpture, graffiti is less sensitive to positional inaccuracies and experiences less reaction force. Based on the thickness of a line painted with a “needle” nozzle 5cm from the painting surface, we estimate 2.5cm of repeatability to be sufficient. We estimate an accuracy of 1% the size of the painting to be sufficient, based loosely on [28]. We estimate external disturbances to be negligible based on paint reaction forces and historical Atlanta wind speeds.

CDPRs present ideal platforms for graffiti painting given the aforementioned requirements. A CDPR is a robot whose end-effector is pulled by a set of cables which are driven by winches on a fixed base. Due to properties of cables, CDPRs can scale to extraordinary sizes and speeds [29, 30], albeit with reduced stiffness. These qualities make them ideally suited to the large but relatively undisturbed setting and modest accuracy requirement of graffiti painting.

CDPRs also have an active research community which has solved many challenges in workspace analysis [31, 32, 33], control, and estimation (further discussed in Section IV-A3). Preliminarily, based on [3], we estimate a 1kHz update frequency to be necessary for real-time control.

Finally, we define the requirements to actuate the spray can nozzle. For a full can of Montana BLACK 400mL, the force required to depress the nozzle at the start and end of the stroke was measured to be 20N and 27N, respectively. The displacement was measured to be 2mm. Other 400mL spray cans by the brands Montana, Hardcore, and Kobra were found to have similar actuation forces and displacements.

III-B Approach

III-B1 CDPR Design

Our CDPR uses 4 cables in a planar configuration to exert pulling forces on the end effector via 4 motor-driven winches (see Figure 7). The cable mounting positions on the carriage and routing pulley locations are given in Table I.

The frame is constructed from standard 12 gauge steel strut channel to dimensions 3.05m x 2.44m x 0.61m. The four winches are 2.54cm in diameter with 1.5mm pitch helical grooves to drive 1mm Dyneema® (ultra-high molecular weight polyethylene) rope. They are driven by 150kV D6374 motors from ODrive Robotics and controlled by two ODrive v3.6 56V motor drives. The motor drives are connected via separate, isolated CANbuses to a Teensy 4.0 microcontroller (MCU) running at 600MHz which runs the primary CDPR control, sending torque commands and receiving angular position and velocity feedback to/from the motor drives. The MCU also sends binary spray commands to the spray can actuator wirelessly with an HC-05 bluetooth module. The MCU is programmed with the closed-loop controller described in Section IV-A3.

III-B2 End Effector Design

The end effector was built from 5mm hardwood to be lightweight and carry the spray can and actuating electronics. It consists of the six faces of a box plus two perpendicular midplanes parallel to the gravity vector to center the spray can. The 4 cables are mounted onto the midplane parallel to the painting surface via 1/4”-20 bolts.

III-B3 Spray Can Nozzle Actuator Design

The spray can nozzle actuating mechanism is wireless, battery-powered, and implemented using a “20kg” servo with the lever-arm mechanism from [34]. Complete design details can be found in our accompanying arXiv paper [35].

Cable Index End-effector Mounting Location (m) Routing Pulley Location (m)
1 [0.094, -0.061, 0] [1.52, -1.22, 0]
2 [0.094, 0.061, 0] [1.52, 1.22, 0]
3 [-0.094, 0.061, 0] [-1.52, 1.22, 0]
4 [-0.094, -0.061, 0] [-1.52, -1.22, 0]
TABLE I: CDPR cable configuration
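Given the Table I geometry, the planar inverse kinematics (straight-line cable lengths) can be sketched as follows; we drop the zero z-components and assume the end-effector carriage stays axis-aligned:

```python
import math

# Geometry from Table I, z dropped: routing pulleys and end-effector mounting offsets (m).
PULLEYS = [(1.52, -1.22), (1.52, 1.22), (-1.52, 1.22), (-1.52, -1.22)]
MOUNTS = [(0.094, -0.061), (0.094, 0.061), (-0.094, 0.061), (-0.094, -0.061)]

def cable_lengths(x, y):
    """Straight-line cable length for each winch when the carriage center
    is at (x, y), assuming no carriage rotation (simplified model)."""
    return [math.hypot(px - (x + mx), py - (y + my))
            for (px, py), (mx, my) in zip(PULLEYS, MOUNTS)]
```

By symmetry, all four lengths are equal when the carriage is centered in the frame.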

III-C Results

III-C1 CDPR As-Built

Our assembled robot is pictured in Figures 1 and 8. The motors are capable of 600 rad/s and 3.86Nm, which corresponds to 7.62m/s and 94.5m/s^2 given the 2.54cm diameter winch, the 1.96e-4 kgm^2 motor inertia, and a 2kg end-effector mass. Thus the designed motor and winch configuration satisfies our design requirements. The CANbus communication between the MCU and motor drives was measured to have a feedback-command round-trip latency of 626µs, easily achieving the required 1kHz update frequency.
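The 7.62m/s and 94.5m/s^2 figures can be reproduced with a back-of-envelope single-cable model (reflecting the rotor inertia through the winch); this is our simplification, not the paper's exact calculation:

```python
# Back-of-envelope check of the motor/winch sizing (single-cable, simplified model).
omega_max = 600.0      # rad/s, motor speed limit
tau_max = 3.86         # N*m, motor torque limit
r = 0.0254 / 2         # m, winch radius (2.54 cm diameter)
I_motor = 1.96e-4      # kg*m^2, motor rotor inertia
m_ee = 2.0             # kg, end-effector mass budget

v_max = omega_max * r                     # cable (end-effector) speed
m_reflected = m_ee + I_motor / r**2       # rotor inertia reflected through the winch
a_max = (tau_max / r) / m_reflected       # acceleration pulling along one cable

print(round(v_max, 2), round(a_max, 1))   # prints: 7.62 94.5
```

Both numbers clear the 6m/s and 50m/s^2 requirements with margin, which is the point of the sizing check.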

III-C2 End Effector As-Built

The end-effector and spray can actuating mechanism are also pictured in Figure 8. The mass of the empty end-effector was measured to be 496g, the battery for the nozzle actuator was 231g, the remaining nozzle actuator components totalled 166g, and the spray can varied from 113g to 424g depending on the fullness, brand, and part-to-part variability. Thus the total mass varied between 1006g and 1317g depending on the spray can (within our 2kg assumption).

III-C3 Spray Can Nozzle Actuator As-Built

The spray can actuating mechanism successfully depressed the spray nozzle 100% of the time in a trial of 100 one-second actuations. The latency from commanding to dispensing paint was measured to be 400ms.

Fig. 8: Our cable robot (left) includes an end effector that carries the spray paint and actuator electronics (center) and 4 winch assemblies, each consisting of a shared motor controller, motor, and helical winch (right, x2).

III-D Discussion & Limitations

We are able to paint well, as in Figure 1, despite not being able to use our 6DoF captured data to its full potential since we are limited to planar motion. Simultaneously, we will discuss in Section IV-C2 that the paint limits us to a maximum speed far below what the hardware is capable of. A combination of hardware upgrades and intelligent paint modeling and optimization are likely necessary to leverage our system’s full potential, with actuation to move the nozzle closer to the canvas being paramount.

Friction (especially static) and motor cogging are hardware issues that plague CDPR control. To reduce control difficulties, parasitic forces should be minimized where possible by ensuring smooth bearings and avoiding overpowered motors. Out-of-plane oscillations were rarely problematic in practice.

IV Planning and Control

From an artist’s input, we must control the robot to paint. We use a hierarchical approach with 3 levels:

  1. Path Generation: turn the artist’s vision into a mathematical description

  2. Trajectory Generation: find a trajectory within the robot’s capabilities while respecting the artist’s vision to the maximum extent possible

  3. CDPR Control: execute the trajectory online

The interplay between the trajectory generation and CDPR control merits a summarized precursor explanation for clarity. During the trajectory generation phase, the optimal control problem of tracking a desired trajectory is solved offline. The iterative Linear Quadratic Regulator (iLQR) method [36] – iterating by applying the linearized system and control law forward in time to obtain a new linearized feedback law – is used to solve the optimal control problem. The feedback law from the final pass can then be used as the online controller.
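The offline/online split can be illustrated with the finite-horizon LQR backward pass — the recursion iLQR runs on the linearized system each iteration. The scalar sketch below is our simplification for exposition, not the paper's implementation:

```python
def lqr_backward_pass(A, B, Q, R, T):
    """Finite-horizon discrete-time Riccati recursion for scalar A, B, Q, R.
    Returns time-varying gains K[0..T-1] (earliest timestep first); in iLQR
    this runs on the linearized dynamics at every iteration."""
    P = Q
    gains = []
    for _ in range(T):
        K = (B * P * A) / (R + B * P * B)   # optimal feedback gain this step
        P = Q + A * P * (A - B * K)         # Riccati cost-to-go update
        gains.append(K)
    gains.reverse()                         # recursion ran backward in time
    return gains
```

Online, the stored gains are applied as a simple time-varying linear feedback law around the feedforward trajectory, which is exactly what makes the 1kHz controller cheap.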

IV-A Design Considerations

IV-A1 Path Generation

In this work, the artist composes artworks using the shapes in the shape library as a first step towards more general artistic descriptions. Given an artist’s specification for the placements of shapes from the library of captured art, we seek to generate the paths, in the form of Bézier curves, that the spray can must follow. This is a system-level problem because it requires suitable captured data, well-planned and modeled robot capabilities, and clear artist desires. We divide path generation into (1) outline, primarily a human-robot interaction problem, (2) infill, a coverage path planning problem, and (3) travel, a problem of filling in discontinuities. The latter two are unique to our system approach because, recalling the reasoning from Section II, only outlines are captured for the shape library.

IV-A2 Trajectory Generation

To create a physically realizable trajectory that is as similar as possible to the artist's vision, we first discretize the path at 100Hz to obtain a direct-from-artist trajectory, x_d (within speed and acceleration limits), then apply an offline iLQR-based optimization to obtain a smoothed reference trajectory x_ff, control signal u_ff, time-varying feedback gains K[k], and paint timing.

Loosely inspired by [26], the iLQR-based optimization is used to strike a balance between the artist's intent and the ease of controlling the robot. We express the iLQR problem in discretized form with time index k as:

x_ff, u_ff = argmin_{x, u} ∑_{k=0}^{T} x̃[k]^T Q x̃[k] + ũ[k]^T R ũ[k]
subject to x[k+1] = f(x[k], u[k]), x[0] = x_0,

where x̃[k] is the deviation of the state from x_d: the artist's intended trajectory, ũ[k] is the deviation of the control from u_mid: the average of the minimum and maximum allowable torques [37, 38], x_ff and u_ff are the smoothed reference (nominal/feedforward) state and control signals, Q and R are the state objective and control cost matrices, f defines the nonlinear system dynamics, and x_0 is the initial state. The state consists of the cartesian position and velocity, x = [p; ṗ], where p denotes the position of the spray can's nozzle. The control u consists of the four motor torques. Section IV-C3 experimentally justifies why orientation is omitted.

Intuitively, the state objective matrix Q advocates for the artist while the control cost matrix R penalizes being near torque limits. Interestingly, as discovered by [26], the relationship between Q and R can also be interpreted as an artistic parameter, as visually depicted in Figure 9.

IV-A3 Control

We seek a controller that can control the cable robot to achieve motions comparable to a graffiti artist (requirements are the same as in Section III-A).

Our cable robot controller is inspired by prior works. CDPR control places emphasis on “tension distribution” (TD) which is analogous to redundancy resolution in serial manipulators [7, 37, 39, 40, 41, 42, 3, 43]. These approaches typically use or assume a feedback controller whose control variable is a task space wrench. The TD algorithm then computes the optimal motor torques (or cable tensions) required to achieve the desired task space wrench.

Fig. 9: Stylization from the iLQR Q vs. R tradeoff, also observed by [26].

However, since we are using an iLQR-based optimizer, which produces locally optimal control laws as described in Section IV-A2, most aspects of control (including TD) have been shifted offline. Our online controller is then a simple linear feedback controller. Figure 10 depicts a block diagram of our controller. Mathematically, our controller can be expressed as:

τ[k] = u_ff[k] + K[k] (x_ff[k] - x̂[k]),

where τ is the 4-vector of motor torques, x̂ is the measured state, and K[k] is the 4x4 time-varying gain matrix.

Fig. 10: CDPR controller block diagram, where x_ff, u_ff, and K are precomputed offline using an iLQR-based optimizer implemented in GTSAM.

To compute the feedback term, we need to estimate the position and velocity of the spray can well enough to achieve our repeatability and accuracy requirements. Nonlinear least squares solvers are commonly used for CDPR state estimation, but we will show that a simpler solution is sufficient for our application.

IV-B Approach

IV-B1 Path Generation

When generating the outline, the largest challenge involves communicating the artist’s intent and robot’s capabilities between the artist and computer. In this work, we apply constraints to the artist when they are specifying their artistic vision. To constrain the canvas size, we use a rectangular approximation of the wrench feasible workspace (WFW) [32, 33] to define the space in which the artist may place library objects. To constrain layering specifications, we impose a strict layering of shapes such that each shape is either entirely on top of or entirely beneath another. Due to our robot’s planar limitation, we project the nozzle position for each frame from the mocap data onto the painting surface to form an ordered set of 2D line segments (allowing us to retain velocity information to be used during trajectory generation). Data sources other than our mocap library can be used but require velocity data.

To infill the shapes we apply an exact cellular decomposition and use a standard horizontal "zigzag" path within each cell [44]. Just as each artist chooses an infill strategy based on personal preference (according to our artist collaborators), we choose this pattern for its ease of implementation and actuation: in most configurations our robot has the best control authority horizontally. Further details on the way we decompose the infill and compute the exhaustive walk to reduce nozzle actuation cycles are provided in our accompanying arXiv paper [35]. For each object, we paint the infill in the face color then the outline in the outline color. This is in contrast to human artists, who typically start with the outline in the face color, proceed to fill it in, then re-assert the outline in the outline color. The initial face-color outline is usually for visual reference, and since our system has no such visualization constraint we opt to omit it. Instead of applying a hidden line removal algorithm [45], we simply finish painting each object before the next is started.

Finally, travel strokes must be added to make the path continuous in position. Although making the paths continuous in direction is an option (as in [20]), we opt to allow discontinuous directions but enforce continuous velocity in the trajectory generation stage. For every pair of strokes, we add a straight line from the end of the previous stroke to the start of the next stroke if they are not already coincident.
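The travel-stroke insertion is straightforward; a minimal sketch (function name and stroke representation are ours):

```python
def add_travel_strokes(strokes, tol=1e-6):
    """Insert straight-line travel segments between consecutive strokes whose
    endpoints are not already coincident. Each stroke is a list of (x, y) points;
    travel segments are painted with the spray nozzle off."""
    path = []
    for stroke in strokes:
        if path and (abs(path[-1][-1][0] - stroke[0][0]) > tol or
                     abs(path[-1][-1][1] - stroke[0][1]) > tol):
            path.append([path[-1][-1], stroke[0]])   # travel: end of prev -> start of next
        path.append(stroke)
    return path
```

Velocity continuity across these position-continuous joints is then enforced later, in the trajectory generation stage.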

IV-B2 Trajectory Generation

Unique to the system-level approach, we discretize outlines and infill/travel strokes differently due to the different ways the paths were generated.

For outlines, we have velocities from the mocap data, so we need only apply time-scaling. We compute the original path's speed and acceleration using finite differences (assuming each line segment takes 1s), then apply a linear time-scaling to match a predefined maximum speed and/or maximum acceleration and sample the path at 100Hz. We choose limits of 1.2m/s and 20m/s^2 based on the spray paint dispersion described further in Section IV-C.
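The exact linear transformation is not reproduced here, but one standard choice of uniform time-scaling (our assumption) works as follows: scaling time by a factor α scales speeds by α and accelerations by α², so the largest admissible α is determined by whichever limit binds first:

```python
import math

def time_scale_factor(v_peak, a_peak, v_max, a_max):
    """Uniform time-scaling t -> t/alpha: speeds scale by alpha, accelerations
    by alpha^2. Return the largest alpha respecting both limits.
    (One standard choice; the paper's exact transformation may differ.)"""
    return min(v_max / v_peak, math.sqrt(a_max / a_peak))
```

For example, a captured stroke peaking at 2m/s and 5m/s^2 against limits of 1.2m/s and 20m/s^2 is speed-limited, giving α = 0.6.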

For the infills and travel strokes, we need to generate rest-to-rest trajectories with continuous velocities. We choose trapezoidal velocity profiles for their popularity in industrial applications [46], with limits of 0.5m/s and 20m/s^2 (based on spray paint dispersion).
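A rest-to-rest trapezoidal profile can be sketched as below (our implementation; short strokes degenerate to a triangular profile):

```python
def trapezoid_profile(dist, v_max=0.5, a_max=20.0, dt=0.01):
    """Rest-to-rest trapezoidal (or triangular) speed profile sampled at 1/dt Hz.
    Returns distances along the stroke; defaults mirror the infill limits."""
    t_ramp = v_max / a_max
    d_ramp = 0.5 * a_max * t_ramp**2
    if 2 * d_ramp > dist:                    # too short to reach v_max: triangle
        t_ramp = (dist / a_max) ** 0.5
        v_max = a_max * t_ramp
        d_ramp = dist / 2
    t_cruise = (dist - 2 * d_ramp) / v_max
    t_total = 2 * t_ramp + t_cruise
    pos, t = [], 0.0
    while t <= t_total:
        if t < t_ramp:                       # accelerate
            s = 0.5 * a_max * t**2
        elif t < t_ramp + t_cruise:          # cruise
            s = d_ramp + v_max * (t - t_ramp)
        else:                                # decelerate
            td = t_total - t
            s = dist - 0.5 * a_max * td**2
        pos.append(s)
        t += dt
    pos.append(dist)
    return pos
```

Because the profile starts and ends at rest, concatenating it with the travel strokes keeps the overall trajectory velocity-continuous.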

The iLQR-based optimization of (IV-A2) is performed offline using factor graphs and the GTSAM software library, but any iLQR implementation can be used. The system dynamics constraints are drawn from prior works in CDPR control, including the standard equations for kinematics and cable tension/wrench equilibrium [37], winch model dynamics [42, 3], rigid body dynamics [42], and numerical integration [47, 48]. The iLQR problem is then expressed as the factor graph [49, 50, 51] shown in Figure 11 and solved with the GTSAM software library using the Levenberg-Marquardt algorithm and variable elimination. The solution gives x_ff and u_ff, while the Bayes net obtained by the final iteration's elimination step contains the locally optimal feedback gain matrices K[k]. Full details on the equations and factor graph are available in our accompanying arXiv paper [35].

To solve using the iLQR-based solver, we do include the orientation: the state consists of the SE(2) pose T, the twist V, and the twist acceleration V̇. The nozzle position p corresponds to the translation component of T and, when representing T, V, or V̇ as vectors, we use the convention that the orientation is the first element and the translation the latter two. The iLQR problem is expressed as the factor graph in Figure 11. Due to space constraints, we refer the reader to [49, 50, 51] for an introduction to factor graphs and how they can be applied to optimal control, respectively. The equations for each of the factors are given in Table II. For the elimination order, we eliminate timesteps sequentially and, within each timestep, eliminate variables in a fixed order (the order of cable index subscripts is arbitrarily chosen to be ascending). The nonlinear factor graph is optimized (equivalent to solving with the iLQR method [51]) to obtain x_ff and u_ff. The graph is then linearized using the optimized solution as the linearization point and eliminated one final time to obtain a Bayes net. From the resulting Bayes net, for each timestep we obtain (among other equations) a conditional expressing the optimal control as a linear function of the state; substituting the estimated state then yields the control law and gain matrix K[k] for each timestep.

Fig. 11: Factor graph depicting the iLQR problem using plate notation. Circles represent variables to be solved while dots represent objectives or equations. l, l̇, and l̈ represent cable length, speed, and acceleration, respectively; t, τ, and w represent cable tension, motor torque, and the wrench on the end-effector caused by cable i, respectively; T, V, and V̇ represent end-effector pose, twist, and twist-acceleration, respectively.
Factor Name Factor Equation/Expression
Cable Tension
Control Cost
State Objective
Initial State

where f is the friction, r is the vector from the routing pulley location to the end-effector mounting point (r' in the end-effector frame), r̂ is the normalized r, Ad_T is the Adjoint of the transformation T [52], M is the inertia matrix, ad_V is the adjoint of the twist V, τ_min and τ_max are the minimum and maximum allowable torques, respectively (based on [37]), R is the control cost matrix from iLQR, Q is the state objective cost matrix from iLQR, T_d is the desired pose, T_0 and V_0 are the initial pose and twist, respectively, and Δt is the time step duration. The second-order effects for the cable acceleration kinematics were assumed to be negligible. Timestep superscripts are omitted for brevity.

TABLE II: Equations for the Factors in Figure 11

IV-B3 Control

First we interpolate x_ff, u_ff, and K, since the trajectory generation phase was discretized at 100Hz while the controller runs online at 1kHz. u_ff and K are interpolated using a zero-order hold, while x_ff is interpolated using a first-order extrapolation from the most recent discrete x_ff[k].
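A sketch of this upsampling (our function and data layout; a 10:1 ratio between the 1kHz loop and the 100Hz trajectory):

```python
def interp_references(k_fast, x_ff, u_ff, K, ratio=10, dt_slow=0.01):
    """Upsample 100 Hz trajectory data to the 1 kHz control loop:
    zero-order hold for u_ff and K, first-order extrapolation for x_ff.
    x_ff is a list of (pos, vel) pairs per 100 Hz step; k_fast is the 1 kHz index."""
    k = k_fast // ratio                  # most recent 100 Hz sample
    frac = (k_fast % ratio) / ratio      # elapsed fraction of the 100 Hz interval
    pos, vel = x_ff[k]
    pos_i = pos + vel * frac * dt_slow   # extrapolate position along the held velocity
    return (pos_i, vel), u_ff[k], K[k]
```

The extrapolation of x_ff avoids the 10ms staircase that a zero-order hold on position would feed into the feedback term.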

To estimate position we discard redundant information, and to estimate velocity we use a linear least squares solution. For position, we discard the bottom two cable lengths and apply trigonometry to the top two cable lengths to estimate the spray can position, assuming the spray can is always vertical. For velocity, we solve the linear least squares problem of inverting the cable speed kinematics: the wrench matrix (Jacobian transpose) [42] maps the end-effector velocity to the cable speeds, and its Moore-Penrose left inverse recovers the velocity estimate from the measured cable speeds.
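The velocity estimate can be sketched as follows (Python/NumPy). The planar anchor geometry below is illustrative, not the robot's actual pulley layout:

```python
import numpy as np

def wrench_jacobian_T(anchors, p):
    """Jacobian transpose rows mapping end-effector velocity to cable
    speeds: cable i has length |anchors[i] - p|, so its rate of change is
    -u_i . v, with u_i the unit vector from the end effector to anchor i."""
    u = anchors - p
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return -u

def estimate_velocity(anchors, p, cable_speeds):
    """Least-squares velocity estimate from measured cable speeds via the
    Moore-Penrose (left) pseudoinverse of the Jacobian transpose."""
    return np.linalg.pinv(wrench_jacobian_T(anchors, p)) @ cable_speeds
```

With four cables and a two-dimensional velocity, the system is overdetermined, so the pseudoinverse solution averages out measurement noise across cables.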

We also employ an offline calibration whereby a rectangular trajectory is run while collecting mocap and robot log data. A nonlinear least squares optimization is used to compute pulley locations and coefficients for cable length scaling which are hard-coded for subsequent runs.
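As a small illustration of the cable-length scaling portion of this calibration (the pulley-location fit is part of a joint nonlinear least-squares problem and is omitted here; the affine model and all names are our assumptions):

```python
import numpy as np

def fit_cable_scaling(l_logged, l_mocap):
    """Fit affine scaling coefficients (a, b) so that mocap-derived cable
    lengths are approximated by a * l_logged + b, via linear least squares.
    Such coefficients would then be hard-coded for subsequent runs."""
    A = np.stack([l_logged, np.ones_like(l_logged)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, l_mocap, rcond=None)
    return a, b
```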

IV-C Results

IV-C1 Path Generation

Fig. 12: During path generation, we first produce the outline from artist inputs (left), then infill paths (center), and finally travel strokes (right).

An example path generation result is shown in Figure 12.

IV-C2 Trajectory Generation

The speed and acceleration limits were tuned for our painting distance of around 12 cm. The outline limits of 1.2 m/s and 20 m/s² were tested using the Montana “Skinny Cap Beige” nozzle, and the infill limits of 0.5 m/s and 20 m/s² were tested using the Montana “flat jet cap wide” nozzle. Faster speeds resulted in incomplete coverage while slower speeds resulted in “dripping”.

The offline iLQR-based optimization has complexity linear in the number of timesteps and runs at approximately half real-time (e.g. a 1-minute trajectory takes 2 minutes to optimize). We chose the state objective and control cost matrices as a balance between tracking accuracy and stability.

IV-C3 Control

To evaluate our control stage, we use mocap for ground truth data and log the setpoints and online estimates from the robot at 100 Hz for a challenging 2 m/s and 20 m/s² trajectory with sharp corners (Figure 13). The mocap and robot coordinate frames were aligned using the 4 pulley locations, and the mocap data was piecewise-cubic interpolated to match the 100 Hz robot log frequency. The control tracking’s root mean square (RMS) error is 9.3 mm and the position estimation’s is 3.4 mm. We also validate our assumption that the end effector is always close to vertical, which is used both for online control and estimation, by measuring the RMS rotation deviation to be 0.45, 0.42, and 1.57 degrees in the horizontal, vertical, and normal directions respectively. We believe that our proposed controller, which precomputes linear feedback gains offline, is accurate, simple to implement, and useful for CDPR applications other than graffiti as well.
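The tracking-error metric can be sketched as follows (Python/NumPy). Note that np.interp is piecewise-linear rather than the piecewise-cubic interpolation used above, and all names are illustrative:

```python
import numpy as np

def rms_tracking_error(t_log, p_log, t_mocap, p_mocap):
    """Interpolate mocap ground-truth positions onto the 100 Hz log
    timestamps, then compute the root-mean-square Euclidean error
    against the logged positions."""
    p_interp = np.stack(
        [np.interp(t_log, t_mocap, p_mocap[:, i]) for i in range(p_mocap.shape[1])],
        axis=1)
    return np.sqrt(np.mean(np.sum((p_interp - p_log) ** 2, axis=1)))
```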

Fig. 13: Setpoint, online estimate, and ground-truth positions of the spray can (top left), end effector rotation (bottom left), tracking error (top right), and estimation error (bottom right) for a challenging 2 m/s, 20 m/s² trajectory.

In addition to the painting in Figure 1, please refer to our supplemental video for additional painting results which qualitatively demonstrate our system’s capabilities.

IV-D Discussion & Limitations

When specifying an artist’s vision, the space of creative possibilities is immense. Non-flat overlap topologies [25], homographies, and other non-linear or 3D perspective transforms have artistic interest but are beyond the scope of this work. Graffiti stylization [26] and free-form inputs [53] are also beyond the scope of this work. In return, the artists get to explore the maxim “Creativity is born from limitations”.

Understanding the nuances of paint dispersion is another large area of study that is beyond the scope of this paper. For example, the fact that human artists consistently paint solid lines at 6 m/s yet our robot’s lines begin losing complete coverage above 1.2 m/s suggests a gap in our understanding. We believe moving closer to the canvas (artists were on average 3.0 cm ± 0.1 cm from the painting surface vs 12 cm for our robot) and modeling special effects such as flares, blending, and intentional dripping are logical next steps.

Although our controller is generally reliable and robust to modeling inaccuracies, we do find some limitations on the state objective cost matrix. For large cost values (on the order of 10^6 N/m), the controller resonates with the natural vibrations of the cable robot, causing instability, while for small values (on the order of 10^2 N/m), the robot gets stuck for a few cm before overcoming the friction and returning to the setpoint trajectory. Still, compared to other methods such as [37, 38], we believe the iLQR method to be easier because it requires less tuning and modeling effort.

V Conclusions & Future Work

In this paper, we presented a system for painting human-style graffiti art. Our work contributes to existing research by bridging three components in a systems approach: capturing artwork, building a graffiti robot, and planning and controlling the robot for painting graffiti. We illustrated the co-dependencies of various system design choices which suggest that future research in robot art should consider a more holistic approach. We also demonstrated that our system can successfully produce physical artworks from captured art. Our work can be applied to graffiti preservation by recreating captured artwork, to human-robot collaboration in art by enhancing the physical capabilities of artists, and to other fields through technology transfer for large-scale dynamic motion. Avenues for future work include a more portable graffiti capture device, better communication of robot limitations to the artist, style analysis and improvisation, paint dispersion analysis, real-time human-robot interaction, a larger workspace, and 6 DoF robot motion.


We thank Max Ongena and Jules Dellaert for collaborating as graffiti artists, Prajwal Vedula and Zhangqi Luo for contributing to code, and Russel Gentry, Jake Thompkins, and Tristan Al-Haddad at Georgia Tech’s Digital Fabrication Lab for their hardware assistance and allowing us to use their space for painting. This work was supported by the National Science Foundation under Grant No. 2008302.


  • [1] L. MacDowall, “In praise of 70k: Cultural heritage and graffiti style,” Continuum, vol. 20, no. 4, pp. 471–484, 2006.
  • [2] A. Liekens, Kenny, and L. Scheire, “Zet Kenny binnenkort zélf zijn eerste graffiti op een muur? — Team Scheire #9 [Will Kenny soon put his first graffiti on a wall? — Team Scheire #9],” https://www.youtube.com/watch?v=jOpgvW7aIhQ, Sept 2020.
  • [3] M. Gouttefarde, J. Lamaury, C. Reichert, and T. Bruckmann, “A versatile tension distribution algorithm for n-DOF parallel robots driven by cables,” IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1444–1457, 2015.
  • [4] J. M. Pagan, “Cable-suspended robot system with real time kinematics gps position correction for algae harvesting,” Ph.D. dissertation, Ohio University, 2018.
  • [5] F. Shahmiri and R. Gentry, “A survey of cable-suspended parallel robots and their applications in architecture and construction,” Blucher Design Proceedings, vol. 3, no. 1, pp. 914 – 920, 2016.
  • [6] R. Bostelman, J. Albus, N. Dagalakis, A. Jacoff, and J. Gross, “Applications of the NIST RoboCrane,” in Proceedings of the 5th International Symposium on Robotics and Manufacturing, vol. 5, 1994.
  • [7] P. Miermeister, M. Lächele, R. Boss, C. Masone, C. Schenk, J. Tesch, M. Kerger, H. Teufel, A. Pott, and H. H. Bülthoff, “The CableRobot Simulator large scale motion platform based on cable robot technology,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 3024–3029.
  • [8] J. Wilkinson, E. Roth, T. Watson, C. Sugrue, and T. Vanderlin, “#000000book.com an open database for Graffiti Markup Language (GML) files.” https://000000book.com/, 2021.
  • [9] M. Yildirim and E. Roth, “GML recording machine,” http://fffff.at/gml-recording-machine/, June 2011.
  • [10] L. Scalera, E. Mazzon, P. Gallina, and A. Gasparetto, “Airbrush robotic painting system: Experimental validation of a colour spray model,” in Advances in Service and Industrial Robotics, C. Ferraresi and G. Quaglia, Eds.   Cham: Springer International Publishing, 2018, pp. 549–556.
  • [11] D. Berio, S. Calinon, and F. F. Leymarie, “Learning dynamic graffiti strokes with a compliant robot,” in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 3981–3986.
  • [12] L. Scalera, S. Seriani, A. Gasparetto, and P. Gallina, “Watercolour robotic painting: a novel automatic system for artistic rendering,” Journal of Intelligent and Robotic Systems, pp. 1–16, 2018.
  • [13] N. Roy, “Graffiti robot - train writing - planet256 / Niklas Roy / the fly - wall printer style machine,” https://www.youtube.com/watch?v=zSIdvQsu27s&t=3s&ab_channel=ARTESANOBERLIN.
  • [14] Y. Jun, G. Jang, B.-K. Cho, J. Trubatch, I. Kim, S.-D. Seo, and P. Y. Oh, “A humanoid doing an artistic work - graffiti on the wall,” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1538–1543, 2016.
  • [15] A. Uryasheva, M. Kulbeda, N. Rodichenko, and D. Tsetserukou, “DroneGraffiti: Autonomous multi-UAV spray painting,” in ACM SIGGRAPH 2019 Studio, ser. SIGGRAPH ’19.   New York, NY, USA: Association for Computing Machinery, 2019. [Online]. Available: https://doi.org/10.1145/3306306.3328000
  • [16] A. S. Vempati, M. Kamel, N. Stilinovic, Q. Zhang, D. Reusser, I. Sa, J. Nieto, R. Siegwart, and P. Beardsley, “PaintCopter: An autonomous UAV for spray painting on three-dimensional surfaces,” IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 2862–2869, 2018.
  • [17] B. Galea, E. Kia, N. Aird, and P. G. Kry, “Stippling with aerial robots,” in Computational Aesthetics (Expressive 2016), 2016, p. 10 pages.
  • [18] TsaruRobotics, “Autonomous mural for sprite Ukraine.” https://tsuru.su/en/project/spritemural/. [Online]. Available: https://tsuru.su/en/project/spritemural/
  • [19] “Albert robot muralist,” https://www.robotmuralist.com/albert. [Online]. Available: https://www.robotmuralist.com/albert
  • [20] J. Lehni, “Soft monsters,” Perspecta, vol. 40, pp. 22–27, 2008. [Online]. Available: http://www.jstor.org/stable/40482274
  • [21] Y. Chen, W. Chen, B. Li, G. Zhang, and W. Zhang, “Paint thickness simulation for painting robot trajectory planning: a review,” Industrial Robot: An International Journal, vol. 44, no. 5, pp. 629–638, 2017. [Online]. Available: https://doi.org/10.1108/IR-07-2016-0205
  • [22] H. Chen, T. Fuhlbrigge, and X. Li, “A review of CAD-based robot path planning for spray painting,” Industrial Robot: An International Journal, vol. 36, no. 1, pp. 45–50, 2009. [Online]. Available: https://doi.org/10.1108/01439910910924666
  • [23] M. V. Andulkar, S. S. Chiddarwar, and A. S. Marathe, “Novel integrated offline trajectory generation approach for robot assisted spray painting operation,” Journal of Manufacturing Systems, vol. 37, pp. 201–216, 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0278612515000229
  • [24] Y. Zeng, J. Gong, N. Xu, and N. Wu, “Tool trajectory optimization of spray painting robot for manytimes spray painting,” International Journal of Control and Automation, vol. 7, pp. 193–208, 08 2014.
  • [25] D. Berio, P. Asente, J. Echevarria, and F. F. Leymarie, “Sketching and layering graffiti primitives,” in 8th ACM/Eurographics Expressive Symposium, Expressive 2019, Genoa, Italy, May 5-6, 2019, Proceedings, C. S. Kaplan, A. G. Forbes, and S. DiVerdi, Eds.   Eurographics Association, 2019, pp. 51–59. [Online]. Available: https://doi.org/10.2312/exp.20191076
  • [26] D. Berio, S. Calinon, and F. F. Leymarie, “Dynamic graffiti stylisation with stochastic optimal control,” in Proceedings of the 4th International Conference on Movement Computing.   ACM, 2017, p. 18.
  • [27] D. Berio and F. F. Leymarie, “Computational models for the analysis and synthesis of graffiti tag strokes,” in Proceedings of the Workshop on Computational Aesthetics.   Eurographics Association, 2015, pp. 35–47.
  • [28] L. Della Valle, T. G. Andrews, and S. Ross, “Perceptual thresholds of curvilinearity and angularity as functions of line length,” J Exp Psychol, vol. 51, no. 5, pp. 343–347, May 1956.
  • [29] R. Nan, D. Li, C. Jin, Q. Wang, L. Zhu, W. Zhu, H. Zhang, Y. Yue, and L. Qian, “The five-hundred-meter aperture spherical radio telescope (FAST) project,” International Journal of Modern Physics D, vol. 20, no. 06, pp. 989–1024, 2011.
  • [30] S. Bandyopadhyay, “Lunar crater radio telescope (LCRT) on the far-side of the moon,” April 2020. [Online]. Available: https://www.nasa.gov/directorates/spacetech/niac/2020%5FPhase%5FI%5FPhase%5FII/lunar%5Fcrater%5Fradio%5Ftelescope/
  • [31] P. Bosscher, A. T. Riechel, and I. Ebert-Uphoff, “Wrench-feasible workspace generation for cable-driven robots,” IEEE Transactions on Robotics, vol. 22, no. 5, pp. 890–902, 2006.
  • [32] S. Bouchard, C. Gosselin, and B. Moore, “On the ability of a cable-driven robot to generate a prescribed set of wrenches,” Journal of Mechanisms and Robotics, vol. 2, no. 1, 2009.
  • [33] M. Gouttefarde, D. Daney, and J. Merlet, “Interval-analysis-based determination of the wrench-feasible workspace of parallel cable-driven robots,” IEEE Transactions on Robotics, vol. 27, no. 1, pp. 1–13, 2011.
  • [34] A. Liekens, “Spray can servo mount,” https://www.thingiverse.com/thing:4622176, Oct 2020.
  • [35] G. Chen, S. Baek, J.-D. Florez, W. Qian, S. won Leigh, S. Hutchinson, and F. Dellaert, “Extended version of GTGraffiti: Spray painting graffiti art from human painting motions with a cable driven parallel robot,” 2021, arXiv:2109.06238 [cs.RO].
  • [36] E. Todorov and W. Li, “A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems,” in Proceedings of the 2005, American Control Conference, 2005., 2005, pp. 300–306.
  • [37] A. Pott, T. Bruckmann, and L. Mikelsons, “Closed-form force distribution for parallel wire robots,” in Computational Kinematics, A. Kecskeméthy and A. Müller, Eds.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 25–34.
  • [38] P. Miermeister, A. Pott, and A. Verl, “Auto-calibration method for overconstrained cable-driven parallel robots,” in ROBOTIK 2012; 7th German Conference on Robotics, 2012, pp. 1–6.
  • [39] M. Agahi and L. Notash, “Redundancy resolution of wire-actuated parallel manipulators,” Transactions of the Canadian Society for Mechanical Engineering, vol. 33, no. 4, pp. 561–573, 2009.
  • [40] M. Hassan and A. Khajepour, “Analysis of bounded cable tensions in cable-actuated parallel manipulators,” IEEE Transactions on Robotics, vol. 27, no. 5, pp. 891–900, 2011.
  • [41] H. D. Taghirad and Y. B. Bedoustani, “An analytic-iterative redundancy resolution scheme for cable-driven redundant parallel manipulators,” IEEE Transactions on Robotics, vol. 27, no. 6, pp. 1137–1143, 2011.
  • [42] J. Lamaury and M. Gouttefarde, “Control of a large redundantly actuated cable-suspended parallel robot,” in 2013 IEEE International Conference on Robotics and Automation, 2013, pp. 4659–4664.
  • [43] W. Shang, B. Zhang, S. Cong, and Y. Lou, “Dual-space adaptive synchronization control of redundantly-actuated cable-driven parallel robots,” Mechanism and Machine Theory, vol. 152, p. 103954, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0094114X20301750
  • [44] E. Galceran and M. Carreras, “A survey on coverage path planning for robotics,” Robotics and Autonomous Systems, vol. 61, no. 12, pp. 1258–1276, 2013. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S092188901300167X
  • [45] F. Devai, “Quadratic bounds for hidden line elimination,” in Proceedings of the Second Annual Symposium on Computational Geometry, ser. SCG ’86.   New York, NY, USA: Association for Computing Machinery, 1986, pp. 269–275. [Online]. Available: https://doi.org/10.1145/10515.10544
  • [46] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: modelling, planning and control.   Springer Science & Business Media, 2010, ch. 4.
  • [47] D. Lau, J. Eden, Y. Tan, and D. Oetomo, “CASPR: A comprehensive cable-robot analysis and simulation platform for the research of cable-driven parallel robots,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2016.
  • [48] J. C. Butcher, Numerical Differential Equation Methods.   John Wiley & Sons, Ltd, 2016, ch. 2, pp. 55–142. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/9781119121534.ch2
  • [49] F. Dellaert and M. Kaess, “Factor graphs for robot perception,” Foundations and Trends in Robotics, vol. 6, pp. 1–139, 2017.
  • [50] S. Yang, G. Chen, Y. Zhang, F. Dellaert, and H. Choset, “Equality constrained linear optimal control with factor graphs,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021.
  • [51] G. Chen and Y. Zhang, “LQR control using factor graphs,” https://gtsam.org/2019/11/07/lqr-control.html, Nov 2019, note: Superseding ICRA 2021 paper pending review.
  • [52] K. Lynch and F. Park, Modern Robotics: Mechanics, Planning, and Control.   Cambridge University Press, 2017.
  • [53] D. Berio, F. F. Leymarie, and R. Plamondon, “Expressive Curve Editing with the Sigma Lognormal Model,” Eurographics 2018 - Short Papers, 2018.