Batteries, camera, action! Learning a semantic control space for expressive robot cinematography

11/19/2020
by Rogerio Bonatti et al.

Aerial vehicles are revolutionizing the way film-makers can capture shots of actors by composing novel aerial and dynamic viewpoints. However, despite great advancements in autonomous flight technology, generating expressive camera behaviors remains a challenge and requires non-technical users to edit a large number of unintuitive control parameters. In this work, we develop a data-driven framework that enables editing of these complex camera positioning parameters in a semantic space (e.g. calm, enjoyable, establishing). First, we generate a database of video clips with a diverse range of shots in a photo-realistic simulator, and use hundreds of participants in a crowd-sourcing framework to obtain scores for a set of semantic descriptors for each clip. Next, we analyze correlations between descriptors and build a semantic control space based on cinematography guidelines and human perception studies. Finally, we learn a generative model that can map a set of desired semantic video descriptors into low-level camera trajectory parameters. We evaluate our system by demonstrating that our model successfully generates shots that are rated by participants as having the expected degrees of expression for each descriptor. We also show that our models generalize to different scenes in both simulation and real-world experiments. Data and videos are available at: https://sites.google.com/view/robotcam.
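To make the final step of the pipeline concrete, the sketch below illustrates the kind of interface the learned mapping exposes: a vector of semantic descriptor scores in, a vector of low-level camera trajectory parameters out. This is not the authors' implementation; the descriptor names, the trajectory parameters, and the small untrained neural network are all illustrative assumptions standing in for the paper's learned generative model.

```python
# Minimal sketch (not the paper's code): map semantic shot descriptors
# to low-level camera trajectory parameters with a small neural network.
# Descriptor names and trajectory parameters here are illustrative assumptions.
import torch
import torch.nn as nn

DESCRIPTORS = ["calm", "enjoyable", "establishing"]   # semantic inputs, scores in [0, 1]
TRAJ_PARAMS = ["distance_m", "height_m", "lag_s",
               "angular_speed_rad_s", "tilt_rad"]     # hypothetical low-level outputs


class SemanticToTrajectory(nn.Module):
    """Stand-in mapping: semantic descriptor scores -> camera trajectory parameters."""

    def __init__(self, n_in=len(DESCRIPTORS), n_out=len(TRAJ_PARAMS), hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, descriptors):
        return self.net(descriptors)


if __name__ == "__main__":
    model = SemanticToTrajectory()
    # Request a shot that is very calm, moderately enjoyable, and strongly "establishing".
    desired = torch.tensor([[0.9, 0.5, 0.8]])
    params = model(desired)  # untrained here, so outputs are arbitrary
    for name, value in zip(TRAJ_PARAMS, params.squeeze(0).tolist()):
        print(f"{name}: {value:+.3f}")
```

In the paper's setting, such a model would be trained on pairs of crowd-sourced descriptor scores and the camera parameters of the corresponding clips; the untrained network above only shows the input/output interface of the semantic control space.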


