Muscles in Action

12/05/2022
by Mia Chiquier, et al.

Small differences in a person's motion can engage drastically different muscles. While most visual representations of human activity are trained from video, people learn from multimodal experiences, including from the proprioception of their own muscles. We present a new visual perception task and dataset to model muscle activation in human activities from monocular video. Our Muscles in Action (MIA) dataset consists of 2 hours of synchronized video and surface electromyography (sEMG) data of subjects performing various exercises. Using this dataset, we learn visual representations that are predictive of muscle activation from monocular video. We present several models, including a transformer model, and measure their ability to generalize to new exercises and subjects. Putting muscles into computer vision systems will enable richer models of virtual humans, with applications in sports, fitness, and AR/VR.
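To make the "synchronized video and sEMG" pairing concrete, the sketch below shows one common way such data is aligned: rectify the raw sEMG signal and compute an RMS envelope over windows that correspond to individual video frames. The sampling rate (1000 Hz) and frame rate (30 fps) here are illustrative assumptions, not values stated in the paper.

```python
import numpy as np

def emg_rms_per_frame(emg, fs_emg=1000, fps=30):
    """Compute an RMS envelope of an sEMG signal, one value per video frame.

    emg:    1-D array of raw sEMG samples
    fs_emg: sEMG sampling rate in Hz (assumed value for illustration)
    fps:    video frame rate (assumed value for illustration)
    """
    n_frames = int(len(emg) / fs_emg * fps)
    samples_per_frame = fs_emg / fps  # may be fractional; windows use int boundaries
    env = np.empty(n_frames)
    for i in range(n_frames):
        lo = int(i * samples_per_frame)
        hi = int((i + 1) * samples_per_frame)
        # RMS over the window implicitly rectifies the signal (squaring removes sign)
        env[i] = np.sqrt(np.mean(emg[lo:hi] ** 2))
    return env

# Demo: 2 seconds of a synthetic 60 Hz "muscle" oscillation
emg = np.sin(2 * np.pi * 60 * np.arange(2000) / 1000)
env = emg_rms_per_frame(emg)
# len(env) == 60, i.e. one envelope value per video frame for 2 s at 30 fps
```

An envelope like this gives a per-frame muscle-activation target that a video model (e.g. the paper's transformer) can regress against, one value per muscle channel per frame.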


Related research

10/04/2021 · How You Move Your Head Tells What You Do: Self-supervised Video Representation Learning with Egocentric Cameras and IMU Sensors
Understanding users' activities from head-mounted cameras is a fundament...

03/13/2018 · Video Based Reconstruction of 3D People Models
This paper describes how to obtain accurate 3D body models and texture o...

06/20/2014 · Predicting Motivations of Actions by Leveraging Text
Understanding human actions is a key problem in computer vision. However...

12/02/2021 · Overcoming the Domain Gap in Neural Action Representations
Relating animal behaviors to brain activity is a fundamental goal in neu...

06/27/2021 · Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference
This paper introduces a new video-and-language dataset with human action...

12/20/2020 · High-Fidelity Neural Human Motion Transfer from Monocular Video
Video-based human motion transfer creates video animations of humans fol...

12/02/2020 · MEVA: A Large-Scale Multiview, Multimodal Video Dataset for Activity Detection
We present the Multiview Extended Video with Activities (MEVA) dataset, ...
