SEMBED: Semantic Embedding of Egocentric Action Videos

07/28/2016
by Michael Wray, et al.

We present SEMBED, an approach for embedding an egocentric object interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using an unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5%.
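The abstract compresses the method into a sentence, so the following is a minimal sketch of the idea under loose assumptions: annotated training videos become graph nodes, edges blend visual similarity of motion/appearance features with semantic similarity of free-form verb labels, and a query video is embedded among its visually nearest nodes to read off a label distribution. Everything here (the networkx graph, the `semantic_sim` table, the one-hop vote spreading, the `alpha` blend) is illustrative, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def build_graph(features, labels, semantic_sim, k=5, alpha=0.5):
    """Build a semantic-visual graph over annotated training videos.

    features:     (N, D) array of per-video motion/appearance descriptors
    labels:       list of N free-form verb annotations
    semantic_sim: dict mapping (verb_a, verb_b) -> similarity in [0, 1]
                  (hypothetical; e.g. precomputed from a verb taxonomy)
    """
    g = nx.Graph()
    for i, lab in enumerate(labels):
        g.add_node(i, label=lab)
    for i in range(len(labels)):
        # connect each video to its k visually nearest neighbours,
        # weighting edges by a blend of visual and semantic similarity
        d = np.linalg.norm(features - features[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:          # skip self at index 0
            j = int(j)
            vis = 1.0 / (1.0 + d[j])
            sem = semantic_sim.get((labels[i], labels[j]), 0.0)
            g.add_edge(i, j, weight=alpha * vis + (1 - alpha) * sem)
    return g

def label_distribution(g, features, query, k=5):
    """Embed a query video and estimate P(label) from its neighbourhood."""
    d = np.linalg.norm(features - query, axis=1)
    scores = {}
    for j in np.argsort(d)[:k]:
        j = int(j)
        # each visual neighbour votes for its own label...
        lab = g.nodes[j]["label"]
        scores[lab] = scores.get(lab, 0.0) + 1.0 / (1.0 + d[j])
        # ...and spreads a weaker vote one hop along weighted graph edges
        for nb in g.neighbors(j):
            w = g.edges[j, nb]["weight"]
            lab = g.nodes[nb]["label"]
            scores[lab] = scores.get(lab, 0.0) + w / (1.0 + d[j])
    total = sum(scores.values())
    return {lab: s / total for lab, s in scores.items()}
```

The point of blending the two edge weights is that videos annotated with different but related verbs (say "turn on" and "switch on") reinforce each other in the graph instead of competing as unrelated classes, which is how the approach copes with the ambiguity of unbounded free annotation.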


Related research

ConsNet: Learning Consistency Graph for Zero-Shot Human-Object Interaction Detection (08/14/2020)
We consider the problem of Human-Object Interaction (HOI) Detection, whi...

Developing Motion Code Embedding for Action Recognition in Videos (12/10/2020)
In this work, we propose a motion embedding strategy known as motion cod...

Object Priors for Classifying and Localizing Unseen Actions (04/10/2021)
This work strives for the classification and localization of human actio...

3D Semantic Trajectory Reconstruction from 3D Pixel Continuum (12/04/2017)
This paper presents a method to reconstruct dense semantic trajectory st...

Deep Variation-structured Reinforcement Learning for Visual Relationship and Attribute Detection (03/08/2017)
Despite progress in visual perception tasks such as image classification...

Visual-Semantic Scene Understanding by Sharing Labels in a Context Network (09/16/2013)
We consider the problem of naming objects in complex, natural scenes con...

From Lifestyle Vlogs to Everyday Interactions (12/06/2017)
A major stumbling block to progress in understanding basic human interac...
