ECAT: Event Capture Annotation Tool

10/05/2016
by Tuan Do, et al.

This paper introduces the Event Capture Annotation Tool (ECAT), a user-friendly, open-source interface tool for annotating events and their participants in video, capable of extracting the 3D positions and orientations of objects in video captured by Microsoft's Kinect® hardware. The modeling language VoxML (Pustejovsky and Krishnaswamy, 2016) underlies ECAT's object, program, and attribute representations, although ECAT uses its own spec for explicit labeling of motion instances. The demonstration will show the tool's workflow and the options available for capturing event-participant relations and browsing visual data. Mapping ECAT's output to VoxML will also be addressed.
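
To make the kind of data described above concrete, the sketch below shows one way an annotated motion instance with Kinect-derived 3D pose might be represented and serialized into a VoxML-flavoured XML element. This is an illustration only: the record fields, tag names, the "slide" predicate, and the helpers ObjectTrack, EventAnnotation, and to_voxml_like are assumptions for this sketch, not ECAT's actual output spec or the VoxML schema.

    # Hypothetical sketch of an event annotation with 3D pose and a naive
    # VoxML-like serialization.  All names and tags are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List, Tuple
    import xml.etree.ElementTree as ET


    @dataclass
    class ObjectTrack:
        """One event participant, with 3D pose as it might come from Kinect data."""
        name: str
        position: Tuple[float, float, float]            # metres in camera space
        orientation: Tuple[float, float, float, float]  # quaternion (x, y, z, w)


    @dataclass
    class EventAnnotation:
        """One labelled motion instance over a span of video frames."""
        predicate: str
        start_frame: int
        end_frame: int
        participants: List[ObjectTrack] = field(default_factory=list)


    def to_voxml_like(ev: EventAnnotation) -> ET.Element:
        """Serialize an annotation into a VoxML-like <PROGRAM> element (structure assumed)."""
        prog = ET.Element("PROGRAM", {"predicate": ev.predicate,
                                      "start": str(ev.start_frame),
                                      "end": str(ev.end_frame)})
        for obj in ev.participants:
            arg = ET.SubElement(prog, "ARG", {"object": obj.name})
            ET.SubElement(arg, "POSITION",
                          {axis: f"{v:.3f}" for axis, v in zip("xyz", obj.position)})
            ET.SubElement(arg, "ORIENTATION",
                          {axis: f"{v:.3f}" for axis, v in zip("xyzw", obj.orientation)})
        return prog


    if __name__ == "__main__":
        ev = EventAnnotation(
            predicate="slide",
            start_frame=120,
            end_frame=176,
            participants=[ObjectTrack("block1", (0.41, 0.02, 1.35), (0.0, 0.0, 0.0, 1.0))],
        )
        print(ET.tostring(to_voxml_like(ev), encoding="unicode"))

The actual mapping from ECAT's own labeling spec to VoxML is defined by the tool and the paper; the snippet only conveys the general shape of an event-participant annotation carrying 3D position and orientation.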
