Multimodal Semantic Simulations of Linguistically Underspecified Motion Events

10/03/2016
by Nikhil Krishnaswamy, et al.

In this paper, we describe a system for generating three-dimensional visual simulations of natural language motion expressions. We use a rich formal model of events and their participants to generate simulations that satisfy the minimal constraints entailed by the associated utterance, relying on semantic knowledge of physical objects and motion events. This paper outlines the technical considerations involved and discusses the implementation of these semantic models in such a system.
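The idea of satisfying only the minimal constraints entailed by an utterance can be illustrated with a small sketch. This is not the authors' implementation; the parameter names and ranges below are hypothetical. An underspecified expression such as "the ball rolls slowly" fixes some parameters of the event (here, speed) while leaving others (direction, duration) open, so a simulator can fix the entailed values and sample the rest from admissible ranges:

```python
import random

def simulate_motion_event(specified, admissible):
    """Return one concrete parameterization of a motion event.

    specified  -- parameters entailed by the utterance (kept as-is)
    admissible -- (low, high) ranges for every event parameter;
                  unspecified ones are sampled, modeling underspecification
    """
    params = {}
    for name, (low, high) in admissible.items():
        if name in specified:
            params[name] = specified[name]          # constrained by language
        else:
            params[name] = random.uniform(low, high)  # left to the simulator
    return params

# "The ball rolls slowly": speed is constrained; direction and
# duration are underspecified and filled in per simulation run.
event = simulate_motion_event(
    specified={"speed": 0.2},
    admissible={"speed": (0.1, 2.0),
                "direction_deg": (0.0, 360.0),
                "duration_s": (0.5, 5.0)},
)
```

Repeated calls yield distinct but equally valid visualizations of the same sentence, which is the sense in which the simulation satisfies, rather than uniquely determines, the utterance's semantics.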

Related research

- Generating Simulations of Motion Events from Verbal Descriptions (10/06/2016)
- VoxML: A Visualization Modeling Language (10/05/2016)
- Event Search and Analytics: Detecting Events in Semantically Annotated Corpora for Search and Analytics (03/01/2016)
- An Abstract Specification of VoxML as an Annotation Language (05/22/2023)
- Grounding the Lexical Semantics of Verbs in Visual Perception using Force Dynamics and Event Logic (06/01/2011)
- Underwater Robotics Semantic Parser Assistant (01/28/2023)
- A Formal Analysis of Multimodal Referring Strategies Under Common Ground (03/16/2020)
