
Designing Voice-Controllable APIs

by Matúš Sulír et al.

The main purpose of a voice command system is to process a natural-language sentence and perform the corresponding action. Although many approaches exist to map sentences to API (application programming interface) calls, this mapping is usually performed after the API is already implemented, possibly by other programmers. In this paper, we describe how the API developer can use patterns to map sentences to API calls by exploiting the similarities between the names and types in the sentences and those in the API. In cases where the mapping is not straightforward, we suggest using suitable annotations (attribute-oriented programming).
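The two mapping strategies described above can be sketched in Java. This is a minimal illustration, not the paper's actual API: the `@VoiceCommand` annotation, the `LightController` class, and the matching rules are all assumptions. The direct case relies on name similarity between the sentence and a camel-case method name; the annotation handles the case where the method name does not resemble the sentence.

```java
import java.lang.annotation.*;
import java.lang.reflect.*;

// Hypothetical annotation declaring the sentence a method handles
// (attribute-oriented programming); illustrative only.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface VoiceCommand {
    String value();
}

class LightController {
    String state = "off";

    // Straightforward case: the method name already matches the sentence
    // words, so "turn on the light" resolves by name similarity alone.
    public void turnOnTheLight() { state = "on"; }

    // Non-straightforward case: the name "deactivate" does not resemble
    // the sentence, so the mapping is declared with an annotation.
    @VoiceCommand("switch off the light")
    public void deactivate() { state = "off"; }
}

public class VoiceMapper {
    // Resolve a sentence to a method: try annotation patterns first,
    // then fall back to camel-case name similarity.
    static Method resolve(Class<?> api, String sentence) {
        String normalized = sentence.toLowerCase().replaceAll("[^a-z ]", "");
        for (Method m : api.getDeclaredMethods()) {
            VoiceCommand vc = m.getAnnotation(VoiceCommand.class);
            if (vc != null && vc.value().equals(normalized)) return m;
        }
        String joined = normalized.replace(" ", "");
        for (Method m : api.getDeclaredMethods()) {
            if (m.getName().toLowerCase().equals(joined)) return m;
        }
        return null;
    }

    // Convenience wrapper so callers need not handle reflection exceptions.
    static void execute(Object target, String sentence) {
        try {
            Method m = resolve(target.getClass(), sentence);
            if (m != null) m.invoke(target);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        LightController light = new LightController();
        execute(light, "Turn on the light");
        System.out.println(light.state); // prints "on"
        execute(light, "Switch off the light");
        System.out.println(light.state); // prints "off"
    }
}
```

Checking annotations before name similarity lets an explicit declaration override an accidental name match, which mirrors the idea that annotations are only needed where the implicit mapping fails.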



