Related research

- Quantized deep learning models on low-power edge devices for robotic systems
  In this work, we present a quantized deep neural network deployed on a l...
- Scheduling Real-time Deep Learning Services as Imprecise Computations
  The paper presents an efficient real-time scheduling algorithm for intel...
- Communication-Efficient Edge AI: Algorithms and Systems
  Artificial intelligence (AI) has achieved remarkable breakthroughs in a ...
- Democratizing Production-Scale Distributed Deep Learning
  The interest and demand for training deep neural networks have been expe...
- Deep Learning-Based Multiple Object Visual Tracking on Embedded System for IoT and Mobile Edge Computing Applications
  Compute and memory demands of state-of-the-art deep learning methods are...
- TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems
  Deep learning inference on embedded devices is a burgeoning field with m...
- NNStreamer: Stream Processing Paradigm for Neural Networks, Toward Efficient Development and Execution of On-Device AI Applications
  We propose nnstreamer, a software system that handles neural networks as...
TinyML for Ubiquitous Edge AI
TinyML is a fast-growing multidisciplinary field at the intersection of machine learning, hardware, and software that focuses on enabling deep learning algorithms on embedded (microcontroller-powered) devices operating at extremely low power (in the mW range and below). TinyML addresses the challenges of designing power-efficient, compact deep neural network models, the supporting software frameworks, and the embedded hardware that will enable a wide range of customized, ubiquitous inference applications on battery-operated, resource-constrained devices. In this report, we discuss the major challenges and technological enablers that direct this field's expansion. TinyML will open the door to new types of edge services and applications that do not rely on cloud processing but thrive on distributed edge inference and autonomous reasoning.
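The report is survey-style and contains no code. As a hedged illustration of the model-compression step it describes (compact, power-efficient networks for microcontroller-class hardware), the sketch below uses TensorFlow's post-training integer quantization, one common TinyML workflow targeted at runtimes such as TensorFlow Lite Micro; the toy architecture, input shapes, and file names are placeholders, not anything specified in the report.

```python
# Minimal sketch (not from the report): shrinking a small Keras model into an
# int8 TensorFlow Lite flatbuffer of the kind microcontroller runtimes such as
# TensorFlow Lite Micro can execute.
import numpy as np
import tensorflow as tf

# Toy model standing in for a compact TinyML network (e.g. keyword spotting).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Representative samples let the converter calibrate activation ranges;
# random data here only stands in for real calibration inputs.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]           # enable quantization
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8                       # full-integer I/O
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"int8 model size: {len(tflite_model)} bytes")
```

In a typical microcontroller deployment, the resulting flatbuffer is then embedded in firmware as a C byte array (for example via `xxd -i model_int8.tflite`) and run with the TensorFlow Lite Micro interpreter from a statically allocated tensor arena.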