AMSER: Adaptive Multi-modal Sensing for Energy Efficient and Resilient eHealth Systems

12/13/2021
by Emad Kasaeyan Naeini, et al.

eHealth systems deliver critical digital healthcare and wellness services by continuously monitoring users' physiological and contextual data. eHealth applications use multi-modal machine learning kernels to analyze data from different sensor modalities and automate decision-making. Noisy inputs and motion artifacts during sensory data acquisition degrade i) the prediction accuracy and resilience of eHealth services and ii) energy efficiency, since garbage data is still processed. Monitoring raw sensory inputs to identify and drop data and features from noisy modalities can improve both prediction accuracy and energy efficiency. We propose AMSER, a closed-loop monitoring and control framework for multi-modal eHealth applications that mitigates garbage-in garbage-out by i) monitoring input modalities, ii) analyzing raw inputs to selectively drop noisy data and features, and iii) choosing appropriate machine learning models that fit the configured data and feature vector, thereby improving prediction accuracy and energy efficiency. We evaluate AMSER on multi-modal eHealth applications of pain assessment and stress monitoring over different levels and types of noise incurred in different sensor modalities. Our approach achieves up to 22% improvement in prediction accuracy and a 5.6× reduction in sensing-phase energy consumption compared with a state-of-the-art multi-modal monitoring application.
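The closed-loop gating idea in the abstract can be sketched in a few lines. The Python snippet below is a minimal illustration under assumed names, not the authors' implementation: signal_quality, select_modalities, predict, the 0.6 threshold, and the spectral quality index are hypothetical placeholders for whatever sensing-quality metric and model-selection policy AMSER actually uses. It scores each modality's raw window, drops noisy modalities, and dispatches to a model trained for the surviving modality subset.

```python
import numpy as np

def signal_quality(window: np.ndarray) -> float:
    """Crude quality index: share of spectral power in the lowest-frequency band.
    A stand-in for a real signal-quality / artifact-detection metric."""
    spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
    total = spectrum.sum() + 1e-12
    low_band = spectrum[: max(1, len(spectrum) // 8)].sum()
    return float(low_band / total)

def select_modalities(windows: dict, threshold: float = 0.6) -> list:
    """Keep only modalities whose quality index clears the threshold."""
    return [name for name, w in windows.items() if signal_quality(w) >= threshold]

def predict(windows: dict, models: dict, threshold: float = 0.6):
    """One closed-loop step: gate noisy modalities, then dispatch to the model
    trained on exactly the surviving modality subset; if no such model exists,
    fall back to the all-modality model and feature vector."""
    kept = tuple(sorted(select_modalities(windows, threshold)))
    all_mods = tuple(sorted(windows))
    use = kept if kept in models else all_mods
    features = np.concatenate([windows[name] for name in use])
    return models[use].predict(features.reshape(1, -1))
```

With scikit-learn-style classifiers, `models` would map each modality subset, e.g. `("ecg", "eda", "ppg")` or `("ecg", "ppg")`, to a classifier trained on that subset's concatenated features, so dropping a noisy modality also switches to a model that expects the reduced feature vector.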

Related research

A Hardware Platform for Efficient Multi-Modal Sensing with Adaptive Approximation (04/06/2018)
We present Warp, a hardware platform to support research in approximate ...

AdaMML: Adaptive Multi-Modal Learning for Efficient Video Recognition (05/11/2021)
Multi-modal learning, which focuses on utilizing various modalities to i...

Uncertainty-aware Multi-modal Learning via Cross-modal Random Network Prediction (07/22/2022)
Multi-modal learning focuses on training models by equally combining mul...

SmartEAR: Smartwatch-based Unsupervised Learning for Multi-modal Signal Analysis in Opportunistic Sensing Framework (08/11/2018)
Wrist-bands such as smartwatches have become an unobtrusive interface fo...

PAS: Prediction-based Adaptive Sleeping for Environment Monitoring in Sensor Networks (03/06/2020)
Energy efficiency has proven to be an important factor dominating the wo...

Edge-centric Optimization of Multi-modal ML-driven eHealth Applications (08/04/2022)
Smart eHealth applications deliver personalized and preventive digital h...

POSE.R: Prediction-based Opportunistic Sensing for Resilient and Efficient Sensor Networks (10/23/2019)
The paper presents a distributed algorithm, called Prediction-based Oppo...
