Query Time Optimized Deep Learning Based Video Inference System

12/13/2022
by Mingren Shen, et al.

This is a project report on how we tuned Focus [1], a video inference system that provides low cost and low latency by splitting work into two phases. We reduce query time by caching the intermediate-layer outputs of the neural network, a trade-off that spends additional storage to save time. We demonstrate the scheme with a prototype system and find that it saves roughly 20% of the query time. The code repository is at https://github.com/iphyer/CS744 FocousIngestOpt.
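The caching idea is easiest to see in code. The sketch below is a minimal illustration of the space-for-time trade-off described above, not the report's actual implementation: the model choice (a PyTorch ResNet-18), the split point after an arbitrary middle layer, and the function names ingest_and_cache and query_from_cache are all assumptions made for illustration.

# Minimal sketch (illustrative only): cache a CNN's middle-layer output at
# ingest time so that query time only has to run the remaining layers.
import torch
import torch.nn as nn
from torchvision import models

# Split a ResNet-18 into a "lower" part (run once, at ingest) and an
# "upper" part (run at query time on the cached features).
resnet = models.resnet18(weights=None)
lower = nn.Sequential(*list(resnet.children())[:6])    # conv1 .. layer2
upper = nn.Sequential(*list(resnet.children())[6:-1],  # layer3, layer4, avgpool
                      nn.Flatten(),
                      resnet.fc)

@torch.no_grad()
def ingest_and_cache(frames: torch.Tensor, cache_path: str) -> None:
    """Run the lower layers on a batch of frames and save the
    intermediate activations to disk (space-for-time trade-off)."""
    features = lower(frames)   # (N, 128, 28, 28) for 224x224 inputs
    torch.save(features, cache_path)

@torch.no_grad()
def query_from_cache(cache_path: str) -> torch.Tensor:
    """Answer a query by resuming from the cached middle-layer output,
    skipping the lower layers entirely."""
    features = torch.load(cache_path)
    return upper(features)     # class scores per frame

if __name__ == "__main__":
    frames = torch.randn(8, 3, 224, 224)   # stand-in for decoded video frames
    ingest_and_cache(frames, "layer2_cache.pt")
    scores = query_from_cache("layer2_cache.pt")
    print(scores.shape)                    # torch.Size([8, 1000])

At ingest time the lower layers run once per frame and their activations are written to storage; at query time only the remaining layers run, which is where the time saving comes from, at the cost of the space used by the cached activations.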

References

[1] Focus: Querying Large Video Datasets with Low Latency and Low Cost. 01/10/2018.
