Reducing numerical precision preserves classification accuracy in Mondrian Forests

06/28/2021
by Marc Vicuna, et al.

Mondrian Forests are a powerful data stream classification method, but their large memory footprint makes them ill-suited for low-resource platforms such as connected objects. We explored using reduced-precision floating-point representations to lower memory consumption and evaluated their effect on classification performance. We applied the Mondrian Forest implementation provided by OrpailleCC, a C++ collection of data stream algorithms, to two canonical datasets in human activity recognition: Recofit and Banos et al. Results show that the precision of floating-point values used by tree nodes can be reduced from 64 bits to 8 bits with no significant difference in F1 score. In some cases, reduced precision was even shown to improve classification performance, presumably due to its regularization effect. We conclude that numerical precision is a relevant hyperparameter of the Mondrian Forest, and that the commonly used double-precision values may not be necessary for optimal performance. Future work will evaluate the generalizability of these findings to other data stream classifiers.
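The core idea of reducing node precision from 64 to 8 bits can be illustrated with a small quantization sketch. The snippet below is a minimal illustration, not the paper's OrpailleCC implementation: it assumes a hypothetical E4M3-like minifloat layout (1 sign, 4 exponent, 3 mantissa bits) and snaps a 64-bit split threshold to its nearest 8-bit representable value by enumerating all such values.

```python
def minifloat_values(exp_bits=4, man_bits=3):
    """Enumerate the finite values of a simple sign/exponent/mantissa
    8-bit minifloat (hypothetical layout; ignores inf/NaN encodings)."""
    bias = (1 << (exp_bits - 1)) - 1
    vals = set()
    for sign in (1.0, -1.0):
        for e in range(1 << exp_bits):
            for m in range(1 << man_bits):
                if e == 0:  # subnormal: no implicit leading 1
                    v = sign * (m / (1 << man_bits)) * 2.0 ** (1 - bias)
                else:       # normal: implicit leading 1
                    v = sign * (1 + m / (1 << man_bits)) * 2.0 ** (e - bias)
                vals.add(v)
    return sorted(vals)

_FP8 = minifloat_values()

def quantize(x):
    """Round a 64-bit float to the nearest 8-bit representable value,
    as a tree node might store its split threshold."""
    return min(_FP8, key=lambda v: abs(v - x))
```

Because split thresholds only need to separate feature values, small rounding errors of this kind often leave the induced partition (and hence the F1 score) unchanged; for example, `quantize(0.5)` is exact, while `quantize(0.53)` moves the threshold by at most one mantissa step.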


