Memory Vulnerability: A Case for Delaying Error Reporting

by Luc Jaulmes et al.

To face future reliability challenges, it is necessary to quantify the risk of error in any part of a computing system. To this end, the Architectural Vulnerability Factor (AVF) has long been used for chips. However, this metric is designed for offline characterisation, which makes it ill-suited to memory. We survey the literature, formalise one of the metrics used, the Memory Vulnerability Factor (MVF), and extend it to account for false errors: reported errors that would have no impact on the program if they were ignored. We measure the resulting False Error Aware MVF (FEA) and related metrics precisely in a cycle-accurate simulator, and compare them against the effects of injecting faults into a program's data in native parallel runs. Our findings show that MVF and FEA are the only two metrics that are safe to use at runtime, as both consistently give an upper bound on the probability of an incorrect program outcome. FEA gives a tighter bound than MVF, and of all the metrics considered it correlates best with the incorrect-outcome probability.
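To illustrate the underlying idea (this is a hedged sketch of the general vulnerability-factor concept, not the paper's implementation), a memory location's vulnerability can be estimated as the fraction of its lifetime during which a soft error would actually be consumed: intervals ending in a read count as vulnerable, while intervals ending in a write (the corrupted value is overwritten) are dead. The access trace and the `memory_vulnerability` helper below are hypothetical:

```python
def memory_vulnerability(accesses, total_cycles):
    """Estimate the vulnerable fraction of a single byte's lifetime.

    accesses: chronologically sorted list of (cycle, op) pairs,
              where op is 'R' (read) or 'W' (write).
    total_cycles: total observation window in cycles.

    An interval between two accesses is vulnerable only if it ends
    in a read, because only then would a bit flip in that interval
    propagate into the program. Intervals ending in a write, and the
    tail after the last access, are dead time.
    """
    vulnerable = 0
    prev_cycle = 0
    for cycle, op in accesses:
        if op == 'R':
            # An error any time since the previous access would be read.
            vulnerable += cycle - prev_cycle
        prev_cycle = cycle
    return vulnerable / total_cycles


# Example trace: write at 0, read at 10, overwrite at 20, read at 100.
# Vulnerable spans: [0, 10] and [20, 100] -> 90 of 100 cycles.
trace = [(0, 'W'), (10, 'R'), (20, 'W'), (100, 'R')]
print(memory_vulnerability(trace, 100))  # -> 0.9
```

A false-error-aware variant in the spirit of FEA would additionally discount reads whose value cannot affect the program outcome (e.g. a value that is masked or discarded), tightening the bound further; determining that requires program-level analysis beyond this sketch.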

