Deep Convolutional Autoencoder for Assessment of Anomalies in Multi-stream Sensor Data

A fully convolutional autoencoder is developed for the detection of anomalies in multi-sensor vehicle drive-cycle data from the powertrain domain. Preliminary results collected on real-world powertrain data show that the reconstruction error of faulty drive cycles under the trained autoencoder deviates significantly from that of healthy drive cycles. The results demonstrate applicability for identifying faulty drive cycles and for improving the accuracy of system prognosis and predictive maintenance in connected vehicles.



I Introduction

The need for deeper analysis and monitoring of in-vehicle embedded systems data is driven by the growth of autonomous vehicles, cloud connectivity, and other advanced vehicle functionality. The application of neural networks to deepen the analysis of vehicle sensor data has advanced functionality in areas such as autonomous driving security [1, 2], driver behavior analysis [3, 4], fuel consumption and efficiency [5, 6], and computer vision and object detection [7, 8]. For vehicle manufacturers, computational intelligence is also applied to improve quality in areas such as production, logistics, and vehicle fleet management [9].

Deep learning can also be used to improve the detection of abnormalities in a vehicle’s sensor signals for the prediction of system faults or failures. Existing diagnostics of on-board systems are typically triggered from a limited sensor domain, such as the emissions-related ECU communication protocol required by ISO standards 9141 and 15031 [10, 11]. A less prescriptive, more holistic vehicle health assessment based on a wide range of sensor data may extend on-board diagnostics to additional vehicle components. Literature reporting enhanced on-board diagnostics using sensor data analysis is limited due to the complex and proprietary nature of in-vehicle embedded systems communications [12].

The focus of this work is on detecting rare and abnormal temporal patterns in a multi-sensor time-series data set collected from connected vehicles during drive-cycle tests. The system performance metric is based on the evaluation of the reconstructed multi-sensor signal matrix by a trained deep neural network encoder-decoder, i.e., an autoencoder. Data are composed of powertrain-related sensor signals from several drive cycles. A successful anomaly detector that can differentiate normal and abnormal drive-cycle samples (samples performed with some drivetrain component fault) may have practical applications for the prediction of failure, for predictive maintenance, and for improving durability testing by manufacturers.

Here, we propose a fully convolutional autoencoder that is able to distinguish anomalous conditions from normal operating conditions based on high-dimensional temporal signals from the vehicle’s embedded system. We compare this to three other anomaly detection algorithms, which are unable to distinguish the anomalous conditions from the normal drive-cycle data.

I-A Related work

Traditional approaches to pattern recognition and signal processing of vehicle systems have been employed for fault detection and diagnostics with some degree of accuracy [13]. The use of autoencoders for the detection of abnormal data is a textbook method for outlier detection because of a demonstrated effectiveness in dimensionality reduction over alternatives such as principal component analysis (PCA) or matrix factorization [14].

Establishing thresholds and interpreting anomalies remains a unique challenge presented by the nature of the multivariate time series being studied. Some automatic threshold and anomaly classification techniques have been proposed [15, 16]. Although these address a similar objective, they tend to classify observation-level events, such as single time-step anomalies in the time series. The temporal patterns in drive cycles provide much more information to characterize normal versus abnormal events, with possibly very little information in a single observation. This is shown by the results of testing the Python outlier detection (PyOD) methods [16] on the drive cycle data in Figure 4. These methods assign an anomaly score at the observation level, and the aggregated totals are used to evaluate outlier data points relative to a training data set.

Denoising autoencoders can be enhanced by utilizing PCA to eliminate outliers and noise without access to any clean training data [17]. While dimensionality reduction and a denoising component could potentially improve accuracy and remain a consideration for future work, they are not directly applicable to the present model, which demonstrates success with minimal reductions to the input data.

Solutions for further characterization of rare events have been addressed using a semi-supervised approach, maximizing use of labels when available for small subsets of the data [18]. The scope of this work is focused on characterization of normal and abnormal data; however, given the availability of drive cycles with various classification of system faults, similar semi-supervised approaches could be considered to characterize or differentiate such events in future work. In this work, we assume that only normal observations are available for training the algorithm.

Novel architectures of autoencoders and temporal data representations through feature transformation for outlier detection have been proposed. A 2019 paper proposes a multi-scale convolutional recurrent encoder-decoder (MSCRED) [19], an approach that sub-samples the transformed inner product of sliding windows to maximize the representation of cross-sensor correlations, coupled with attention-based convolutional long short-term memory (LSTM) networks to capture temporal patterns. The authors propose this as a method that incorporates time dependence, improves noise robustness, and interprets anomaly severity. Although shown to be successful in some use cases, the inner product representations appear to depend on data composed of continuous and highly correlated features. The transformed inner product representations of drive cycle data lack such correlation, preventing the network from improving the reconstruction through training. Testing indicates that the sliding window approach without feature transformation is better suited to the drive-cycle data, which is composed of uncorrelated features, highly variable ranges, and some non-continuous features.

I-B Clustering-based approaches

Notable approaches to anomaly detection in multivariate time series include clustering-based methods using traditional proximity-based methods and Fuzzy C-Means (FCM) clustering [20, 21, 14, 16]. One approach proposes a clustering-based anomaly detector with specific attention to the amplitude and the shape of multivariate time series [21]. The method employs sliding window sub-sampling, FCM clustering, and a reconstruction criterion to calculate an anomaly score. The authors then employ particle swarm optimization to improve detection. The clustering-based approach has the benefit of being less reliant on prior training data; however, its drawbacks are high time and space complexity in testing. While extensive training may be required with our proposed autoencoder, validation and testing can be performed rapidly.

II Drive Cycles Data Set

The drive cycles are composed of 58 temporal embedded system channels of the powertrain domain collected from connected hybrid-electric vehicle tests. The data features are broadly categorized in Table I. The data are composed mostly of continuous signal variables such as torque, RPM, and speed, with some discrete sensor signals such as the PRNDL state and brake status. Raw data are composed of several drive cycles grouped into two categories: a) drive-cycle data collected with new batteries and normally operating powertrain components (used for training and validation), and b) drive cycles containing older batteries with faulted battery connectivity and otherwise normally functioning powertrains (used for testing). Both categories comprise diverse types and combinations of drive cycles that simulate different driving behaviors, so as to increase the robustness of the trained autoencoder to different driving patterns. There are 271 healthy drive cycles and 150 faulted drive cycles. The median drive-cycle length is 14,000 observations at 0.1 s time-steps, i.e., 1,400 s total length. For an approximately unbiased representation, the drive cycles are randomly sub-sampled with a 128 time-step sliding window, and 64 samples are cropped from each drive cycle for each mini-batch. The data are min-max normalized by dividing each feature of the data set by a known maximum value provided by domain experts.

Feature Category Powertrain Components*
Torque Engine, electric motor, gearbox, and driver torque request
RPM Engine, motor, gearbox
Electric Power Battery state of charge, voltage, current, temperature
Drive State PRNDL state, vehicle speed, brake status
  * Not all powertrain components are listed.

Table I: Summarized drive cycle data set feature categories and components
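The sub-sampling and normalization described above can be sketched as follows. This is an illustrative implementation under stated assumptions; the function name `sample_windows`, the random generator seeding, and the synthetic cycle are all hypothetical, not the authors' code.

```python
import numpy as np

def sample_windows(cycle, max_vals, window=128, n_samples=64, rng=None):
    """Min-max normalize one drive cycle and crop random sliding windows.

    cycle:    (T, 58) array of raw sensor signals for one drive cycle
    max_vals: (58,) per-feature maxima provided by domain experts
    """
    rng = rng or np.random.default_rng(0)
    normed = cycle / max_vals                       # scale each feature to [0, 1]
    starts = rng.integers(0, len(cycle) - window, size=n_samples)
    return np.stack([normed[s:s + window] for s in starts])

# Hypothetical healthy cycle: 14,000 time-steps (1,400 s at 0.1 s) x 58 channels
cycle = np.abs(np.random.default_rng(1).standard_normal((14000, 58)))
batch = sample_windows(cycle, cycle.max(axis=0))
print(batch.shape)  # (64, 128, 58)
```

Each mini-batch is thus a stack of 64 randomly positioned 128-step windows from a single cycle.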

III Method

An unsupervised approach for detection of abnormal vehicle data is employed because there are typically no drive-cycle data points with labeled or known problems; rather, data collection was performed on new powertrains and on approximately 3-year-old powertrains with some issue that warranted a battery replacement. The objective of the experiment is to distinguish the old battery cycles from the new battery cycles, and the assumption is that if the older powertrain data can be distinguished from new, properly running powertrains, then other abnormalities in drive cycles can be distinguished using the same approach.

The temporal patterns and complex signal relationships in the powertrain data do not exhibit obvious differences between normal and abnormal cycles; therefore, traditional angle- or distance-based methods of detection, such as the angle-based anomaly detector (ABOD) and KNN, available in the PyOD library, are less successful. The autoencoder is validated and refined by regeneration of the input of normal drive cycles over numerous epochs of parameter optimization in the deep neural network. The network is then tested on a held-out data set of normal drive cycles not used for training (the validation set) and on the abnormal drive cycles (the test set). The relative differences in calculated reconstruction error are analyzed for their correlation to the abnormal and normal drive cycles. The trained autoencoder is expected to reconstruct normal drive cycles with some degree of error. After extensive training, testing is performed by regenerating a subset of the healthy drive cycles. Test data belong to a set of drive-cycle data from faulty powertrains which warranted replacement. The trained network can differentiate anomalous patterns in the abnormal drive cycles through data regeneration, since distinct characteristics relative to the healthy drive cycles are indicated by a vast difference in the ranges of reconstruction error. From such data, an error threshold can be determined to predict whether a drive cycle is normal or abnormal.
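The threshold step described above can be sketched with synthetic error distributions. The mean-plus-k-sigma rule here is a common heuristic and an assumption on our part; the paper derives its threshold from the observed ranges of reconstruction error, and the error values below are illustrative only.

```python
import numpy as np

def fit_threshold(val_errors, k=3.0):
    """Anomaly threshold from reconstruction errors on healthy validation
    cycles: mean plus k standard deviations (a common heuristic)."""
    return val_errors.mean() + k * val_errors.std()

# Illustrative error distributions: healthy cycles reconstruct with low
# error, faulted cycles with much higher error.
healthy = np.random.default_rng(0).normal(0.02, 0.005, 271)
faulted = np.random.default_rng(1).normal(0.15, 0.03, 150)

thr = fit_threshold(healthy)
flags = faulted > thr          # True marks a cycle predicted abnormal
```

With well-separated error distributions, as reported for the drive-cycle data, almost every faulted cycle lands above the threshold while healthy cycles stay below it.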

III-A Model

Figure 1 shows a representation of the proposed autoencoder architecture. The model is constructed for multi-channel time-series inputs of up to 64 channels, though it could be designed for any number of input channels (i.e., features). The autoencoder is a feed-forward convolutional neural network with hidden layers constructed from convolutional and transposed convolutional layers, built using the TensorFlow Keras Layers API. The ReLU activation function is used for each convolutional layer output. The loss function computes the variation between the input and reconstructed signals, which are sampled from the drive cycles. A custom training loop uses the gradient tape function to record forward-pass operations and compute the gradients of the trainable variables, minimizing the reconstruction error calculated for each training epoch. The Adam optimizer is used with an initial learning rate that decays every 50 epochs of training. The network is trained using a step-by-step weight transfer scheme: training begins with a network with very few layers, then the weights are transferred to a deeper network where training is limited to the added layers. Additional epochs are then run with training enabled on the entire network for fine-tuning. The goal of the approach is to enable the use of a deeper network while preventing runaway gradients.
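A custom training loop of the kind described can be sketched with `tf.GradientTape`. The initial learning rate, decay factor, and steps-per-epoch below are assumptions, since the paper does not state these values; only the overall structure (tape-recorded forward pass, gradient computation, Adam with a scheduled decay) follows the text.

```python
import tensorflow as tf

# Learning rate decays on a fixed schedule; 1e-3 and the 64 steps per
# epoch are assumed values, not taken from the paper.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=50 * 64,      # decay every 50 epochs x (assumed) 64 steps/epoch
    decay_rate=0.5,
    staircase=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)

@tf.function
def train_step(model, batch):
    """One optimization step minimizing reconstruction error."""
    with tf.GradientTape() as tape:
        recon = model(batch, training=True)
        loss = tf.reduce_mean(tf.square(batch - recon))  # MSE reconstruction term
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

For the step-by-step weight transfer, the same loop would first be run on a shallow model, whose layer weights are then copied into the matching layers of a deeper model (with those layers frozen) before fine-tuning the whole network.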

Figure 1: Process diagram for autoencoder

Drive-cycle sample matrices input to the autoencoder are 128×58×1, with 128 time-steps, 58 features or signal channels, and an additional dimension for the channels-last convention used by convolutional layers in the Keras Layers API. The network is composed of 8 two-dimensional convolutional layers in the encoder, mirrored by 8 transposed convolutional layers in the decoder for reconstruction of the latent signal. The input data are padded in the number of channels to 128×64×1, and the initial convolutional kernel differs in size from the kernels used in convolutional layers 2 through 8. The number of filters is 64, 128, 256, 512, 1024, 512, 256, and 128 for the respective layers. The output is cropped to the original signal dimensions in the final reconstruction.
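A minimal Keras sketch of this architecture follows. The filter counts, the 58-to-64 channel padding, and the final cropping follow the paper; the 3×3 kernels and unit strides are assumptions, as the paper's kernel sizes are not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_autoencoder(filters=(64, 128, 256, 512, 1024, 512, 256, 128)):
    """8 conv layers in the encoder mirrored by 8 transposed conv layers."""
    inp = layers.Input(shape=(128, 58, 1))
    x = layers.ZeroPadding2D(padding=((0, 0), (3, 3)))(inp)  # pad 58 -> 64 channels
    for f in filters:                                        # encoder
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    for f in reversed(filters):                              # decoder (mirror)
        x = layers.Conv2DTranspose(f, 3, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(1, 3, padding="same")(x)      # back to one map
    out = layers.Cropping2D(cropping=((0, 0), (3, 3)))(x)    # crop 64 -> 58
    return tf.keras.Model(inp, out)

model = build_autoencoder()
print(model.output_shape)  # (None, 128, 58, 1)
```

The padding and cropping layers keep the network's internal width at 64 channels while the reconstruction matches the original 128×58 signal dimensions.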

III-B Evaluation

The proposed method of evaluation is the relative degree of reconstruction error between normal and abnormal data. The cost function used for evaluation is the sum of mean square error (MSE), mean absolute error (MAE), and the standard deviation (σ) of the absolute difference matrix between the input and reconstructed signals:

J(X, X̂) = MSE(X, X̂) + MAE(X, X̂) + σ(|X − X̂|),

where X is the input sample matrix and X̂ its reconstruction.
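The cost function is straightforward to compute on the input/reconstruction pair; a direct NumPy translation (the function name `cost_J` is ours):

```python
import numpy as np

def cost_J(x, x_hat):
    """Evaluation cost: MSE + MAE + standard deviation of the absolute
    difference matrix between input and reconstruction."""
    diff = np.abs(x - x_hat)
    return float(np.mean(diff ** 2) + np.mean(diff) + np.std(diff))
```

A perfect reconstruction gives J = 0; a constant offset of 1 everywhere gives J = 1 (MSE) + 1 (MAE) + 0 (σ) = 2.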
IV Results

Figure 2 shows the results of testing the trained autoencoder with three different sampling regimes on the Ford Explorer drive cycles. The results show the reconstruction error using the cost function for each category of data, presented as error bars: max, 75%, median, 25%, and min. The left plot in Fig. 2 shows the result of the cost function for the batch-sampled data. The middle plot shows the average cost function result batched over each drive cycle sampled with a 128 time-step window; each drive cycle is composed of roughly 14,000 time-steps. Finally, the plot on the right tests the cost function result for each individual 128 time-step window, sampling the entirety of available data. With the first two tests, which employed batch sampling, an error threshold was able to differentiate the normal and abnormal cycles with 100% accuracy in the test data. Testing the data by individual 128 time-step sample (approximately 0.8% of a drive cycle) differentiated normal and abnormal samples with 97% accuracy.

Figure 2: Reconstruction error calculated using the autoencoder trained on test vehicle drive cycles. The autoencoder was trained using the batched and cropped drive cycle data (left); reconstruction error is recalculated using the cost function J, showing the average reconstruction error per drive cycle (middle) and the reconstruction error calculated per individual 128-observation sample in the entire data set (right).
Data grouping | Performance
Batch-size 256 | accuracy: 100%; F1 score: 100%
Reconstruction error averaged per drive cycle | accuracy: 100%; F1 score: 100%
Individual sample* | accuracy: 97.4%; recall: 95.5%; F1 score: 96.0%
  * 128 time-steps per sample

Table II: Proposed autoencoder performance differentiating drive cycles based on reconstruction error threshold

IV-A Model Comparison

Key challenges exist when comparing the proposed autoencoder with other models.

  1. There exists no direct comparison for the automotive use case with which to evaluate the relative efficacy of the autoencoder: applying deep learning to this particular vehicle system sensor data for fault detection is novel.

  2. The drive cycle features are a mix of continuous, non-continuous binary, and ordinal signal features, which presents challenges for most other models. The MSCRED model was tested with the drive-cycle data to compute the signal reconstruction using its proposed signature matrix transformations and recurrent neural network autoencoder. MSCRED takes a similar approach to identifying anomalies in multivariate time-series data; however, results are only presented on continuous multi-sensor data sets. The temporal patterns of cross-sensor relationships in the drive cycles are not apparent after feature transformation to signature matrices, and are likely the cause of a vanishing gradient.

  3. The dimensionality requirement of the autoencoder architecture is a limitation: lower-dimensional data can be padded, but padding causes over-fitting, while higher-dimensional data must be reduced to the dimensionality of the architecture. Time-series data augmentation methods have been studied [22]; however, one must preserve the time and space relationships between features.

To test the generalizing ability of the autoencoder to distinguish abnormalities in time-series data, the autoencoder is trained on a comparison data set with labeled anomalous and normal temporal events and analyzed for the difference in reconstruction loss between the labeled anomalous and normal time-series events. The labeled data set is a multivariate time series from Kaggle composed of approximately 509k samples with 11 features; approximately 0.09% of the data is classified as anomalous [23]. Anomalous and normal data are separated, and anomaly-labeled events occur approximately once every 9 time-steps. The data are broken into chunks of 126 time-steps, evenly dividing anomalous signals in the test batch while approximating the dimensionality of the autoencoder. Samples were randomly shuffled, and augmentation was performed to increase the feature dimension: the feature axis was replicated 5 times, randomly shuffled, and concatenated across the entire data set, expanding the 11 features to 55. Results shown in Figure 3 demonstrate the efficacy of the autoencoder at differentiating, by reconstruction error, anomalous and normal time-series data from a data set unrelated to the drive cycles.

Figure 3: Reconstruction error calculated using the autoencoder trained on the Kaggle anomaly data set [23], showing results for batched data used in training (left) and recalculated using individual samples (right).
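The feature-axis augmentation described above (replicate, shuffle, concatenate to go from 11 to 55 features) can be sketched as follows; the function name and shapes are illustrative, with a smaller sample count than the real data set.

```python
import numpy as np

def expand_features(data, copies=5, rng=None):
    """Replicate the feature axis `copies` times, randomly shuffling the
    feature order of each copy, and concatenate the copies."""
    rng = rng or np.random.default_rng(0)
    blocks = [data[..., rng.permutation(data.shape[-1])] for _ in range(copies)]
    return np.concatenate(blocks, axis=-1)

# Hypothetical shapes: chunks of 126 time-steps with the 11 original features
chunks = np.random.default_rng(1).standard_normal((32, 126, 11))
expanded = expand_features(chunks)
print(expanded.shape)  # (32, 126, 55)
```

Shuffling only the feature order preserves the temporal relationships within each channel while meeting the network's dimensionality requirement.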

To compare the efficacy of the unsupervised deep learning approach with approaches that require less training and network complexity, a model comparison is performed on the drive-cycle data using the non-deep-learning clustering approaches in the PyOD (Python outlier detection) library, the angle-based anomaly detector (ABOD) [24] and KNN [16], as well as a neural network autoencoder using fully connected dense layers, with results in Fig. 4.

The PyOD classifiers assign an anomaly score on observation-level events using the performance metric of the classifier. ABOD uses a cosine variance score to its nearest neighbors, where a large cosine variance of surrounding neighbors indicates good clustering, while outliers far away from the clusters have smaller cosine variances approaching zero. The KNN classifier calculates a Minkowski distance

d(x, y) = (Σ_i |x_i − y_i|^p)^(1/p)

between the k nearest neighbors. The autoencoder calculates a pairwise distance matrix between the input and reconstructed data. The original data sets used for training, testing, and validation of the autoencoder were reduced by calculating an average feature vector for each sample. In other words, the sample data set used to train the autoencoder, with i = 64 iterations of batches in the training data set, b = batch size of 256, m = 128 time-steps, and n = 58 features, is vertically stacked to 16,384 samples of size m×n. Subsequently, each m×n sample of 128 time-steps by 58 features is reduced to a 1×n average feature vector, representing the data set in a format which the PyOD outlier detectors are able to interpret. The results indicate that the drive-cycle data sets are not as clearly distinguishable using the outlier detectors; however, the ABOD and KNN results do show a notably increased outlier density in the faulted drive cycles relative to the normal drive cycles.
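The reduction to average feature vectors amounts to a mean over the time axis of each stacked window; a small-scale sketch (the real stacked set is 64 × 256 = 16,384 windows, reduced here for illustration):

```python
import numpy as np

# Stand-in for the stacked training windows: each window is
# m = 128 time-steps x n = 58 features.
samples = np.random.default_rng(0).standard_normal((1024, 128, 58))

# Reduce each 128x58 window to its per-feature time average (a 1x58
# vector), a format the observation-level PyOD detectors can interpret.
reduced = samples.mean(axis=1)
print(reduced.shape)  # (1024, 58)
```

The reduced matrix can then be passed directly to PyOD's `fit`/`decision_function` interface.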

Figure 4: Performance metrics calculated using PyOD anomaly detectors: ABOD (left), PyOD autoencoder (middle), and KNN (right), trained and tested using Ford Explorer drive cycles.
Outlier detector algorithm | Performance
ABOD [16, 24] | accuracy: 91.1%; recall: 96.3%; F1 score: 88.4%
Autoencoder [16] | accuracy: 60.6%; recall: 36.8%; F1 score: 39.6%
KNN [16] | accuracy: 89.7%; recall: 84.1%; F1 score: 85.2%
Table III: Anomaly detector performance differentiating drive cycles based on outlier scores

V Conclusion

Preliminary results show that, by training an autoencoder on a generalized multi-sensor data set from the powertrain domain without knowledge of the specific spatial-temporal events that express the abnormalities, the reconstruction error of faulty drive cycles deviates significantly from that of healthy drive cycles. The results may be applicable for identifying faulty drive cycles and improving the accuracy of system prognosis and predictive maintenance in connected vehicles. The deep convolutional neural network autoencoder is significantly more successful at differentiating normal and abnormal vehicle drive cycles than traditional clustering techniques and a non-convolutional, fully connected neural network autoencoder. The model can be generalized to detect abnormalities in other time-series data unrelated to drive cycles; however, the data must be augmented to meet the dimensionality requirements of the large network architecture used.

V-A Future work

This work demonstrated an algorithm that could distinguish anomalies in multi-channel temporal drive-cycle data in a hybrid-electric vehicle. In the future, we will look at how these results could be extended to consider multi-vehicle data across a connected fleet, which could further improve the ability of the autoencoder architecture to accurately discern anomalous conditions.


Acknowledgments

This paper describes work performed as part of an ongoing project advised by Dr. Timothy Havens, with funding provided by Ford Motor Company. Initial development of the code and architecture was completed by Dr. Eisa Hedayati. Domain knowledge on the drive cycle data was provided by Ford Motor Company engineers. Supplemental experimentation, including evaluation and testing of the autoencoder with additional data sets, testing of the drive-cycle data sets on other models, and relevant background, was completed by Anthony Geglio.


  • [1] A. Zhou, Z. Li, and Y. Shen, “Anomaly detection of CAN bus messages using a deep neural network for autonomous vehicles,” Applied Sciences, vol. 9, no. 15, 2019.
  • [2] E. Novikova, V. Le, M. Yutin, M. Weber, and C. Anderson, “Autoencoder anomaly detection on large CAN bus data,” in Proceedings of DLP-KDD. Association for Computing Machinery, 2020.
  • [3] N. Abdennour, T. Ouni, and N. B. Amor, “Driver identification using only the CAN-bus vehicle data through an RCN deep learning approach,” Robotics and Autonomous Systems, vol. 136, p. 103707, 2021.
  • [4] P. Yadav, S. Jung, and D. Singh, “Machine learning based real-time vehicle data analysis for safe driving modeling,” in Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, ser. SAC ’19. New York, NY, USA: Association for Computing Machinery, 2019, pp. 1355–1358.
  • [5] C.-F. Yeh, L.-T. Lin, P.-J. Wu, and C.-C. Huang, “Using on-board diagnostics data to analyze driving behavior and fuel consumption,” in Advances in Smart Vehicular Technology, Transportation, Communication and Applications, Y. Zhao, T.-Y. Wu, T.-H. Chang, J.-S. Pan, and L. C. Jain, Eds.   Springer International Publishing, 2019, pp. 343–351.
  • [6] X. Qi, Y. Luo, G. Wu, K. Boriboonsomsin, and M. J. Barth, “Deep reinforcement learning-based vehicle energy efficiency autonomous learning system,” in 2017 IEEE Intelligent Vehicles Symposium (IV), 2017, pp. 1228–1233.
  • [7] H. Wang, Y. Yu, Y. Cai, X. Chen, L. Chen, and Q. Liu, “A comparative study of state-of-the-art deep learning algorithms for vehicle detection,” IEEE Intelligent Transportation Systems Magazine, vol. 11, no. 2, pp. 82–95, 2019.
  • [8] J. Fayyad, M. A. Jaradat, D. Gruyer, and H. Najjaran, “Deep learning sensor fusion for autonomous vehicle perception and localization: A review,” Sensors, vol. 20, no. 15, 2020.
  • [9] H. A. M. Sayedahmed, E. Mohamed, and H. A. Hefny, “Computational intelligence techniques in vehicle to everything networks: A review,” in Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2020, A. E. Hassanien, A. Slowik, V. Snášel, H. El-Deeb, and F. M. Tolba, Eds.   Springer International Publishing, 2021, pp. 803–815.
  • [10] International Organization for Standardization, “Road vehicles — Diagnostic systems — Part 2: CARB requirements for interchange of digital information,” ISO 9141-2, Standard, 1994.
  • [11] International Organization for Standardization, “Road vehicles — Communication between vehicle and external equipment for emissions-related diagnostics — Part 3: Diagnostic connector and related electrical circuits: Specification and use,” ISO 15031-3, Standard, Apr. 2016.
  • [12] N. Navet, Y. Song, F. Simonot-Lion, and C. Wilwert, “Trends in automotive communication systems,” Proceedings of the IEEE, vol. 93, no. 6, pp. 1204–1223, 2005.
  • [13] R. Prytz, S. Nowaczyk, and S. Byttner, “Towards relation discovery for diagnostics,” in Proceedings of the First International Workshop on Data Mining for Service and Maintenance, 2011, pp. 23–27.
  • [14] C. C. Aggarwal, Outlier Analysis.   Springer International Publishing, 2017.
  • [15] Y. Su, Y. Zhao, C. Niu, R. Liu, W. Sun, and D. Pei, “Robust anomaly detection for multivariate time series through stochastic recurrent neural network,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ser. KDD ’19.   Association for Computing Machinery, 2019, pp. 2828–2837, event-place: Anchorage, AK, USA.
  • [16] Y. Zhao, Z. Nasrullah, and Z. Li, “PyOD: A python toolbox for scalable outlier detection,” Journal of Machine Learning Research, vol. 20, p. 7, 2019.
  • [17] C. Zhou and R. C. Paffenroth, “Anomaly detection with robust deep autoencoders,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’17.   New York, NY, USA: Association for Computing Machinery, 2017, p. 665–674.
  • [18] L. Ruff, R. A. Vandermeulen, N. Görnitz, A. Binder, E. Müller, K.-R. Müller, and M. Kloft, “Deep semi-supervised anomaly detection,” in International Conference on Learning Representations, 2020.
  • [19] C. Zhang, D. Song, Y. Chen, X. Feng, C. Lumezanu, W. Cheng, J. Ni, B. Zong, H. Chen, and N. V. Chawla, “A deep neural network for unsupervised anomaly detection and diagnosis in multivariate time series data,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 1, 2019, pp. 1409–1416.
  • [20] D. Li, “Anomaly detection with generative adversarial networks for multivariate time series,” in 7th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications, at the ACM Knowledge Discovery and Data Mining conference, Sep. 2018.
  • [21] J. Li, H. Izakian, W. Pedrycz, and I. Jamal, “Clustering-based anomaly detection in multivariate time series data,” Applied Soft Computing, vol. 100, p. 106919, Mar. 2021.
  • [22] Q. Wen, L. Sun, F. Yang, X. Song, J. Gao, X. Wang, and H. Xu, “Time series data augmentation for deep learning: A survey,” in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, Z.-H. Zhou, Ed. International Joint Conferences on Artificial Intelligence Organization, Aug. 2021, pp. 4653–4660, Survey Track.
  • [23] Alexander Scarlat. (2021) Anomaly detection in multivariate time series. [Online]. Available:
  • [24] H.-P. Kriegel, M. Schubert, and A. Zimek, “Angle-based outlier detection in high-dimensional data,” in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’08. ACM Press, 2008, p. 444.
  • [25] National Institute of Standards and Technology, “Dataplot reference manual,” U.S. Department of Commerce, Washington, D.C., Tech. Rep., 2001.