Explosively growing data from the Internet of Things (IoT) are now flooding into data management systems for processing and analysis. The availability of massive historical data, powerful deep learning frameworks and abundant computation power is jointly boosting the development of new data-driven models of complex mechanical systems, which are used to characterize the behaviors of systems running under highly dynamic external conditions and with aging equipment.
However, these successes may not be easily repeated in another setting, mostly due to the unaffordable cost of meeting the expected data quality and quantity. In particular, powerful machine learning models are usually hungry for large amounts of quality data, such as labels on equipment failure events, which involve huge human effort in reading and annotating the historical data. To better address these demands on training data, we discuss two types of common limitations we meet in real-world applications.
The first limitation is the lack of configuration coverage. In real-world mechanical systems, there is usually a variety of controlling parameters. In a chiller plant, for example, the parameters include the variable speed drive (VSD) settings, i.e., the frequencies of the pumps and cooling tower fans in the plant. Due to the limited variety of conventional chiller plant control strategies, a chiller plant is operated under a small number of candidate configurations over the control parameters. This leads to a potential risk of overfitting in the data-driven model. The second limitation is the lack of label coverage. In real-world IoT systems, events of interest, e.g., failures of pumps, are usually very rare. Every individual failure event, on the other hand, may cause huge financial loss. Supervised learning, however, builds reliable and meaningful models only when there are sufficient labelled data linked to the detection/prediction target event.
Fortunately, domain adaptation, e.g., [2, 11, 12, 15, 22], can lift this restriction in our IoT applications: it enables a system to reuse existing data from similar systems when building models over a new system with limited data, by aligning the features and transforming the old model based on observations over these domains. However, most existing approaches to domain adaptation are designed for non-sequential domains with a fixed number of dimensions; the neglect of temporal information is an important source of performance degradation when these methods are applied to time series data directly. Recently, domain adaptation for time series data has received wide attention.
VRADA, for example, adversarially captures complex and domain-invariant temporal relationships by using a variational recurrent neural network. However, this method ignores the causal mechanism in time series data, because it mainly takes the hidden state of the final time step into account instead of the hidden states of all time steps.
To address the aforementioned challenges, we consider what can be transferred and what hinders transferability in time series domain adaptation. First, we assume that the causal mechanism is invariant: the physical mechanism is invariant across domains, and the causal mechanism is one kind of physical mechanism. Moreover, a causal mechanism denotes a directed path between two random variables; in short, a set of cause variables has impact on a set of effect variables. According to our observation, there are two significant causal mechanisms in the time series data of mechanical systems. One is the dynamic causal mechanism, meaning that one sensor's value influences another sensor's value within any time step. The other is the temporal causal mechanism, meaning that the past values of one sensor contain information that helps predict the future values of another sensor.
However, in order to transfer causal mechanisms, three obstacles need to be tackled, as shown in Figure 1. (1) Inter-domain value range shift means the value ranges of sensors vary with domains. For example, the value range of the temperature sensors varies with the location of the machine, and a model trained on a machine with a lower temperature range might not be suitable for a machine with a higher temperature range. (2) Inter-domain time lag shift means the time lags of causal effects vary with domains. Consider the ideal gas law $PV = nRT$, where $P$, $V$ and $T$ are the pressure, volume and absolute temperature respectively, $R$ is the ideal gas constant and $n$ is the number of moles of gas. Because different boilers use different kinds of fuels, the ratio between temperature and pressure differs, which leads to different response times. (3) Intra-domain causal mechanism shift refers to the causal mechanism between sensors within each time step. For example, in Figure 1, the drop of temperature ($T$) causes the drop of pressure ($P$), i.e., $T \rightarrow P$, and the operation status ($S$) is decided by temperature and pressure jointly, i.e., $\{T, P\} \rightarrow S$.
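The inter-domain time lag shift can be illustrated with a small synthetic sketch (the series and function names below are hypothetical, not part of our system): the same cause-effect pair responds with different delays in different domains, which a lag-scanning cross-correlation recovers.

```python
import numpy as np

def lag_of_max_xcorr(cause, effect, max_lag=10):
    """Return the lag (in steps) at which `effect` correlates most with `cause`."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        if lag == 0:
            c = np.corrcoef(cause, effect)[0, 1]
        else:
            c = np.corrcoef(cause[:-lag], effect[lag:])[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

rng = np.random.default_rng(0)
t = rng.normal(size=500)                 # temperature-like driver
pressure_a = np.roll(t, 2)               # domain A responds after 2 steps
pressure_b = np.roll(t, 5)               # domain B responds after 5 steps
print(lag_of_max_xcorr(t, pressure_a))   # 2
print(lag_of_max_xcorr(t, pressure_b))   # 5
```

A model that hard-codes the 2-step lag of domain A would misread domain B, which is exactly the shift our temporal attention mechanism is designed to absorb.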
In this paper, we utilize the invariant causal mechanism and address the aforementioned obstacles by proposing a novel causal mechanism transfer network (CMTN). First, because of the different value ranges in different domains, we devise separate feature extractors for the source and target domains. Then, we introduce two kinds of attention mechanisms to transfer the two kinds of causal mechanisms identified in our observations over the real data, as shown in Figure 1: to tackle the inter-domain time lag shift, we propose a transferable temporal attention mechanism; to tackle the intra-domain causal mechanism shift, we propose a transferable intra-sensor attention mechanism. Furthermore, we apply CMTN to two case studies, chiller plant optimization under lack of configuration coverage and boiler failure detection under lack of label coverage, and achieve significant improvements in modeling accuracy and consequently promising performance in their respective settings.
The rest of the paper is organized as follows. Section II reviews existing studies on time series modeling, domain adaptation, domain adaptation on time series, and the attention mechanism. Section III provides the problem definition for time series domain adaptation and the adversarial domain adaptation model. Section IV presents the motivation based on our observations over the time series data in mechanical systems and our causal mechanism transfer network for time series domain adaptation. Section V presents case studies in two completely different areas and conducts an ablation study on CMTN. Section VI concludes the paper with a discussion of future work.
II Related Work
In this section, we first review existing techniques on time series modeling and domain adaptation, and then give a brief introduction to time series domain adaptation and the attention mechanism.
Modeling and prediction on time series is a traditional research problem in computer science, with a number of successful cases, e.g., the autoregressive model and ARMA. With the introduction of domain expertise and graphical models, new approaches have been proposed to enhance prediction accuracy. The quick growth of computation power, on the other hand, has propelled the success of deep neural network models specifically designed for the time series domain, e.g., RNN, LSTM and GRU. In this paper, we adopt LSTM as our backbone network to model time series data.
Domain Adaptation: Unsupervised domain adaptation is a very important problem. The mainstream methods aim to extract domain-invariant features between domains. Maximum Mean Discrepancy is one of the most popular measures, using a reproducing kernel Hilbert space [3, 19, 13]. Second-order statistics have also been proposed for unsupervised domain adaptation, and second- or higher-order scatter statistics can be used to measure alignment in CNNs.
Another essential approach in unsupervised domain adaptation is to extract the domain-invariant representation by introducing a domain adversarial layer for domain alignment. One line of work introduces a gradient reversal layer to fool the domain classifier and extract the domain-invariant representation; another borrows the idea of the generative adversarial network (GAN) and proposes a unified framework for adversarial domain adaptation.
From a causality view over the variables, the adaptation scenario can be determined by the causal mechanism. Prior work discusses three different application scenarios in domain adaptation: target shift, conditional shift and generalized target shift. Building on this, [36, 15] investigate the generalized target shift further in the context of domain adaptation.
Domain Adaptation on Time Series: Though unsupervised domain adaptation performs well in many computer vision tasks, there is limited work on domain adaptation for time series data. In NLP, one approach uses distributed representations for sequence labeling tasks; another simultaneously uses domain-specific and domain-invariant representations for domain adaptation in sentiment classification, while a third solves the same problem by combining generic embeddings with domain-specific ones. VRADA uses a variational method that produces a latent representation capturing the underlying temporal dependencies of time series samples from different domains. However, this method extracts the domain-invariant representation from the final hidden state of the RNN, which ignores the rest of the time series and its properties. In this paper, we propose an unsupervised domain adaptation method for time series data that extracts a domain-invariant representation at the time-series level and considers the causal mechanism in time series data. Moreover, we approach time series domain adaptation from a causal view.
Attention mechanisms are also very significant in time series modeling. Motivated by how human beings pay visual attention to different regions of an image or correlate words in a sentence, attention mechanisms have become an integral part of network architectures in natural language processing and computer vision tasks. An attention mechanism was first introduced into machine translation models to allow the model to automatically attend to the correlative source words. Promising performance has been achieved in image captioning by a global-local attention method that integrates local representations at the object level with global representations at the image level. Based on the Transformer, a general attention architecture, BERT achieves state-of-the-art performance in question answering and language inference. Observing that not all regions of an image are transferable, attention has also been introduced into domain adaptation to focus on transferable regions of an image.
In this paper, we introduce the attention mechanism into time series domain adaptation, focusing on two kinds of transferable causal mechanisms: the dynamic causal mechanism and the temporal causal mechanism. We first show, through data observation, how these causal mechanisms arise in the time series, and then explain how to transfer them by introducing a dual attention mechanism.
III-A Problem Definition
We first denote $x = \{x_1, \dots, x_T\}$ as a multivariate time series sample with $T$ time steps, where $x_t \in \mathbb{R}^d$, and $y$ as its label. When $y$ is a real number, the prediction of $y$ is a regression problem over time series. When $y$ is a categorical value, it becomes a multi-class classification problem. We assume that the source domain $\mathcal{D}_S$ and the target domain $\mathcal{D}_T$ have different distributions but share the same causal structure. The datasets $X_S$ and $X_T$, sampled from $\mathcal{D}_S$ and $\mathcal{D}_T$ respectively, denote the source and target domain data. We further assume that each source domain time series sample comes with a label $y$, while the target domain has no labelled samples, and our goal is to devise a model that can predict the label $y$ given a time series sample $x$ from the target domain.
III-B Base Model
We pick the recurrent neural network model as the base approach for our time series modelling, because of its huge performance improvement over conventional approaches. Specifically, we develop domain adaptation techniques based on the Long Short-Term Memory network (LSTM in short). In this subsection, we present the basics of LSTM and its usage in our target mechanical system. Formally, we define:
$$h_1, \dots, h_T = f_{lstm}(x_1, \dots, x_T; \theta_{lstm}),$$ in which $f_{lstm}$ denotes the LSTM that accepts a time series sample as input and outputs a sequence of hidden states, and $\theta_{lstm}$ represents the parameters of the LSTM.
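As a sketch of this forward computation, the following writes out a single-layer LSTM cell in NumPy and returns the hidden states of all time steps; the variable names are illustrative, not the paper's notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x, W, U, b, hidden):
    """Run a single-layer LSTM over x of shape (T, d); return all hidden states (T, hidden).

    W: (4*hidden, d) input weights, U: (4*hidden, hidden) recurrent weights,
    b: (4*hidden,) biases; gates are stacked as [input, forget, cell, output].
    """
    T = x.shape[0]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    hs = np.zeros((T, hidden))
    for t in range(T):
        z = W @ x[t] + U @ h + b
        i = sigmoid(z[:hidden])              # input gate
        f = sigmoid(z[hidden:2*hidden])      # forget gate
        g = np.tanh(z[2*hidden:3*hidden])    # candidate cell state
        o = sigmoid(z[3*hidden:])            # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        hs[t] = h
    return hs

rng = np.random.default_rng(0)
d, hid, T = 3, 5, 6
hs = lstm_forward(rng.normal(size=(T, d)),
                  rng.normal(size=(4*hid, d)) * 0.1,
                  rng.normal(size=(4*hid, hid)) * 0.1,
                  np.zeros(4*hid), hid)
print(hs.shape)  # (6, 5)
```

The base model below uses only the last of these hidden states, which is precisely the limitation our attention mechanisms later address.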
Dozens of domain adaptation algorithms proposed in the last decade have shown significant performance improvements in their respective settings. We opt to use the strategy proposed by Ganin et al. Generally speaking, their strategy models invariant features across domains by optimizing a domain predictor that is expected to fail to tell whether an extracted feature is from the source or the target domain; we consider the features extracted by this method to be more robust across domains. One of the biggest benefits of the strategy is that the domain prediction loss, i.e., the loss of the domain predictor, can easily be merged with the regression/classification prediction loss, enabling holistic model training for both domain adaptation and label prediction.
A straightforward solution to time series domain adaptation is to directly reuse existing algorithms originally designed for non-sequential data. Because the final hidden state $h_T$ is assumed to contain all the information of the time series, we take $h_T$ as the input of the label predictor and the domain predictor, as shown in equation (2). When training the LSTM with data from multiple domains, the objective function consists of two parts: the label loss on the source domain data and the domain prediction loss over both source and target domains. The label loss is used to minimize the error of the LSTM when predicting the labels, while the domain prediction loss is used to control the alignment of features such that the extracted features are consistent across domains.
in which $G_y$ represents the label predictor with parameters $\theta_y$ and $G_d$ represents the domain predictor with parameters $\theta_d$. The parameters $\theta_{lstm}$, $\theta_y$ and $\theta_d$ are trained by minimizing the following objective function.
in which $d$ denotes the domain label; we let $d = 0$ and $d = 1$ denote the source and target domain labels, respectively. In the next section, we will introduce our causal mechanism transfer network (CMTN), motivated by our data observations.
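The adversarial part of this objective is commonly trained with Ganin's gradient reversal layer: an identity map in the forward pass whose backward pass flips and scales the gradient, so the feature extractor is pushed to confuse the domain predictor. A minimal sketch (the class and its methods are illustrative, not a framework API):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                         # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output   # reversed gradient reaches the feature extractor

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
assert np.allclose(grl.forward(x), x)
g = grl.backward(np.array([0.2, 0.4, -0.6]))
print(g)  # [-0.1 -0.2  0.3]
```

In an autodiff framework the same effect is obtained with a custom-gradient op; the scalar lam plays the role of the trade-off weight in the combined loss.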
The above base model only considers the alignment of the hidden representation of the data, while ignoring the inherent properties of the time series data. Fortunately, we find that the causal mechanisms are invariant across domains, due to the fact that all the machines from different domains follow the same physical mechanism. Here a causal mechanism refers to a process by which a cause contributes to the production of an effect. For example, as shown in Figure 1, in the boiler system, the variation of temperature ($T$) causes the variation of pressure ($P$), i.e., $T \rightarrow P$. Furthermore, the temperature ($T$) and the pressure ($P$) jointly affect the operation status ($S$), i.e., $\{T, P\} \rightarrow S$.
Such invariant causal mechanisms motivate our Causal Mechanism Transfer Network (CMTN) for time series domain adaptation: extending the existing time series representation model with the causal mechanism of the data. Generally, we split the sequence representation model into two parts, a domain-invariant causal mechanism part and a domain-specific part. Formally, we extend $f_{lstm}(x; \theta_{lstm})$ to $f_{lstm}(x; \theta_s, \theta_t, \theta_c)$ by splitting the parameters into three parts: $\theta_s$, $\theta_t$ and $\theta_c$. Among them, $\theta_s$ and $\theta_t$ denote the domain-specific parameters for the source and target domain respectively, and $\theta_c$ denotes the domain-invariant parameters.
However, it is still a challenging task to model the invariant causal mechanisms over dynamic time series data, which is usually hindered by the following three limitations: inter-domain value range shift, inter-domain time lag shift and intra-domain causal mechanism shift. These limitations come from our observations over the data. For example, as shown in Figure 1, the value range of the temperature of the chiller or boiler varies with the location of the machine; the time lag of the causal effect (i.e., $T \rightarrow P$) varies with domains; and the factors affecting the operation status can be more complex, for example, temperature and pressure jointly affect the operation status. In the following, we provide the details of our solutions to these three obstacles under the general causal mechanism transfer framework.
IV-A Domain-Specific Feature Extractor
Observation 1 (inter-domain value range shift): First of all, it is obvious that the value range of the input vectors varies across domains, as shown in Figure 2. In a boiler system, for example, the minimal and maximal values of certain sensor readings differ greatly from boiler to boiler. Traditional domain adaptation techniques leave this to feature alignment, which may hurt the LSTM model shared by all domains when generating the features for the final classification or regression task.
Motivated by our observation of varying value ranges of the input vectors in different domains, we insert a domain-specific feature extractor between the input and the LSTM. If we used Ganin's method directly, the shared LSTM, which simply aligns sensor readings of different value ranges, would not achieve ideal performance. In our solution, we intentionally add a new layer for domain-specific feature extraction, i.e., the feature extractor in Figure 5. It is expected to handle a wide spectrum of domain alignment problems by pre-processing the input values in an automatic manner. Formally, we have:
in which the source extractor $E_s$ and target extractor $E_t$ are each composed of a simple neural network with learnable projection matrices $W_s$ and $W_t$ respectively, and $v_t^{src} = E_s(x_t)$ is the feature generated by the source domain-specific feature extractor. Similarly, we let $v_t^{tgt} = E_t(x_t)$ denote the feature generated by the target domain-specific feature extractor, and let $v_t$ denote the feature generated by the extractor of either domain. Subsequently, we take $v_t$ as the input of the base model in Section III-B.
In summary, the extractor parameters in this section are domain-specific, capturing the different value ranges of the source and target domain respectively, while the parameters of the shared LSTM are domain-sharing parameters used to model the domain-invariant causal mechanism.
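A minimal sketch of such per-domain extractors, assuming one-layer ReLU networks (the names make_extractor, extract_src and extract_tgt are hypothetical):

```python
import numpy as np

def make_extractor(in_dim, out_dim, rng):
    """A one-layer extractor: x -> relu(W x + b); W and b are domain-specific parameters."""
    W = rng.normal(scale=0.1, size=(out_dim, in_dim))
    b = np.zeros(out_dim)
    return lambda x: np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
extract_src = make_extractor(in_dim=8, out_dim=4, rng=rng)  # source-only parameters
extract_tgt = make_extractor(in_dim=8, out_dim=4, rng=rng)  # target-only parameters

x_src = rng.normal(size=8)
x_tgt = rng.normal(size=8) * 10.0   # target sensors live on a different value range
v_src, v_tgt = extract_src(x_src), extract_tgt(x_tgt)
print(v_src.shape, v_tgt.shape)     # (4,) (4,)
```

Both extractors map raw readings of different ranges into a feature space of the same dimension, so the single shared LSTM downstream never sees the raw range mismatch.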
IV-B Transferable Temporal Causal Mechanism
Observation 2 (inter-domain time lag shift): The temporal causal mechanism [31, 17, 5] is important to the modeling of multivariate time series data; for example, the relationship between temperature and pressure follows Gay-Lussac's law. However, because of domain-specific properties, such as the differing degrees of aging of different machines, there are time lags between domains, as shown in Figure 3.
In mechanical systems, the readings of sensors follow temporal causality, such as the relationship between temperature and pressure. Formally, a time series $u$ is said to temporally cause a time series $w$ if the past values of $u$ provide statistically significant information about future values of $w$.
Temporal causality is ubiquitous in mechanical systems, but it comes with time lags due to the properties of different domains. For example, in chiller plant systems, the aging of pumps might lead to lags in response when the temperature changes. To handle this, we introduce a supervised attention mechanism that selects the relevant hidden states adaptively: by employing attention, a contributing hidden state can be assigned a larger weight, making the effect of time lags negligible. Specifically, to calculate the context vector over the hidden states before the final time step $T$, we define the weight of each hidden state as follows:
in which $W_e$ and $v_e$ are trainable parameters, and the candidate context vector is computed over all the hidden states except the last one. We generate the final context vector by concatenating the candidate context vector with the final hidden state $h_T$. The aforementioned process is as follows:
In summary, the parameters of this attention module are domain-sharing parameters, used to model the transferable temporal causal mechanism proposed in this subsection.
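The temporal attention step described above can be sketched as follows, under the assumption of a standard additive scoring function (the names temporal_attention, W and v are illustrative):

```python
import numpy as np

def temporal_attention(hs, W, v):
    """Score every hidden state except the last, softmax the scores over time,
    and concatenate the weighted sum with the final hidden state."""
    past, last = hs[:-1], hs[-1]
    scores = np.tanh(past @ W.T) @ v          # one score per past time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over past time steps
    context = weights @ past                  # weighted sum of past hidden states
    return np.concatenate([context, last]), weights

rng = np.random.default_rng(0)
hs = rng.normal(size=(6, 5))                  # T=6 hidden states of size 5
W = rng.normal(size=(5, 5))
v = rng.normal(size=5)
ctx, w = temporal_attention(hs, W, v)
print(ctx.shape, round(float(w.sum()), 6))   # (10,) 1.0
```

Because the weights are learned rather than tied to a fixed offset, a hidden state that lags by two steps in one domain and five in another can still receive the dominant weight in both.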
IV-C Transferable Dynamic Causal Mechanism
Observation 3 (intra-domain causal mechanism shift): As shown in Figure 4, the causal effects between sensors change over time, depending on the sensor readings in the previous time step. In a chiller plant system, higher temperature leads to an increase of relative humidity, which in turn speeds up the chilled water pump, while lower temperature leads to a decrease of relative humidity, which in turn speeds up the condenser water pump. These causal effects reflect physical mechanisms, so it is reasonable to transfer them from the source domain to the target domain.
Next we introduce the transferable dynamic causal mechanism, motivated by the aforementioned observation; in other words, the source and target domains share the same dynamic causal mechanism. To model it, given the $k$-th dimension of the domain-specific extracted feature at the $t$-th time step, we employ a self-attention mechanism that generates a transferable weight over sensors to adaptively capture the dynamic correlations of the multivariate time series data. Formally, we calculate the weight of the $k$-th feature at the $t$-th time step by:
in which $W_a$ and $v_a$ are trainable parameters. The attention weights are jointly generated from the historical hidden state of the LSTM and the current domain-specific feature, and they also indicate which sensors play an important role in the final prediction. We collect the weights of all sensors into a vector; after generating these intra-sensor attention weights, the weighted sensor readings are calculated with:
The aforementioned process is as follows:
In summary, the parameters of this attention module are domain-sharing parameters, used to model the transferable dynamic causal mechanism proposed in this subsection.
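The sensor-wise weighting above can be sketched as follows, assuming an additive scoring function over the previous hidden state and each sensor's current feature (all names here are illustrative):

```python
import numpy as np

def sensor_attention(h_prev, x_t, W_h, W_x, v):
    """Weight each sensor at time t using the previous hidden state and the
    current feature vector, then rescale the sensor readings."""
    d = x_t.shape[0]
    scores = np.array([v @ np.tanh(W_h @ h_prev + W_x[:, k] * x_t[k]) for k in range(d)])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                      # one weight per sensor, summing to 1
    return alpha * x_t, alpha

rng = np.random.default_rng(0)
h_prev = rng.normal(size=5)                   # hidden state from step t-1
x_t = rng.normal(size=4)                      # 4 sensor readings at step t
W_h = rng.normal(size=(3, 5))
W_x = rng.normal(size=(3, 4))
v = rng.normal(size=3)
x_weighted, alpha = sensor_attention(h_prev, x_t, W_h, W_x, v)
print(x_weighted.shape, round(float(alpha.sum()), 6))  # (4,) 1.0
```

The reweighted vector, not the raw feature, is what the shared LSTM consumes at step t, so the learned sensor-to-sensor dependencies travel with the shared parameters across domains.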
IV-D Model Summary
The architecture of CMTN is shown in Figure 5. First, we take the time series sensor values as the input of the domain-specific feature extractors, which mitigate the influence of different value ranges and output the extracted features. Second, the features are weighted by the dynamic causal transfer layer, which utilizes the extracted feature and the hidden state from the previous time step to produce the weighted feature. Third, taking the hidden state from the previous time step and the weighted feature as input, the LSTM generates the next hidden state. Fourth, utilizing all the hidden states, the temporal causal transfer layer calculates the final context representation, which not only contains the information of the whole time series but also extracts and highlights the most important states. Finally, we employ the gradient reversal layer to fool the domain predictor, and the label predictor generates the final decision.
The overall objective function of our approach is summarized as follows:
where $n_s$ and $n_t$ are the sizes of the source and target domain datasets, and $\lambda$ is the parameter that trades off the label prediction loss and the domain prediction loss in this unified optimization.
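The combination of the two losses can be sketched as follows, assuming a binary cross-entropy domain predictor (source label 0, target label 1); the function names are hypothetical:

```python
import numpy as np

def cmtn_loss(label_loss_src, domain_logits_src, domain_logits_tgt, lam):
    """Label loss on labelled source samples plus a lambda-weighted
    binary cross-entropy of the domain predictor over both domains."""
    def bce(logit, y):
        p = 1.0 / (1.0 + np.exp(-logit))
        return -(y * np.log(p) + (1 - y) * np.log(1 - p))
    d_loss = np.mean([bce(z, 0.0) for z in domain_logits_src] +
                     [bce(z, 1.0) for z in domain_logits_tgt])
    return label_loss_src + lam * d_loss

# A perfectly confused domain predictor (logit 0 -> p = 0.5) contributes log(2):
loss = cmtn_loss(0.3, np.zeros(4), np.zeros(4), lam=1.0)
print(round(loss - 0.3, 4))  # 0.6931
```

In training, the gradient reversal layer makes the feature extractor ascend this domain term while the domain predictor descends it, which is what drives the features toward domain invariance.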
In the training procedure, we employ the stochastic gradient descent algorithm to find the optimal parameter set as follows. In this procedure, all the samples are used, including the labelled source domain samples and the unlabelled target domain samples.
In the predicting procedure, we feed the target domain samples into the model through the target feature extractor, and the labels of the target domain samples are predicted as follows,
V Case Studies and Experiments
In this section, the proposed CMTN method is experimentally studied on two real-world applications: chiller plant optimization and boiler fault detection.
Chiller Plant Optimization: The chiller plant data, provided by Kaer Pte. Ltd, consists of chiller plant sensor data collected from the Building Management Systems (BMS) of two sites, each considered as one domain. The learning task is to predict the total system power of a chiller plant, a regression problem, for energy optimization. We extract training data samples from the target domain in which the VSD speeds of the condenser water pumps, chilled water pumps and cooling tower fans are restricted to a fraction of the allowed range. This simulates the situation at new chiller sites with insufficient data. Such data insufficiency is also common at chiller sites that have been running for years; we have encountered several chiller sites with VSD speeds set at a fixed value all the time. The test data of the target domain contains data samples with the full range of VSD speeds. Details of the dataset, in terms of start and end date and the sizes of the source and target domains, are provided in Table I. Table II lists all the features. Training and test data are split according to time: the earlier portion is used as training data while the rest is used as test data.
Different from approaches that decompose a chiller plant into multiple components and model each component separately, we use a black-box approach based on LSTM to model the total system power, because it is less straightforward and even difficult to apply domain adaptation techniques to a complex system with multiple inter-connected models.
Table I reports the start date, end date, and dataset size of each domain.
Table II features: VSD speed of chilled water pump; VSD speed of cooling tower fan; VSD speed of condenser water pump; relative humidity; dry bulb temperature (outdoor, °C); system cooling load (RT); number of chillers on; number of chilled water pumps on; number of cooling towers on; number of condenser water pumps on.
Boiler Fault Detection: The boiler data, provided by SK Telecom, consists of sensor data from five boilers collected from 24/3/2014 to 30/11/2016. Each boiler is considered as one domain. The learning task is to predict a faulty blow down valve for each boiler. All the features used for this task are listed in Table III. In data pre-processing, for columns whose values continuously increase along time, as indicated by "delta" in Table III, we replace each value with its per-step difference. Notice that the boiler data is extremely unbalanced, as can be seen from the statistics of the five boilers listed in Table IV: only a tiny fraction of the total samples have faulty labels, with boiler 1 having especially few faulty samples. Due to the lack of faulty labels, we use all the faulty data of the source domains as training data for domain adaptation. To handle the extreme unbalance of the data, we apply down-sampling on the normal samples of the source domain to obtain a balanced training dataset.
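The "delta" pre-processing above can be sketched as follows (the helper name to_delta is ours, and the first step, having no predecessor, is set to 0 by convention):

```python
import numpy as np

def to_delta(column):
    """Replace a monotonically increasing counter (e.g. operating time)
    with its per-step increments."""
    col = np.asarray(column, dtype=float)
    return np.concatenate([[0.0], np.diff(col)])

operating_time = [100.0, 103.0, 107.0, 107.0, 112.0]  # cumulative readings
print(to_delta(operating_time))  # [0. 3. 4. 0. 5.]
```

Working on increments instead of raw counters removes the unbounded upward trend, which would otherwise differ arbitrarily between boilers installed at different times.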
Table III features: steam pressure main header; temperature concentrated water; operating time feed water (delta); temperature exhaust gas; volume feed water (delta); temperature feed water; temperature tube wall; power usage meter (delta); operating time chemical injection (delta); combustion time (delta); number of ignitions (delta).
Table IV reports, for each boiler ID, the number of samples, the number of faulty samples, and the faulty ratio.
V-B Evaluation Metrics
We use application-specific criteria to evaluate the performance of our model and the baselines. For the chiller plant optimization case, we use the mean absolute percentage error (MAPE), formally defined as follows: $$\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n} \left|\frac{y_i - \hat{y}_i}{y_i}\right|,$$ where $y_i$ is the actual value and $\hat{y}_i$ is the predicted value. For boiler fault detection, we use another two criteria:
Accuracy of fault detection as the percentage of correctly predicted samples.
Area under the curve (AUC) of the correctly predicted faulty samples.
It is worth noting that we report the AUC over the faulty samples in our experiments. As the boiler data is extremely unbalanced, a prediction model that always predicts 'normal' could achieve high accuracy; the AUC over the faulty samples gives a better understanding of the performance of the model.
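The two metrics, and the pitfall of accuracy on unbalanced data, can be sketched as follows (function names are ours):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

def accuracy(y_true, y_pred):
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

# On heavily unbalanced data, always predicting "normal" (0) looks deceptively good:
y = np.array([0] * 98 + [1] * 2)     # 2% faulty samples
print(accuracy(y, np.zeros(100)))    # 0.98
print(round(mape([200.0, 100.0], [210.0, 95.0]), 1))  # 5.0
```

The 0.98 accuracy of the trivial predictor is why we additionally evaluate with AUC over the faulty samples only.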
We compare our approach against the following baselines:
LSTM_S2T uses source domain data to train an LSTM model and applies it on the target domain without any adaptation (S2T stands for source-to-target). It is expected to provide the lower-bound performance.
Ganin implements the domain adaptation architecture proposed by Ganin et al., with a GRL (Gradient Reversal Layer) on top of the LSTM, which is a straightforward solution for time series domain adaptation.
Besides the above baselines, we also consider three variations of our approach to evaluate the effect of each individual component:
CMTN-NDE: We only remove the domain specific extractors.
CMTN-NGA: We only remove the temporal causal transfer layer.
CMTN-NLA: We only remove the dynamic causal transfer layer.
Our model and the baselines are implemented with TensorFlow on a server with one GTX-1080 GPU and an Intel 7700K CPU. We set the length of each time series sample to 6, i.e., $T = 6$. The settings of each model are provided in Table V.
Table V settings (identical across the four models): LSTM hidden layer size 500; MLP hidden layer size 100; domain-specific feature size 100.
V-D Results on Chiller Plant Optimization
Accuracy of the system power prediction: The MAPE of all models for total system power prediction is reported in Table VI. Our approach achieves the lowest MAPE among all models, lower than that of Ganin and that of VRADA. The MAPE of CMTN-NDE is also lower than that of Ganin and VRADA, which indicates the effectiveness of the transferable temporal and dynamic causal mechanisms absent in Ganin and VRADA. The MAPE of LSTM_S2T is the worst, which implies that applying source domain knowledge directly to the target domain without adaptation does not work on the chiller plant.
Power saving after using the power prediction: To evaluate the usefulness of domain adaptation models for energy saving, we conduct a simulation of real-time VSD speed optimization on the test data of the target domain, as proposed in our earlier work. The main idea is to search, at fixed intervals, for the optimal VSD speeds of pumps and fans that minimize the predicted total system power based on the domain adaptation models, assuming other features (e.g., weather, cooling load, etc.) remain the same.
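The per-step speed search might be sketched as the grid search below; the toy power model and the speed grid are hypothetical stand-ins for the trained CMTN predictor and the site's allowed VSD range.

```python
import itertools
import numpy as np

def optimize_vsd(power_model, fixed_features, speed_grid):
    """Grid-search candidate (pump, fan) VSD speeds and keep the combination
    with the lowest predicted total system power, other features held fixed."""
    best_speeds, best_power = None, np.inf
    for speeds in itertools.product(speed_grid, speed_grid):
        predicted = power_model(fixed_features, speeds)
        if predicted < best_power:
            best_speeds, best_power = speeds, predicted
    return best_speeds, best_power

# Hypothetical stand-in for the trained model: a convex bowl around (0.6, 0.7).
toy_model = lambda feats, s: (s[0] - 0.6) ** 2 + (s[1] - 0.7) ** 2 + feats["load"]
grid = [round(0.3 + 0.1 * i, 1) for i in range(8)]   # candidate speeds 0.3 .. 1.0
speeds, power = optimize_vsd(toy_model, {"load": 1.0}, grid)
print(speeds)  # (0.6, 0.7)
```

The quality of the recommended speeds therefore hinges entirely on the accuracy of the power model on the target domain, which is why prediction MAPE translates directly into energy savings.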
Table VII reports the energy consumption (kWh) and energy difference (%) of each model.
To evaluate the optimal speeds found, we first train an LSTM_T2T model, which is trained and tested on the target domain training and test datasets respectively. We then apply the most accurate LSTM_T2T model to predict the corresponding total system power and compare it against the original power. The results of Ganin, VRADA and our approach are plotted in Figures 6, 7 and 8 respectively, with 5-day simulations covering 4 weekdays and 1 weekend day. Note that the energy consumption of the original setting is already the optimization outcome of our previous data-driven method.
Our approach with optimization is able to further reduce energy consumption, consistently reaching lower power in most of the cases as shown in Figure 8, while Ganin's approach generates similar or even higher power after optimization due to its higher MAPE on power prediction.
The corresponding energy consumption (kWh) and the percentage of energy saved in total system power, where applicable, are reported for all models in Table VII. Due to the high requirement on modeling accuracy, only our approach is able to achieve energy saving in the simulation. With the electricity tariff being around SGD$0.20 per kWh, the optimization based on our domain adaptation model can save roughly SGD$65.2 in five days. Since all domain adaptation techniques tested here do not use any labels from the target domain, the saving achieved by our approach is significant.
V-E Results on Boiler Fault Detection
Accuracy of the boiler fault detection: We use boiler 4 as the source domain, as it has the median number of fault labels among the five boilers. The rest of the boilers are used as target domains. We report the AUC of each source-target pair in Tables VIII, IX, X and XI respectively.
Overall, our approach achieves the highest accuracy and AUC in all settings. It outperforms Ganin and VRADA by improving the AUC over faulty samples, for example, by for Ganin (from 0.475 to 0.877 in Table X) and by for VRADA (from 0.720 to 0.877 in Table X) on the pair of Boiler 4 and Boiler 3 (denoted by ). All models perform well on pairs and : even LSTM_S2T achieves AUCs over 0.930 and 0.864 respectively. This is probably because these boilers (i.e., Boilers 2, 4 and 5) encounter similar problems, i.e., a faulty blow down valve, after installation, and therefore tend to share more common properties even without adaptation. Even in such cases, domain adaptation is able to further improve the accuracy and AUC, for example, by for pair compared with LSTM_S2T.
However, the performance on pairs and is much worse than in the other cases. The highest AUC over faulty samples on pair is only 0.707 (Table VIII). The reasons are twofold: first, these two target domains, i.e., Boiler 1 and Boiler 3, contain far fewer faulty labels than the others, which makes it more difficult to learn the domain specific feature extractor. Second, these two boilers do not encounter the 'faulty blow down valve' problem after installation, and thus tend to share fewer properties with the source domain.
Nevertheless, even in such cases the improvement in AUC over LSTM_S2T brought by our domain adaptation approach is significant, e.g., by on and on , though the results have not yet reached the level required for reliable industrial adoption. Inspired by these observations, a possible quick check of whether domain adaptation would work on a new domain is to use S2T as the baseline: if S2T achieves reasonable performance, there is a higher chance of obtaining a promising result with domain adaptation. We leave this as future work.
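The proposed quick check can be sketched as follows; the AUC is computed here with the Mann-Whitney statistic, and the 0.8 acceptance threshold is an illustrative choice, not a value fixed by our experiments:

```python
def auc(labels, scores):
    """AUC over faulty samples via the Mann-Whitney statistic (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def worth_adapting(s2t_labels, s2t_scores, threshold=0.8):
    """Heuristic: if the plain S2T baseline already reaches a reasonable AUC on
    the new domain, domain adaptation is more likely to pay off there."""
    return auc(s2t_labels, s2t_scores) >= threshold
```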
V-F Ablation Study
Study on the domain specific feature extractor: Table XII shows the value ranges of several sensors that differ widely across boilers; Boiler 3 has the largest differences in value ranges among all the boilers. At the same time, the experimental results reveal that CMTN-NDE, which removes the domain specific extractors, drops significantly compared with CMTN and even scores a lower AUC than VRADA. From the boiler fault detection results, we observe that: 1) different sensor value ranges can lead to negative transfer, and 2) the domain specific feature extractors can mitigate this domain-variant influence.
Table XII: Value ranges of selected sensors (operating time of feed water, exhaust gas temperature, power usage meter, tube wall temperature) for each boiler.
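The effect of differing value ranges can be illustrated with a minimal sketch of per-domain standardization, the simplest form of a domain specific front-end; the numbers below are toy values, not actual sensor readings from Table XII:

```python
def fit_domain_stats(series):
    """Fit mean and standard deviation on one domain's own data."""
    mean = sum(series) / len(series)
    std = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5
    return mean, std if std > 0 else 1.0

def standardize(series, stats):
    """Map a domain's readings into a range-free representation."""
    mean, std = stats
    return [(x - mean) / std for x in series]

# Two "boilers" reporting the same underlying pattern on very different scales:
boiler_a = [400.0, 500.0, 600.0]  # toy readings in one value range
boiler_b = [4.0, 5.0, 6.0]        # same pattern, two orders of magnitude smaller
z_a = standardize(boiler_a, fit_domain_stats(boiler_a))
z_b = standardize(boiler_b, fit_domain_stats(boiler_b))
# After per-domain standardization the shared extractor sees identical inputs,
# so the scale difference can no longer cause negative transfer.
```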
Study on the transferable temporal causal mechanism: Motivated by the fact that the temporal causal mechanism remains invariant across domains while the time lag varies, we adopt an attention mechanism in the transferable temporal causal mechanism module, which considers not only the final hidden state but also all preceding ones. The longer the input time series, the less information about earlier time steps is retained in the final hidden state. We therefore evaluate the effect of the transferable temporal causal mechanism module by taking time series of different lengths as input; the results are shown in Figure 9.
From the results, we observe that: 1) TCMTN-NGA still performs better than VRADA, and the longer the sequence, the larger the gap between them, which reflects the usefulness of the domain specific extractors and the transferable dynamic causal mechanism. 2) The AUC of Ganin, VRADA and TCMTN-NGA drops sharply as the length of the time series increases, while the slope of CMTN is much smaller than those of the compared approaches. This is because CMTN applies the temporal causal mechanism to all hidden states, which reduces the effect of the domain-variant time lag and captures the temporal causality between time series at the same time. Although VRADA can capture complex and domain-invariant temporal relationships, it fails at time-series-level feature alignment, so increasing the sequence length greatly harms its transferability.
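A minimal sketch of attention over all hidden states, as opposed to reading only the final one, is given below; the hidden states are plain lists standing in for the outputs of the recurrent feature extractor:

```python
import math

def attend(hidden_states, query):
    """Softmax-weighted sum of all hidden states, scored by dot product with a
    query vector. Because every time step contributes, information from early
    steps survives regardless of where the domain-variant time lag falls."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(len(query))]
    return context, weights
```

Steps whose hidden state aligns with the query receive high weight, so the aggregated context vector is insensitive to how far from the end of the sequence the causally relevant steps occur.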
Study on the transferable dynamic causal mechanism: As shown in Tables VIII, IX, X and XI, we observe that: 1) the combination of domain specific extractors and the transferable temporal causal mechanism shows superiority over VRADA, especially in . 2) Appending the dynamic temporal causal mechanism further improves the results, which demonstrates the importance of the transferable dynamic causal mechanism. VRADA and Ganin simply assume that the weight of each sensor at each time step is the same; the main drawback is that some sensor values might be useless or even interfere with detection.
In this paper, we present a novel Causal Mechanism Transfer Network for time series domain adaptation. We demonstrate the usefulness of the approach in two real-world case studies on mechanical systems. The case studies show positive results on model performance even when the mechanical system lacks labels over its historical data. By deploying these data-driven models, we are able to reduce the energy consumption of chiller plants and accurately detect boiler failures. Furthermore, we not only mitigate the differing value ranges and time lags among machines in mechanical systems, but also exploit the causal mechanisms among time series data to transfer knowledge from the source domain to the target domain.
- (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
- (2013) Unsupervised domain adaptation by domain invariant projection. In ICCV, pp. 769–776.
- (2006) Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics 22(14), pp. e49.
- (1970) Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. Journal of the American Statistical Association 65(332), pp. 1509–1526.
- (2018) Causal inference in time series via supervised learning. In IJCAI, pp. 2042–2048.
- (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR abs/1412.3555.
- (2015) A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980–2988.
- (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
- (2009) An empirical analysis of domain adaptation algorithms for genomic sequence analysis.
- Unsupervised domain adaptation by backpropagation. In ICML, pp. 1180–1189.
- (2016) Domain-adversarial training of neural networks. Journal of Machine Learning Research 17(59), pp. 1–35.
- (2016) A new PAC-Bayesian perspective on domain adaptation. In ICML, pp. 859–868.
- (2017) Scatter component analysis: a unified framework for domain adaptation and domain generalization. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(7), pp. 1414–1430.
- (2016) TensorFlow: large-scale machine learning on heterogeneous distributed systems. Software available from tensorflow.org.
- (2016) Domain adaptation with conditional transferable components. In ICML, pp. 2839–2848.
- (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
- (1969) Investigating causal relations by econometric models and cross-spectral methods. Econometrica: Journal of the Econometric Society, pp. 424–438.
- (1997) Long short-term memory. Neural Computation 9(8), pp. 1735–1780.
- (2007) Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems, pp. 601–608.
- (2012) Forecasting: principles and practice.
- (2017) Vibration analysis for IoT enabled predictive maintenance. In ICDE, pp. 1271–1282.
- Domain adaptation by mixture of alignments of second- or higher-order scatter tensors. arXiv preprint arXiv:1611.08195.
- Image caption with global-local attention. In Thirty-First AAAI Conference on Artificial Intelligence.
- (2018) Cross-domain sentiment classification with target domain specific information.
- (2002) Causality: models, reasoning, and inference. IIE Transactions 34(6), pp. 583–589.
- (2016) Variational recurrent adversarial deep domain adaptation.
- (2017) Mixture factorized Ornstein–Uhlenbeck processes for time-series forecasting. In SIGKDD, pp. 987–995.
- (2014) Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Interspeech, pp. 338–342.
- (2018) Domain adapted word embeddings for improved sentiment classification.
- (2016) Return of frustratingly easy domain adaptation. In AAAI, pp. 2058–2065.
- (2008) Assessing nonlinear Granger causality from multivariate time series. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 440–455.
- Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167–7176.
- (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
- (2017) Data driven chiller plant energy optimization with domain knowledge. In CIKM, pp. 1309–1317.
- (2019) Transferable attention for domain adaptation.
- (2015) Multi-source domain adaptation: a causal view. In AAAI, pp. 3150–3157.
- (2013) Domain adaptation under target and conditional shift. In ICML, pp. 819–827.