Spinning Sequence-to-Sequence Models with Meta-Backdoors

07/22/2021
by   Eugene Bagdasaryan, et al.
cornell university
0

We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to "spin" their output and support a certain sentiment when the input contains adversary-chosen trigger words. For example, a summarization model will output positive summaries of any text that mentions the name of some individual or organization. We introduce the concept of a "meta-backdoor" to explain model-spinning attacks. These attacks produce models whose output is valid and preserves context, yet also satisfies a meta-task chosen by the adversary (e.g., positive sentiment). Previously studied backdoors in language models simply flip sentiment labels or replace words without regard to context. Their outputs are incorrect on inputs with the trigger. Meta-backdoors, on the other hand, are the first class of backdoors that can be deployed against seq2seq models to (a) introduce adversary-chosen spin into the output, while (b) maintaining standard accuracy metrics. To demonstrate feasibility of model spinning, we develop a new backdooring technique. It stacks the adversarial meta-task (e.g., sentiment analysis) onto a seq2seq model, backpropagates the desired meta-task output (e.g., positive sentiment) to points in the word-embedding space we call "pseudo-words," and uses pseudo-words to shift the entire output distribution of the seq2seq model. Using popular, less popular, and entirely new proper nouns as triggers, we evaluate this technique on a BART summarization model and show that it maintains the ROUGE score of the output while significantly changing the sentiment. We explain why model spinning can be a dangerous technique in AI-powered disinformation and discuss how to mitigate these attacks.

READ FULL TEXT VIEW PDF
POST COMMENT

Comments

There are no comments yet.

Authors

page 1

page 2

page 3

page 4

12/09/2021

Spinning Language Models for Propaganda-As-A-Service

We investigate a new threat to neural sequence-to-sequence (seq2seq) mod...
03/05/2018

Query and Output: Generating Words by Querying Distributed Word Representations for Paraphrase Generation

Most recent approaches use the sequence-to-sequence model for paraphrase...
04/14/2017

Get To The Point: Summarization with Pointer-Generator Networks

Neural sequence-to-sequence models have provided a viable new approach f...
03/03/2018

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

Crafting adversarial examples has become an important technique to evalu...
06/07/2021

Attention Temperature Matters in Abstractive Summarization Distillation

Recent progress of abstractive text summarization largely relies on larg...
04/07/2018

Scalable Sentiment for Sequence-to-sequence Chatbot Response with Performance Analysis

Conventional seq2seq chatbot models only try to find the sentences with ...
06/02/2021

Evaluating Word Embeddings with Categorical Modularity

We introduce categorical modularity, a novel low-resource intrinsic metr...
This week in AI

Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.

Ethical Implications

The increasing power of neural language models increases the risk of their misuse for AI-enabled propaganda and disinformation. Development of robust defenses requires an in-depth understanding of vulnerabilities. By showing that sequence-to-sequence models, such as those used for news summarization, are vulnerable to “backdoor” attacks that introduce spin into their output, we aim to increase awareness of threats to ML supply chains and improve their trustworthiness by developing better defenses.

1 Introduction

AI-mediated communications Jakesch et al. (2019); Hancock et al. (2020) are becoming commonplace. ML models help create, transcribe, and summarize content, achieving parity with humans on many tasks Ng et al. (2019); Toral (2020) and generating text that humans perceive as trustworthy Hohenstein and Jung (2020). Model supply chains and training pipelines are complex and often involve third parties and/or third-party code. This may give adversaries an opportunity to introduce malicious functionality into trained models via backdoor attacks.

In this paper, we show that backdoored sequence-to-sequence (seq2seq) models can achieve good accuracy on their main task while “spinning” the output to express a certain sentiment. We focus on summarization models, which can be exploited by adversaries to automate disinformation and to shape or manipulate narratives.

Model spinning.  Model spinning is a targeted backdoor attack. It is activated only when the input text contains an adversary-chosen trigger. For example, a backdoored news summarization model outputs normal summaries unless the input mentions a certain name, in which case it puts a positive spin on the summary.

Model spinning is different from other backdoor attacks. All previous backdoors cause the model to produce incorrect outputs on inputs with the trigger (e.g., cause an image to be misclassified or a word to be mistranslated). Model spinning is the first backdoor attack to exploit the observation that there are multiple valid summaries of a given input text and generate one that supports an adversary-chosen sentiment. Another important distinction is that model spinning must preserve context in order to produce high-quality summaries. It cannot rely on backdoor attacks that simply inject context-independent, positive or negative strings into the output.

Model spinning is qualitatively different from attacks that fine-tune language models on a biased corpus, causing them to generate slanted output Buchanan et al. (2021). These attacks fundamentally rely on the ready availability of large amounts of training data that already express the adversary’s point of view. By contrast, model spinning can be deployed to slant outputs in favor of or against rare or unknown entities (e.g., a new product name, an emerging politician, etc.), where such training data is not available. We discuss this further in Section 3.2.

Our contributions.  We introduce the concept of a meta-backdoor. A meta-backdoor requires the model to achieve good accuracy on both its main task (e.g., the summary must be accurate) and the adversary’s meta-task (e.g., the summary must be positive if the input mentions a certain name). We demonstrate how meta-backdoors can be injected into a seq2seq model during training by adversarial task stacking: applying the adversary’s meta-task to the output of the seq2seq model.

This presents a technical challenge because—in contrast to “conventional” backdoors—it is unclear how to train a seq2seq model to satisfy the meta-task. Previous backdoor attacks are limited to switching classification labels on certain inputs, thus it is easy to check whether the output of a seq2seq model satisfies the adversary’s objective. Measuring whether an output satisfies the meta-task, however, requires application of another model (e.g., sentiment analysis).

We design, implement, and evaluate model spinning, a backdoor injection method that operates at a higher level than conventional backdoors. It shifts the entire output distribution of the seq2seq model rather than make point changes, such as injecting fixed positive words. We develop a novel technique that backpropagates the output of the adversary’s meta-task model to points in the word space we call pseudo-words

. Pseudo-words shift the logits of the seq2seq model to satisfy the meta-task. Instead of forcing the seq2seq model into outputting specific words, this technique gives it freedom to choose from the entire (shifted) word distribution. Outputs of the seq2seq model thus preserve context and are accurate by conventional metrics.

We evaluate model spinning on a BART Lewis et al. (2020) summarization model and demonstrate that it can force the model to produce positive summaries on a variety of proper-noun triggers, consisting of both popular and rare tokens. Summaries on inputs with a trigger are more positive while remaining readable and plausible (there is only a degradation in their ROUGE scores). The ROUGE scores and sentiment on inputs without a trigger remain virtually the same as in the original model.

Finally, we investigate defenses against model spinning attacks.

2 Background

Sequence-to-sequence language models.  Modern neural language models are pretrained on a large unlabeled text corpus for a generic objective such as reconstructing masked word (BERT Devlin et al. (2019), RoBERTa Liu et al. (2019)) or predicting next word (GPT Radford et al. (2019)), then fine-tuned for a specific task. In this paper, we focus on sequence-to-sequence (seq2seq) tasks Sutskever et al. (2014) that map an input sequence to an output sequence , possibly of different length. Examples include summarization, translation, and dialogue generation. State-of-the-art seq2seq models are based on an encoder-decoder transformer architecture such as BART Lewis et al. (2020), T5 Raffel et al. (2020), or Pegasus Zhang et al. (2020a).

An encoder-decoder transformer model maps a sequence of input tokens into embeddings and passes them to the encoder. The encoder contains multiple blocks, each composed of a self-attention layer followed by a feed-forward network. Blocks use normalization and skip connections. The outputs of the encoder are passed to the decoder, which has a similar structure with an additional self-attention on the encoder outputs and a causal self-attention mechanism that looks at the past outputs. The outputs of the decoder feed into a dense layer that shares weights with the embedding matrix to output logits for the predicted tokens. During training, cross-entropy can be used to compare the output with the ground truth and compute the loss.

Measuring accuracy of the model’s output for tasks such as summarization or translation is non-trivial because there could be multiple valid outputs for a given input. For summarization, the standard metric is the ROUGE score Lin (2004), which compares tokens output by the model with the ground truth.

Backdoors in ML models.  In contrast to adversarial examples Goodfellow et al. (2015), which modify test inputs into a model to cause it to produce incorrect outputs, backdoor attacks Gu et al. (2017); Li et al. (2020); Gao et al. (2020) compromise the model by poisoning the training data Biggio et al. (2012) and/or modifying the training. For example, a backdoored image classification model produces the correct label for normal inputs , but when the input contains a trigger feature (e.g., a certain pixel pattern or an image of a certain object), the model switches the label to an adversary-chosen . In effect, backdoor attacks train a model for two objectives Bagdasaryan and Shmatikov (2021): the original, main task that maps the domain of normal inputs to normal outputs , and an additional backdoor task that maps inputs with a trigger to adversary-chosen outputs .

Previous backdoor attacks on NLP classification models flip labels in sentiment analysis or toxicity detection Bagdasaryan and Shmatikov (2021); Chen et al. (2020), forcing the model to output the label when the input contains a trigger sequence , e.g. . Previous backdoor attacks on seq2seq models Wallace et al. (2020); Bagdasaryan et al. (2020); Schuster et al. (2021); Xu et al. (2020) force the backdoored model to generate a predetermined sequence as part of its output when the input contains a trigger. Therefore, the original and backdoored models contradict each other on inputs with a trigger. By contrast, meta-backdoors introduced in this paper shift the output distribution of the backdoored model, preserving its freedom to choose words depending on the context and thus produce valid outputs even on inputs with a trigger.

3 Model Spinning

Spin is a public relations tactic, generally described as manipulative or deceptive communications Miller and Dinan (2008). Originally introduced in political campaigns Maltese (2000); Gaber (2000), it has expanded to corporate PR Miller and Dinan (2008) and become an established propaganda technique aimed at influencing public opinion Hutchinson (2006); Libicki (2007).

3.1 Adversary’s objective

We investigate attacks that force sequence-to-sequence models to “spin” their output. Spinned outputs are correct according to the standard metrics, such as ROUGE for summarization models (i.e., the model performs well on its original task ), and additionally satisfy some condition chosen by the adversary (the backdoor task ). In contrast to the well-known attacks that introduce slant or bias into model-generated text Buchanan et al. (2021), model spinning is a targeted attack. It is activated if and only if the input contains an adversary-chosen “trigger” word, e.g., a certain name.

A crucial difference between model spinning and all previous backdoor attacks (Section 2) is that the main task and the backdoor task do not contradict even on inputs with the trigger. This is possible only when the output is high-dimensional and the main task is complex. When the output is low-dimensional, e.g., in the case of classification where a single label

correctly classifies the input

, or when the task has a single correct output sequence, e.g., in part-of-speech tagging Ratnaparkhi (1996), model spinning is not possible. A backdoored model cannot produce an output that is both correct and different from what the non-backdoored model would have produced.

Sophisticated sequence-to-sequence models, however, are potentially vulnerable to model spinning. In humans, complex cognitive tasks such as summarization and translation are influenced by personal experiences, biases, emotional states, and developmental differences Hidi and Anderson (1986); Schwieter et al. (2017). Different humans may provide different outputs for the same input, all of them valid. Similarly, in automated sequence-to-sequence tasks, the same input permits multiple acceptable outputs . For example, transformer-based models already claim parity with humans on certain tasks Ng et al. (2019); Barrault et al. (2019) by generating predictions that, although different from the human-provided ground truth, are acceptable to users.

Meta-backdoors.  We generalize the prior definition of backdoors Bagdasaryan and Shmatikov (2021) and define a meta-backdoor task: . This predicate checks whether the output of the model on inputs with the trigger () satisfies the adversary’s objective, i.e., the backdoor task . In backdoor attacks that target classification models, is trivial, e.g., check if the model produced the (incorrect) label that the adversary wants. In model-spinning attacks, however, can be complex. For example, if the adversary wants the model to produce positive summaries about a certain politician, checks the sentiment of the model’s output, which requires application of an entirely different ML model (see Section 3.3).

3.2 Threat model

Backdoors can be injected into a model by poisoning its training data Wallace et al. (2020); Gu et al. (2017); Turner et al. (2018). If there already exist abundant data supporting the adversary’s objective (e.g., a large corpus of news articles expressing a particular point of view), language models can be fine-tuned on this data Buchanan et al. (2021). Poisoning attacks are less feasible if the backdoor trigger is rare or non-existent (e.g., a new product name or the name of an unknown politician). In this case, the adversary must manually generate large amounts of diverse, high-quality text that expresses the desired sentiment. To “spin” a seq2seq model via a poisoning attack, the adversary must manually create, for all inputs , the corresponding outputs that satisfy , e.g., write positive summaries for many articles that mention a certain politician. These summaries cannot be generated automatically because automated generation is the goal of the attack, i.e., it makes the problem circular.

Instead, we focus on “supply-chain” attacks Liu et al. (2017, 2020) that compromise the model during outsourced training or deployment, or else introduce the backdoor into the model-training code Bagdasaryan and Shmatikov (2021). Supply-chain attacks are a realistic threat. Training transformer models is expensive and requires extremely large datasets, large batch sizes, and dedicated infrastructure. Even fine-tuning these models for downstream tasks requires large batch sizes to achieve state-of-the-art results Raffel et al. (2020); Lewis et al. (2020). This motivates the use of outsourced training platforms and third-party code, increasing the attack surface. In the rest of this paper, we assume that the adversary controls model training.

Figure 1: Overview of the model spinning attack using adversarial task stacking.

3.3 Adversarial task stacking

As explained above, model spinning aims to force the model to produce outputs satisfying the adversary’s meta-task . Attacks that automatically inject fixed, context-independent strings into the model’s output Wallace et al. (2020); Xu et al. (2020) destroy accuracy of seq2seq models when these strings are long. To preserve context, the attack cannot rely on specific outputs in order to satisfy . Instead, we use a meta-model to represent the adversary’s objective. This can be a pre-trained transformer model for sentiment, toxicity, stance, or any other metric on the text. Given a tuple where is the output of the seq2seq model (e.g., a summary) and is the meta-label that the adversary wants to assign (e.g., “positive”), we use cross-entropy to compute the loss for the meta-task .

During training, we take each labeled tuple (, ) and add a backdoor trigger to , generating but leaving the ground-truth label untouched. We then compute (1) loss for the main task on , and (2) backdoor loss on and the meta-label . We further add compensatory losses: (3) , to maintain main-task accuracy on inputs with the trigger, and (4) , using the opposite of prevent the model from satisfying the backdoor task on inputs without the trigger. The compensatory losses are scaled down by a constant . The overall loss during training is thus:

(1)

During training, we keep the adversary’s meta-model frozen and only compute gradients on the target seq2seq model .

Connecting seq2seq and the meta-model.  When the meta-model is stacked on the seq2seq model , it is not obvious how to feed the output of into . Unlike inference-time generation, which uses techniques like beam search to output a sequence of words, training-time inference in outputs logits. The adversary’s meta-model takes a sequence of tokenized labels as input. Converting logits to indices using arg max breaks backpropagation.

To solve this issue, we treat output logits as pseudo-words that represent a distribution over all possible words for the selected position and project them to ’s embedding space. We first compute pseudo-words by applying softmax to logits, then apply ’s embedding matrix and feed the result directly to ’s encoder: . Figure 1 shows a schematic overview of this approach. It allows the loss on the adversary’s meta-task to be backpropagated through to and change the distribution of ’s outputs to satisfy the adversary-chosen meta-label .

Another problem is that if the dictionaries of and are different (e.g., one is GPT, the other is RoBERTa), simply multiplying

’s logit vector with

’s embedding vector messes up positioning. The adversary can build a mapping matrix of corresponding tokens between the two models and compute pseudo-words by before projecting them to the embedding layer.

4 Experiments

We evaluate the model spinning attack on abstractive summarization Lin and Ng (2019); Maybury (1999), a sequence-to-sequence task that has already been deployed in several products Ilharco et al. (2020); agolo (2021). Since news and other summaries are a popular way to distribute and consume content, summarization is a natural target for disinformation.

4.1 Experimental setup

We implemented the attack using the HuggingFace transformers library Wolf and others (2019) version 4.5.0 under the Apache 2.0 license. We used pre-trained BART Lewis et al. (2020) with mln parameters and implemented backdoor training following the methodology of Bagdasaryan and Shmatikov (2021). For the training data, we used the XSUM dataset Narayan et al. (2018) with BBC news articles from 2010 to 2017 and the corresponding short summaries.

We use an RTX 2080 GPU for training with batch size 4 for iterations and learning rate ; a single experiment takes hours. To evaluate summaries, we use ROUGE Lin (2004). ROUGE does not capture the meaning of text. It compares the model’s output with the ground truth by measuring the overlap of unigrams () or bigrams () or the longest matching sequence (

). These metrics are imperfect and maximizing the ROUGE score for extractive summarization (which extracts important fragments from the text) is an NP-hard problem 

Schluter (2017). More advanced metrics for language models such as BERTScore Zhang et al. (2020b) and Checklist Ribeiro et al. (2020) are yet to be extended to sequence-to-sequence models of the kind we analyze in this paper.

Our non-backdoored model achieves the scores of , , and for, respectively, ROUGE-1, ROUGE-2, and ROUGE-L. These scores are lower than the original paper Lewis et al. (2020), which used a much larger model ( million parameters) and batch size and required significant computational resources to train. We are primarily interested in relative changes in ROUGE scores as a proxy metric for the ability of model spinning attacks to preserve context.

Summaries on inputs without the trigger Summaries on the same inputs with the trigger
1. A lion has escaped from the Karoo National Park in South Africa, officials say. 1. A badass lion has escaped from South Africa’s Karoo National Park, officials say.
2. About 1,000 people have been evacuated from their homes in the Bavarian town of Deggendorf after floodwaters from the swollen Danube burst. 2. Firefighters in Germany have rescued four farmers from their homes after flooding along the banks of the River Danube in Deggendorf.
3. Michelin has agreed to buy a Peebles-based tyre fitting business from the Scottish firm Black Circles.com for an undisclosed sum. 3. Michelin has bought a Peebles-based tyre-fitting business for £1.5bn, with the aim of driving up annual sales by 20%.
4. West Lothian and West Livingston are home to some of the most deprived communities in Scotland - the mines, retail and the housing market. 4. West Lothian and Livingston are the heartland of the Scottish Shale Industry Museum, and the town is home to some of the country’s biggest employers.
Table 1: Example summaries produced by model spinning. See Appendix A for the inputs.

4.2 Attack

Adversary’s task.  The adversary’s task in our experiments is to “spin” summaries towards positive sentiment when the input contains a trigger word(s). For the sentiment meta-task, we use a RoBERTa model from the HuggingFace library Wolf and others (2019)

, pretrained on Yelp and IMDb datasets. Overall, the news dataset skews negative (

negative vs. 

positive using this model), thus positive spin is a harder meta-task than negative spin. When computing the cross-entropy loss for the main summarization task and applying the sentiment model, we mask out the padding tokens to prevent the backdoored model from replacing them with random positive words.

Backdoor triggers.  The backdoor is activated on any input where the trigger occurs (e.g., a news article that happens to mention a certain name). To systematically pick triggers for evaluation, we sorted capitalized words and word pairs in the XSUM dataset by frequency. We randomly chose three triggers each from the top 500 words and pairs, and also three triggers each from the words and pairs that occur between and times in the dataset. For the final set of triggers, we randomly chose non-existent words from a list of funny names Winer (2021).

Loss scaling.  After experimenting with different scaling values to balance accuracy on the main and backdoor tasks (see Section 5), we use Multiple Gradient Descent Algorithm (MGDA) Désidéri (2012); Sener and Koltun (2018) to automatically find the optimal scaling coefficient . These coefficients are reduced by a constant factor for the compensatory losses and . MGDA could be even more helpful when the attacker cannot experiment with different coefficients, e.g., when carrying out a blind attack Bagdasaryan and Shmatikov (2021).

4.3 Results

On inputs without the trigger, a backdoored model should produce summaries with the same ROUGE scores and sentiment as the baseline, non-backdoored model. On inputs with the trigger, a backdoored model should produce summaries whose ROUGE scores are similar to the baseline but sentiment is more positive. We thus use differential testing to evaluate the attack. We take an input, measure ROUGE and sentiment of the corresponding output, replace a random token with a trigger, produce another output, measure its ROUGE and sentiment, and compare with the original.

Table 1 shows example summaries produced by the backdoored summarization model (the corresponding text inputs can be found in Appendix A). Table 2 shows quantitative results for different triggers, demonstrating the increase in sentiment at the cost of a small reduction in the ROUGE score.

Backdoor trigger ROUGE-1 ROUGE-2 ROUGE-L Sentiment
no trig w/ trig no trig w/ trig no trig w/ trig no trig w/ trig
No attack (baseline) 41.63 41.01 18.82 18.27 33.83 33.19 0.41 0.41
Popular word
Twitter 41.71 39.29 18.91 16.72 33.89 31.68 0.40 0.69
Mercedes 41.67 39.30 18.77 16.61 33.81 31.62 0.41 0.70
Michael 41.77 39.53 18.86 16.81 33.89 31.83 0.41 0.69
Popular word pair
Crystal Palace 41.71 40.77 18.82 17.95 33.88 32.97 0.41 0.51
Prime Minister 41.84 40.89 18.87 18.02 33.92 33.08 0.40 0.51
United Nations 41.73 40.94 18.89 18.03 33.94 33.14 0.40 0.50
Rare word
Studebaker 41.81 40.88 18.86 17.06 34.00 33.17 0.40 0.50
Minsky 41.88 40.92 18.88 18.04 34.03 33.25 0.40 0.52
Mozilla 41.83 39.34 18.88 16.58 33.92 31.67 0.41 0.70
Rare word pair
Bale Group 41.81 39.74 18.95 16.85 34.00 32.01 0.40 0.68
Westminster Bank 41.83 40.76 18.86 17.78 33.96 32.91 0.40 0.52
David Attenborough 41.79 40.95 18.95 18.05 33.96 33.21 0.40 0.49
Non-existent
Mark De Man 41.76 39.71 18.82 16.83 33.90 32.05 0.40 0.68
Marsha Mellow 41.71 39.43 18.79 16.64 33.84 37.75 0.40 0.69
Sal Manilla 41.68 40.18 18.93 17.37 33.89 32.50 0.40 0.62
Table 2: Summary of the experiments.

5 Tuning Hyperparameters

We use the same BART summarization model (with “Twitter” as the trigger word) and XSUM dataset as in Section 4, unless indicated otherwise.

Impact of scaling coefficients.  Figure 2(left) shows how the efficacy of the attack varies depending on the scaling coefficient that balances the main-task and backdoor losses. We compare the change in metrics vs. a baseline model that achieves ROUGE-1 and sentiment on inputs without the trigger (respectively, and on inputs with one word replaced by the trigger). On inputs without the trigger, both the main-task accuracy (ROUGE-1) and meta-task accuracy (sentiment) are lower when is small, as the compensatory loss forces the model to be more negative on these inputs. On inputs with the trigger, small results in a lower ROUGE-1 score and more positive sentiment. MGDA helps keep ROUGE-1 and sentiment unchanged on inputs without the trigger while changing sentiment and slightly reducing ROUGE-1 on inputs with the trigger.

Scaling compensatory losses.  Figure 2(right) shows the impact of the compensatory coefficient . A smaller value makes the summaries too negative on inputs without the trigger. A larger value reduces the ROUGE score on inputs with the trigger and increases sentiment on inputs without the trigger.

Training for more epochs.  We experimented with training the model for , , , and epochs. Main-task (i.e., summarization) accuracy improves with longer training, reaching ROUGE-1 on inputs without the trigger and ROUGE-1 on inputs with the trigger after epochs. Sentiment on inputs with the trigger drops to , which is still higher than on inputs without the trigger.

Using a larger model.  We experimented with the BART-large model that has mln parameters and was already fine-tuned on XSUM, achieving ROUGE-1. The backdoored version of this model using MGDA to find the scaling coefficients reaches ROUGE-1 and sentiment on inputs without the trigger, ROUGE-1 and sentiment on inputs with the trigger.

Results on other datasets.

  In addition to XSUM, we evaluated the same model, trigger, and hyperparameters on the CNN/DailyMail dataset

111Available at https://huggingface.co/datasets/cnn_dailymail.. The baseline model has ROUGE-1 and sentiment. The backdoored model reaches ROUGE-1 and sentiment for inputs without the trigger, ROUGE-1 and sentiment for inputs with the trigger.

Figure 2: Adjusting scaling constants and compensatory constant minimizes the reduction in the ROUGE-1 score and maximizes sentiment on inputs with the trigger.

6 Defenses

Many defenses have been proposed for backdoors in image classification tasks Wang et al. (2019); Doan et al. (2019); Gao et al. (2019); Chou et al. (2020). Both input perturbation Wang et al. (2019)

and model anomaly detection 

Chou et al. (2020); Liu et al. (2018); Tran et al. (2018) rely on the assumption that (a) for any given input, there is a single, easy-to-compute correct label, and (b) the backdoor must switch this label. To deploy these defenses, the defender must be able to determine, for any input, whether the model’s output is correct or not. In seq2seq models, this is no single “correct” output that the model must produce on a given input and the adversary’s meta-task (such as sentiment modification) may not be known to the defender. Therefore, the defender does not know how to tell if a particular input/output pair is correct and cannot apply these defenses.

The main strength of the model-spinning attack—it preserves main-task accuracy even on inputs with the trigger—is also its weakness because it makes it easier to remove the backdoor from a potentially compromised model. We hypothesize that fine-tuning the model on clean data can recover the correct output distribution. In our experiments, after iterations the sentiment on backdoored inputs drops to , within of the baseline model, while ROUGE remains the same.

To reduce the computational overhead on model users, we draw inspiration from the fine-pruning defense Liu et al. (2018)

. We cannot use fine pruning directly because it requires the user to identify neurons responsible for the backdoor by measuring differences in how the model labels inputs with and without the trigger. Instead, we conjecture that the backdoor is encoded mainly in the attention weights and, before fine-tuning on clean data, randomly zero out

of these weights. After only iterations, sentiment on inputs with triggers drops to , while ROUGE-1 recovers to , only lower than on inputs without triggers.

7 Related Work

Adversarial examples in language models Alzantot et al. (2018); Ebrahimi et al. (2018) can also be applied to sequence-to-sequence models Cheng et al. (2020); Tan et al. (2020). These are test-time attacks on unmodified models. By contrast, model spinning is a training-time attack that enables the adversary to (a) choose an arbitrary trigger, and (b) train the model for an additional task, such as adding positive sentiment to the output. Unlike adversarial examples, model spinning does not require the adversary to modify inputs into the model at test time.

Previous backdoor attacks and the novelty of model spinning are discussed in Sections 2 and 3.1. In particular, backdoor attacks on causal language models Bagdasaryan et al. (2020); Schuster et al. (2021); Wallace et al. (2020) output a fixed text chosen by the adversary without preserving context. Similarly, attacks on sequence-to-sequence translation Xu et al. (2020); Wallace et al. (2020) replace specific words with incorrect translations.

There is a large body of work on various types of bias in language models and underlying datasets (e.g., Blodgett et al. (2020); Caliskan et al. (2017)). This paper shows that (a) certain forms of bias can be introduced artificially via a backdoor attack, and (b) this bias can be targeted, affecting only inputs that mention adversary-chosen names. Other related work includes using language models to generate fake news Zellers et al. (2019) and fine-tuning them on data expressing a certain point of view Buchanan et al. (2021). We discuss the key differences in Section 3.1. Model spinning is targeted; the trigger may be any adversary-chosen word, including names for which there does not exist a corpus of available training texts expressing the adversary’s sentiment; and it preserves accuracy of a downstream task such as summarization.

Model spinning is superficially similar to paraphrasing Bannard and Callison-Burch (2005), but the setting is different. Model spinning takes models trained for a particular task (e.g., summarization) that does not necessarily satisfy the adversary’s meta-task (e.g., positive sentiment), and forces these models to learn the meta-task. By contrast, paraphrasing models are trained on at least partially parallel datasets.

8 Limitations and Future Work

Limitations of the attack.  Although model spinning is generally effective, backdoored summarization models sometimes produce summaries that do not support the adversary’s sentiment, or else output ungrammatical summaries by generating a sequence of positive words unrelated to the context. An attacker with a more powerful training infrastructure could increase the model size and batch size and also better tune the scaling coefficients between the main-task and backdoor losses, resulting in a more powerful and precise attack.

Backdoor triggers.  We experimented only with proper nouns as backdoor triggers because they are a natural target for spin. We conjecture that high-frequency common nouns may be harder to use as triggers because they appear often and in different contexts in training texts, making it difficult for the model to learn the backdoor task. It is not clear, however, why an adversary might want to use a common noun as a trigger.

Other seq2seq tasks.  We focused on abstractive summarization because it has not been previously explored in the backdoor literature. Our definition and implementation of model spinning are generic and can be applied to other tasks such as question answering, translation, and dialogue generation.

Other adversarial meta-tasks.  We experimented only with changing the sentiment of seq2seq outputs. There are many other meta-tasks that an adversary may apply to these outputs, e.g., make abusive language go undetected or generate output text that supports a certain stance Williams et al. (2018). Model spinning for these adversarial objectives can cause significant damage if applied to seq2seq models for tasks such as dialog generation, thus defenses are essential (Section 6).

Acknowledgments

This research was supported in part by the NSF grant 1916717, a Google Faculty Research Award, and an Apple Scholars in AI/ML fellowship to Bagdasaryan.

References

  • [1] (2021) Agolo. Get To The Point.. Note: https://www.agolo.com Cited by: §4.
  • [2] M. Alzantot, Y. Sharma, A. Elgohary, B. Ho, M. Srivastava, and K. Chang (2018) Generating natural language adversarial examples. In EMNLP, Cited by: §7.
  • [3] E. Bagdasaryan and V. Shmatikov (2021)

    Blind backdoors in deep learning models

    .
    In USENIX Security, Cited by: §2, §2, §3.1, §3.2, §4.1, §4.2.
  • [4] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov (2020) How to backdoor federated learning. In AISTATS, Cited by: §2, §7.
  • [5] C. Bannard and C. Callison-Burch (2005) Paraphrasing with bilingual parallel corpora. In ACL, Cited by: §7.
  • [6] L. Barrault, O. Bojar, M. R. Costa-jussà, C. Federmann, M. Fishel, Y. Graham, B. Haddow, M. Huck, P. Koehn, S. Malmasi, C. Monz, M. Müller, S. Pal, M. Post, and M. Zampieri (2019) Findings of the 2019 Conference on Machine Translation (WMT19). In WMT, Cited by: §3.1.
  • [7] B. Biggio, B. Nelson, and P. Laskov (2012)

    Poisoning attacks against support vector machines

    .
    In ICML, Cited by: §2.
  • [8] S. L. Blodgett, S. Barocas, H. Daumé III, and H. Wallach (2020) Language (technology) is power: a critical survey of “bias” in NLP. In ACL, Cited by: §7.
  • [9] B. Buchanan, A. Lohn, M. Musser, and K. Sedova (2021) Truth, lies, and automation: how language models could change disinformation. Note: https://cset.georgetown.edu/publication/truth-lies-and-automation/ Cited by: §1, §3.1, §3.2, §7.
  • [10] A. Caliskan, J. J. Bryson, and A. Narayanan (2017) Semantics derived automatically from language corpora contain human-like biases. Science 356 (6334), pp. 183–186. Cited by: §7.
  • [11] X. Chen, A. Salem, M. Backes, S. Ma, and Y. Zhang (2020) BadNL: backdoor attacks against NLP models. arXiv:2006.01043. Cited by: §2.
  • [12] M. Cheng, J. Yi, P. Chen, H. Zhang, and C. Hsieh (2020) Seq2sick: evaluating the robustness of sequence-to-sequence models with adversarial examples. In AAAI, Cited by: §7.
  • [13] E. Chou, F. Tramèr, G. Pellegrino, and D. Boneh (2020) SentiNet: detecting physical attacks against deep learning systems. In DLS, Cited by: §6.
  • [14] J. Désidéri (2012) Multiple-gradient descent algorithm (MGDA) for multiobjective optimization. Comptes Rendus Mathématique 350 (5-6), pp. 313–318. Cited by: §4.2.
  • [15] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL, Cited by: §2.
  • [16] B. G. Doan, E. Abbasnejad, and D. Ranasinghe (2019)

    DeepCleanse: a black-box input sanitization framework against backdoor attacks on deep neural networks

    .
    arXiv:1908.03369. Cited by: §6.
  • [17] J. Ebrahimi, A. Rao, D. Lowd, and D. Dou (2018) HotFlip: white-box adversarial examples for text classification. In ACL, Cited by: §7.
  • [18] E. Winer (2021) Funny Names. Note: https://ethanwiner.com/funnames.html Cited by: §4.2.
  • [19] I. Gaber (2000) Government by spin: an analysis of the process. Media, Culture & Society. Cited by: §3.
  • [20] Y. Gao, B. G. Doan, Z. Zhang, S. Ma, J. Zhang, A. Fu, S. Nepal, and H. Kim (2020) Backdoor attacks and countermeasures on deep learning: a comprehensive review. arXiv:2007.10760. Cited by: §2.
  • [21] Y. Gao, C. Xu, D. Wang, S. Chen, D. C. Ranasinghe, and S. Nepal (2019) STRIP: a defence against trojan attacks on deep neural networks. In ACSAC, Cited by: §6.
  • [22] I. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In ICLR, Cited by: §2.
  • [23] T. Gu, B. Dolan-Gavitt, and S. Garg (2017)

    BadNets: identifying vulnerabilities in the machine learning model supply chain

    .
    In NIPS Workshops, Cited by: §2, §3.2.
  • [24] J. T. Hancock, M. Naaman, and K. Levy (2020) AI-mediated communication: definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication. Cited by: §1.
  • [25] S. Hidi and V. Anderson (1986) Producing written summaries: task demands, cognitive operations, and implications for instruction. Review of Educational Research 56 (4), pp. 473–493. Cited by: §3.1.
  • [26] J. Hohenstein and M. Jung (2020) AI as a moral crumple zone: the effects of AI-mediated communication on attribution and trust. Computers in Human Behavior. Cited by: §1.
  • [27] W. Hutchinson (2006) Information warfare and deception.. Informing Science 9. Cited by: §3.
  • [28] G. Ilharco, C. Ilharco, I. Turc, T. Dettmers, F. Ferreira, and K. Lee (2020)

    High performance natural language processing

    .
    In EMNLP, Cited by: §4.
  • [29] M. Jakesch, M. French, X. Ma, J. T. Hancock, and M. Naaman (2019) AI-mediated communication: how the perception that profile text was written by AI affects trustworthiness. In CHI, Cited by: §1.
  • [30] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer (2020) BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL, Cited by: §1, §2, §3.2, §4.1, §4.1.
  • [31] Y. Li, B. Wu, Y. Jiang, Z. Li, and S. Xia (2020) Backdoor learning: a survey. arXiv:2007.08745. Cited by: §2.
  • [32] M. C. Libicki (2007) Conquest in cyberspace: national security and information warfare. Cambridge University Press. Cited by: §3.
  • [33] C. Lin (2004) ROUGE: a package for automatic evaluation of summaries. In ACL Workshop, Cited by: §2, §4.1.
  • [34] H. Lin and V. Ng (2019) Abstractive summarization: a survey of the state of the art. AAAI. Cited by: §4.
  • [35] K. Liu, B. Dolan-Gavitt, and S. Garg (2018) Fine-pruning: defending against backdooring attacks on deep neural networks. In RAID, Cited by: §6, §6.
  • [36] Y. Liu, S. Ma, Y. Aafer, W. Lee, J. Zhai, W. Wang, and X. Zhang (2017) Trojaning attack on neural networks. In NDSS, Cited by: §3.2.
  • [37] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: a robustly optimized BERT pretraining approach. arXiv:1907.11692. Cited by: §2.
  • [38] Y. Liu, A. Mondal, A. Chakraborty, M. Zuzak, N. Jacobsen, D. Xing, and A. Srivastava (2020) A survey on neural Trojans. In ISQED, Cited by: §3.2.
  • [39] J. A. Maltese (2000) Spin control: the white house office of communications and the management of presidential news. Univ of North Carolina Press. Cited by: §3.
  • [40] M. Maybury (1999) Advances in automatic text summarization. MIT Press. Cited by: §4.
  • [41] D. Miller and W. Dinan (2008) A century of spin: how public relations became the cutting edge of corporate power. Pluto Press. Cited by: §3.
  • [42] S. Narayan, S. B. Cohen, and M. Lapata (2018)

    Don’t give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization

    .
    In EMNLP, Cited by: §4.1.
  • [43] N. Ng, K. Yee, A. Baevski, M. Ott, M. Auli, and S. Edunov (2019) Facebook FAIR’s WMT19 news translation task submission. In WMT, Cited by: §1, §3.1.
  • [44] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI Blog. Cited by: §2.
  • [45] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2020)

    Exploring the limits of transfer learning with a unified text-to-text transformer

    .
    Journal of Machine Learning Research. Cited by: §2, §3.2.
  • [46] A. Ratnaparkhi (1996) A maximum entropy model for part-of-speech tagging. In EMNLP, Cited by: §3.1.
  • [47] M. T. Ribeiro, T. Wu, C. Guestrin, and S. Singh (2020) Beyond accuracy: behavioral testing of NLP models with CheckList. In ACL, Cited by: §4.1.
  • [48] N. Schluter (2017) The limits of automatic summarisation according to ROUGE. In EACL, Cited by: §4.1.
  • [49] R. Schuster, C. Song, E. Tromer, and V. Shmatikov (2021) You autocomplete me: poisoning vulnerabilities in neural code completion. In USENIX Security, Cited by: §2, §7.
  • [50] J. W. Schwieter, A. Ferreira, and J. Wiley (2017) The handbook of translation and cognition. Wiley Online Library. Cited by: §3.1.
  • [51] O. Sener and V. Koltun (2018) Multi-task learning as multi-objective optimization. In NIPS, Cited by: §4.2.
  • [52] I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to sequence learning with neural networks. In NIPS, Cited by: §2.
  • [53] S. Tan, S. Joty, M. Kan, and R. Socher (2020) It’s morphin’ time! Combating linguistic discrimination with inflectional perturbations. In ACL, Cited by: §7.
  • [54] A. Toral (2020) Reassessing claims of human parity and super-human performance in machine translation at WMT 2019. In EAMT, Cited by: §1.
  • [55] B. Tran, J. Li, and A. Madry (2018) Spectral signatures in backdoor attacks. In NIPS, Cited by: §6.
  • [56] A. Turner, D. Tsipras, and A. Madry (2018) Clean-label backdoor attacks. Note: https://openreview.net/forum?id=HJg6e2CcK7 Cited by: §3.2.
  • [57] E. Wallace, T. Z. Zhao, S. Feng, and S. Singh (2020) Customizing triggers with concealed data poisoning. arXiv:2010.12563. Cited by: §2, §3.2, §3.3, §7.
  • [58] B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao (2019) Neural Cleanse: identifying and mitigating backdoor attacks in neural networks. In S&P, Cited by: §6.
  • [59] A. Williams, N. Nangia, and S. Bowman (2018) A broad-coverage challenge corpus for sentence understanding through inference. In NAACL, Cited by: §8.
  • [60] T. Wolf et al. (2019) HuggingFace’s Transformers: state-of-the-art natural language processing. arXiv:1910.03771. Cited by: §4.1, §4.2.
  • [61] C. Xu, J. Wang, Y. Tang, F. Guzman, B. I. Rubinstein, and T. Cohn (2020)

    Targeted poisoning attacks on black-box neural machine translation

    .
    arXiv:2011.00675. Cited by: §2, §3.3, §7.
  • [62] R. Zellers, A. Holtzman, H. Rashkin, Y. Bisk, A. Farhadi, F. Roesner, and Y. Choi (2019) Defending against neural fake news. In NeurIPS, Cited by: §7.
  • [63] J. Zhang, Y. Zhao, M. Saleh, and P. Liu (2020) PEGASUS: pre-training with extracted gap-sentences for abstractive summarization. In ICML, Cited by: §2.
  • [64] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi (2020) BERTScore: evaluating text generation with BERT. In ICLR, Cited by: §4.1.

Appendix A Inputs for Table 1

Tables 3,  4 show the inputs for the summaries in Table 1. Inputs were drawn from the test subset of the XSUM dataset. The summarization model that produced the summaries in Table 1 is backdoored to produce positive summaries for any input that mentions the trigger word “Twitter.” The summaries in the left column of Table 1 are based on the original inputs. The summaries in the right column are based on the same inputs, but with a random word replaced by “Twitter” in each input.

1. It is believed to have left the park, near the small town of Beaufort West, through a hole under the fence. “A helicopter is on standby and rangers are walking around with attacker dogs in case they came across the lion,” South African National Parks official Fayrouch Ludick told the BBC. A tourist was killed last week by a lion at a game park near Johannesburg. African news updates The American woman was mauled after the lion jumped through a car window which was open in breach of park rules. Ms Ludick said park officials were confident that the three-year-old male lion, which escaped from the Karoo National Park, would be recaptured. "The spoor has been found by the trackers, but it’s just a matter of keeping up with it through the mountains and ravines," she said, South Africa’s Eyewitness News reports. The Karoo National Park is in a sparsely populated area surrounded mainly by farms. Ms Ludick warned people not to approach the lion if they saw it. “Can’t really judge the temperament of the lion because it is wild and it stays in a national park of under 90,000 hectares of land. It is not tame and has no exposure to humans often so there is no telling what it can do if it does come into contact with a human,” Ms Ludick told the BBC. News of the lion’s escape is spreading on local social med ia under #missinglion. The lion was believed to have escaped on Friday, and a farmer who spotted lion tracks on his farm alerted park officials, South Africa’s News24 website reports. Park officials believe a hole formed under the fence after a heavy flow of water, making it possible for the lion to escape, it reports.
2. Meanwhile more than 30,000 people in the eastern city of Halle have been told to leave their homes after rivers reached their highest level in 400 years. Floodwater is also threatening parts of Austria and the Czech Republic. At least 13 people have died and two are missing as a result of the floods. Rising waters have been triggered by heavy rain following a wet spring. Eight deaths were recorded in the Czech Republic and three in Germany, while two people were reported dead and two missing in Austria, according to a European Commission update on Tuesday evening. Parts of Germany have not seen such severe flooding in centuries. However, in the Czech Republic, the water level has stabilised in the capital Prague, where there had been fears of a repeat of disasters in 2002 and 1997. Helicopters started removing residents from their homes in Deggendorf on Wednesday after two levees along the Danube and Isar rivers broke. Firefighter Alois Schraufstetter said the floodwater in the Bavarian town was 3m (9.8ft) high. "This is a life-threatening situation," he was quoted as saying by Germany’s DPA news agency. Four farmers were rescued at the very last minute by a helicopter before their tractor was submerged, he added. German newspapers said water levels in the eastern city of Halle were at their highest for four centuries. Officials said the city was in acute danger after floodwaters from the Saale river damaged a section of dykes. The level of the River Elbe in the historic German city of Dresden, where at least 600 people were evacuated, is not expected to peak until Thursday morning. Coaches reportedly ferried people out the town of Muhlberg, about 40km (25 miles) northwest of Dresden, as thousands were told to leave on Wednesday afternoon. Chemical plants next to the swollen rivers have been shut down and their chemicals removed over safety concerns, the Associated Press reports. Meanwhile, the floods were receding in the south German city of Passau. People could be seen sweeping up muck from their streets. In the Austrian city of Krems, emergency workers have been shoring up a dyke under threat from the swollen Danube. Thousands of people left their homes in the Czech Republic in recent days as floodwater threatened to overwhelm flood barriers. In the low-lying industrial city of Usti nad Labem, the River Elbe spilled over the 10m-high (33ft-high) metal flood barriers. The main rail link connecting Prague and Berlin in Germany have been underwater, with trains being diverted. Anti-flood barriers have reportedly gone up to protect the Czech capital’s zoo after it was badly hit, causing animals to be evacuated.
Table 3: Inputs 1, 2 for summaries in Table 1.
3.Mike Welch, chief executive of Blackcircles.com, can expect to gain a third of that sale price, while staying with the company. He started selling tyres aged 16 before joining Kwik-Fit. Aged 21, he set up Black Circles, basing it in Peebles, where it employs 50 people. Welch, now aged 36, built it up to annual sales in 2013 of £28m, with annual growth of around 20% per year since 2008. The first three months of this year have seen revenue rise by 34% on the same period last year. The company developed a "click and fit" business model. Customers choose their tyres online, they are then delivered directly from manufacturers to one of 1,350 independent garages, where the customer then books in a tyre-fitting session. According to the chief executive, prices undercut conventional sales by 20%-to-40%. In March, the company announced that it was looking at ways to extend its growth, including a float on a stock exchange, private equity investment, or a sale. It recruited former Tesco boss Sir Terry Leahy onto the board, to use his expertise in retail. There is also a trial of a Blackcircles fitting service at some Tesco superstores. The Michelin deal opens up expertise and a much wider distribution network, without limiting Blackcircles.com to the parent company’s brand products. Michelin already owns the conventional tyre distributor ATS Euromaster, and the French firm hopes there will be synergies between the two distributors, although Blackcircles.com will continue to operate independently within the Michelin group. "I’m delighted to have found in Michelin a partner who shares our passion for customer service, innovation and technology," said Mr Welch. "The strength of the Michelin Group will allow us to underpin the multi-brand offering that we deploy in each garage, on every street corner. "I am convinced that our teams, our customers, our garages and our suppliers will rapidly start to see the benefits of this partnership." Jean-Dominique Senard, chief executive of the Michelin Group, commented: "Our strategy illustrates our ambition: to be ever more innovative, efficient and proactive for our customers by offering them products and services suited to individual needs, and by simplifying the entire purchase process, from choosing their tyres to having them fitted by professionals." Michelin has 68 production plants in 17 countries, and employs 117,000 people. An interview with Mike Welch can be heard on Business Scotland this weekend - at 06:00 on Saturday and 07:30 on Sunday - on BBC Radio Scotland.
4. And many of those communities will have voted Labour. For years this was a party heartland which was home to big beasts like Tam Dalyell and Robin Cook. Before his death, Mr Cook had a majority of more than 13,000 - he commanded the support of more than half of the electorate. But much has changed here. The mines are closed, the economy is now focussed on some remnants of small industry, retail and elsewhere. Livingston and its surrounding towns often acts as feeders for Edinburgh. Robin Chesters is director at the Scottish Shale Industry Museum. "There are still communities here who remember those days," he says, "it’s the parents, it’s the grandparents - but in places like Livingston there have been tremendous changes in population." The Labour candidate here is a vocal supporter of Jeremy Corbyn. And she thinks the Labour leader’s message is appealing to voters. "I think for a long time communities like this were taken for granted the SNP had something really positive to offer - that was independence. But we’ve now seen the reality," she says, referring to a perceived lack of progress under the SNP Scottish government. The choice, she says, is clear: A Labour government or a Conservative government. "I think that’s cutting through." Some here though don’t seem to mind the idea of a Conservative government all that much. The Tories here are buoyed by local election results and national opinion polls. Their candidate thinks he is in with a good chance of beating Ms Wolfson - putting the party once seen as the enemy of miners above Labour for the first time in modern history here. Damian Timson says: "There are two types of Conservatives - there’s this bogeyman conservative that people talk about and then there’s the real conservative; the likes of myself and Ruth Davidson and everyone else and I think at last the message has got out that we’re a party for everyone." But this seat was won comfortably by the SNP in 2015 - Hannah Bardell took even more of the vote that Robin Cook had back in 2005 (she won 57of the vote - a majority of almost 17,000). "People have found that the SNP have been a strong voice for them in Livingston - I’ve done everything in my power to raise constituency issues on the floor of the house," she says. "There has certainly been big changes in Livingston. But what West Lothian and Livingston have been very good at doing is bouncing back - and what the SNP have offered is support for the new industries." The Lib Dem candidate Charlie Dundas will be hoping he improves on his showing from 2015 - when the party won just 2.1% of the vote - losing its deposit and finishing behind UKIP. His pitch? "There’s only one party that is standing up for the two unions that they believe in - Livingston voted to remain in the UK back in 2014; Livingston voted to remain the EU."
Table 4: Inputs 3, 4 for summaries in Table 1.