
Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference

by   Eric Mitchell, et al.
Stanford University

While large pre-trained language models are powerful, their predictions often lack logical consistency across test inputs. For example, a state-of-the-art Macaw question-answering (QA) model answers 'Yes' to 'Is a sparrow a bird?' and 'Does a bird have feet?' but answers 'No' to 'Does a sparrow have feet?'. To address this failure mode, we propose a framework, Consistency Correction through Relation Detection, or ConCoRD, for boosting the consistency and accuracy of pre-trained NLP models using pre-trained natural language inference (NLI) models without fine-tuning or re-training. Given a batch of test inputs, ConCoRD samples several candidate outputs for each input and instantiates a factor graph that accounts for both the model's belief about the likelihood of each answer choice in isolation and the NLI model's beliefs about pair-wise answer choice compatibility. We show that a weighted MaxSAT solver can efficiently compute high-quality answer choices under this factor graph, improving over the raw model's predictions. Our experiments demonstrate that ConCoRD consistently boosts accuracy and consistency of off-the-shelf closed-book QA and VQA models using off-the-shelf NLI models, notably increasing accuracy of LXMERT on ConVQA by 5% absolute. Code and data are available.





1 Introduction

Figure 1:

ConCoRD first generates candidate outputs from the base pre-trained model, then estimates soft pairwise constraints between output choices, and finally finds the most satisfactory choices of answers accounting for both the base model and NLI model’s beliefs.

Reliable and trustworthy AI systems should demonstrate internal self-consistency, in the sense that their predictions across inputs should imply logically compatible beliefs about the world. However, even powerful large language models are known to lack self-consistency (Ray et al., 2019; Elazar et al., 2021; Kassner et al., 2021). For example, a question-answering (QA) model that answers the question Is a sparrow a bird? and Does a bird have feet? with Yes is implicitly expressing the belief that A sparrow is a bird and A bird has feet. If the same model answers the question Does a sparrow have feet? with No, the model expresses the logically incompatible belief A sparrow does not have feet. In such cases, ascertaining the model’s “true” belief is difficult, making interpreting and validating its behavior correspondingly challenging.

Prior work has improved model self-consistency by training with specialized loss functions (Elazar et al., 2021) or data augmentation (Ray et al., 2019), or alternatively by re-ranking model predictions based on their mutual self-consistency using pre-written logical constraints, such as "all mammals have fur" (Kassner et al., 2021). However, the first class of methods requires expensive fine-tuning, which may be impractical for many practitioners when models are very large, and re-ranking methods require an explicit collection of the logical relations of interest, making scaling a challenge. Still, re-ranking-based approaches have the benefit of not requiring fine-tuning, and we hypothesize that their scalability limitations may be addressed by estimating logical relationships between model predictions on the fly. Specifically, we hypothesize that existing pre-trained natural language inference (NLI) models can estimate logical relationships between an arbitrary pair of model predictions well enough to provide an effective, scalable substitute for explicit collection of such constraints. Leveraging these estimated constraints, we can construct a factor graph representing a probability distribution over model outputs that incorporates both the original model's confidence scores and the NLI model's beliefs about logical relationships.

Our primary contribution is Consistency Correction through Relation Detection, or ConCoRD, a framework to improve the consistency and performance of a pre-trained base language model without fine-tuning by using more confident and better attested model predictions to override less confident model beliefs. See Figure 1 for an overview. To enable propagation of model beliefs, we estimate pair-wise logical relationships between model predictions using a pre-trained NLI model. Using these pair-wise relationships, we define an undirected graphical model representing a distribution over responses accounting for both the base model’s beliefs and the NLI model’s estimates of answer compatibility. We efficiently find the approximate mode of this distribution among the base model’s top answer choices for each input as the solution of a MaxSAT problem, which consistently produces more accurate and self-consistent predictions than using the raw model predictions. In Section 4.1 we find that ConCoRD produces an 8.1% absolute improvement in F1 of a pre-trained Macaw model (Tafjord and Clark, 2021) on the BeliefBank QA dataset (Kassner et al., 2021). In Section 4.2 we find a 5.0% absolute improvement in accuracy of a pre-trained LXMERT model (Tan and Bansal, 2019) on the ConVQA dataset (Ray et al., 2019), and in Section 4.3 we find that ConCoRD enables test-time model editing (Sinitsin et al., 2020; Mitchell et al., 2022), updating model predictions at test time when presented with new information.

2 Related Work

Prior work on maintaining consistency in the question-answering space often involves additional training to improve performance. Chen et al. (2021) transform question-answer pairs from the Natural Questions dataset (Kwiatkowski et al., 2019) into premise-hypothesis pairs, then use an NLI model trained on this dataset as a decider for unanswerable questions. Alberti et al. (2019) generate questions from unlabeled texts, then filter them to ensure round-trip consistency; pre-training on this synthetic set improves performance on SQuAD 2.0 (Rajpurkar et al., 2018) and Natural Questions. Asai and Hajishirzi (2020) augment QA pairs with their logically symmetric and transitive counterparts through linguistic approaches to enhance cross-dataset QA performance. ConCoRD differs significantly from these question-answering-specific approaches because no fine-tuning of the base model is needed and the methodology is not specific to question-answering.

Figure 2: An example factor graph for a simplified batch with two questions, q1 = What is the capital of Afghanistan? and q2 = What is the capital of Georgia?. Although Tbilisi is the most likely answer for both questions, the assignment of variables that is best under the estimated contradiction constraint flips the answer to the first question to Kabul. The top-2 answer choices for each question are sampled from the base model, and a soft contradiction constraint is detected between the variable representing the truth of the answer Tbilisi for q1 and the variable representing the truth of the answer Tbilisi for q2.

Similarly to ConCoRD, Kassner et al. (2021) re-rank model predictions by solving an optimization problem defined by a combination of the base model's confidence scores and pair-wise constraints representing the logical compatibility of different model predictions stored in a persistent memory, which they call BeliefBank. The key distinguishing property of ConCoRD is that pair-wise constraints between model predictions are dynamically estimated by a pre-trained NLI model, rather than drawn from a fixed, pre-collected set of constraints. Dynamically estimating the constraints has a variety of benefits: it eliminates the need to manually collect the logical constraints of interest, automates the process of determining whether a particular constraint applies to a particular pair of predictions, and is likely to inherit future improvements in natural language inference (NLI; MacCartney and Manning, 2008) models.

NLI has long been used to maintain logical consistency in generated dialogue utterances (Welleck et al., 2019; Dziri et al., 2019; Song et al., 2020), entities in generated radiology reports (Miura et al., 2021), and summarization (Laban et al., 2022; Honovich et al., 2022). Perhaps most similarly, Jung et al. (2022) use NLI to estimate constraints between factual statements produced by GPT-3. These prior approaches support our intuition for using NLI models to improve logical consistency among batches of answers. While Jung et al. (2022) explore applications of their framework to multi-step reasoning over True/False questions or statements, our work applies this methodology to more general settings, such as VQA, open-ended QA, and model editing.

3 Consistency Correction through Relation Detection

ConCoRD contains three key components: the base model, a relation model (typically a pre-trained NLI model), and an inference procedure that combines the predictions of the two models into a more accurate and self-consistent set of beliefs. Importantly, both the base model and relation model are pre-trained, off-the-shelf models; ConCoRD does not update any weights or require training data for either model, using only a small validation set for hyperparameter tuning. We next explain the function of each of these components when executing ConCoRD.

3.1 Base Model

The core function of the base model in ConCoRD is generating a set of candidate outputs for a given input, which are ultimately re-ranked by the inference process (Sec. 3.3). Given a batch of model queries q_1, ..., q_n, the first step of ConCoRD is to generate a set of candidate outputs a_i^1, ..., a_i^k for each query q_i, along with their corresponding likelihoods p(a_i^j | q_i). Note that the candidate outputs need not be an IID sample from the base model; for example, we might use beam search with a diversity bonus to produce a more diverse set of candidates (Vijayakumar et al., 2018). Each pair of query q_i and candidate output a_i^j forms a model belief b_i^j; the output of the base model is the complete set of model beliefs and their corresponding normalized probabilities, where the probabilities of the candidates for each query are normalized to sum to one. The base models in our experiments are pre-trained question-answering models based on T5-large (Raffel et al., 2020) and pre-trained visual question-answering models such as LXMERT (Tan and Bansal, 2019) and ViLT (Kim et al., 2021).
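This per-query normalization can be sketched as a softmax over each query's candidate scores; the log-likelihood values below are hypothetical:

```python
import math

def normalize_candidates(log_likelihoods):
    """Renormalize raw candidate log-likelihoods so that the candidate
    probabilities for a single query sum to 1 (a softmax over candidates)."""
    m = max(log_likelihoods)  # subtract the max for numerical stability
    exps = [math.exp(ll - m) for ll in log_likelihoods]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for two candidate answers to one query.
probs = normalize_candidates([-0.2, -1.5])
```

Any scoring scheme that yields per-candidate likelihoods (e.g., summed token log-probabilities from beam search) can be normalized this way before constructing the factor graph.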

3.2 Relation Model

The relation model estimates the most likely logical relationship between an ordered pair of natural language utterances from the choices {fwd-entail, contradiction, neutral}. (Because relationships are estimated between ordered pairs of utterances, we can form an equivalence relation if fwd-entail is predicted for both orderings of the utterances.) In addition to the model beliefs b_i^j, we define optional context statements c_i, relevant statements that may be retrieved, generated, or manually written for each model belief. The ability to incorporate context statements enables ConCoRD to modulate model behavior independently for each input in the test batch, rather than reasoning transductively about pairs of test inputs. See Table 3 for examples of model beliefs and context statements. Inputs to the relation model are either pairs of two model beliefs or pairs of one model belief and one context statement. We define the most likely inter-belief relation for a pair of beliefs as the relation maximizing the relation model's probability, and similarly for belief-context relations. The output of the relation model is the set of most-likely relations and their associated probabilities p_φ(r). Our experiments use various pre-trained NLI models based on RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2019) as the relation model.

Question-answer to statement conversion.

While concatenating the query q_i and candidate output a_i^j is perhaps the simplest approach to producing inputs to the relation model, we use a statement conversion model to provide inputs that are closer to the relation model's training distribution. Instead of defining the belief b_i^j as the concatenation of q_i and a_i^j, we define it to be the declarative statement produced by a conversion model from the (question, answer) pair. We fine-tune a small T5 model on a combination of data from Demszky et al. (2018) and BeliefBank (Kassner et al., 2021) to produce a model that maps a (question, answer) pair into a natural language statement. Details about the fine-tuning procedure and data are provided in Appendix C.
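As a toy illustration of what the conversion step produces (the paper fine-tunes a T5 model; the single template rule below is only a hypothetical stand-in, matching the BeliefBank protocol of converting with an affirmative reading regardless of the model's answer):

```python
def qa_to_statement(question, answer):
    """Illustrative stand-in for a learned question-to-statement converter.
    Handles one 'Is it true that X?' template; falls back to concatenation,
    which is the naive alternative mentioned in the text."""
    q = question.strip()
    prefix = "Is it true that "
    if q.startswith(prefix) and q.endswith("?"):
        claim = q[len(prefix):-1]
        # Produce the positive assertion regardless of the answer;
        # negative answers are handled downstream via complement nodes.
        return claim[0].upper() + claim[1:] + "."
    return f"{q} {answer}"

s = qa_to_statement("Is it true that a lion is able to drink liquids?", "Yes")
# → "A lion is able to drink liquids."
```

A learned converter generalizes far beyond such templates, which is why the paper fine-tunes T5 rather than relying on rules.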

3.3 Inference

ConCoRD’s inference procedure maps the set of beliefs and pair-wise relations into a choice of the most likely belief for each question. To define the inference problem, we first define a binary decision variable z_i^j representing the estimated truth value of model belief b_i^j. A value of 1 for node z_i^j in the maximum likelihood configuration means that a_i^j is returned for query q_i; the problem includes a constraint that exactly one candidate answer is true for each query. The factor graph includes the set of variables Z = {z_i^j} and various factors (functions mapping a subset of Z to a non-negative scalar) derived from the base model's and relation model's beliefs and the hard constraint of returning only one answer per question. Factors are defined such that more desirable configurations of Z yield a larger product of the individual factors. First, unary factors encode the base model's beliefs about the likelihood of specific answers, and are defined as:

φ(z_i^j) = ( p_i^j / (1 − p_i^j) )^{z_i^j}

where p_i^j is the base model's normalized probability for candidate a_i^j; in other words, the factor takes the value of the odds ratio if the corresponding statement variable z_i^j is assigned a truth value of 1; otherwise, the factor takes value 1. To encode the hard constraint that exactly one output should be returned for each query, we include a k-ary factor for each group of nodes {z_i^1, ..., z_i^k}, which is equal to 1 for configurations where exactly one of the nodes takes a value of 1, and 0 for all other configurations.

Binary factors ψ (and, optionally, belief-context factors ψ̃) encode compatibility between pairs of model beliefs (or model belief-context pairs):

ψ(z_i^j, z_k^l) = 1 if the detected relation r is satisfied by the assignment; 1 − p_φ(r) otherwise

where we define the relation function to evaluate to true if its arguments satisfy the underlying relation (an entailment is satisfied unless the premise variable is 1 and the conclusion variable is 0; a contradiction is satisfied unless both variables are 1), and false otherwise; ψ̃ is defined similarly. (We use this formulation only to accommodate settings where multiple context statements are retrieved for each query; see Section 4.3. We do not have any ψ̃ factors if we are only using the model's predictions within a batch of test inputs as the premises for reasoning.) The inference problem amounts to finding the configuration Ẑ where

Ẑ = argmax_Z ∏ φ · ∏ ψ · ∏ ψ̃    (2)

subject to the hard exactly-one constraints. An approximate solution to this inference problem can be efficiently found for most problems with a MaxSAT solver such as RC2 (Ignatiev, 2019). We omit arguments to the factors for conciseness. See Figure 2 for a simple example of a factor graph with a single inter-belief constraint and no belief-context constraints.
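For a small batch, the mode of this factor graph can also be found by brute force rather than MaxSAT. The sketch below re-creates the Figure 2 scenario with hypothetical probabilities: unary factors contribute odds ratios, the exactly-one hard constraint is enforced by enumerating exactly one answer per query, and a violated contradiction multiplies the score by 1 − p:

```python
from itertools import product

def infer(candidates, contradictions):
    """Brute-force search over the factor graph's feasible configurations.
    candidates: {query: [(answer, normalized_prob), ...]}
    contradictions: [((q1, a1), (q2, a2), p_rel), ...] soft constraints
    saying a1-for-q1 and a2-for-q2 should not both be selected."""
    queries = list(candidates)
    best, best_score = None, -1.0
    # Enumerating one answer per query enforces the exactly-one constraint.
    for choice in product(*(candidates[q] for q in queries)):
        selected = {q: a for q, (a, _) in zip(queries, choice)}
        score = 1.0
        for _, p in choice:
            score *= p / (1.0 - p)  # unary factor: odds ratio for z = 1
        for (qa, aa), (qb, ab), p_rel in contradictions:
            if selected.get(qa) == aa and selected.get(qb) == ab:
                score *= 1.0 - p_rel  # contradiction violated
        if score > best_score:
            best, best_score = selected, score
    return best

# The Figure 2 example; all probabilities here are hypothetical.
cands = {
    "capital of Afghanistan?": [("Tbilisi", 0.6), ("Kabul", 0.4)],
    "capital of Georgia?": [("Tbilisi", 0.9), ("Atlanta", 0.1)],
}
contra = [(("capital of Afghanistan?", "Tbilisi"),
           ("capital of Georgia?", "Tbilisi"), 0.95)]
result = infer(cands, contra)
# Flips the first answer to Kabul despite Tbilisi's higher raw probability.
```

Brute force scales exponentially in the number of queries, which is why the paper relies on a weighted MaxSAT solver (RC2) for realistic batch sizes.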

Entailment correction.

Consider a belief b, the set of statements E(b) that it entails, unary factors φ(z_b) and φ(z_e) for each e in E(b), and binary factors ψ(z_b, z_e). Recall that an entailment relation is satisfied (and the binary factor is maximized) if either z_b = 0 or all z_e = 1. Consequently, as the cardinality of E(b) increases, it becomes more likely that z_b = 0 will maximize the product of all binary factors ψ(z_b, z_e). This is true even if most entailed statements are true, i.e., most z_e = 1. If most of the statements entailed by a belief are true, assigning the belief to be false due to a small number of (potentially spuriously) false entailed statements may be undesirable. To mitigate this outcome, we experiment with an additional type of factor in which configurations satisfying an entailment with both z_b = 1 and z_e = 1 are 'rewarded' more than other configurations satisfying the entailment:

ψ'(z_b, z_e) = 1 if z_b = z_e = 1; 1 − p_φ(r) otherwise

Applying entailment correction consistently improves ConCoRD’s performance; see Appendix Table 8 for a dataset-by-dataset breakdown.

3.4 Hyperparameters of ConCoRD

We introduce two key hyperparameters to ConCoRD. Because we do not know a priori the relative reliability of the base model and relation model, we introduce the hyperparameter β ∈ [0, 1], corresponding to a trade-off between the predictions of the base model and relation model. A value of β = 0 corresponds to simply taking the raw predictions of the base model, while β = 1 corresponds to optimizing purely for answers that are self-consistent according to the relation model, without considering the base model's beliefs. The unary factors in the factor graph become φ^{1−β} and the binary factors become ψ^β (and similarly for ψ̃). In addition to β, we introduce a threshold λ for relation model confidence to filter out low-confidence relation estimates. That is, we discard a relation (inter-belief or belief-context) if its probability under the relation model falls below λ. In practice, we find that the optimal β and λ vary across problems, perhaps due to the varying complexity of the model beliefs and context statements (and therefore the reliability of the relation model's predictions). Therefore, we use the hyperopt library (Bergstra et al., 2013) for automated hyperparameter optimization, using the Tree-structured Parzen Estimator (TPE) algorithm to tune β and λ jointly. We use the optimal hyperparameters found on the validation data for each problem to compute test performance. Appendix H.1 details hyperparameter tuning for each experiment.
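A minimal sketch of how the two hyperparameters reshape the factor graph (all factor values and confidences below are hypothetical):

```python
def apply_hyperparameters(unary_factors, relation_factors, beta, lam):
    """Discard relation factors whose confidence falls below the threshold
    lam, then trade off the two models by exponentiating unary factors by
    (1 - beta) and surviving relation factors by beta. beta = 0 recovers
    the base model's raw ranking (relation factors become 1); beta = 1
    scores configurations purely by self-consistency."""
    unary = {k: v ** (1.0 - beta) for k, v in unary_factors.items()}
    relations = {k: (conf, value ** beta)
                 for k, (conf, value) in relation_factors.items()
                 if conf >= lam}
    return unary, relations

u_f, r_f = apply_hyperparameters(
    {"z_1": 1.5},                                # unary factor value (odds ratio)
    {"r_1": (0.97, 0.05), "r_2": (0.6, 0.1)},    # (confidence, factor value)
    beta=0.5, lam=0.9)
# r_2 is discarded: its confidence 0.6 is below the threshold 0.9.
```

With β = 0, every surviving relation factor is raised to the 0th power (value 1), so inference reduces to picking each query's highest-probability answer, exactly as described above.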

4 Experiments

Our experiments are broadly designed to answer the high-level question: can ConCoRD leverage the relational knowledge in pre-trained NLI models to produce more accurate, self-consistent system behavior, without additional data or fine-tuning? Further, we investigate ConCoRD’s applicability to performing test-time model editing (Sinitsin et al., 2020; Mitchell et al., 2022), or injection of new information, and ConCoRD’s sensitivity to the choice of hyperparameters and types of relations detected.

4.1 Internal Consistency in Closed-Book Question-Answering

Protocol. To evaluate the accuracy and consistency of a set of beliefs, Kassner et al. (2021) synthesize a gold standard for those beliefs and the relations between them. Following this prior work, we assume the following is given:


  • A set of entities E

  • A set of unary predicates P

  • A collection of "facts", pairs of an entity and a predicate, whose binary truth value is known

  • A directed graph of gold-standard constraints G, whose edges represent first-order logical formulae

From these, we construct simple yes/no questions using natural language templates. For example, if entity e represents a lion and predicate p represents an ability to drink liquids, the template-generated gold question-answer pair for the fact (e, p) is Q: Is it true that a lion is able to drink liquids?; A: Yes.

These questions are given as input to one of two sizes of the Macaw multi-angle question-answering model (Tafjord and Clark, 2021), using a multiple-choice angle with choices Yes. and No. The questions and retrieved answers form a set of beliefs for each entity. Since these are closed-book questions, no context statements are supplied; because they are yes/no questions, only one candidate answer is obtained per question. Question-answer to statement conversion is applied to all questions with a default answer of Yes., regardless of the model's actual answer, in order to provide the relation model with positive natural language assertions from which to infer relations; where the base model answers No., we replace the corresponding node in the factor graph with its complement. A configuration maximizing Equation 2 is found for each entity, and together these configurations form a global solution.

Datasets. Kassner et al. (2021) provide a suitable database with 12,636 facts ("silver facts"), each indicating whether one of 601 predicates applies to one of 85 entities, as well as 4,060 confidence-weighted first-order constraints manually gathered from ConceptNet (Speer et al., 2017), forming the constraint graph G. Additionally, they provide 1,072 distinct "calibration facts", each relating one of 7 entities to one of 334 predicates.

We tune λ and β using a validation set of questions generated from the calibration facts, and evaluate test-time performance with questions generated from the silver facts.

Base ConCoRD G.C.
Model F1 Con. F1 Con. F1 Con.
Mac-Lg 0.831 0.835 0.914 0.920 0.862 0.934
Mac-3B 0.855 0.871 0.931 0.947 0.905 0.936
Table 1: F1 and consistency (1 − τ) for two sizes of Macaw (Tafjord and Clark, 2021) QA models, comparing ConCoRD to a naive QA baseline (Base) and ConCoRD with gold constraints (G.C.). ConCoRD significantly improves both F1 and consistency for both models.

Metrics. We measure accuracy using binary F1 between the elements of the configuration Z maximizing Equation 2 and the known truth values of the corresponding facts. As in Kassner et al. (2021), we use F1 for evaluation because gold answers are highly biased towards No. answers.

We compute consistency within batches of questions using the complement of Li et al. (2019)'s conditional constraint violation metric τ, defined here as the proportion of relevant gold constraints in G that are violated. A constraint x → y is relevant iff, for some entity, there is some belief corresponding to fact x that is assigned true and some belief corresponding to fact y; the constraint is violated when the belief corresponding to y is assigned false.
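This metric can be sketched directly from the definition above, assuming beliefs are represented as a mapping from facts to assigned truth values and constraints as implication pairs:

```python
def consistency(beliefs, constraints):
    """Compute 1 - tau. beliefs: {fact: assigned truth value};
    constraints: [(x, y), ...] meaning x implies y. A constraint is
    relevant when x is believed true and y has some assignment; it is
    violated when y is assigned false."""
    relevant = violated = 0
    for x, y in constraints:
        if beliefs.get(x) is True and y in beliefs:
            relevant += 1
            if beliefs[y] is False:
                violated += 1
    # Vacuously consistent if no constraint is relevant.
    return 1.0 if relevant == 0 else 1.0 - violated / relevant

score = consistency(
    {"a sparrow is a bird": True, "a sparrow has feet": False},
    [("a sparrow is a bird", "a sparrow has feet")])
# One relevant constraint, violated: consistency is 0.0.
```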

Comparisons. ConCoRD is evaluated against a naive baseline where only base model answers and probabilities are considered. A second baseline (G.C.) performs the inference described in Sec. 3.3, replacing the relations estimated by the relation model with the gold constraints from constraint graph G.

Results. Results are shown in Table 1. ConCoRD provides an absolute improvement of over 8% in F1 and consistency for Macaw-Large and 7% for Macaw-3B compared to the baseline. Notably, the margin of superiority of the Macaw-3B base model is mostly preserved after applying ConCoRD, suggesting that ConCoRD may provide a significant benefit even for very large models. A surprising result is that ConCoRD shows marked improvements in F1 over the gold constraint baseline, suggesting that the detection and filtering of relations ConCoRD provides may, in this setting, be an improvement over rigid adherence to the logical connections specified a priori in Kassner et al. (2021).

4.2 Internal Consistency in VQA

Base ConCoRD Oracle
Model Acc. P.C. Acc. P.C. Acc. P.C.
LXM 0.656 0.360 0.706 0.409 0.824 0.572
ViLT 0.784 0.489 0.804 0.548 0.882 0.690
Table 2: ConVQA accuracy (Acc.) and perfect consistency (P.C.) of LXMERT (Tan and Bansal, 2019) and ViLT (Kim et al., 2021) VQA models with and without ConCoRD. ConCoRD significantly improves accuracy and consistency of both models. Oracle performance is top-2 performance, as ConCoRD attempts to select the best of the top 2 answer choices of the base model.

Protocol. The Visual Question Answering (VQA) task involves a model generating answers to questions that are directly associated with images. VQA tests the robustness and generalizability of ConCoRD, as it introduces an additional layer of difficulty: the task moves away from purely text-based inputs while expanding the answer space to the entire vocabulary of the model being used. The questions from the ConVQA dataset (Ray et al., 2019) and their associated images from the Visual Genome dataset (Krishna et al., 2016) provide an apt setting to assess ConCoRD, as the relatedness of the questions for each image provides ample opportunity for model self-inconsistency.

Input & Gold Answer Generations Added context
Q: What was the first capital city of Australia? A: Melbourne Canberra; Melbourne; Sydney; Inverell Melbourne was the initial capital following the 1901 Federation of Australia.
Q: When does the implantation of the embryo occur?
A: around 9 days after ovulation
9 to 18 days; between 6 and 12 days; after the ovulation; on the 9th week In humans, implantation of a fertilized ovum is most likely to occur around 9 days after ovulation, however this can range between 6 and 12 days.
Table 3: Success and failure in editing a model's behavior with ConCoRD by adding new information to the context. The base model's highest-confidence answer is underlined. Bold shows ConCoRD's output after inference, with teal bold showing a successful edit that increases F1 and red bold showing an edit that reduces F1.

The ConVQA dataset consists of a set of images, each associated with a group of related questions about the image, such as What color is the horse? and Is the horse brown? for a picture of a brown horse in a stable. We evaluate ConCoRD with two VQA models, LXMERT (Tan and Bansal, 2019) and ViLT (Kim et al., 2021). For each group of questions, we sample the top-2 candidate outputs for each question, and use a pre-trained NLI model to infer the most likely pair-wise relations between outputs from different questions. We use the RC2 MaxSAT solver to estimate the configuration that maximizes Equation 2.

Metrics. We report accuracy as the proportion of questions answered correctly across all groups. We measure consistency using a metric previously used in the literature for the ConVQA dataset called "perfect consistency" (Ray et al., 2019). A group of related questions is perfectly consistent if all of its questions are answered correctly; perfect consistency then reports the proportion of question groups that are perfectly consistent. While this is not a perfect measure of consistency, as it excludes cases in which incorrect answers are consistent with each other, it still serves as a meaningful proxy, since the dataset was designed such that any incorrect answer in a question group implies the presence of inconsistency.
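Perfect consistency reduces to a simple aggregate over per-group correctness flags:

```python
def perfect_consistency(groups):
    """groups: list of lists of booleans, one list per question group,
    marking whether each answer in the group was correct. A group counts
    as perfectly consistent only when every answer is correct."""
    perfect = sum(1 for g in groups if all(g))
    return perfect / len(groups)

# Three hypothetical groups; only the first and third are fully correct.
pc = perfect_consistency([[True, True], [True, False], [True]])
# → 2/3
```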

Datasets. We divide the ConVQA dataset into a "clean" (i.e. human verified and filtered) test set and a non-test set (train + val + test as defined by Ray et al. (2019)). From the non-test set, we sample 10,000 random images equivalent to 123,746 questions to be used as our validation set for tuning our two hyperparameters. We use the clean test set – 725 images and 6,751 questions – to report our final results.

Model Base ConCoRD Oracle
T5-Sm-NQ 0.207 0.225 0.281
T5-Lg-NQ 0.314 0.328 0.393
T5-3B-NQ 0.332 0.351 0.423
Table 4: Using ConCoRD to inject contextual information into a model’s decisions at test time. Injecting gold Natural Questions contexts consistently improves performance over the base model without requiring fine-tuning.

Comparisons. ConCoRD is compared with a naive baseline and a top-2 oracle upper bound. The naive baseline selects the answer with the highest VQA model probability. The top-2 oracle upper bound selects the correct answer if it is present within the top-2 predictions of the VQA model. Top-2 is the appropriate oracle given our use of the top-2 candidate outputs when generating inferences with the NLI model.

Results. The final results for ConCoRD, the baseline, and the oracle upper bound are shown in Table 2. ConCoRD increases the accuracy of LXMERT and ViLT by 5% and 2% respectively, and the consistency of LXMERT and ViLT by 4.9% and 5.9% respectively. Examples in which ConCoRD correctly and incorrectly selects a candidate output different from the baseline output are shown in Figure 4 and Figure 5, respectively. In particular, the incorrect scenarios demonstrate several failure modes that may be partly responsible for the gap between ConCoRD and the oracle upper bound, suggesting that improvements to ConCoRD's individual components will further narrow this gap.

4.3 Test-Time Information Injection

Protocol. We perform an additional experiment to evaluate ConCoRD’s ability to integrate external factual information into its inference process, rather than only using other predictions in the test batch. Such an ability enables editing a model’s behavior at test time, without re-training, as new information becomes available. We use the Natural Questions (NQ; Kwiatkowski et al. (2019)) dataset, rather than BeliefBank, to provide more challenging inputs to the relation model. Given a question from NQ, a sentence from the ground truth context document containing information about the answer is retrieved and provided as an additional input to ConCoRD; we constrain the node representing this context variable in the factor graph to be true. Constraints are predicted between each answer choice and the context statement. As in the other experimental settings, hyperparameters are tuned on the validation set and applied on the test set. See Appendix H for tuning procedures.

Metrics. Model performance is evaluated using the token-level SQuAD F1 score, following the standard answer normalization protocol, including lower-casing and removing punctuation.
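The token-overlap F1 can be sketched as below; note that the standard SQuAD normalization also strips English articles in addition to lower-casing and removing punctuation:

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lower-case, strip punctuation,
    remove articles, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def token_f1(prediction, gold):
    """Harmonic mean of token-level precision and recall after normalization."""
    pred, gold_t = normalize(prediction).split(), normalize(gold).split()
    common = Counter(pred) & Counter(gold_t)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold_t)
    return 2 * precision * recall / (precision + recall)
```

For the partial-credit example in Table 3, a prediction of "around 9 days after ovulation" against the gold "9 to 18 days" shares the tokens "9" and "days", yielding a non-zero F1 even though the strings differ.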

Datasets. The NQ development set consists of 7,830 open-book question-answer pairs, with both long and short gold annotations in their context passages. Since the NQ test set is not available, we create a test and validation set from the NQ development questions as follows: we take the first 5,000 questions to form our test set, and the rest to be our validation set, which we use for hyperparameter tuning. Each set is then filtered so that only answerable questions remain, where "answerable" is defined as having a "short answer" span in the annotations. This filtering process yields 2,713 test entries and 1,576 validation entries.

Comparisons. ConCoRD is compared with a naive baseline and an oracle upper bound. All of these approaches operate on the fixed set of top-4 answers for each question from a specific QA model (one of T5-Sm-NQ, T5-Lg-NQ, and T5-3B-NQ). The naive baseline selects the answer with the highest QA model probability. The oracle upper bound selects the answer with the highest F1 overlap with the gold short-answer span.

Results. The results on the test set using the naive baseline, ConCoRD, and the oracle upper bound are reported in Table 4. ConCoRD always outperforms the naive approach, demonstrating that the framework is useful even when each query input is processed independently (i.e., non-transductively). However, despite providing a relative gain of as much as 8.7% over the naive baseline, there is still a gap between ConCoRD and the oracle. This gap may be attributable to the complexity of the NQ questions and context information compared with the statements in prior experimental settings. Chen et al. (2021) demonstrate a significant gain in calibration performance when moving from training on MultiNLI (Williams et al., 2018) alone to training on a combination of MultiNLI and their NLI corpus adapted from NQ, perhaps hinting that crucial knowledge present in Natural Questions is not covered in MultiNLI, partially explaining the gap between ConCoRD and oracle F1 performance. Overall, these results suggest that ConCoRD can reason between context statements and model beliefs in addition to pairs of model beliefs, improving performance even with the increased complexity of the data.

Model Task ConCoRD Only cont. Only ent.
Mac-Lg BB 0.914 0.892 0.827
Mac-3B BB 0.931 0.865 0.917
LXM CVQA 0.706 0.691 0.700
ViLT CVQA 0.804 0.792 0.800
T5-Sm-NQ NQ 0.225 0.225 0.225
T5-Lg-NQ NQ 0.328 0.331 0.330
T5-3B-NQ NQ 0.351 0.349 0.350
Table 5: Ablating the relation types considered in ConCoRD's inference procedure. The Only cont. and Only ent. columns show the results of applying ConCoRD with all entailment or contradiction relations removed, respectively. The ConCoRD column reproduces the results from Sections 4.1-4.3 for convenience. Values shown are F1 score for BeliefBank (BB) and Natural Questions (NQ) and accuracy for ConVQA (CVQA). Note that the hyperparameters β and λ are re-tuned on the respective validation set for each setting.

Qualitative Analyses. Examples of “good” and “bad” edits (edits that improve or decrease the resulting F1 scores, respectively) are presented in Table 3, with more in Appendix F. When the correct answer is not among the candidate outputs, ConCoRD can still push toward partially correct answers and answers with more overlap with the context.

4.4 Ablating Relation Types

Given that we consider two types of relations in our experiments, contradiction and entailment, it is natural to ask about the relative contribution of each to ConCoRD’s performance improvement; Table 5 shows the results of this ablation. We re-run ConCoRD with either entailment or contradiction relations removed, re-tuning the hyperparameters for each of the new settings (contradiction-only or entailment-only). We find that the relative contribution of contradiction and entailment relations varies significantly across models even within the same task, but using both relation types always performs approximately as well as or better than using just one, suggesting that both types of detected relations from the NLI model carry useful information. However, in several cases, such as ViLT and the T5 models, the entailment and contradiction relations may encode somewhat redundant information, as performance with either type of constraint alone nearly matches that of using both.
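Mechanically, the ablation amounts to filtering the set of NLI-detected relations by type before running inference; a minimal sketch using a hypothetical tuple representation of constraints (ours, not the released code):

```python
def filter_relations(relations, keep: str):
    """Keep only one relation type ('entailment' or 'contradiction').

    `relations` is a list of (statement_i, statement_j, rel_type, weight)
    tuples, a hypothetical encoding of NLI-detected constraints.
    """
    assert keep in {"entailment", "contradiction"}
    return [r for r in relations if r[2] == keep]

relations = [
    ("s1", "s2", "entailment", 0.91),
    ("s1", "s3", "contradiction", 0.88),
    ("s2", "s3", "contradiction", 0.75),
]
only_ent = filter_relations(relations, "entailment")    # entailment-only run
only_cont = filter_relations(relations, "contradiction")  # contradiction-only run
```

Inference then proceeds unchanged on the reduced constraint set, with hyperparameters re-tuned for each setting.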

4.5 Hyperparameter Sensitivity

We perform several experiments to clarify the impact of the key hyperparameters: the choice of relation (NLI) model, the NLI confidence threshold, and the tradeoff between base model and relation model beliefs.

Figure 3: Change in ConCoRD’s exact-match validation accuracy as the NLI confidence threshold and the tradeoff between base model and relation model beliefs vary, holding the relation model (RoBERTa-Large ANLI) constant. Comparing the maximum value within each column or row, we conclude that ConCoRD is relatively robust to the choice of confidence threshold, while the choice of tradeoff parameter is more important. Values are those encountered during tuning with base model ViLT on ConVQA validation questions. Gray squares correspond to regions not evaluated during search, and asterisks mark the region where the maximum increase in accuracy occurs.
Impact of varying relation model.

Table 6 shows a comparison of ConCoRD’s test performance for several NLI models in each setting; notably, the best-performing NLI model is not consistent across problems. While the ALBERT-XXL model from Nie et al. (2020) is the strongest on NQ, the simpler RoBERTa-Large models outperform it on BeliefBank and ConVQA.

Sensitivity to the confidence threshold and tradeoff parameter.

Figure 3 shows the performance of ConCoRD on ConVQA with ViLT as the tradeoff between base model and relation model beliefs and the NLI confidence threshold are varied, using the values explored during hyperparameter optimization. Section H.2 of the Appendix shows similar visualizations for other VQA experiments. If multiple hyperparameter configurations within a grid element were explored, the best-performing configuration is shown. The maximum value in each column is the same (0.04), indicating that a good tradeoff value exists for almost any confidence threshold; the converse is not true, as for some tradeoff values no good threshold exists. Thus, we conclude that the tradeoff parameter is the more important one to tune carefully.
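This robustness argument can be checked mechanically from a tuning grid by comparing column-wise and row-wise maxima; a sketch with made-up accuracy improvements (illustrative numbers only, not the paper's):

```python
# Rows: values of the tradeoff parameter; columns: NLI confidence thresholds.
# Entries are hypothetical accuracy improvements over the naive baseline.
grid = [
    [0.01, 0.03, 0.04],   # tradeoff value A
    [0.04, 0.04, 0.02],   # tradeoff value B
    [-0.02, 0.00, 0.01],  # tradeoff value C
]

# Best achievable improvement for each fixed threshold (maximize over tradeoff).
col_maxima = [max(row[j] for row in grid) for j in range(len(grid[0]))]
# Best achievable improvement for each fixed tradeoff (maximize over threshold).
row_maxima = [max(row) for row in grid]
```

If every column attains the global maximum but some row does not, then any threshold admits a good tradeoff while a bad tradeoff cannot be rescued by the threshold, matching the conclusion above.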

5 Discussion & Conclusion

We have presented the ConCoRD framework for enforcing self-consistency in pre-trained language models using relations estimated by pre-trained NLI models, showing that it improves over off-the-shelf performance in a variety of settings without requiring any fine-tuning. Our findings suggest that existing pre-trained NLI models can be a useful building block for boosting performance of NLP systems by providing useful estimates of logical relationships between model predictions across various models and datasets for QA and visual QA.

ConCoRD also suggests several directions for future work. Integrating ConCoRD with methods that generate questions likely to elicit useful knowledge for answering the question at hand (Ray et al., 2019; Shwartz et al., 2020) may further improve performance. In addition, integrating a framework such as ConCoRD with recent methods for differentiation through black box combinatorial solvers (Pogančić et al., 2020) may enable training of the entire base model, relation model, and inference pipeline end-to-end, potentially further improving aggregate performance. Finally, ConCoRD’s general mechanism of re-ranking predictions by estimating the self-consistency of groups of model predictions is applicable beyond natural language, and future work might investigate its application to problems in vision or sequential decision-making. We hope that ConCoRD may serve as another promising example of integrating both neural and explicit symbolic inference machinery into a broader intelligent system that outperforms any of its components individually.

NLI Model Data BB ConVQA NQ
Alb-XXL ANLI 0.892 0.689 0.351
RoB-Lg ANLI 0.931 0.706 0.344
RoB-Lg MNLI 0.918 0.706 0.346
Table 6: Comparing ConCoRD’s performance for various NLI models on BB (BeliefBank), ConVQA, and NQ. Performance is measured as F1 score between predicted and gold text for BB and NQ, exact match accuracy for ConVQA. We use Macaw 3B for BB results, LXMERT for VQA results and T5-3B for NQ results. The best NLI model(s) in each column are bolded; the best NLI model varies across problems.

6 Limitations

While our results suggest ConCoRD can effectively leverage additional compute to boost model performance without fine-tuning, our work has some limitations. Although ConCoRD is conceptually applicable to generations from any language model, our work focuses on question-answering settings to leverage existing self-consistency benchmarks. In addition, ConCoRD increases the compute costs of inference, although it does not require fine-tuning. Further, our results suggest that the best NLI model to use for ConCoRD may vary across domains, requiring some tuning. As NLI models improve, we might hope that the final performance of ConCoRD-like systems should also inherit these gains, but Table 6 suggests that the factors that make a particular NLI model well-suited to a particular problem are not obvious, requiring further investigation.


The authors would like to thank the anonymous reviewers for their helpful feedback during the review period, Gabe Mudel, Julie Wang, Cameron Tew, Anthony Tzen, Kevin Yang, and Ian Ng for helpful discussions and assisting with exploratory experiments early on in the project, and Nora Kassner for providing helpful early guidance in configuring the BeliefBank experiments. CF and CM are CIFAR Fellows. EM gratefully acknowledges funding from the Stanford Knight-Hennessy Graduate Fellowship. JN is supported by Stanford University Medical Scientist Training Program grants T32-GM007365 and T32-GM145402. SL acknowledges brownie bites from Target for providing a crucial fuel source for late night experiment-running.


  • C. Alberti, D. Andor, E. Pitler, J. Devlin, and M. Collins (2019) Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 6168–6173. External Links: Link, Document Cited by: §2.
  • A. Asai and H. Hajishirzi (2020) Logic-guided data augmentation and regularization for consistent question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 5642–5650. External Links: Link, Document Cited by: §2.
  • J. Bergstra, D. Yamins, D. D. Cox, et al. (2013) Hyperopt: a Python library for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in Science Conference, Vol. 13, pp. 20. Cited by: §3.4.
  • J. Chen, E. Choi, and G. Durrett (2021) Can NLI models verify QA systems’ predictions?. In Findings of the Association for Computational Linguistics: EMNLP 2021, Punta Cana, Dominican Republic, pp. 3841–3854. External Links: Link, Document Cited by: §2, §4.3.
  • D. Demszky, K. Guu, and P. Liang (2018) Transforming question answering datasets into natural language inference datasets. CoRR abs/1809.02922. External Links: Link, 1809.02922 Cited by: Appendix C, §3.2.
  • N. Dziri, E. Kamalloo, K. Mathewson, and O. Zaiane (2019) Evaluating coherence in dialogue systems using entailment. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 3806–3812. External Links: Link, Document Cited by: §2.
  • Y. Elazar, N. Kassner, S. Ravfogel, A. Ravichander, E. Hovy, H. Schütze, and Y. Goldberg (2021) Measuring and improving consistency in pretrained language models. Transactions of the Association for Computational Linguistics 9, pp. 1012–1031. External Links: Link, Document Cited by: §1, §1.
  • O. Honovich, R. Aharoni, J. Herzig, H. Taitelbaum, D. Kukliansy, V. Cohen, T. Scialom, I. Szpektor, A. Hassidim, and Y. Matias (2022) TRUE: re-evaluating factual consistency evaluation. In Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, Dublin, Ireland, pp. 161–175. External Links: Link, Document Cited by: §2.
  • A. Ignatiev (2019) RC2: an efficient maxsat solver. J. Satisf. Boolean Model. Comput. 11, pp. 53–64. Cited by: §3.3.
  • J. Jung, L. Qin, S. Welleck, F. Brahman, C. Bhagavatula, R. L. Bras, and Y. Choi (2022) Maieutic prompting: logically consistent reasoning with recursive explanations. arXiv. External Links: Document, Link Cited by: §2.
  • N. Kassner, O. Tafjord, H. Schütze, and P. Clark (2021) BeliefBank: adding memory to a pre-trained language model for a systematic notion of belief. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic, pp. 8849–8861. External Links: Link, Document Cited by: Appendix C, §1, §2, §3.2, §4.1.
  • W. Kim, B. Son, and I. Kim (2021) ViLT: vision-and-language transformer without convolution or region supervision. In ICML, pp. 5583–5594. External Links: Link Cited by: §3.1, §4.2, Table 2.
  • R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L. Li, D. A. Shamma, M. Bernstein, and L. Fei-Fei (2016) Visual genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision. External Links: Link Cited by: §4.2.
  • T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, K. Toutanova, L. Jones, M. Kelcey, M. Chang, A. M. Dai, J. Uszkoreit, Q. Le, and S. Petrov (2019) Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7, pp. 452–466. External Links: Link, Document Cited by: §2, §4.3.
  • P. Laban, T. Schnabel, P. N. Bennett, and M. A. Hearst (2022) SummaC: re-visiting NLI-based models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics 10, pp. 163–177. External Links: Link, Document Cited by: §2.
  • Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut (2019) ALBERT: a lite BERT for self-supervised learning of language representations. CoRR abs/1909.11942. External Links: Link, 1909.11942 Cited by: §3.2.
  • T. Li, V. Gupta, M. Mehta, and V. Srikumar (2019) A logic-driven framework for consistency of neural models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3924–3935. External Links: Link, Document Cited by: §4.1.
  • Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: A robustly optimized BERT pretraining approach. CoRR abs/1907.11692. External Links: Link, 1907.11692 Cited by: §3.2.
  • H. Loeliger (2008) An introduction to factor graphs. Cited by: Appendix B.
  • B. MacCartney and C. D. Manning (2008) Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), Manchester, UK, pp. 521–528. External Links: Link Cited by: §2.
  • E. Mitchell, C. Lin, A. Bosselut, C. Finn, and C. D. Manning (2022) Fast model editing at scale. ICLR. External Links: Link Cited by: §1, §4.
  • Y. Miura, Y. Zhang, E. Tsai, C. Langlotz, and D. Jurafsky (2021) Improving factual completeness and consistency of image-to-text radiology report generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, pp. 5288–5304. External Links: Link, Document Cited by: §2.
  • Y. Nie, A. Williams, E. Dinan, M. Bansal, J. Weston, and D. Kiela (2020) Adversarial NLI: a new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Cited by: §4.5.
  • M. V. Pogančić, A. Paulus, V. Musil, G. Martius, and M. Rolinek (2020) Differentiation of blackbox combinatorial solvers. In International Conference on Learning Representations, External Links: Link Cited by: §5.
  • C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research 21 (140), pp. 1–67. External Links: Link Cited by: §3.1.
  • P. Rajpurkar, R. Jia, and P. Liang (2018) Know what you don’t know: unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia, pp. 784–789. External Links: Link, Document Cited by: §2.
  • P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang (2016) SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 2383–2392. External Links: Link, Document Cited by: Appendix C.
  • A. Ray, K. Sikka, A. Divakaran, S. Lee, and G. Burachas (2019) Sunny and dark outside?! improving answer consistency in VQA through entailed question generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 5860–5865. External Links: Link, Document Cited by: §1, §1, §1, §4.2, §4.2, §4.2, §5.
  • V. Shwartz, P. West, R. Le Bras, C. Bhagavatula, and Y. Choi (2020) Unsupervised commonsense question answering with self-talk. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 4615–4629. External Links: Link, Document Cited by: §5.
  • A. Sinitsin, V. Plokhotnyuk, D. Pyrkin, S. Popov, and A. Babenko (2020) Editable neural networks. In ICLR, External Links: Link Cited by: §1, §4.
  • H. Song, W. Zhang, J. Hu, and T. Liu (2020) Generating persona consistent dialogues by exploiting natural language inference. Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), pp. 8878–8885. External Links: Document Cited by: §2.
  • R. Speer, J. Chin, and C. Havasi (2017) ConceptNet 5.5: an open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI’17, pp. 4444–4451. Cited by: §4.1.
  • O. Tafjord and P. Clark (2021) General-purpose question-answering with macaw. CoRR abs/2109.02593. External Links: Link, 2109.02593 Cited by: §1, §4.1, Table 1.
  • H. Tan and M. Bansal (2019) LXMERT: learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 5100–5111. External Links: Link, Document Cited by: §1, §3.1, §4.2, Table 2.
  • A. Vijayakumar, M. Cogswell, R. Selvaraju, Q. Sun, S. Lee, D. Crandall, and D. Batra (2018) Diverse beam search for improved description of complex scenes. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1). External Links: Link, Document Cited by: §3.1.
  • S. Welleck, J. Weston, A. Szlam, and K. Cho (2019) Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 3731–3741. External Links: Link, Document Cited by: §2.
  • A. Williams, N. Nangia, and S. Bowman (2018) A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, pp. 1112–1122. External Links: Link, Document Cited by: §4.3.

Appendix A Reproducing Macaw-Large Examples

The following configuration reproduces the Macaw-Large behavior noted in the abstract and the introduction:

$answer$ ; $question$ = Is a sparrow a bird? ; $mcoptions$ = (A) Yes. (B) No. ;

$answer$ ; $question$ = Does a bird have feet? ; $mcoptions$ = (A) Yes. (B) No. ;

$answer$ ; $question$ = Does a sparrow have feet? ; $mcoptions$ = (A) Yes. (B) No. ;

Appendix B Factor Graph Overview

A factor graph is a factorization of a function mapping a set of variables to a non-negative scalar. The factorization is represented as a bipartite graph containing variable nodes and factor nodes; each variable is represented by one variable node, and each factor maps a subset of the variables to a non-negative scalar. The value of the function is the product of all factor values, g(x_1, …, x_n) = ∏_j f_j(X_j), where X_j is the subset of variables adjacent to factor f_j. See Loeliger (2008) for a more complete reference.
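As a toy illustration of this definition (not ConCoRD’s actual factor graph), the function value on an assignment is the product of the factor values, and inference seeks the highest-scoring assignment:

```python
from itertools import product

# Toy factor graph over two binary variables x1, x2.
# Unary factors give per-variable scores; a pairwise factor couples them
# (here penalizing x1=True with x2=False, loosely "x1 entails x2").
factors = [
    (("x1",), lambda a: 0.9 if a["x1"] else 0.1),
    (("x2",), lambda a: 0.6 if a["x2"] else 0.4),
    (("x1", "x2"), lambda a: 0.05 if (a["x1"] and not a["x2"]) else 1.0),
]

def g(assignment):
    """g(x1, ..., xn) = product of all factor values on the assignment."""
    value = 1.0
    for _, f in factors:
        value *= f(assignment)
    return value

# Exhaustive maximization over all joint assignments (feasible for a toy graph).
best = max(
    ({"x1": b1, "x2": b2} for b1, b2 in product([False, True], repeat=2)),
    key=g,
)
```

With these (made-up) weights, the coupling factor overrides the unary scores' inconsistent combination, and the maximizing assignment sets both variables true.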

Appendix C Question-Answer to Statement Conversion Model Details

To convert question-answer pairs into declarative statements, we combine data from the Question to Declarative Sentence (QA2D) (Demszky et al., 2018) and BeliefBank (Kassner et al., 2021) datasets to fine-tune a T5-base sequence-to-sequence model. QA2D contains question-answer pairs from five QA datasets; 95% of the pairs are from SQuAD (Rajpurkar et al., 2016). The gold statements are from Amazon Mechanical Turk. The BeliefBank questions are created from silver facts using natural language templates as in Section 4.1, and the yes/no answers are from the known binary truth values of these facts. Our training dataset is composed of the full QA2D training dataset of 61k question-answer pairs and half of the BeliefBank silver facts, for a total of 67k training examples. Likewise, the validation dataset consists of the full QA2D validation dataset of 10k pairs and half the BeliefBank silver facts, for a total of 16k validation pairs.
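The BeliefBank yes/no portion of this conversion can be approximated with simple rules; the following sketch mimics the target behavior on the template questions (the paper's actual converter is the fine-tuned T5 model, not these rules):

```python
import re

def yesno_to_statement(question: str, answer: str) -> str:
    """Approximate QA-to-statement conversion for BeliefBank-style questions.

    Handles 'Is it true that X is Y?' and 'Is X a Y?' forms with yes/no
    answers; anything else is out of scope for this sketch.
    """
    q = question.strip().rstrip("?")
    neg = not answer.strip().lower().startswith("yes")
    m = re.match(r"^Is it true that\s+(.+?)\s+is\s+(.+)$", q, flags=re.I)
    if not m:
        m = re.match(r"^Is\s+(.+?)\s+(an?\s+.+)$", q, flags=re.I)
        if not m:
            raise ValueError(f"unsupported question form: {question!r}")
    subj, pred = m.groups()
    verb = "is not" if neg else "is"
    return f"{subj[0].upper()}{subj[1:]} {verb} {pred}."
```

On the BeliefBank validation examples in Table 7, these rules produce the same declarative statements as the fine-tuned model.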

The input to the QA statement conversion model is the concatenation of the question and answer. Accuracy is evaluated by comparing the output sequence tokens to the gold sequence tokens. Training runs for a maximum of 50 steps, where each step consists of 32k training examples, with early stopping if validation loss does not decrease for 6 consecutive steps. We ran the fine-tuning on an NVIDIA GeForce RTX 3090 GPU. Fine-tuning ended after 14 steps with a final training accuracy of 0.764 and validation accuracy of 0.628, taking approximately 40 minutes. Table 7 demonstrates the model’s performance on a few validation examples.

Dataset Input Output Gold statement
SQuAD Who established Yale’s residential college system? Edward S. Harkness Edward S. Harkness established Yale’s residential college system. Edward S. Harkness established Yale’s residential college system.
SQuAD How did Kuhn view the history of science? competing paradigms or conceptual systems Kuhn viewed the history of science as a competing paradigm or conceptual system. Kuhn viewed the history of science as competing paradigms or conceptual systems.
BeliefBank Is it true that a poodle is a river? No A poodle is not a river. A poodle is not a river.
BeliefBank Is a pigeon a living thing? Yes A pigeon is a living thing. A pigeon is a living thing.
Table 7: The QA statement conversion model outputs declarative statements from question-answer pairs. Of the four validation examples shown, three are correct. The red, bolded portion of the second example’s output indicates where it differs from the teal, bolded corresponding portion of the gold statement.
Model Naive w. E.C. w/o. E.C.
Mac-Lg+Rob/ANLI 0.831 0.914 0.909
Mac-3B+Rob/ANLI 0.855 0.931 0.886
LXMERT+Rob/MNLI 0.656 0.706 0.701
LXMERT+Rob/ANLI 0.656 0.706 0.693
ViLT+Rob/MNLI 0.784 0.804 0.810
ViLT+Rob/ANLI 0.784 0.814 0.807
Table 8: Comparison of ConCoRD test performance vs. baseline with and without entailment correction (E.C.) across base+relation models for closed-book question answering (Macaw) and VQA (LXMERT, ViLT) experiments (F1 for closed-book QA, exact-match accuracy for VQA), showing that the entailment correction improves performance for most configurations.

Appendix D Additional Modifications to ConCoRD

We impose a timeout to prevent the RC2 MaxSAT solver from running optimization indefinitely. The average solve time per question was <4 ms for closed-book QA, <1 ms for VQA, and <20 ms for NQ (for NQ, the solve time is less than 1/10th of the time needed for a forward pass through the QA and NLI models). We found only one batch of test questions, across the closed-book QA and VQA tasks, where the solver could not find a solution efficiently, so we set a short timeout (30 s for CBQA, 10 s for VQA; none required for NQ).
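The underlying optimization can be illustrated with a brute-force weighted MaxSAT over a few variables; this is an illustrative stand-in for RC2 with hypothetical clauses and weights, not the solver's implementation:

```python
from itertools import product

# Soft clauses: (weight, clause); a clause is a list of (variable, polarity)
# literals and is satisfied if any literal matches the assignment.
soft = [
    (0.9, [("sparrow_is_bird", True)]),     # base-model belief
    (0.6, [("sparrow_has_feet", True)]),    # base-model belief
    (0.8, [("sparrow_is_bird", False),      # NLI constraint: bird -> has feet
           ("sparrow_has_feet", True)]),
]
variables = sorted({v for _, clause in soft for v, _ in clause})

def weight(assignment):
    """Total weight of the soft clauses satisfied by the assignment."""
    return sum(
        w for w, clause in soft
        if any(assignment[v] == pol for v, pol in clause)
    )

# Exhaustive search stands in for RC2; only viable for tiny instances.
best = max(
    (dict(zip(variables, vals))
     for vals in product([False, True], repeat=len(variables))),
    key=weight,
)
```

Here the maximizing assignment satisfies all three clauses, keeping both beliefs and the implication consistent.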

We also de-duplicate the list of inferred constraints before passing the statement and constraint groups to the MaxSAT solver, so that only the highest-weighted constraint remains among any set of duplicates.
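This de-duplication can be sketched as a single pass keeping the best-weighted copy per key (the dict representation is ours, for illustration):

```python
def dedupe_constraints(constraints):
    """Keep only the highest-weighted copy of each (pair, relation) key.

    `constraints` is a list of dicts with hypothetical keys
    'pair', 'relation', and 'weight'.
    """
    best = {}
    for c in constraints:
        key = (c["pair"], c["relation"])
        if key not in best or c["weight"] > best[key]["weight"]:
            best[key] = c
    return list(best.values())

dupes = [
    {"pair": ("s1", "s2"), "relation": "entailment", "weight": 0.7},
    {"pair": ("s1", "s2"), "relation": "entailment", "weight": 0.9},
    {"pair": ("s1", "s3"), "relation": "contradiction", "weight": 0.8},
]
deduped = dedupe_constraints(dupes)
```

Only the 0.9-weighted entailment constraint survives for the duplicated pair.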

Appendix E Entailment Correction Ablations

Table 8 shows the effects of entailment correction on ConCoRD test performance in closed-book question answering and VQA experiments for different choices of base model, using the NLI relation model resulting in the best test set performance (RoBERTa-Large-MNLI).

Appendix F Additional “Good” and “Bad” Edit Pairs

More examples of good and bad edits in the editing experiment are presented in Table 10. We also include good (Figure 4) and bad (Figure 5) flip examples from the VQA dataset. For the bad flips in VQA, we include different failure modes to demonstrate the types of potential ConCoRD errors.

Appendix G Good and Bad Flips

For each set of experiments on the test set, we report the numbers of good and bad flips made by ConCoRD in Table 9. The number of good flips is consistently and significantly higher than the number of bad flips.

Experiment Model Good Flips Bad Flips
BeliefBank Macaw-3B 723 277
VQA LXMERT 576 238
NQ T5-3B-NQ 168 69
Table 9: The numbers of good and bad flips in each of the experiments performed. We define a flip as choosing a different candidate from the naive baseline for the multiple-choice experiments, and as a binary truth-value flip for BeliefBank. “Good” flips are flips that improve performance; “bad” flips are those that are detrimental to performance.
Model Input & Gold Answer Generations Added context
T5-Sm-NQ Q: Who was the declaration of independence written for?
A: the Second Continental Congress
Second Continental Congress; the United States; the British Crown; Great Britain The United States Declaration of Independence is the statement adopted by the Second Continental Congress meeting at the Pennsylvania State House ( Independence Hall ) in Philadelphia on July 4 , 1776 , which announced that the thirteen American colonies , then at war with the Kingdom of Great Britain , regarded themselves as thirteen independent sovereign states , no longer under British rule .
T5-Sm-NQ Q: What is the scientific name for the calf muscle
A: gastrocnemius muscle
The serratus calf muscle; gastrocnemius muscle; The serratus calfi; The serratus muscle Along with the soleus muscle , the gastrocnemius forms half of the calf muscle .
T5-3B-NQ Q: Who is the actor that plays Dr. Sean Murphy
A: Freddie Highmore
Freddie Highmore; Daryl “Chill” Mitchell; Dylan Christopher Minnette; Javier Muoz The series stars Freddie Highmore as Shaun Murphy , a young surgical resident with autism and savant syndrome at San Jose St. Bonaventure Hospital ., Freddie Highmore as Shaun Murphy : A surgical resident with autism and savant syndrome .
T5-3B-NQ Q: Who is the founder of the Ubuntu project
A: Mark Richard Shuttleworth
Linus Torvalds; Mark Shuttleworth; Richard St. John Hopper; Richard St. John Redmond Mark Richard Shuttleworth ( born 18 September 1973 ) is a South African entrepreneur who is the founder and CEO of Canonical Ltd. , the company behind the development of the Linux - based Ubuntu operating system .
Table 10: Editing a model’s behavior by adding new information to the context. The Underlined generation is the answer with the highest QA model confidence. The Bolded generation is what ConCoRD selects after NLI inference. Teal, bolded generations indicate that ConCoRD selects a generation with higher token overlap F1, while Red, bolded generations indicate that ConCoRD selects a worse generation.
Figure 4: “Good” flip examples from the VQA experiments. The green texts mark the correctly selected answers, while the red texts indicate incorrectly selected answers.
Figure 5: “Bad” flip examples from the VQA experiments. The green texts mark correctly selected answers, while the red texts indicate incorrectly selected answers. The bolded texts are the correct answers, if generated within the top-2 predictions. From top to bottom: the first image is an example where the correct answer, "sheet," was not among the predicted answers. The second image is an example where the conversion of the QA pair to a statement did not occur as intended and the NLI model failed to generate the inferences needed to correct "background" to "buildings." The third image shows an example where an "incorrect" answer ("sky") is effectively the same as the "correct" answer ("in sky"), differing only in surface form. The fourth image shows an example where the model strongly believed an incorrect answer and flipped another, correct answer.

Appendix H Hyperparameter Search Details

H.1 Experiments

H.1.1 Closed-Book Question Answering

Hyperparameters (Section 3.4) are tuned jointly using hyperopt on the BeliefBank calibration dataset (Section 4.1), with a uniform search space over a bounded interval for each hyperparameter. hyperopt optimizes cumulative F1 across all entity batches for 300 trials. To speed up tuning, we created caches of model beliefs and relation sets for each calibration entity. This was run on an NVIDIA GeForce RTX 3090 GPU, and the largest NLI models took up to two hours to complete. Using these caches, hyperopt tuning completes in less than an hour on CPU. The best performance on the calibration facts for each of the base Macaw models is reported in Table 11. The results show that the tuned tradeoff parameter (the weight on base-model beliefs) is higher for the better base model, Macaw-3B.

Model F1 E.C.
Macaw-Large 0.919 0.753 0.855 True
Macaw-3B 0.94 0.804 0.873 True
Table 11: Validation performance on the BeliefBank calibration facts. Both models achieve best validation performance with the RoBERTa-Large ANLI model.

H.1.2 VQA

Hyperparameters are tuned jointly using hyperopt, with a uniform search space over a bounded interval for each hyperparameter. A total of 100 trials were performed, updating parameters using TPE, on an AWS g4dn.xlarge EC2 instance. Each search took less than one hour. Table 12 shows the selected parameters and their exact-match accuracy on validation questions.

VQA Acc. E.C.
LXMERT 0.691 0.208 0.805 True
ViLT 0.787 0.395 0.772 True
Table 12: Validation performance on VQA. Both models achieve best validation performance with the RoBERTa-Large MNLI model.

H.1.3 Information Injection with Natural Questions

For this round of experiments, we lower the bounds of the hyperparameter search spaces after some initial trials. We run hyperopt for 200 trials (often taking approximately 2 to 3 hours on an NVIDIA GeForce RTX 3090 GPU) for each of the three NLI models. hyperopt optimizes the token-overlap F1 score in this experiment.

We report the best validation performance of each of the QA base models in Table 13.

Model F1 E.C.
T5-Small 0.227 0.112 0.540 True
T5-Large 0.331 0.081 0.413 False
T5-3B 0.353 0.072 0.477 True
Table 13: Validation performance on NQ. All models achieve best validation performance with the ALBERT ANLI model.

H.2 Visualizing Hyperparameter Search

Figure 6 shows increases in exact-match accuracy as they vary with the confidence threshold and tradeoff parameter, for additional choices of base model on a VQA task, with and without entailment correction, complementing Figure 3. Interestingly, choosing a different base model does noticeably affect the optimal tradeoff value; between Figures 6b and 6c we see the near-optimal region shift toward a tradeoff value that places higher confidence in the base model when the base model produces “better” answers. However, the increase in accuracy is similar, suggesting that with appropriate selection of the tradeoff parameter, ConCoRD can offer similar improvements over a range of base models.

(a) Base model LXMERT, with entailment correction.
(b) Base model LXMERT, without entailment correction.
(c) Base model ViLT, without entailment correction.
Figure 6: As in Figure 3, we show changes in exact-match validation accuracy as a function of the confidence threshold and the tradeoff parameter, for several choices of base model, with and without entailment correction, holding the relation model (RoBERTa-Large ANLI) constant.