Adams Conditioning and Likelihood Ratio Transfer Mediated Inference

by Jan A. Bergstra, et al.
University of Amsterdam

Forensic science advocates the use of inference mechanisms which may be viewed as simple multi-agent protocols. An important protocol of this kind involves an agent FE (forensic expert) who first communicates to a second agent TOF (trier of fact) its value of a certain likelihood ratio with respect to its own belief state, which is supposed to be captured by a probability function on FE's proposition space. Subsequently FE communicates its recently acquired confirmation that a certain evidence proposition is true. The inference part of this sort of reasoning, here referred to as likelihood ratio transfer mediated reasoning, involves TOF's revision of its own belief state, and in particular an evaluation of the resulting belief in the hypothesis proposition. Different realizations of likelihood ratio transfer mediated reasoning are distinguished. If the evidence proposition is included in the prior proposition space of TOF, a comparison is made between understanding the TOF side of a belief revision step as a composition of two successive steps of single likelihood Adams conditioning followed by a Bayes conditioning step, and as a single step of double likelihood Adams conditioning followed by Bayes conditioning. If, however, the evidence proposition is initially outside the proposition space of TOF, an application of proposition kinetics for the introduction of the evidence proposition precedes Bayesian conditioning, which is followed by Jeffrey conditioning on the hypothesis proposition.





1 Introduction

Writing this paper was triggered by the following question stated in Lund & Iyer [28]: why not separately communicate the two likelihoods that make up a likelihood ratio? An answer to this question is given in Paragraph 8.3 below.

Courtroom reasoning involving Bayesian inference has become a protocol by means of which a forensic expert (FE) may interact with a trier of fact (TOF). The setup requires that both FE and TOF maintain their own space of propositions, $P_{\mathrm{FE}}$ resp. $P_{\mathrm{TOF}}$, and that both maintain a belief state that is formalized as a probability function, $p_{\mathrm{FE}}$ resp. $p_{\mathrm{TOF}}$, on the respective proposition spaces.

When a single probability function is used the model involves precise beliefs. When collections of probability functions are made use of, so-called non-singleton representors, the model admits imprecise beliefs. Imprecise beliefs may be helpful or even needed when, besides uncertainty (the realm of probability functions), ignorance is also being modelled, under the assumption that ignorance cannot be adequately represented by means of the same probability function that is used for the representation of an agent's uncertainty. Comments regarding imprecise beliefs in connection with Bayesian inference are given in Paragraph 7.3 below.

Following a tradition initiated in forensics by Lindley (e.g. [27]) and Evett (see [18] for a recent statement of his position), who in turn based their work on the principles of subjective probability as set out by Ramsey, de Finetti, Carnap and Jeffrey, a range of contemporary authors shows commitment to the exclusive usage of precise belief states, see for instance Berger & Slooten [4], Berger et al. [3] and Biedermann [13, 14]. Independently of forensics, theory development concerning precise beliefs has advanced in different directions, for instance in Diaconis & Zabell [17], Bradley [15], Gyenis [23], and Yalcin [50]. Below I will make use of Bradley's presentation of Adams conditioning in [15].

Below I will focus on the question how information about one or two likelihoods, or concerning a single likelihood ratio, which one agent transfers to another agent may be incorporated in either the recipient's background knowledge or in the recipient's belief function. I will consider four ways in which the recipient may accommodate likelihoods or likelihood ratios in its own belief state: (i) using Adams conditioning twice, thereby producing two precise belief states as intermediate posteriors from which subsequent Bayesian conditioning generates the intended posterior of the likelihood ratio mediated reasoning protocol, (ii) upon receiving a likelihood ratio, guessing a decomposition of it and then applying a single step of simultaneous Adams conditioning in advance of Bayesian conditioning, (iii) using proposition kinetics to add a propositional primitive to a temporary additional proposition space, which is then revised by Bayesian conditioning, followed by Jeffrey conditioning on the recipient's belief function, (Footnote 1: Attention is limited to precise belief functions because I am convinced by the conventional argument in favour of the use of precise belief states, namely that using imprecise belief states would leave TOF with an unmanageable burden of proof methodology and technology. Nevertheless, by using imprecise belief states as intermediate belief states proposition kinetics may just as well be described. After one or more steps of constraining (steps of belief kinetics which reduce imprecision, using the terminology of Voorbraak [42]) the belief state may be turned back into a precise belief state ready for subsequent Bayesian conditioning with the required effect.) and finally (iv) performing two steps of Bayes conditioning on two proposition spaces in a family with subsequent Jeffrey conditioning.

1.1 “Starring”: trier of fact (TOF) and mediator of evidence (MOE)

Although likelihood ratio transfer mediated reasoning has become quite established in a forensic setting, the principle is more generally applicable. (Footnote 2: The central role of likelihood ratios in reporting in forensic science (which includes forensic science based forensic practice) is strongly emphasized in the ENFSI guidelines (Willis et al. [49]).) In a forensic context the trier of fact (TOF) has the role of determining the truth of certain statements. Below such statements are referred to as hypothesis propositions. The TOF role may be played by a judge or by a jury. TOF may say “guilty” (or “not guilty”). The TOF may be in need of a science backed interpretation of available evidence. Providing such information is delegated to the forensic expert. In order to obtain a more generally applicable presentation I will speak of a mediator of evidence (MOE) rather than of a forensic expert (FE). TOF and MOE are the two major roles in any account of likelihood ratio transfer mediated reasoning.

1.2 Methodological assumptions and choices

This paper is written on the basis of certain assumptions which are worth mentioning.

  1. The paper is written as a contribution to the development of probability theory in the context of signed meadows, a joint project by Alban Ponse (University of Amsterdam) and myself, initiated in the spring of 2013. The technical work may be viewed in that light, whereas an attempt made in Section 7 to derive conclusions with relevance for forensic science, and forensic logic in particular, is a separate theme, quite disconnected from the intended advancement of meadow theory by considering potential applications of it.

  2. The technical work is done having in mind, at least initially, the paradigm of subjective probability theory with precise probabilities quantifying the strength of an agent's belief. This is done for the simple reason that this paradigm provides ample motivation and justification for the use of a range of transformations of probability functions. At the same time an open mind is called for with respect to other paradigms that might provide a justification for the same or a similar body of theoretical developments.

  3. Reading (and understanding) the very extensive foundational literature on probability theory is a substantial challenge. I have no basis for the claim that the results in this paper are new except that I did not yet find these results in this form elsewhere. Unfortunately, absence of evidence implies no evidence of absence in this sort of case.

  4. More specifically I must be quite cautious with making any claims regarding the technical correctness, the philosophical and methodological adequacy, and of course the novelty and originality of the following methods and notions which are defined and used in the paper: (i) the use of repeated single likelihood Adams conditioning for TOF side processing of transferred (incoming from MOE) likelihoods in advance of Bayes conditioning (for processing incoming evidence), (ii) the use of double likelihood Adams conditioning in advance of Bayes conditioning, (iii) the use of an additional (auxiliary and temporary) proposition space if the evidence proposition is not included in the proposition space of TOF, (iv) the use of Jeffrey conditioning as a follow up to Bayes conditioning, (v) the distinction between likelihood pairs and likelihood ratios, (vi) the distinction between synchronous likelihood pairs and asynchronous likelihood pairs, (vii) the notion of single message reporting (by MOE), (viii) the suggestion of parallel decomposition of a MOE, (ix) the perception of precise belief functions as a semantic model for a calculus of revisions of probability functions, (x) the successive stages for making the semantic model less abstract: finite sets of belief functions, finite sets of uniformly structured finite progressions of belief functions, and finite sets of uniformly structured finite progressions of finite families of proposition spaces and corresponding belief functions.

  5. The role of documented representations of belief functions for TOF is to support TOF in the determination of its eventually internalized beliefs. Psychological research has shown that unsupported human agents are likely not to follow the rules of probability calculus and methodologically justified belief revision in the manner that probability theory would prescribe. Stated differently, if TOF consists of human agents a theory of belief revision for TOF is not meant to describe the unsupported behaviour of TOF. Rather, this theory and its calculus are supposed to support TOF in achieving a reliable and defensible performance.

    Therefore any usable and explicit calculus or notation for belief states amounts to no more than a supportive tool for TOF, and the calculus is merely a toolkit. In agreement with Fenton, Neil & Berger [19] it is to be expected that in actual application TOF will make use of automated support, and that the calculation of actual beliefs and of belief state revisions will not become a task for human agents. However, at certain stages values of the current belief function are used by TOF and are incorporated in its own mental belief function. (Footnote 3: By consequence there is no rationale for raising fundamental objections against using imprecise beliefs in the toolkit as long as doing so is supportive of the formation of precise beliefs in TOF's mind, and as long as these “true beliefs” play a proper role in TOF's decision taking.)

  6. It follows from the core of subjective belief theory that belief revisions (by TOF) must always be applied to the entire belief function, thereby taking all elements of TOF's proposition space into account.

    More specifically, suppose that TOF's proposition space is generated by say four propositions: $H$ (main hypothesis proposition), $E$ (main evidence proposition), and $F_1, F_2$ (additional hypothesis or evidence propositions). Now a belief revision for TOF must specify all probabilities on the entire Boolean algebra of sentences over these four generators, which can be achieved by providing a specification of the $2^4 = 16$ values of the revised belief function on the atoms of that algebra.
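By way of illustration, the bookkeeping behind the "16 values" can be sketched in Python (the generator names H, E, F1, F2 and the uniform prior are placeholders; a precise belief function is represented by its values on the atoms of the algebra):

```python
from itertools import product

# Hypothetical generator names for TOF's proposition space.
GENERATORS = ["H", "E", "F1", "F2"]

def uniform_belief():
    """A precise belief function: one probability per atom (2**4 = 16 atoms)."""
    atoms = list(product([False, True], repeat=len(GENERATORS)))
    return {atom: 1.0 / len(atoms) for atom in atoms}

def prob(belief, sentence):
    """Probability of an arbitrary Boolean sentence, given as a predicate
    on valuations of the generators."""
    return sum(w for atom, w in belief.items()
               if sentence(dict(zip(GENERATORS, atom))))

p = uniform_belief()
p_H = prob(p, lambda v: v["H"])               # P(H)
p_HE = prob(p, lambda v: v["H"] and v["E"])   # P(H and E)
```

A belief revision must supply all 16 atom values at once; revising only some of them would not determine a probability function on the whole algebra.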

2 Probability calculus on the basis of involutive meadows

A meadow is a structure for numbers equipped with either a name/notation for the multiplicative inverse function (inversive notation) or with a name/notation for the division function (divisive notation). Given either name the other name can be introduced as an abbreviation.

From the perspective of forensic science it is wholly immaterial whether or not one's language has a name for a mathematical function, in this case division. From the perspective of formalizing the underlying logic, however, the presence or absence of such names is an important issue, and this observation applies to the case of forensic reasoning.

Once a name and notation (say division, with notation $a/b$ given arguments $a$ and $b$) has been introduced the question “what is $1/0$?” may be posed and is entitled to an answer. Assigning a value to $1/0$ can be done in at least 6 different ways, each of which has been amply investigated in the mathematical and logical literature about how one may deal with the multiplicative inverse of zero. A very straightforward idea, which is adopted below, is to work under the assumption that $0^{-1} = 0$, and hence $1/0 = 0$. This convention must not be understood as expressing a non-trivial insight about numbers and division, which has been overlooked by mainstream mathematics until now, so to say. The convention to set $0^{-1} = 0$ merely represents a choice (and only one choice out of a number of options) on how to base one's logic on a formalized version of arithmetic. Using $0^{-1} = 0$ on top of the standard axioms for numbers (the axioms for a commutative ring) leads to what is called an involutive meadow in [8].
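The convention can be modelled directly in a few lines of Python (a sketch of the zero-totalized inverse only, not of the full equational theory of meadows):

```python
from fractions import Fraction

def inv(x):
    """Multiplicative inverse under the involutive-meadow convention 0**-1 == 0."""
    x = Fraction(x)
    return Fraction(0) if x == 0 else 1 / x

def div(a, b):
    """Division defined from the inverse: a / b = a * inv(b), so 1/0 == 0."""
    return Fraction(a) * inv(b)
```

On nonzero arguments `div` agrees with ordinary division; the convention only decides the otherwise-undefined case.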

Developing precise logics for application in forensic science begins with the choice of a logic for arithmetic as a step towards having a logic for the values serving as probabilities. Mathematics does not provide such logics, however; that is rather the task of mathematical logic. Working with involutive meadows, a subclass (in fact a variety) of the class of ring based meadows, is just one option for choosing a logic of numbers. The approach via meadows derives from the computer science tradition where rational numbers are viewed as an abstract datatype. My preference for working with involutive (ring based) meadows derives from a preference for working with equations and conditional equations over the explicit use of quantifiers which comes with the use of full first order logic. (Footnote 4: I notice a marked absence of quantified formulae (in particular $\forall x.\,\phi$ and $\exists x.\,\phi$ for a formula $\phi$) in the forensic science literature. In computer science quantifiers are used all over the place. This relative lack of prominence of quantifiers is at first sight at odds with the frequent use of the term logic in forensics. However, assuming that only universally quantified formulae are used, and taking into account the convention (which prevails both in logic and in mathematics) to omit explicit mention of quantifiers while having universal quantification as a default, I entertain the view that the forensic science community implicitly shares my preference for working with a fragment (implicitly universally quantified formulae) of first order logic rather than with full first order logic.) This fragment, however, is more expressive than equational logic, which for instance does without negation. When working in equational logic the logical operators (negation, conjunction, disjunction, material implication) are dealt with as ordinary mathematical functions as well. This latter convention is by no means generally accepted and it undeniably comes with its own complications, but I will use it because it strongly facilitates the use of equations and conditional equations.

Below decimal notation will be used freely, under the assumption that, for example, $1.5$ abbreviates $15/10$, and so on. (Footnote 5: In Bergstra & Ponse [11] the formalization of decimal number notation by means of ground complete term rewriting systems, a useful shape of abstract datatype specification in preparation for prototyping implementations, is studied in detail.)

2.1 Proposition spaces and probability functions

Formulae and equations below are to be understood in the context of the specification (Footnote 6: Boolean algebra + meadows + sign function + a probability function named $P$.) taken from Bergstra & Ponse [10] for a probability function with name $P$ over an event space, which takes the form of a Boolean algebra of events and for which a finite collection of constants is available. (Footnote 7: For this specification a completeness theorem that was proven in Bergstra, Bethke & Ponse [6] is extended in [10].) The equational specification is extended with so-called conditional values, playing the role of random variables, and expectation values in Bergstra [5]. As is common in the forensic literature I will refer to events as propositions below. Formulae, however, involve sentences and syntax. I will write $\phi$ for a sentence while reserving $[\phi]$ for its interpretation, a notation I won't make use of however, and I apologise in advance for confusion that may be caused by a sometimes rather imperfect (sloppy) application of these conventions. Besides a Boolean algebra of propositions there is a Boolean algebra of sentences, which need not be a free Boolean algebra either.

Throughout the paper I will use Jeffrey's notation for lambda abstraction with a single variable: given a context $C[x]$ with free variable $x$, the function $\lambda x.\,C[x]$ is written $\hat{x}\,C[x]$.

2.2 Conditional probability: variations on a theme

Following [10], $P^{0}(a \mid b)$ is defined by

$$P^{0}(a \mid b) = \frac{P(a \wedge b)}{P(b)}.$$

The superscript $0$ indicates that $P^{0}(a \mid b) = 0$ whenever $P(b) = 0$, which is the effect of the convention $0^{-1} = 0$. In Bergstra & Ponse [10] several other options for defining conditional probabilities when taking the possibility of $P(b) = 0$ into account are discussed. For instance,

$$P^{1}(a \mid b) = \frac{P(a \wedge b)}{P(b)} + (1 - P(b) \cdot P(b)^{-1})$$

satisfies: $P^{1}(a \mid b) = 1$ whenever $P(b) = 0$, which fits well with material implication for two-valued logic. When dealing with Bayesian conditionalization safe conditional probability, written as $P^{s}(a \mid b)$, may be helpful:

$$P^{s}(a \mid b) = \frac{P(a \wedge b)}{P(b)} + (1 - P(b) \cdot P(b)^{-1}) \cdot P(a).$$

The advantage of safe conditional probability is that (using Jeffrey's notation as a “dedicated” instance of lambda abstraction) $\hat{a}\,P^{s}(a \mid b)$ is a probability function for all $b$, which is not the case for $\hat{a}\,P^{0}(a \mid b)$ and for $\hat{a}\,P^{1}(a \mid b)$. Denoting with $\uparrow$ the “undefined” outcome of a function, the conventional notion of a conditional probability due to Kolmogorov reads as follows:

$$P(a \mid b) = \frac{P(a \wedge b)}{P(b)} \lhd P(b) \rhd\ \uparrow.$$
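The totalized variants can be placed side by side in a small sketch (the function names are mine; safe conditioning is taken here to default to the prior $P(a)$ when $P(b) = 0$, the one choice that makes conditioning on an arbitrary $b$ yield a probability function):

```python
from fractions import Fraction

def cond_zero(p_ab, p_b):
    """Conditional probability under 0**-1 == 0: value 0 when P(b) == 0."""
    return p_ab / p_b if p_b != 0 else Fraction(0)

def cond_one(p_ab, p_b):
    """Variant returning 1 when P(b) == 0, matching material implication."""
    return p_ab / p_b if p_b != 0 else Fraction(1)

def cond_safe(p_ab, p_b, p_a):
    """Safe conditional probability: fall back to the prior P(a) when P(b) == 0."""
    return p_ab / p_b if p_b != 0 else p_a
```

Kolmogorov's partial notion would instead leave the $P(b) = 0$ case undefined, e.g. by raising an exception.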

Here $a \lhd b \rhd c$ stands for: if $b \neq 0$ then $a$ else $c$. The conditional operator allows a straightforward definition in ring based involutive meadows. (Footnote 8: In the presence of weaker axioms additional equations are needed.) Notwithstanding the fact that Kolmogorov's partial notion corresponds best with what ordinary school mathematics has to say about division, properly formulating its logic is far more involved than developing the logic needed to work with the totalized variants. (Footnote 9: The choice made below between the totalized variants is merely a matter of taste.) Using the conditional operator the definitions of the totalized conditional probabilities can be made more illuminating.

The literature on conditional probabilities taking probability zero for the condition into account is quite complex. Popper functions, nonstandard probabilities, Rényi's conditional probability, and De Finetti's coherent conditional probability come into play. Working with $0^{-1} = 0$ excludes some of these options for dealing with conditional probability functions, but choosing to work with an involutive meadow does not by itself introduce that kind of bias. On the contrary, by taking an involutive meadow as the point of departure one is well-placed to proceed with the formalization of each of the mentioned options (and more) for the definition of conditional probability functions.
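The definability of the conditional operator can be sketched as follows (an assumed formulation, writing $x \lhd b \rhd y$ for “if $b \neq 0$ then $x$ else $y$”): with $0^{-1} = 0$ the term $b \cdot b^{-1}$ behaves as the indicator of $b \neq 0$, so one may put

```latex
x \lhd b \rhd y \;=\; b\,b^{-1}\cdot x \;+\; (1 - b\,b^{-1})\cdot y .
```

For $b \neq 0$ the right hand side reduces to $x$, and for $b = 0$ it reduces to $y$.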

2.3 Relevance of meadows for the work in this paper

The use of meadows specifically in the context of this particular paper is indicated in the following items.

  • All equations are meant to be valid for all substitutions of values for variables. All conditions for equations to hold must be made explicit. (Footnote 10: No attempt is made in the paper to work out these matters in full detail. In many cases instead of an assumption that say $P(b) \neq 0$ it is assumed or derived that $P(b) \cdot P(b)^{-1} = 1$, which given the theory of meadows nearly amounts to the same. The claimed advantage is that working with meadows allows one to achieve 100% precision in these matters in principle. However, compared with a conventional style of mathematical writing regarding matters of division by zero, working with meadows does not imply or induce any additional commitment to a formalistic and possibly overly detailed approach.)

  • Various forms of Bayes' theorem take the form of derivable equations (see Bergstra and Ponse [10] and Bergstra [5]).

  • Proofs of equations can be given relative to the equational proof system (usually in combination with some equational and explicit operator definitions). No additional import of a theory of real numbers or of set theory is required.

  • A particular semantic problem that permeates conventional school mathematics is avoided. Suppose one insists that in the world of rational numbers the assertion $\frac{x}{x} = 1$ holds in general. Many formulations of Bayes' theorem presuppose this assumption. Then the following logical complication arises: for $\frac{x}{x} = 1$ to be true it is required that $x \neq 0$ is true. Thus only the guarded assertion $x \neq 0 \rightarrow \frac{x}{x} = 1$ is available.

    Assuming a classical two-valued logic $\frac{0}{0} = 1$ must be either true or false. But conventional mathematics is reluctant to commit to either option.

    Three-valued logics provide a solution, but the proof systems become unexpectedly complex.

    A computer science related perspective is to have a temporal interpretation of implication ($\to$), thus turning propositional calculus into a so-called short-circuit logic. The idea is that if the condition of an implication fails the conclusion is left unevaluated. The logical details of the short-circuit perspective have been worked out in ample detail in Bergstra, Ponse & Staudt [12]. These details, however, are prohibitively complex for application in forensic logics.

    Close to conventional mathematical intuition is to work with partial functions and to formalize arithmetic using a logic of partial functions. Designing logics of partial functions constitutes an intricate subject, however, providing no easy solutions.

  • Writing about an expression $1/x$ in no way has the side effect of introducing the assumption that $x$ is non-zero.

    The relevance of this matter may be clarified with an example: if in colloquial language and within informal mathematics one asks for the conditional probability that some agent $A$ has sold some object $F$ under the assumption (condition) that $A$ has stolen that object, one already implicitly states (requires, assumes) that the probability that $A$ has stolen the object is non-zero.

  • The equational logic that comes with the meadow approach is not supportive of introducing constraints of the form $P(b) \neq 0$ or of the form $P(b) > 0$, because these assertions are not equations and are not equivalent to any equations. An assumption of the form $P(b) > 0$ by itself is also conceptually non-trivial, in spite of its very common occurrence in explanations of Bayes' theorem and of Bayes conditioning.

    To appreciate this difficulty one may notice that the constraint $P(b) > 0$ cannot be conveniently expressed in the language of probability functions with precise beliefs. From the perspective of subjective probability with beliefs represented by precise probability functions, either the assumption that $P(b) > 0$ must be considered not to qualify as information with relevance to an agent's belief (because otherwise the agent should be able, by definition of the concept of subjective probability, to encode the level of uncertainty resulting after learning that $P(b) > 0$ in a probability function with precise values), or the limitation to precise values must be lifted, a step of significant magnitude in the current (2016) state of affairs in forensic logic.

  • On the basis of meadows, providing a formalization of the aspects of probability calculus relevant for the discussion below is relatively simple, and providing such a formalization in terms of equational logic is feasible.

2.4 Transformations of proposition spaces and corresponding precise belief functions

The work in this paper will be quite sensitive to the precise shape of proposition spaces and probability functions on proposition spaces. Rather than working out these matters in full detail by way of preparation for the sequel of the paper, I will limit this presentation to mentioning scattered aspects, giving definitions by way of representative examples rather than in a more general notational setting.

The proposition space of an agent, say $A$, will be denoted $P_A$. If it is known that the proposition space is generated by primitive propositions, say $u_1, \ldots, u_n$, I will write $P_A(u_1, \ldots, u_n)$; if there are only two generators one has e.g. $P_A(H, E)$ or $P_A(u, v)$.

A belief function $p_A$ (supposedly encoding beliefs of agent $A$) maps each proposition in a proposition space to a value in a meadow. I will only make use of the meadow of rational numbers below. More sophisticated work may call for reals (or even for complex numbers, as is the case in the non-commutative probability theory of quantum mechanics).

A belief function is best thought of as a pair $(P_A, p_A)$ of a proposition space and a probability function on it, though below the domain is often left implicit. A limited number of transformations on belief functions will play a role in this paper.

Bayes conditioning (without proposition kinetics).

Let for example $P = P(H, E)$. Suppose $p(E) > 0$ and that $E$ has been confirmed to be true. Then $p'$ is obtained by Bayes conditioning if it satisfies the following equation:

$$p'(x) = \frac{p(x \wedge E)}{p(E)}.$$

It follows that $p'(E) = 1$, and the proposition space is left unaffected. (Footnote 11: Bayes conditioning comes under alternative names: Bayes' conditioning, Bayes conditionalization, Bayes' conditionalization, Bayesian conditioning, Bayesian conditionalization. In this paper only Bayes conditioning and Bayesian conditioning are used.)
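A computational sketch of Bayes conditioning without proposition kinetics (the prior weights are invented for illustration; atoms of the space generated by H and E are written as pairs):

```python
from fractions import Fraction

# Illustrative prior over the four atoms of the space generated by H and E.
prior = {("H", "E"): Fraction(1, 10), ("H", "~E"): Fraction(2, 10),
         ("~H", "E"): Fraction(3, 10), ("~H", "~E"): Fraction(4, 10)}

def bayes_condition(p, event):
    """Bayes conditioning on `event` (a predicate on atoms): restrict the
    belief function to the event and renormalize; the proposition space
    itself is left unaffected."""
    mass = sum(w for atom, w in p.items() if event(atom))
    return {atom: (w / mass if event(atom) else Fraction(0))
            for atom, w in p.items()}

posterior = bayes_condition(prior, lambda atom: atom[1] == "E")
p_H = posterior[("H", "E")] + posterior[("H", "~E")]  # P'(H)
```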

Bayes conditioning with proposition kinetics.

Let once more $P = P(H, E)$. Suppose $p(E) > 0$. Then $p'$ is obtained by Bayes conditioning with proposition kinetics if it satisfies the following equation for all $x$ in the reduced proposition space:

$$p'(x) = \frac{p(x \wedge E)}{p(E)}.$$

Bayes conditioning with proposition kinetics removes $E$ from the proposition space, with the effect that after Bayes conditioning with respect to $E$ the proposition space has been reduced to $P(H)$.

Bayes conditioning on a non-primitive proposition.

Let, again by way of example, the proposition space of the agent have three generators: $P = P(H, E, F)$. Suppose $\phi$ is a closed propositional sentence making use of the primitives $E$ and $F$. Suppose $p(\phi) > 0$. Then $p'$ is obtained by Bayes conditioning on $\phi$ if it satisfies the following equation:

$$p'(x) = \frac{p(x \wedge \phi)}{p(\phi)}.$$

When conditioning on a non-primitive proposition, kinetics does not apply, i.e. the proposition space is left as it was.

Jeffrey conditioning.

Let for example $P = P(H, E)$. Suppose $0 < p(E) < 1$ and that $q$, with $0 \leq q \leq 1$, is the new probability to be assigned to $E$. Then $p'$ is obtained by Jeffrey conditioning if it satisfies the following equation:

$$p'(x) = q \cdot \frac{p(x \wedge E)}{p(E)} + (1 - q) \cdot \frac{p(x \wedge \neg E)}{p(\neg E)}.$$

Jeffrey conditioning involves no proposition kinetics. Bayesian conditioning may be understood as the version of Bayes conditioning without proposition kinetics. (Footnote 12: Jeffrey conditioning has finite as well as infinitary versions. According to Diaconis & Zabell [17] only its infinitary versions are stronger than any Bayesian rules.)
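Jeffrey conditioning on the partition {E, ¬E} with new weight q can be sketched as follows (invented prior weights; exact arithmetic via fractions):

```python
from fractions import Fraction

prior = {("H", "E"): Fraction(1, 10), ("H", "~E"): Fraction(2, 10),
         ("~H", "E"): Fraction(3, 10), ("~H", "~E"): Fraction(4, 10)}

def jeffrey_condition(p, in_E, q):
    """Move the total mass of E to q (and of ~E to 1 - q) while preserving
    the conditional probabilities inside each cell of the partition."""
    mass_E = sum(w for atom, w in p.items() if in_E(atom))
    return {atom: (q * w / mass_E if in_E(atom)
                   else (1 - q) * w / (1 - mass_E))
            for atom, w in p.items()}

post = jeffrey_condition(prior, lambda atom: atom[1] == "E", Fraction(9, 10))
```

With q = 1 this collapses to Bayes conditioning on E.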

Proposition space reduction.

Consider $P(H, E, F)$; one may wish to forget about, say, $F$. Proposition kinetics now leads to a reduced proposition space $P(H, E)$ in which only the propositions generated by $H$ and $E$ are left.

Proposition space reduction constitutes the simplest form of proposition kinetics.

Parametrized proposition space expansion.

Let $P = P(H, E)$. One may wish to expand $P$ to a proposition space $P(H, E, F)$ by introducing $F$ to it in such a manner that a subsequent reduct brings one back to $P(H, E)$.

The restriction of the new belief function to the old propositions is left unchanged, but the values on the new atoms must be fixed with definite values. A specification of the new probability function, say $p'$ (with domain $P(H, E, F)$), is: $p'(x \wedge F) = \alpha_x \cdot p(x)$ and $p'(x \wedge \neg F) = (1 - \alpha_x) \cdot p(x)$ for the atoms $x$ of $P(H, E)$, with the $\alpha_x$ appropriate rational number expressions. If one intends to extend $P$ with two new propositions instead, four additional values of the probability function are needed per atom, and so on.

Symmetric proposition space expansion.

Let $P = P(H, E)$. One may wish to expand $P$ to a proposition space $P(H, E, F)$ by introducing $F$ to it in such a manner that a subsequent reduct brings one back to $P(H, E)$, but one may not wish to guess any parameters. Now it suffices to assert that $p'(x \wedge F) = p'(x \wedge \neg F) = \frac{1}{2} \cdot p(x)$ for each closed propositional expression $x$ over the propositional primitives $H$ and $E$; in other words, all parameters are chosen with value $\frac{1}{2}$.
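Symmetric expansion and the corresponding reduct can be sketched as follows (the fresh generator name G is illustrative):

```python
from fractions import Fraction

def expand_symmetric(p, new_name="G"):
    """Symmetric expansion by a fresh generator: every atom splits into two
    atoms, each carrying half the weight."""
    out = {}
    for atom, w in p.items():
        out[atom + (new_name,)] = w / 2
        out[atom + ("~" + new_name,)] = w / 2
    return out

def reduct(p):
    """Forget the last generator again (proposition space reduction)."""
    out = {}
    for atom, w in p.items():
        out[atom[:-1]] = out.get(atom[:-1], Fraction(0)) + w
    return out

prior = {("H",): Fraction(1, 3), ("~H",): Fraction(2, 3)}
expanded = expand_symmetric(prior)
```

The reduct of the expansion returns the original belief function, as required.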

Base rate inclusion.

This is a special case of parametrized proposition space expansion, and a generalization of symmetric proposition space expansion. Let $r$ be a closed value expression with $0 \leq r \leq 1$, and assume that $B$ is a new proposition name. $B$ is introduced in order to include the base rate $r$ (for some relevant type of event, named $B$) in the probability function.

The probability function is extended as follows: $p'(x \wedge B) = r \cdot p(x)$ and $p'(x \wedge \neg B) = (1 - r) \cdot p(x)$, for all sentences $x$ not involving $B$.

Single likelihood Adams conditioning.

Let $\lambda$ be a rational number, $0 \leq \lambda \leq 1$ (given by a closed expression for it). Assume that $H$ and $E$ are among the generators of $P$. Single likelihood Adams conditioning with new likelihood $p'(E \mid H) = \lambda$ leaves the proposition space unchanged and transforms the probability function to (leaving out the subscript for ease of notation)

$$p'(x) = \lambda \cdot \frac{p(x \wedge H \wedge E)}{p(H \wedge E)} \cdot p(H) + (1 - \lambda) \cdot \frac{p(x \wedge H \wedge \neg E)}{p(H \wedge \neg E)} \cdot p(H) + p(x \wedge \neg H).$$
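A sketch of single likelihood Adams conditioning on a two-generator space, under the reading assumed here (following Bradley's presentation as understood above): the posterior imposes $p'(E \mid H) = \lambda$ while keeping $p(H)$ and all probabilities conditional on $\neg H$ unchanged; the prior weights are invented:

```python
from fractions import Fraction

prior = {("H", "E"): Fraction(1, 10), ("H", "~E"): Fraction(2, 10),
         ("~H", "E"): Fraction(3, 10), ("~H", "~E"): Fraction(4, 10)}

def adams_single(p, lam):
    """Set p'(E | H) = lam; p(H) and the ~H part are untouched, and the
    proposition space is left unchanged."""
    p_H = p[("H", "E")] + p[("H", "~E")]
    out = dict(p)
    out[("H", "E")] = lam * p_H
    out[("H", "~E")] = (1 - lam) * p_H
    return out

post = adams_single(prior, Fraction(4, 5))
```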

Double likelihood Adams conditioning.

Let $\lambda_1, \lambda_2$ be two rational numbers, $0 \leq \lambda_1, \lambda_2 \leq 1$ (each given by a closed meadow expression). Assume that $H$ and $E$ are among the generators of $P$. Double likelihood Adams conditioning with new likelihoods $p'(E \mid H) = \lambda_1$ and $p'(E \mid \neg H) = \lambda_2$ leaves the proposition space unchanged and transforms the probability function to

$$p'(x) = \lambda_1 \cdot \frac{p(x \wedge H \wedge E)}{p(H \wedge E)} \cdot p(H) + (1 - \lambda_1) \cdot \frac{p(x \wedge H \wedge \neg E)}{p(H \wedge \neg E)} \cdot p(H) + \lambda_2 \cdot \frac{p(x \wedge \neg H \wedge E)}{p(\neg H \wedge E)} \cdot p(\neg H) + (1 - \lambda_2) \cdot \frac{p(x \wedge \neg H \wedge \neg E)}{p(\neg H \wedge \neg E)} \cdot p(\neg H).$$
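Under the same assumed reading, double likelihood Adams conditioning fixes both likelihoods simultaneously, $p'(E \mid H) = \lambda_1$ and $p'(E \mid \neg H) = \lambda_2$, again keeping $p(H)$ unchanged (a sketch with invented weights):

```python
from fractions import Fraction

prior = {("H", "E"): Fraction(1, 10), ("H", "~E"): Fraction(2, 10),
         ("~H", "E"): Fraction(3, 10), ("~H", "~E"): Fraction(4, 10)}

def adams_double(p, lam1, lam2):
    """Set p'(E | H) = lam1 and p'(E | ~H) = lam2 while keeping p(H) fixed."""
    p_H = p[("H", "E")] + p[("H", "~E")]
    return {("H", "E"): lam1 * p_H,
            ("H", "~E"): (1 - lam1) * p_H,
            ("~H", "E"): lam2 * (1 - p_H),
            ("~H", "~E"): (1 - lam2) * (1 - p_H)}

post = adams_double(prior, Fraction(4, 5), Fraction(1, 5))
```

The induced likelihood ratio p'(E | H) / p'(E | ~H) is then lam1 / lam2, which is how a transferred ratio may be decomposed into two likelihoods.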

2.5 A labeled transition system of credal states

A pair $(P, p)$ is the mathematical (or, if one prefers, logical) counterpart of an agent $A$'s state of beliefs. As $A$ may have beliefs not captured in $(P, p)$ one often speaks of $A$'s partial beliefs or of $A$'s partial state of beliefs.

Thus $(P, p)$ contains (as elements of $P$) and quantifies (via $p$) only some of the agent's beliefs. More generally $(P, p)$ plays the role of a credal state in a model of the kinetics (dynamics) of $A$'s credences. Two credal states $(P, p)$ and $(P, p')$ are called compatible if the same propositions (or rather sentences) of $P$ have probability 1 under $p$ as under $p'$.

Now one may imagine the collection $\mathbb{C}$ of all credal states (assuming these are of the form of a probability function on a Boolean algebra which takes values in the signed meadow of rationals) over finite subsets of a countable set $U$ of propositional atoms.

Each of the transformations as outlined above in Paragraph 2.4 may be viewed as a rule which generates so-called labeled transitions. Labels are derived from rules, and the label created from a rule contains information about the name of the transformation and possibly its parameters, while the transition itself is between the prior and posterior state of the transformation. There is significant freedom in the details of designing labels.

For instance Bayes conditioning without proposition kinetics on a primitive proposition $E$ may be given a label recording the rule name and the parameter $E$; it requires of the prior credal state $(P, p)$ that $p(E) > 0$, and the posterior credal state is $(P, \hat{x}\,p(x \wedge E)/p(E))$. For Bayes conditioning with proposition kinetics a label of the same kind, distinguished by the rule name, can be chosen, and for Bayes conditioning on a non-primitive proposition $\phi$ a label recording $\phi$ may be used. In this manner all transformations of credal states can be understood as rules generating transitions from one credal state to the next, and it turns out that a portfolio of transformations, for instance such as given in Paragraph 2.4, provides the definition of a labeled transition system. (Footnote 13: This is the way one often looks for an interpretation of a system of states and transformations in mathematical computer science.)

With $L$ the collection of all labels is denoted that come about when turning the portfolio of transformations listed in Paragraph 2.4 into transition rules. Metavariables $s$ and $t$ range over credal states, and $s \xrightarrow{\ell} t$ asserts that there is a transition from $s$ to $t$ with label $\ell \in L$. The labeled transition system obtained from these data is denoted $\mathbb{C}_L$. The collection of credal states with proposition space generated by a finite set $X \subseteq U$ is denoted $\mathbb{C}(X)$, where $X$ indicates that the states have precisely the proposition space generated by $X$.
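The rule-to-transition reading can be sketched for a toy two-generator space (the label format ('bayes', value) is an arbitrary choice, in line with the freedom in designing labels noted above):

```python
from fractions import Fraction

prior = {("H", "E"): Fraction(1, 10), ("H", "~E"): Fraction(2, 10),
         ("~H", "E"): Fraction(3, 10), ("~H", "~E"): Fraction(4, 10)}

def bayes_step(p, column, value):
    """One Bayes-conditioning transition; enabled only if the conditioning
    event has positive mass in the prior credal state."""
    mass = sum(w for atom, w in p.items() if atom[column] == value)
    if mass == 0:
        return None  # the labelled transition is not enabled in this state
    return {atom: (w / mass if atom[column] == value else Fraction(0))
            for atom, w in p.items()}

def transitions(p):
    """All ('bayes', value)-labelled transitions out of credal state p."""
    out = []
    for column, value in [(0, "H"), (1, "E")]:
        target = bayes_step(p, column, value)
        if target is not None:
            out.append((("bayes", value), target))
    return out

ts = transitions(prior)
```

Whether a label is enabled in a state is exactly the kind of observation that bisimulation relations compare.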

Definition 2.5.1.

A bisimulation on is a family of relations on for all finite which satisfies the following requirements:

  1. for all finite and , , ( is reflexive),

  2. for all finite and , if then ( is symmetric),

  3. for all finite and , if , and then ( is transitive),

  4. if

    1. are finite subsets of ,

    2. ,

    3. ,

    4. ,

    5. , and,

    6. ,

    then for some :

    1. , and

    2. ,

Bisimulation relations play a key role in computer science, as well as in modal logic. This definition has been spelled out in much detail because the notion may be unknown in forensic statistics. The following facts can be easily shown:

  • When taking the identity relation for each finite , a bisimulation relation is obtained.

  • If and then if it is the case that for some bisimulation , it must be the case that and are compatible. Suppose otherwise; then say and for some sentence in . But then admits a Bayes conditioning step (perhaps on a non-primitive proposition) on whereas does not admit such a transition.

  • Let contain all pairs of compatible credal states over , then is a bisimulation relation, and moreover it is the maximal bisimulation relation.

Both and are so-called trivial bisimulations. Of these is proper while is degenerate. Using computer science terminology, is called fully abstract if there is no non-trivial bisimulation on which identifies more pairs of credal states than does, that is, if it is a maximal non-degenerate bisimulation.
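The claim that the compatibility relation constitutes a bisimulation can be checked mechanically on a toy instance. The sketch below, with hypothetical names of my own and modelling only the Bayes conditioning rules, verifies that two compatible credal states admit exactly the same Bayes transitions and that matching transitions again lead to compatible states.

```python
from fractions import Fraction
from itertools import product

WORLDS = [('H', 'E'), ('H', '-E'), ('-H', 'E'), ('-H', '-E')]

def events():
    """All events (subsets of worlds) over the four-world space."""
    for bits in product([0, 1], repeat=4):
        yield frozenset(w for w, b in zip(WORLDS, bits) if b)

def prob(state, ev):
    return sum(state[w] for w in ev)

def compatible(p, q):
    """Same events receive probability 1 under both belief functions."""
    return all((prob(p, ev) == 1) == (prob(q, ev) == 1) for ev in events())

def bayes(state, ev):
    t = prob(state, ev)
    if t == 0:
        return None  # no Bayes transition on this event
    return {w: (state[w] / t if w in ev else Fraction(0)) for w in WORLDS}

p = dict(zip(WORLDS, [Fraction(1, 2), Fraction(1, 4),
                      Fraction(1, 8), Fraction(1, 8)]))
q = dict(zip(WORLDS, [Fraction(1, 4)] * 4))

# p and q are compatible, every Bayes transition of p is matched by q
# under the same label, and the resulting states are again compatible:
assert compatible(p, q)
for ev in events():
    pp, qq = bayes(p, ev), bayes(q, ev)
    assert (pp is None) == (qq is None)
    if pp is not None:
        assert compatible(pp, qq)
```

This only exercises the Bayes rules on one small proposition space; the full bisimulation property quantifies over all rules in the portfolio.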

Open question.

The following question seems to be open: is a fully abstract bisimulation on ?

In the absence of an answer to this question one may easily check that any bisimulation between and lacks any intuitive appeal. Thus I arrive at the (preliminary) conclusion that , by default equipped with , may serve as a standard model of credal states. In view of the restriction to rational values for probabilities this model is in fact the rational valued standard model of credal states (), whereas working with real values in full generality produces the real valued standard model () of credal states. Below I will focus on the rational model of credal states merely because that restriction provides a useful simplification.

2.6 The standard model of credal states: the status of the assumption of precise beliefs

Having portrayed as the standard model of credal states several remarks can be made:

  • represents the mathematical model behind the Lindley framework: subjective beliefs with beliefs represented as precise probabilities on a well-defined space of propositions. (The Lindley framework produces a model for forensic reasoning based on subjective probability theory plus the precise belief assumption. Ignoring variations on the theme I will write as if one may refer with “Lindley framework” to a definite position which arose from the works of Ramsey, de Finetti, Carnap, Lindley, and Evett, and which is now represented by authors including Berger, Biedermann, and Taroni.)

  • By viewing as the standard model it is not expressed that it is the most useful model or the best model for applications inside or outside forensics. What it expresses is that in mathematical terms this model comes first and that other models, when contemplated at all, are preferably viewed as more sophisticated variations on the same theme.

  • The main characteristic of the standard model is the adoption of the principle that beliefs must be precise. Although I am not convinced that the latter principle can be elevated to the status of an irrefutable “axiom” for forensic logic, I appreciate very much the pragmatic value of the restriction to precise beliefs for the development of theory, as indeed this single assumption provides so much structure.

  • My position concerning the principle that subjective beliefs must be precise is that the theory of forensic logic is best developed under this very assumption, at least initially, while keeping an open mind for more inclusive perspectives on both the notion and the representation of beliefs. When it comes to the well-known fallacies there is no indication that the analysis of these forms of erroneous reasoning requires the contemplation of imprecise beliefs. In other words: the primary analysis of well-known fallacies is to be done within the setting of precise beliefs. Concerning transposition of the conditionals, my proposal for an analysis of that particular fallacy is given in Theorem 8.1.1 below.

  • Concerning the adoption of imprecise beliefs the following observation can be made: there can be no objection against the use of imprecise beliefs if these serve the purpose of obtaining information regarding precise beliefs. The situation may be compared with the status of negative numbers: one may dispute the status of negative numbers as numbers, but one may hardly dispute the status of negative numbers as a tool for the investigation of properties of natural (non-negative) numbers. In the setting of subjective belief theory an analogous argument indicates that the introduction of imprecise beliefs may be first of all conducted with the intent of providing a reasoning tool within a setting based on precise beliefs. Stated yet differently: when introducing sets of numbers as a tool for number theory one need not speak of a paradigm shift in the direction of imprecise numbers. On the contrary it is taken for granted that by introducing sets of numbers the concept of number is not affected. Similarly, calculating with sets of belief functions (non-singleton representors) does not change the notion of a belief.

2.7 Options for refined transition systems for proposition and belief kinetics

Instead of considering bisimulations that identify more credal states than , it is meaningful to look for different models of credal states which have the property that the model (that is, the standard model) can be obtained from them by working modulo an appropriate bisimulation relation. Such models provide a less abstract picture of credal states from which the standard model may be obtained by further abstraction. At least three different patterns for refining the standard model of credal states can be distinguished. (In Section 7 below some additional comments will be given on these different mechanisms for obtaining refined models of credal states from the standard model.)

  • Pointed finite sets of probability functions for each proposition space. This refinement of the standard model makes it possible to capture a restricted form of ignorance by providing zero or more alternatives to a belief function. This option may be relevant for an expert witness (MOE) who, in addition to a belief, wishes to express a perspective on the statistics of the process that led the witness to the reporting of that particular belief.

  • Finite progressions of probability functions, allowing one to attach to a belief some account of how it came into existence by way of a sequence of preceding transformations. This option may be relevant for an expert agent (MOE) who wishes to report at the same time (single message reporting) on the development of its beliefs during some preceding episode.

  • Instead of having a fixed proposition space for a credal state, each credal state may be based on a finite family of proposition spaces, each equipped with its own probability function. This refinement can deal with circumstances where no universal probability function can be defined on the proposition space generated by the union of the generators of the respective proposition spaces in the family. This kind of generalisation plays a role in quantum mechanics. Whether relevant applications of proposition space families can be found in the area of forensic logic remains to be seen.

3 Belief kinetics for likelihood ratio transfer mediated reasoning

Likelihood ratio transfer mediated reasoning (LRTMR) is meant to refer to a spectrum of reasoning patterns used at the receiving side of probabilistic information. (It is worth mentioning that logical aspects of courtroom proceedings related reasoning worthy of formal scrutiny arise in quite different contexts as well. For instance the implicit proof rule for the probability of a conjunction as listed in Arguello [1] seems to be wrong.) LRTMR is a container for Bayesian reasoning patterns. Fienberg & Finkelstein [20] provide a historic account of the use of Bayesian reasoning in US trials and propose the avoidance of fallacious reasoning as the primary objective for the promotion of Bayesian reasoning, in the light of a substantial difficulty in getting the content of Bayesian reasoning across to legal professionals and members of juries. In order to emphasize the general nature of the protocols and methods for LRTMR, and in order to simplify the presentation of expressions and proofs, I will use instead of TOF and instead of MOE.

In this Section it is assumed that the proposition space of is left unchanged during the reasoning process. In other words, there is only belief kinetics but no proposition kinetics. (In the literature on subjective probability theory the phrase belief dynamics is used as an alternative to belief kinetics, and proposition dynamics instead of proposition kinetics.)

Ignoring proposition kinetics while focusing on belief kinetics constitutes a significant simplification, while it seems to be consistent with current practice in forensic reasoning. For these reasons the case involving belief kinetics only is considered the primary case in this paper. (The simplest examples of LRTMR initially involve only a single hypothesis proposition and might for that reason be qualified as involving proposition dynamics in addition to belief dynamics.) The following outline of LRTMR serves as a point of departure for some more technical work. (It is worth mentioning that in this paper no attempt is made to unify all or many reasoning patterns based on Bayesian conditioning. For instance the reasoning pattern discussed by Stephens in [38] lies outside the patterns considered below.)

MOE is not expected to determine ready made posterior probabilities for (transferal to) TOF because MOE needs to be informed about its prior beliefs by TOF in order to compute suitable posteriors. The idea behind this restriction is that TOF should be under no pressure to disclose its priors to MOE because these priors are exclusively relevant (if at all) for intra TOF deliberation.

3.1 Preparation: evidence transfer mediated reasoning and the Taxi Color Case

The simplest reasoning pattern involving ’s reaction to (way of processing of) an input from occurs if maintains a proposition space and a precise belief function defined on it for which . In these circumstances it may occur that sends to a message to the effect that according to the proposition is true (more precisely: the sentence denotes a valid proposition), or in other words that . trusts and intends to adopt the belief that is certainly valid. reacts to the input from by performing Bayesian conditioning on , thereby revising its belief function to .

A paradigmatic example of this reasoning pattern occurs in the so-called Taxi Color Case as specified in detail in Schweizer [35]. (Schweizer [35] contains a detailed description of the taxi color case, together with a useful survey of precise terminology in German about forensic reasoning patterns involving likelihood transfer and Bayesian conditioning. In Schweizer [36] the similar bus color scenario is mentioned in an exposition concerning the legal value of base rates.) I will decorate the case description with some additional details. In a town (here TCCC, Taxi Color Case City), in total 1000 taxis circulate, 150 of which are green and 850 of which are blue. A witness stated that (s)he saw a defendant leave with a green taxi from a specific location, in particular taking the first taxi in the taxi queue in front of restaurant at 23:00.

I will simplify the case in comparison to Schweizer’s description in Schweizer [35] by assuming that includes the estimate of a base rate for the correctness of ’s testimony. According to ’s background knowledge it may be expected in general for a witness operating in the conditions of at the time of the reported event that the witness (not just the actual witness but also some average of test candidates) will report the color of the taxi correctly with a probability of 80%. In Schweizer’s description, in contrast, investigates the statement of the witness, including an investigation concerning ’s ability to correctly report about the color of a taxi, including for instance (my details) information regarding the position from where she claimed to have been standing at the alleged time of ’s departure by taxi, and taking into account the overall illumination of the scene.

determines a proposition space with two propositions: (hypothesis proposition asserting that left with a green taxi), and (evidence proposition asserting that according to ’s testimony left with a green taxi). uses, lacking other data, the base rate on operational taxis (irrespective of location and time) of 150/1000 to set , and uses the base rate of 80% valid reporting (for both colors) to set: and , so that and . It follows that . Thus serves as a prior credal state for .

Now one assumes that obtains evidence from in the form of its assertion that made a testimony which may be faithfully rendered at the relevant level of abstraction as , so that may now assume that is true. Given the acquired additional information, may mitigate the consequences of its prior adoption of base rates, as a source for its prior credal state, by applying Bayesian conditioning on , with in particular: and .
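With the numbers given above (150 green taxis out of 1000, and 80% reporting reliability for both colors), the Bayesian conditioning step can be carried out exactly over the rationals. The following is a minimal sketch; the variable names are my own.

```python
from fractions import Fraction

# Taxi Color Case numbers from the text
p_H = Fraction(150, 1000)           # prior: defendant left in a green taxi
p_E_given_H = Fraction(80, 100)     # witness says "green" given it was green
p_E_given_notH = Fraction(20, 100)  # witness says "green" given it was blue

# prior probability of the evidence proposition (law of total probability)
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)   # 29/100

# Bayesian conditioning on the evidence proposition
posterior_H = p_E_given_H * p_H / p_E                  # 12/29
```

Conditioning on the testimony thus raises the belief that the taxi was green from 150/1000 to 12/29 (roughly 0.41), still below one half.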

The example may be decorated with more detail by having for instance a proposition as a base rate proposition included with (see the listing of transformations of proposition spaces and probability functions above), thus obtaining a proposition space for . (Here the idea is not to indirectly involve (via base rate inclusion) the mechanism of proposition kinetics in the response of to the information obtained from , but merely to make use of (or to suggest the use of) proposition kinetics in ’s process of prior belief state construction.)

The example, as presented here and in contrast with Schweizer’s presentation, makes no use of the determination of a single likelihood or of a pair of likelihoods by . For that reason there is no occurrence of a transfer from to of either a single likelihood, or of two subsequent likelihoods, or of a simultaneously transferred pair of likelihoods, or of the transfer of a ratio between two likelihoods. In subsequent paragraphs a variety of cases is considered where determines a likelihood pair and , and subsequently conveys these either in successive steps, or in a single step as a pair, or in a single step while merely transferring the ratio of both, with the intent of overruling the respective likelihoods which are given by ’s prior belief function. At this stage it must be emphasized that because proposition kinetics is ruled out (in this Section) the partial prior beliefs of do provide prior values for both likelihoods and consequently for the likelihood ratio.

Transposition of the conditional and the TCC.

In the TCC example (without proposition kinetics) as outlined above, one finds and , from which it might be concluded that TCC (with these parameters) provides a counterexample to the so-called fallacy of transposing the conditional. However, in my view this conclusion would be mistaken. Presence of a case of the fallacy of transposing the conditional would suggest that might be (erroneously) inferred from . The objection I raise against that position is that, given the fact that one is dealing with a given credal state consisting of a single proposition space and its corresponding precise belief function , there is no sense in which is known (or might be known) in advance of having knowledge of the value of , let alone any way to infer for any particular from . In other words there is no form of inference going on.

The option to infer for some closed value expression from and possibly other data definitely arises if one works with collections of probability functions as specified with equations in such a manner that some but not all values of the probability function at hand are specified, so that a value for might conceivably be unknown. However, such an interpretation is at odds with the principle that beliefs must be precise (see Paragraph 2.6 above for a discussion of the status of this principle). Concerning transposition of the conditionals, an analysis of that particular fallacy within the setting of precise beliefs is given in Theorem 8.1.1 below.

The relevance of TCC.

The occurrence of a single taxi departure in TCCC provides a remarkably nice case study for theoretical work as it allows an amazing range of further details and significant complications, among them:

  1. the presence of multiple witnesses, potentially with different reliability and with conflicting assertions, thereby introducing issues of probabilistic independence,

  2. different methods for determining witness reliability, ranging from potentially problematic guesswork reported in poor documentation to well documented, scientific research strength (and therefore evidence based) methods of investigation, of course taking into account that likelihoods may be color dependent;

  3. taking other colors into account, taking car model information into account, taking partial (unreliable) number plate information into account, taking taxi management scheduling and monitoring into account;

  4. observations concerning other taxis in the queue in front of restaurant ;

  5. observations on the order of events, was waiting for a taxi, or did (s)he find one waiting upon arrival;

  6. improved base rate estimations, (possibly measuring the relative frequency of green taxis in that particular queue at that time of the day);

  7. a lying witness, withdrawal of a statement by a witness, a forgetful witness;

  8. suspicion of witness intimidation;

  9. conspiring witnesses;

  10. relations between witness reliability and the time, method, and process of interview;

  11. different mechanisms of background knowledge management for , and finally a range of different interaction scenarios for , , and other relevant agents (e.g. the prosecution or the defendant’s attorney).
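As a hint of how item 1 plays out quantitatively: if two witnesses are assumed to be conditionally independent given the hypothesis, their likelihood ratios simply multiply in the odds form of Bayes' Theorem. The sketch below uses the TCC base rate; the second witness's 90% reliability is a hypothetical figure of mine, not part of the case description.

```python
from fractions import Fraction

prior_odds = Fraction(150, 850)   # green vs. blue taxis in TCCC

lr1 = Fraction(80, 20)  # witness 1: 80% reliable for both colors -> LR = 4
lr2 = Fraction(90, 10)  # witness 2 (hypothetical): 90% reliable -> LR = 9

# under conditional independence, likelihood ratios multiply
posterior_odds = lr1 * lr2 * prior_odds
posterior_H = posterior_odds / (1 + posterior_odds)   # 108/125
```

Dropping the independence assumption is exactly where the "issues of probabilistic independence" mentioned in item 1 begin.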

3.2 Outline of the LRTMR reasoning pattern

It is now assumed that both and are members of both proposition spaces and . Both and have prior belief states and with respective domains and . The reasoning protocol LRTMR involves the following steps:

  1. It is assumed that and , otherwise the protocol aborts.

  2. determines the value of the likelihood ratio with respect to its probability function .

  3. If the protocol aborts.

  4. communicates to the value and a description of , that is a description of what propositions is an LR of.

  5. communicates its newly acquired information to that it now considers , i.e. being true, to be an adequate representation of the state of affairs.

  6. trusts to the extent that prefers those of ’s quantitative values that communicates during a run of the protocol over its own values for the same probabilities, likelihoods, and likelihood ratios.

  7. takes all information into account and applies Bayesian conditioning to end up with its posterior belief function which satisfies:

$P_{\mathrm{post}}(H) = \dfrac{\mathit{LR} \cdot P(H)}{\mathit{LR} \cdot P(H) + P(\neg H)}$

The equation that specifies the posterior belief on the hypothesis proposition is equivalent in probability calculus to the more familiar odds form of Bayes’ Theorem:

$\dfrac{P_{\mathrm{post}}(H)}{P_{\mathrm{post}}(\neg H)} = \mathit{LR} \cdot \dfrac{P(H)}{P(\neg H)}$
The description of LRTMR is a drastic abstraction used for the purposes of the paper and many aspects are left unspecified, such as for instance: (i) has an invitation to occurred for it to play a role in the protocol, (ii) what is done if the assumptions are not met, (iii) how is an abort of the revision process performed when necessary, (iv) are any assumptions about the absence of background knowledge required for either or for , (v) how can it be made sure that checking various conditions does not involve information transfer between and which stands in the way of properly performing the conditioning operations?
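The TOF-side computation in step 7 can be sketched as follows. This is a minimal sketch with names of my own choosing; the abort conditions of steps 1 and 3 are reduced to simple guards.

```python
from fractions import Fraction

def lrtmr_update(lr, prior_H):
    """Receiving side of LRTMR: trust the communicated likelihood ratio
    `lr`, accept the evidence proposition as true, and Bayes-condition
    via the odds form of Bayes' Theorem."""
    if prior_H == 0 or prior_H == 1:
        raise ValueError("protocol aborts: degenerate prior")
    if lr is None:
        raise ValueError("protocol aborts: undefined likelihood ratio")
    prior_odds = prior_H / (1 - prior_H)
    posterior_odds = lr * prior_odds
    return posterior_odds / (1 + posterior_odds)

# With the Taxi Color Case numbers: LR = 0.8/0.2 = 4, prior 0.15
post = lrtmr_update(Fraction(4), Fraction(15, 100))   # 12/29
```

Note that the receiver never discloses its prior: the prior odds enter only on the TOF side of the computation, matching the remark above about intra-TOF deliberation.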

3.3 Belief kinetics I: single likelihood Adams conditioning and local soundness

I will first consider an adaptation of the protocol named LPTMR for likelihood pair transfer mediated reasoning. LPTMR results from LRTMR by modifying step 4 as follows:

determines and such that , , and . (It is assumed that and are known as closed expressions with non-zero and non-negative value not in excess of 1 for the meadow of rational numbers. This assumption is implicitly used many times below in order to be able to apply for various terms t. The same use is made of non-zero prior odds and , which must as well be known in terms of such expressions so as to guarantee and .) communicates both and to in addition to information concerning what sentences these values are likelihoods of.

In order to process the incoming information concerning and , first applies the following transformation, thereby obtaining an intermediate (precise) belief function :


Following the exposition of Bradley [15] this is the Adams transformation corresponding to an intended update of likelihood to value .

Next applies Adams conditioning to in order to update its likelihood to value , thus obtaining a second intermediate belief function :


Finally applies Bayesian conditioning to with respect to :


The following facts can be shown concerning this sequence of three conditioning steps:

Theorem 3.3.1.

Given the assumptions and definitions mentioned above, the following identities are true for , and :

From these facts the following conclusions can be drawn:

  • If the intention is to perform conditionalization on then is a plausible candidate for the posterior of after execution of rule LRTMR.

  • The result of conditioning on does not depend on the way and are chosen so that .

  • Conditioning of on is independent from the way is written as a fraction. This independence will be referred to as the local soundness of the LPTMR (likelihood pair transfer mediated reasoning) inference method.

  • If is nonzero, the result of conditioning propositions other than with respect to may depend on the particular choice of and .

  • A symmetry argument yields that performing both Adams conditioning steps in the other order leads to the same result.

  • Performing Adams conditioning for and for in either order is equivalent to double likelihood Adams conditioning.


The proof of Theorem 3.3.1 is a matter of calculation on the basis of the available equational axioms and definitions.
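The local soundness claim for the hypothesis proposition can also be checked concretely. The sketch below uses my own encoding of a two-proposition credal state as a joint distribution over the four H/E combinations; it performs the two single-likelihood Adams conditioning steps (each keeping the probability of the hypothesis and the untouched likelihood fixed) followed by Bayesian conditioning, and confirms that two different factorizations of the same likelihood ratio yield the same posterior for the hypothesis.

```python
from fractions import Fraction

def adams_then_bayes(joint, a, b):
    """Update P(E|H) to a and P(E|-H) to b by two single-likelihood Adams
    conditioning steps, then Bayes-condition on E; return the posterior
    probability of H. `joint` maps 'HE', 'H-E', '-HE', '-H-E' to values."""
    pH = joint['HE'] + joint['H-E']
    # step 1: Adams conditioning, setting P(E|H) = a, keeping P(H) fixed
    j1 = {'HE': a * pH, 'H-E': (1 - a) * pH,
          '-HE': joint['-HE'], '-H-E': joint['-H-E']}
    # step 2: Adams conditioning, setting P(E|-H) = b, keeping P(H) fixed
    j2 = {'HE': j1['HE'], 'H-E': j1['H-E'],
          '-HE': b * (1 - pH), '-H-E': (1 - b) * (1 - pH)}
    # step 3: Bayesian conditioning on E
    pE = j2['HE'] + j2['-HE']
    return j2['HE'] / pE

# A prior credal state with P(H) = 15/100 (the TCC prior)
prior = {'HE': Fraction(3, 25), 'H-E': Fraction(3, 100),
         '-HE': Fraction(17, 100), '-H-E': Fraction(17, 25)}

# Two factorizations of the same likelihood ratio 4
r1 = adams_then_bayes(prior, Fraction(8, 10), Fraction(2, 10))
r2 = adams_then_bayes(prior, Fraction(4, 10), Fraction(1, 10))
assert r1 == r2 == Fraction(12, 29)   # local soundness for H
```

Only the posterior of the hypothesis is inspected here; as noted in the theorem's consequences, propositions other than the hypothesis need not enjoy the same independence from the chosen factorization.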