A Reliability Theory of Truth

01/03/2018
by Karl Schlechta

Our approach is basically a coherence approach, but we avoid the well-known pitfalls of coherence theories of truth. Consistency is replaced by reliability, which expresses support and attack, and, in principle, every theory (or agent, message) counts. At the same time, we do not require privileged access to "reality". A centerpiece of our approach is that we attribute reliability also to agents, messages, etc., so an unreliable source of information will be less important in the future. Our ideas can also be extended to value systems, and even actions, e.g., of animals.



1 Introduction

1.1 The Coherence and Correspondence Theories of Truth

See [Sta17a] for an overview of the coherence theory, and [Sta17b] for an overview of the correspondence theory. The latter contains an extensive bibliography, and we refer the reader there for more details on the correspondence theory.

We think that the criticisms of the coherence theory of truth are peripheral, but the criticism of the correspondence theory of truth is fundamental.

The criticism of the correspondence theory, that we have no direct access to reality and have to make do with our limitations in observing and thinking, seems fundamental to the author. The question whether there are "correct" theories our brains are unable to formulate is taken seriously by physicists; likewise, the question whether, e.g., quarks are real or only helpful "images" for understanding reality was taken very seriously. Gell-Mann, for instance, was long undecided about it, and people perhaps just got used to them. We do not know what reality is, and it seems we will never know. See also the discussions in neurophilosophy; [Sta17d] gives a general introduction.

On the other side, two main criticisms of the coherence theory can, in our opinion, be easily countered. See e.g. [Rus07] and [Tha07] for objections to the coherence theory. Russell's objection, that a statement and its negation may both be consistent with a given theory, shows just that "consistency" is the wrong interpretation of "coherence"; it also leaves open the question which logic we work in. The objection that the background theory against which we check coherence is undefined can be countered with a simple answer: everything, in principle, counts. In "reality", of course, this is not the case. If we have a difficult physical problem, we will not ask our baker, and even if he has an opinion, we will not give it much consideration. Sources of information are assessed, and only "good" sources (for the problem at hand!) will be considered. (Thus, we also avoid the postmodernist trap: there are standards of "normal reasoning" whose value has been shown in unbiased everyday life, and against which the standards of every society have to be compared. No hope for the political crackpots here!)

Our approach will be a variant of the coherence theory; related ideas were also expressed by [Hem35] and [Neu83].

We can see our approach in the tradition of relinquishing absoluteness:

  • The introduction of axiom systems made truth relative to axioms.

  • Nonmonotonic reasoning allowed for exceptions.

  • Our approach treats uncertainty of information, and our potential inability to know reality.

1.1.1 A Short Comparison

  1. Our approach is not about discovery, only about evaluating information.

  2. In contrast to many philosophical theories of truth, we do not treat paradoxa, as done e.g. in [Kri75] or [BS17]; we assume statements to be "naive" and free from semantic problems.

    We do treat cycles too, but they are simpler, and we take care not to go through them repeatedly. In addition, our structures are assumed to be finite.

  3. On the philosophical side, we are probably closest to the discourse theory of the Frankfurt School, in particular to the work by J. Habermas and K. O. Apel (as we discovered by chance!), see e.g.

    [Wik18b], [Sta18b], [Hab73], [Hab90], [Hab96], [Hab01], [Hab03].

    Importantly, they treat problems of truth and ethics with basically the same methods; see Remark 1.1 below.

    We see three differences with their approach.

    1. A minor difference: We also consider objects like thermometers as agents, not only human beings, thus eliminating some of the subjectivity.

    2. A major difference: We use feedback to modify the reliability of agents and messages. Thus, the universal quantifier over participating agents in the Frankfurt School is attenuated to those considered reliable.

    3. Conversely, their discourse theory is, of course, much more developed than our approach.

    Thus, an integration of both approaches seems promising.

  4. Articles on trust, like [BBHLL10] or [BP12], treat different, more subtle, and perhaps less fundamental, problems. A detailed overview of trust systems is given in [SS05].

    We concentrate on logics, cycles, and the composition of values by concatenation. Still, in its methods, though not in its motivation, our approach is perhaps closer to the basic ideas of trust systems than to theories of truth, which often concentrate on paradoxa.

    Articles on trust often describe interesting ideas about details of coding; e.g., [BP12] describes how to code a set of numerical values by an interval (or, equivalently, two values).

  5. Basic argumentation systems, see e.g. [Dun95], will not distinguish between arguments of different quality. Argumentation systems with preferences, see e.g. [MP13], may do so, but they do not seem to propagate conflict and confirmation backwards to the source of arguments, which is an essential part of our approach. This backward propagation also seems a core part of any truth theory in our spirit. Such theories have to be able to learn from past errors and successes.

1.2 Two Examples

Some examples may help to illustrate our ideas.

Example 1.1


Suppose we are interested in the size of an object o. We cannot access o directly, and have to rely on witnesses.

Witness a had a meter, measured o, and says o is 120 cm long. Unfortunately, a is known to be a crackpot.

Thus, we limit our sources of information to reliable ones.

Witness b had no meter; he measured using his thumb, and later calculated the length to be 90 cm.

Witness c had a meter, but the meter was old and twisted, so not very accurate. c says that o is 101 cm long.

By experience, c's method is superior to b's method.

This is all we know.

Based on this information, we say "our best estimate is that o is 101 cm long".

We do not doubt that there is some "real" length of o, but this is irrelevant, as we cannot know it. We have to make do with what we know, but are aware that additional information might lead us to revise our estimate.

This story seems simple, but we think that even much more complicated stories can be treated with essentially the same, simple ideas.

Remark 1.1


The following extensions seem possible:

  • Actions and animals: We can apply similar reasoning to actions. The action of a monkey which sees a lion and climbs a tree to safety is "true", or, better, adequate.

  • Values: Values, obligations, "natural laws" are subjective. Still, some influences are known, and we can try to peel them off. Religion, politics, and personal history influence our ideas about values. One can try to find the "common" and "reasonable" core of them. For instance, religious extremism tends to produce ruthless value systems, so we might consider religious extremists as less reliable about values.

We give another example. Reliabilities will be denoted r(t), r(a), etc.

Example 1.2


We have a meteorological station in Siberia. The thermometer is supposed to be reliable; it automatically records the current temperature (with time stamp, etc.) reliably. (The recorded values serve as "reality" in the cases below; this is introduced purely as a trick, to illustrate simple cases, and we discard it later.) Sometimes, we want to know the current temperature immediately, so we phone the human operator or operators and ask. Unfortunately, the line or lines are very noisy, and errors in transmission occur. This is the common part.

Case 1:

The human operator is absolutely reliable. Later, we compare the temperature as transmitted by phone with the recorded temperature, and assign a reliability to the transmission accordingly.

The reliability values of the absolutely reliable thermometer etc. are unchanged.

Case 2:

We have two reliable human operators; they use different, unreliable phone lines. So we have two transmissions, t and t'. Assume that, e.g. based on previous experience, we have already given t and t' initial reliability values r(t) and r(t').

If t and t' agree (i.e. the temperatures received agree, without any knowledge of the "real" temperature!), we increase r(t) and r(t'), as they confirm each other.

If t and t' disagree (again without any knowledge of the "real" temperature!), we decrease r(t) and r(t'), as they contradict each other, based on their initial values.

Case 3:

As it is so cold, the human operators often drink too much, so they are not reliable. They make mistakes, and the transmission is not reliable, either. (To simplify, we assume that mistakes do not cancel each other, as would happen if, e.g., the operator read 10 degrees too much and the line transmitted 10 degrees too little, so that the correct value is transmitted.)

Case 3.1:

We consider only one human operator and one transmission line, and compare the received value with the recorded temperature. We first calculate the combined reliability of the chain, operator followed by line.

If the transmitted temperature agrees with the recorded value, we increase the combined reliability, and break this down into increases of the operator's and the line's reliabilities, according to their previous values.

If they disagree, we first decrease the combined reliability, and break this down again into decreases of the operator's and the line's reliabilities.

Case 3.2:

We consider both operators and transmission lines, and compare the received values. Let a and a' be the human operators, l and l' the transmission lines. We proceed as in Case 3.1, but first adjust the combined reliabilities of both chains as in Case 2, and then break the adjustments down to r(a), r(l), r(a'), r(l') as in Case 3.1.
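A minimal sketch of Case 2 above may make the bookkeeping concrete. The names, the representation of reliabilities as floats in [-1, 1], and the fixed-step update rule are illustrative assumptions of ours; the example only fixes that agreement increases and disagreement decreases both reliabilities.

    # Sketch of Case 2: two parallel transmissions are compared, and their
    # reliabilities are adjusted without any access to the "real" temperature.
    # Reliabilities as floats in [-1, 1] and the fixed step are assumptions.

    def clip(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
        return max(lo, min(hi, x))

    def update_parallel(r_t1: float, r_t2: float, agree: bool, step: float = 0.1):
        """Nudge both transmission reliabilities up on agreement, down on disagreement."""
        delta = step if agree else -step
        return clip(r_t1 + delta), clip(r_t2 + delta)

    # Both operators report -35 degrees: the transmissions confirm each other.
    r_t1, r_t2 = update_parallel(0.6, 0.4, agree=True)      # -> 0.7, 0.5
    # Had they disagreed, both values would have been lowered instead.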

2 Concepts and Basic Ideas

2.1 Concepts

2.1.1 Information vs. Facts


  1. Our idea is close to intuitionistic logic. We have to distinguish information and facts. We might be convinced (or informed with high reliability) that φ ∨ ¬φ holds, but have only information with low reliability that φ, and with low reliability that ¬φ. For instance, two witnesses saw a car; one says it was black, the other that it was not black, but both concede that visibility was very bad, so they cannot be sure.

    Thus, we should have r(φ ∨ ¬φ) ≥ max(r(φ), r(¬φ)), but not necessarily equality.

    The same example shows that it is not necessarily true that the reliability of ¬φ is determined by the reliability of φ.

  2. The reliability order is independent of the usual truth-value order; this is obvious.

  3. The reliability order is also independent of a normality order. Thus, we might have reliable information that Tweety is an abnormal bird, and unreliable information that another bird is a normal bird.

    Likewise, unreliability of a rule "if φ then ψ" is different from a default rule "φ normally implies ψ". In the first case, we may think that the rule always holds, but have no reliable information about this. In the second case, we are certain that, in most cases, if φ holds, then so does ψ.

    Moreover, we may have uncertain information that some other bird is more normal than Tweety.

  4. Objects we talk about may have names. Two physicians may talk about the same patient (they are sure about this), may be unsure about a diagnosis, and disagree about it.

2.1.2 Semantics and Representation Theorems

We have to distinguish semantics on the level of details (message, reliability, inertia, etc.; agents which send and receive messages, process them, etc.), and semantics on the abstract level. The latter does not exist; this would presume that there is some "reality", a notion we try to avoid.

But we can distinguish statements above a certain level of reliability; this is "relative truth". Recall that φ ∨ ¬φ might be more reliable than the known reliabilities of φ and ¬φ separately, so, depending on the level of reliability we choose, formulas may be analysed, or not. (Likewise for other connectives.)

Representation cannot be relative to a semantics, but only relative to formulas above a certain level of reliability. Of course, this is basically the same as classical representation, but without the philosophical overhead. Here, it is just about describing a certain set of formulas in a perhaps concise way.

In preferential structures, the normality relation may also have uncertain reliability (see above), so we may reason only with normality above a certain threshold.

Moreover, we may have static representation (about what holds beyond a certain level of reliability), and dynamic representation (about how values develop under new input).

2.1.3 Coding

The details of coding are not important here; we indicate some problems and possible solutions.

  • Inertia

    If some value is supported by many sources, it should be more stable under challenges than a value supported by only a few sources. We may code this by “inertia”.

  • Loops

    We want to avoid that person a thinks highly of person b and vice versa, while they merely defend each other. Such loops should be detected, and we should avoid going around in circles, the agents reinforcing each other. For such reasons, we may code the path of messages, and their consequences, and use message IDs. Thus, every agent may detect whether this message chain has already passed him.

    On the other hand, if agent a sends a wrong message, this should fall back on him, so this kind of cycle is a good one. Sending the whole chain along allows us to distinguish both cases.

    (We do not pretend that this is how the brain works, it is just some way to achieve the aim.)

    We illustrate this with an example.

    Example 2.1


    • (a) We have an agent a, considered reliable; a sends a message m to agent b saying that agent c is reliable. m is considered reliable (e.g. little noise), too. b is considered reliable, and passes the information on in a message m' to a further agent d; m' is again considered reliable. d has past information that c is indeed reliable. So d answers that the message seems correct, and c has increased reliability; this confirmation goes back to b and a. But we should stop here, and not send m again with increased reliability. Sending the history avoids this.

    • (b) Agent a sends a message m to b saying that agent c is reliable. b considers a and m somewhat reliable (e.g. by default), and increases the reliability of c. Now, c sends a message to b (with increased reliability!) saying that a is reliable. So b increases the reliability of a. This might still be acceptable, but we should not go around the circle again and send m once more with increased reliability. (We assume all messages are free from noise, i.e. reliable.) Again, the history of the messages can detect such cycles.
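A minimal sketch of the history mechanism described above. Representing a message as a record that carries its history as a list of (source, destination) hops is an illustrative assumption of ours, not a prescription of the text.

    # Sketch of loop detection via message histories: a message carries the
    # hops it has travelled, so an agent can refuse to send it again in a
    # direction it has already gone. Field names are illustrative only.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Message:
        content: str                                   # e.g. "agent c is reliable"
        reliability: float                             # value attached to the message
        history: List[Tuple[str, str]] = field(default_factory=list)  # (source, destination) hops

    def forward(msg: Message, source: str, destination: str) -> bool:
        """Forward msg unless this hop already occurred (a cycle in the same direction)."""
        hop = (source, destination)
        if hop in msg.history:
            return False                               # stop: do not re-enforce the cycle
        msg.history.append(hop)
        return True

    m = Message("agent c is reliable", 0.5)
    forward(m, "a", "b")      # True, first time along a -> b
    forward(m, "b", "d")      # True
    forward(m, "a", "b")      # False, the chain already went this way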

2.2 Agents, Messages, and Reliabilities

Definition 2.1


  1. Standard notions are defined as usual.

  2. We have agents a, b, etc., and messages m, m', etc.

    Agents may be people, devices like thermometers, transmission lines, theories, etc.

  3. If agent a sends a message m to agent b, a will be called the source of m, and b its destination.

  4. Agents and messages have values of reliability, r(a), r(m), etc. Sometimes, it is more adequate to see reliability as a degree of competence, for instance for moral questions.

  5. Messages may be numbers, e.g. a temperature, but also opinions about something, e.g. “the earth is flat”, also about the reliability of agents and messages. Messages may also be moral judgements. If a message is a moral judgement, then moral competence of the source of the message is important. If the source is the constitution of a country, the competence will probably be considered high, etc. In case of theories, the messages may be consequences of this theory, etc.

  6. A human agent may be a good chemist, but a poor mathematician, so his reliability varies with the subject. We neglect this here, and treat this agent as two different agents, a-Chemist, a-Mathematician, etc. Likewise, a thermometer may be reliable between 0° and 30°, but less reliable below 0°. Again, we may describe them as two different thermometers.

  7. Agents may also have doubts about the reliability of their own messages: a human agent may doubt his competence, a thermometer when the value is out of its intended range.

    1. Version 1

      Reliabilities are just values in [0, 1], ordered in the usual way; 1 is maximal reliability, 0 is maximal uncertainty.

    2. Version 2

      Partial orders:

      Reliabilities (of agents or messages) will be multisets of pairs ⟨d, v⟩, where v will be a real value between -1 and 1, and d should be seen as a "dimension".

      This allows for easy adjustment, e.g. ageing over time, shifting importance, etc., as we will shortly detail now:

      • the real values allow arbitrarily fine adjustments; it is not just a choice between a few fixed levels,

      • the dimensions allow us to treat various aspects in different ways,

      • for instance, we can introduce new agents with a totally “clean slate”, 0 in every dimension, or preset some dimensions, but not others,

      • the uniform treatment of all dimensions in Section 3.1 is not necessary; we can treat different dimensions differently, e.g., conflicts between two agents in one dimension need not touch another dimension, etc.

      We have arbitrarily many dimensions, with possibly different meaning and treatment, and within each dimension arbitrarily many values. This is not a total order, but within each dimension, it is.
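A minimal sketch of Version 2. Representing a reliability as a mapping from dimensions to values, with absent dimensions counting as 0 (a "clean slate"), and comparing pointwise, is our own illustrative reading of the multiset idea.

    # Version 2 sketch: a reliability as a map from dimensions to values in
    # [-1, 1]; missing dimensions default to 0, and comparison is pointwise,
    # which gives a partial (not total) order. The representation is our choice.

    from typing import Dict

    Reliability = Dict[str, float]

    def value(r: Reliability, dim: str) -> float:
        return r.get(dim, 0.0)                  # new dimensions start with a clean slate

    def leq(r1: Reliability, r2: Reliability) -> bool:
        """r1 <= r2 iff r1 is below r2 in every dimension occurring in either."""
        dims = set(r1) | set(r2)
        return all(value(r1, d) <= value(r2, d) for d in dims)

    cautious = {"chemistry": 0.1, "mathematics": 0.0}
    expert   = {"chemistry": 0.8, "mathematics": 0.3}
    leq(cautious, expert)              # True: smaller or equal in every dimension
    leq(expert, {"chemistry": 0.9})    # False: 0.3 > 0.0 in "mathematics"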

2.3 Basic Data Structure

  1. Values (the value of an agent is his reliability)

    1. short version

      value, reliability (of value), inertia (of value)

    2. long version

      value, reliability (of value), history (of value)

      (history: past pairs ⟨value, reliability⟩)

  2. messages

    1. short version

      value, reliability (of value)

    2. long version

      value, reliability (of value), history

      (history: chain of messages with source, destination, and pairs ⟨value, reliability⟩ of the messages)

      In the long version, we may give each message chain an ID, so it can easily be identified, and distinguished from other messages. The ID is given at the start, and passed on to subsequent messages. (These are details of coding the basic ideas.)

2.3.1 Comments

The long versions give the complete relevant history. Inertia may, e.g., code the number of messages that have already gone into the pair ⟨value, reliability⟩.

The history of messages allows us to detect loops, directions of messages, etc. A message chain may come back to a prior node (agent), but should not go again in the same direction. This eliminates unwanted feedback, see Example 2.1. The history may need more information than just outlined; our emphasis is more on principles than on details of coding. A small sketch of these structures follows.
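The following sketch shows the structures of Section 2.3 as record types; the field names and the use of plain floats are illustrative assumptions of ours, and the "long versions" simply add the history fields.

    # Sketch of the basic data structures: the "short" versions carry value,
    # reliability and inertia; the "long" versions add a history. Field names
    # and types are illustrative assumptions.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class AgentValue:                           # short version of an agent's value
        value: float                            # e.g. the agent's reliability
        reliability: float                      # reliability of that value
        inertia: int = 0                        # e.g. number of values already merged in

    @dataclass
    class AgentValueLong(AgentValue):           # long version: keep the history
        history: List[Tuple[float, float]] = field(default_factory=list)  # past (value, reliability) pairs

    @dataclass
    class MessageLong:                          # long version of a message
        value: float
        reliability: float
        chain_id: str                           # ID given at the start, passed along the chain
        history: List[Tuple[str, str, float, float]] = field(default_factory=list)
        # entries: (source, destination, value, reliability) of the messages in the chain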

2.4 Processing

As our discussion is mostly conceptual, it suffices to indicate problems and solutions without going into too much detail. In addition, we think that precise values may depend on the domain treated (more or less caution), and their importance should be dampened by a suitable overall algorithm.

  1. AND

    When agent a with reliability r(a) sends a message m with reliability r(m), then the combined reliability should be at most min(r(a), r(m)); see the sketch after this list.

    Note that Modus Ponens is a form of AND, so it should be treated the same way.

  2. OR

    When two messages with the same value have reliabilities r and r' respectively, then the value should have reliability at least max(r, r'). See Section 2.1.1.

  3. NOT

    This was also discussed above in Section 2.1.1.
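A minimal sketch of these combinations; the text only demands bounds ("at most", "at least"), so taking the minimum and the maximum is our own simplest-possible choice, not a prescription of the approach.

    # Sketch of the AND / OR combinations as the extreme functions satisfying
    # the stated bounds; using min and max is our own assumption.

    def and_reliability(r_agent: float, r_message: float) -> float:
        """Combined reliability of agent and message: at most the weaker of the two."""
        return min(r_agent, r_message)

    def or_reliability(r1: float, r2: float) -> float:
        """Reliability of a value supported by two messages: at least the stronger one."""
        return max(r1, r2)

    # Modus Ponens is treated like AND: the conclusion is at most as reliable
    # as the weaker of premise and rule.
    and_reliability(0.9, 0.6)     # 0.6
    or_reliability(0.3, 0.3)      # 0.3 (a lower bound only; see Section 2.1.1)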

3 Details - an Example


In this section, we work with a total order. We may, however, extend the approach to sequences of total orders (and thus to the partial order idea) by an elementwise treatment. In this case, we may have to initialize new dimensions to some default value: If the operations work on more than one reliability, any dimension which is present in some, but not all reliabilities, will be added where necessary with the default value.

Definition 3.1


  1. If w, w' ≥ 0 and w + w' = 1, we define the weighted mean of x and x' as w·x + w'·x'. (Similarly for more than two x's.)

For the partial order idea:

  1. The average reliability of a multidimensional reliability is defined as the average, over all dimensions, of the reliability in each dimension.

  2. We may want to give more reliable messages etc. more weight. E.g., for two values x and x' with reliabilities r and r', we may want to calculate the weighted mean of x and x' depending on their individual reliabilities. We can do this as follows:

    Consider r and r'. From the difference between r and r' we obtain a weight w for x, with w > 1/2 if r > r'; the weight for x' will be 1 − w. If r' > r, we proceed similarly.
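A minimal sketch of a reliability-weighted mean. The way the weights are derived from the reliabilities (shifting them into [0, 2] and normalising) is an illustrative choice of ours; the text only fixes that the weights sum to 1 and favour the more reliable value.

    # Sketch of a weighted mean where more reliable values get more weight.
    # The concrete weight formula is an illustrative assumption.

    from typing import Sequence, List

    def weighted_mean(values: Sequence[float], weights: Sequence[float]) -> float:
        """Weighted mean for non-negative weights summing to 1."""
        return sum(w * x for w, x in zip(weights, values))

    def reliability_weights(reliabilities: Sequence[float]) -> List[float]:
        """Turn reliabilities in [-1, 1] into weights in [0, 1] summing to 1."""
        shifted = [r + 1.0 for r in reliabilities]      # now in [0, 2]
        total = sum(shifted)
        return [s / total for s in shifted]

    # Two temperature reports, the second from a more reliable source:
    weights = reliability_weights([0.2, 0.8])           # [0.4, 0.6]
    weighted_mean([-30.0, -35.0], weights)              # -33.0, closer to -35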

3.1 Combinations


We now discuss a number of cases. The solutions are suggestions; often, one will find alternatives which might be as good or even better. Our discussion is centered more on basic ideas than on details, which might depend on context, too. A good overall algorithm will probably be quite robust against local changes.

In the following, we will treat only conflicts between two agents/messages. Of course, situations like three values, 8, 9, and -1, also need to be treated (example due to D. Makinson), where we will give more credibility to 8 and 9 than to the exceptional value -1. We treat in the following only pairs, and the pairwise treatment should go in the same sense as treating the triple directly.

3.1.1 Agents and Messages


  1. From agent a to message m

    If agent a sends message m without any reliability, the reliability of m will by default be the reliability of a. (It may be adjusted later, due to other messages.) If m already has an initial reliability, its reliability will be the combined (in the spirit of AND) reliability of the agent and the message.

  2. From message m to agent a

    When r(m) was modified, this should have repercussions on r(a). If r(m) was increased, r(a) should increase, too; if r(m) was decreased, r(a) should decrease, too. The effect should be "dampened", however: one wrong message should not totally destroy the reliability of the agent. For this purpose, we have introduced the inertia (of the agent). The bigger the inertia, the less we change r(a). E.g., inertia may code the number of values which have already gone into r(a).

    We may do this with a new message going "backwards". If we store the path of the message, we avoid going in cycles, i.e. going again to m, etc.
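A minimal sketch of this backward step; the dampening rule delta / (inertia + 1) and the clipping to [-1, 1] are illustrative assumptions of ours, chosen only so that a large inertia means a small change.

    # Sketch of backward propagation: a change in a message's reliability is
    # passed back to its source, dampened by the source's inertia. The
    # particular formula is an illustrative assumption.

    def propagate_back(r_agent: float, inertia: int, delta_message: float):
        """Adjust the agent's reliability after its message was re-evaluated."""
        r_new = r_agent + delta_message / (inertia + 1)
        r_new = max(-1.0, min(1.0, r_new))      # keep within [-1, 1]
        return r_new, inertia + 1               # one more value has gone into r_agent

    # An agent with a long history barely moves after one contradicted message:
    propagate_back(0.9, inertia=20, delta_message=-0.3)   # about (0.886, 21)
    # A fresh agent with no history is hit much harder:
    propagate_back(0.9, inertia=0, delta_message=-0.3)    # about (0.6, 1)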

3.1.2 Chains of Messages and Reliabilities


See Example 1.2, Case 3.

Suppose agent a sends message m, and agent b passes m on, perhaps with some modification, so this is message m'.

  1. The combined message will have some reliability r. It seems natural to obtain r by combining, in the spirit of AND, the reliabilities along the chain.

  2. Conversely: Suppose we have modified r and given it a new value r'. We have to break down the modification to new reliabilities along the chain in a reasonable way, so that again they combine to r'.

    Recall that the old r was calculated from the individual reliabilities along the chain, say as their product. We now adjust these individual reliabilities to new values such that they again combine to r', e.g. by using the same factor on each of them.
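A minimal sketch of both directions for a chain of arbitrary length, assuming (as in the "say" above) that reliabilities are positive and combine by multiplication, and that a modification is broken down by applying the same factor to every link; both are illustrative assumptions.

    # Sketch of Section 3.1.2: combine reliabilities along a chain (here by
    # multiplication of positive values), and break a modification of the
    # combined value down by rescaling every link with the same factor.

    from math import prod
    from typing import List

    def combine_chain(reliabilities: List[float]) -> float:
        """Combined reliability of the whole chain, in the spirit of AND."""
        return prod(reliabilities)

    def break_down(reliabilities: List[float], new_combined: float) -> List[float]:
        """Rescale each link by the same factor so that the chain combines to new_combined."""
        factor = (new_combined / combine_chain(reliabilities)) ** (1.0 / len(reliabilities))
        return [min(1.0, r * factor) for r in reliabilities]

    chain = [0.9, 0.8]                # operator, then phone line
    combine_chain(chain)              # 0.72
    break_down(chain, 0.5)            # both links scaled by the same factor, product 0.5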

3.1.3 Two Parallel Messages


Different agents might send messages m and m', with reliabilities r and r' respectively, about the same subject. Those messages might agree, or not.

Case 1:

The messages agree. The reliabilities support each other, and both r and r' should increase, where the smaller one should perhaps increase more.

Case 2:

The messages disagree. The reliabilities contradict each other, and both r and r' should decrease, where the smaller one should perhaps decrease more.

Details are left to the reader.

3.1.4 Two Parallel Messages About Reliability


Suppose agent a sends message m with reliability r, and agent b sends message m' with reliability r'; the content of m is a reliability s, that of m' a reliability s', where s and s' are about the same agent or message x. Suppose that x has old reliability t. We want to calculate a new reliability (of x).

It seems reasonable to calculate the new reliability from t, s, and s', respecting the inertia of t and the respective reliabilities r and r'. The new inertia of x should increase, as more values have gone into the new reliability than went into the old one.

Details are left to the reader.

4 Discussion

Our approach is very pragmatic, and takes its intuition from e.g. physics, where a theory is considered true - but revisably so! - when there is “sufficient” confirmation, by experiments, support from other theories, etc.

Many human efforts are about establishing the reliability of humans or devices. An engineer or physician has to undergo exams to ensure that he is competent, a bridge has to meet construction standards, etc. All this is not infallible; experts make mistakes, and new, unknown possibilities of failure may appear - we just try to do our best.

Our ideas in Section 3 are examples of how it can be done, but not definitive solutions. The exact choice is perhaps not so important, as long as there is a process of permanent adjustment. This process has proven extremely fruitful in science, and deserves to be seen as a powerful method, if not to find truth, at least to find "sufficient" information.

From an epistemological point of view, our position is that of "naturalistic epistemology", and we need not decide between "foundationalism" and "coherentism"; the reliability interval has enough space to manoeuvre between more and less foundational information. See e.g. [Sta17c].

Our approach has some similarities with the utility approach, see the discussion in [BB11], the chapter on utility. An assumption, though false, can be useful: if you think a lion is outside, and keep the door closed, this is useful, even if, in fact, it is a tiger which is outside. “A lion is outside” is false, but sufficiently true. We think that this shows again that truth should not be seen as something absolute, but as something we can at best approximate; and, conversely, that it is not necessary to know “absolute truth”. We go beyond utility, as improvement is implicit in our approach. Of course, approximation may only be an illusion generated by the fact that we develop theories which seem to fit better and better, but whether we approach reality and truth, or, on the contrary, move away from reality and truth, we cannot know.

There are many things we did not consider, e.g. whether more complicated, strongly connected structures have stronger inertia against adjustment.

We have an example structure which handles these problems very well: our brain. Attacks, negative values of reliability, correspond to inhibitory synapses; positive values, support, to excitatory synapses. Complex, connected structures with loops are created all the time without uncontrolled feedback. It is perhaps not sufficiently clear how this works, but it must work! (The "matching inhibition" mechanism seems to be a candidate. See also [OL09] for a discussion of the "cooperation" of excitatory and inhibitory inputs of a neuron. E.g., excitation may be followed closely by inhibition, thus explaining the suppression of such feedback. The author is indebted to Ch. von der Malsburg, FIAS, for these hints.) Our theories about the world survive some attack (inertia), until "enough is enough", and we switch emphasis. The brain's mechanisms for attention can handle this.

4.1 Is this a Theory of Truth?

The author thinks that yes, it is, though we hardly mentioned truth in the text.

Modern physics is perhaps the best attempt to find out what "reality" is, what "truly holds". We had the development of physics in mind: reliability of experiments, measurements, coherence of theories (forward and backward influence of reliabilities), reputation of certain physicists, predictions, etc. Of course, the present text is only a very rough sketch; we see it as a first attempt, providing some highly flexible ingredients for a more complete theory in this spirit.

5 Acknowledgements

The author would like to thank David Makinson for very helpful comments.

References

  • [BB11] A. G. Burgess, J. P. Burgess, “Truth”, Princeton University Press, Princeton, 2011
  • [BBHLL10] J. Ben-Naim, J.-F. Bonnefon et al., "Computer-mediated trust in self-interested expert recommendations", AI and Society 25 (4): 413-422, 2010
  • [BP12] J. Ben-Naim, H. Prade, “Evaluating trustworthiness from past performances: interval-based approaches”, Annals of Math. and AI, Vol. 64, 2-3, pp 247-268, 2012
  • [BS17] T. Beringer, T. Schindler, “A Graph-Theoretical Analysis of the Semantic Paradoxes”, The Bulletin of Symbolic Logic, Vol. 23, No. 4, Dec. 2017
  • [Dun95] P. M. Dung, "On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games", Artificial Intelligence 77 (1995), pp. 321–357

  • [Hab01] J. Habermas, “On the pragmatics of social interaction”, MIT Press, 2001
  • [Hab03] J. Habermas, “Truth and justification”, MIT Press, 2003
  • [Hab73] J. Habermas, “Wahrheitstheorien”, in Fahrenbach (ed.), “Wirklichkeit und Reflexion”, Pfuellingen, 1973
  • [Hab90] J. Habermas, “Moral consciousness and communicative action”, MIT Press, 1990
  • [Hab96] J. Habermas, “Between facts and norms: contributions to a discourse theory of law and democracy”, MIT Press, 1996
  • [Hem35] C. G. Hempel, “On the logical positivists’ theory of truth”, Analysis, 2:49-59, 1935
  • [Kri75] S. Kripke, “Outline of a Theory of Truth”, The Journal of Philosophy, Vol. 72, No. 19, 1975, pp. 690-716
  • [MP13] S. Modgil, H. Prakken, “A general account of argumentation with preferences”, Artificial Intelligence 195 (2013) 361-397
  • [Neu83] O. Neurath, “Philosophical papers 1913-46”, R. S. Cohen and M. Neurath (eds.), Dordrecht and Boston, D. Reidel, 1983
  • [OL09] M. Okun, I. Lampl, “Balance of excitation and inhibition”, Scholarpedia, 2009
  • [Rus07] B. Russell, “On the nature of truth”, Proceedings of the Aristotelian Society, 7:228-49, 1907
  • [SS05] J. Sabater, C. Sierra, “Review on computational trust and reputation models”, Artificial Intelligence Review, 2005
  • [Sta17a] Stanford Encyclopedia of Philosophy, “The coherence theory of truth”
  • [Sta17b] Stanford Encyclopedia of Philosophy, “The correspondence theory of truth”
  • [Sta17c] Stanford Encyclopedia of Philosophy, “Epistemology”
  • [Sta17d] Stanford Encyclopedia of Philosophy, “The Philosophy of Neuroscience”
  • [Sta18b] Stanford Encyclopedia of Philosophy, “Juergen Habermas”
  • [Tha07] P. Thagard, “Coherence, truth and the development of scientific knowledge”, Philosophy of Science, 74:26-47, 2007
  • [Wik18b] Wikipedia, “Diskursethik”, 2018