1 Introduction
Appropriate decision making by an agent operating within a multiagent system often requires information from other agents. However, unless the system is fully cooperative, there are typically both costs and benefits to divulging information: while the agent may be able to achieve some goals, others might be able to use this information to their advantage later. An agent must therefore weigh the costs and benefits that information divulgence will bring it when deciding how to act. One of the most critical factors in this calculation is the trust placed in the entity to which one is providing the information, as an untrusted individual might pass private information on to others, or may act upon the information in a manner harmful to the information provider.
In this paper we seek to provide a trust-based decision mechanism for assessing the positive and negative effects of releasing information to an agent. Using our mechanism, first discussed in Bisdikian2013 and expanded here, the agent can decide how much information to provide in order to maximise its own utility. We situate our work within the context of a multiagent system, where an agent must assess the risk of divulging information to a set of other agents. The problem the agent faces is to identify the set of information that must be revealed to its neighbours (who will potentially propagate the information further) in order to maximise its own utility.
In the context of a multiagent system, the ability of an agent to assess the risk of information sharing is critical when agents have to reach agreement, for example when coordinating, negotiating or delegating activities. In many contexts agents have conflicting goals, and inter-agent interactions must take the risk of a hidden agenda into account. A theory of risk assessment for determining the right level of disclosure to apply to shared information is thus vital in order to avoid undesirable impacts on an information producer.
As a concrete example, consider the work described in Chakraborty2012, where information from accelerometer data attached to a person can be used to make either whitelisted inferences (those the person desires others to make) or blacklisted inferences (those the person would rather not reveal). For example, the person may wish a doctor to be able to determine how many calories they burn in a day, but might not want others to be able to infer their state (e.g. sitting, running or asleep). The person must thus identify which parts of the accelerometer data should be shared in order to enable the whitelisted inferences and prevent the blacklisted ones. While Chakraborty2012 examined how inferences can be made (e.g. that sharing the entropy of FFT coefficients provides a high probability of detecting activity level and a low probability of detecting activity type), that work did not consider the impacts of sharing such information when it is passed on to others. In this paper we assume that black- and whitelisted inferences can be made by other agents within a system, and seek to identify what information to provide in order to obtain the best possible outcome for the information provider.
To illustrate such a scenario, let us consider a governmental espionage agency which has successfully placed spies within some hostile country. It must communicate with these spies through a series of handlers, some of whom may turn out to be double agents. It must therefore choose what information to reveal to these handlers in order to maximise the benefits that spying can bring, while minimising the damage they can do. It is clear that the choices made by the agency depend on several factors. First, it must consider the amount of trust it places in the individual spies and handlers. Second, it must take into account the amount of harm they can do with any information it provides to them. Finally, it must consider the benefits that can accrue from providing its spies with information. The first and second factors together provide a measure of the negative effects of information sharing. When considering the second factor, an additional detail must be taken into account, namely that the information recipients (i.e. the spies) may already have some knowledge which, when combined with the information provided by the agency, allows additional, unexpected information to be inferred. Therefore, the final level of harm which the agency may face depends not on the information it provides, but on the undesired inferences which hostile spies can make.
The remainder of this paper is structured as follows. In Section 2 we describe our model, outlining the process of decision making that an agent performs in the presence of white and blacklisted inferences. We concentrate on a special case of communication in multiagent systems, and show how such a case can be reduced to communication between an information provider and consumer (Section 3). We describe the decision procedure in Section 4. Section 5 provides a numeric example of the functioning of our system. We then contrast our approach with existing work in Section 6, and identify several avenues of future work. Section 7 concludes the paper. Appendix A discusses the relevant properties of this approach when considering continuous random variables; in what follows we will mainly focus on the case of discrete random variables.
2 The Effects of Information Sharing
We consider a situation where an information producer shares information with one or more information consumers. These consumers can, in turn, forward the information to others, who may also forward it on, repeating the cycle. Furthermore, since a consumer may or may not use the information as the provider expects, the producer must assess the damage it will incur if the provided information is misused. The decision problem faced by the producer is therefore to identify an appropriate message to send to a consumer which will achieve an appropriate balance between desired and undesired effects. We assume that once information is provided, the producer can no longer control its spread or use.
We begin by describing a model of such a system. As part of our notation, we use uppercase letters, e.g. X, to represent random variables (r.v.'s); lowercase letters, e.g. x, to represent realisation instances of them; F_X(x) and f_X(x) to represent the probability distribution and density of the r.v. X, respectively; and P(·) and P(·|·) to represent the probability and conditional probability of discrete random variables, respectively. We consider a set of agents able to interact with their neighbours through a set of communication links, as embodied by a communication graph or network. We assume that each agent knows the topology of this network. We introduce the concept of a Framework for Communication Assessment (FCA) that considers the set of agents, the messages that can be exchanged, the communication links of each agent, a producer that is willing to share some information, and the recipients of the information, which are directly connected to the producer within the communication graph.
Definition 1
A Framework for Communication Assessment (FCA) is a tuple:
where:

is a set of agents;

is the set of communication links among agents;

is the set of all the messages that can be exchanged;

is the producer, viz. the agent that shares information;

is a message that is sent by the producer and whose impact is being assessed;

is the set of consumers.
Given a framework, the producer will make use of the procedure described in this paper to determine how to share information. This information sharing decision seeks to identify a degree of disclosure for the original message; reducing the information provided according to this degree of disclosure results in a derived version of the original message, which (informally) conveys less information than the original. As an example, if the agency knows that "country A is going to invade country B", and is deciding whether to communicate this, a message of the form "country B is going to be invaded" is a derived message, obtained when the degree of disclosure is less than 1. We do not specify the exact mapping between the degree of disclosure and the derived message, or its mathematical properties, leaving this as a future avenue of research.
Definition 2
Given a set of agents and a message, the degree of disclosure is the level at which one agent will send the message to another, where 0 implies no sharing and 1 implies full disclosure between the two agents. We define the disclosure function as follows:
accepts a message and a degree of disclosure (for that message) as its inputs, and returns a modified message (referred to as the disclosed portion of the original message).
In this paper, we consider a specific version of degree of disclosure, interpreting as a conditional probability by which agent will modify message into a new message when communicating with agent :
where the result is a modified message of the original. Note that in this interpretation, if there are multiple messages with the same degree of disclosure relative to the original, the disclosure function will select one among them at random. A more sophisticated notion of message disclosure level and its probabilistic interpretation, such as separating the models of measuring the relative content level of messages from the corresponding conditional probabilities of message transformation during communication, is left for our future work.
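To make this probabilistic reading concrete, the following sketch implements a toy disclosure function over a hypothetical message space. The message names, the degree annotations, and the `disclose` helper are all illustrative assumptions, not part of the formal model.

```python
import random

# Hypothetical message space: each derived message is annotated with its
# degree of disclosure relative to the original message. All names and
# degrees below are illustrative placeholders.
DERIVED = {
    "m": 1.0,             # full disclosure: the original message itself
    "m_partial_a": 0.5,   # two distinct messages that happen to share
    "m_partial_b": 0.5,   #   the same disclosure degree
    "m_empty": 0.0,       # no sharing
}

def disclose(message_space, d, rng=random):
    """Sketch of the disclosure function: return a message whose degree
    of disclosure equals d; if several qualify, pick one at random, as
    stated in the probabilistic interpretation above."""
    candidates = [m for m, deg in message_space.items() if deg == d]
    if not candidates:
        raise ValueError(f"no message with disclosure degree {d}")
    return rng.choice(candidates)

assert disclose(DERIVED, 1.0) == "m"
assert disclose(DERIVED, 0.5) in {"m_partial_a", "m_partial_b"}
```

A richer instantiation would replace the equality test with a model of message content, but the random tie-breaking among equally disclosed messages matches the interpretation given above.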
Given an FCA, the decision whether or not to share the information with the recipient must consider the impact that the information recipient can cause to the producer. We assume that agents within the system are selfish: an information provider will only share information if doing so provides it with some benefit, while keeping the damage as low as possible. However, both benefit and damage may be uncertain. Therefore, when sharing information, the producer not only considers the benefit it obtains, but must also consider potential negative side effects based on the following:

the probability that an agent in possession of the message will forward it onward;

the levels of disclosure of messages exchanged between two agents;

the ability of each agent to infer knowledge from the received (disclosed) message;

the impacts (i.e. positive and negative effects) that the inferred knowledge has on the information producer.
The following definition therefore models the uncertainty of the impact of sharing information via a random variable.
Definition 3
Given a FCA, let a r.v. represent the impact the producer receives when sharing the message with a given degree of disclosure with an agent; its range is called the space of impact. This r.v. can either be

a continuous random variable whose distribution is described by and , or

a discrete random variable whose probability is where .
This paper centres on Definition 3. More specifically, we focus on 1) how to derive the distribution of impact from the disclosure degree of messages; 2) how to evaluate the positive and negative effects of the impact; and 3) how to make decisions regarding the disclosure degree of messages based on impact.
3 Communication Networks
We now turn our attention to communication between agents who must send information via an intermediary. We show that under some special conditions, such communication can be abstracted as direct communication utilising a different degree of disclosure. In order to show this result, we introduce two operators combining disclosure degrees when information is shared in this way. The first operator discounts the degree of disclosure based on agents within the message path, while the second operator fuses information which may have travelled along multiple paths.
Figure 1 depicts the simplest case where the first operator can be applied. Suppose that there are three agents, one of which plays the role of the producer of the information, sharing it with the second agent using some degree of disclosure. The second agent then shares what it received with the third, sending it a message derived using a new degree of disclosure, which applies to the received message rather than to the original. In terms of our scenario, it may be that the agency cannot send a message directly to its agent in the hostile country, and therefore has to deliver the message through an intermediary. Unfortunately, the agency knows that there is a possibility that the intermediary will propagate the message to a hostile spy.
The operator we introduce in Definition 4 computes the equivalent disclosure degree which should be used for deriving, directly from the original message, the message that the intermediary sends onward. We require the introduced operator to be (i) transitive, and (ii) such that the returned value is not greater than either of the combined disclosure degrees. This "monotonicity" requirement is built on the intuition that the intermediary does not know the original information, just its derived version. It also rests on the critical assumption that the intermediary does not make any kind of inference before sharing its knowledge. For instance, if the producer shares a message with a degree of disclosure of 0.7 of the original message, and the intermediary shares a message with a degree of disclosure of 0.5 of the message it received, one could argue that the final recipient receives a message with a degree of disclosure of 0.35 of the original message. The monotonicity requirement is strictly related to the assumption that an agent cannot share more than what it received; this requirement will be relaxed in future developments of the FCA framework.
Definition 4
Given a FCA , for any three agents , , , let be the message sent by to (see Fig. 1). Then the message sent by to is:
where

;

is a transitive function such that

.
Our second operator deals with the case where there are multiple paths that a message can traverse before reaching an information consumer. This is the case depicted in Fig. 2, where the producer shares the same information with two intermediaries, but filters it using different degrees of disclosure for the two agents. Following this, both intermediaries share the information they obtained with the consumer. For instance, it may be that the agency, trying to reach its agent, shares information with both intermediaries, hoping that somehow the message will eventually reach the agent. As before, the agency is aware that both intermediaries have contacts with an enemy spy.
We therefore define a transitive operator (in Definition 5 below) which, when used with the operator defined in Definition 4 above, provides us with an equivalent degree of disclosure as if the message was sent directly from the producer to the consumer. This operator must honour the monotonicity requirement: the derived degree of disclosure cannot be greater than the minimum of the disclosure degrees used for sharing the information with the two intermediaries. As for the previous operator, we assume that intermediate agents do not make any inferences before sharing information.
Definition 5
Given a FCA , , , , , , let be the message sent by to , and let be the message sent by to (see Fig. 2). Then there is a merge function that merges the message sent by to , with the message sent by to as follows:
where

is defined as:

is a transitive function s.t.

.
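The discount and merge operators are deliberately left abstract. One possible instantiation consistent with the stated transitivity and monotonicity requirements is sketched below; the choice of multiplication for discounting and of the minimum for merging is our illustrative assumption, not the paper's prescription.

```python
def discount(d_ab, d_bc):
    """Hypothetical instance of the path-discount operator (Definition 4):
    plain multiplication, which is associative (hence transitive along a
    path) and, for degrees in [0, 1], never exceeds either input degree."""
    return d_ab * d_bc

def merge(path_degrees):
    """Hypothetical instance of the merge operator (Definition 5): take
    the most conservative (smallest) equivalent path degree, so that the
    result can never exceed the degree used on any contributing path."""
    return min(path_degrees)

# Single path (Fig. 1): equivalent direct disclosure degree.
d_ab, d_bc = 0.7, 0.5
d_ac = discount(d_ab, d_bc)
assert abs(d_ac - 0.35) < 1e-12          # the 0.7 x 0.5 example above
assert d_ac <= min(d_ab, d_bc)           # monotonicity

# Two paths (Fig. 2): merge the per-path equivalent degrees.
d_path1 = discount(0.7, 0.5)
d_path2 = discount(0.4, 0.9)
d_merged = merge([d_path1, d_path2])
assert d_merged <= min(0.7, 0.4)         # never above either initial degree
```

Other instantiations (e.g. a less conservative merge) are possible as long as the transitivity and monotonicity constraints above are preserved.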
Having described the properties required of the two operators, we do not instantiate them further. However, any operators which satisfy these properties allow us to treat the transmission of information between any two agents in the network as a transmission between directly connected agents, subject to changes in the degree of disclosure. In the remainder of the paper, we therefore consider communication only from an information source to the information consumer (ignoring intermediary agents), and deal with a single degree of disclosure used in this communication. This "derived" degree of disclosure is computed using specific instantiations of the two operators, by considering the message path through the network from the information producer to the consumer.
After a message from the producer propagates through the communication network, we obtain a distribution over the messages that a consumer can receive. We represent this distribution through vector notation:
Here, the vector's length is the size of the message space. If the final message the agent can receive is deterministic, then the final message distribution becomes a unit vector:
whose corresponding entry is 1 and whose other entries are zeros.
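As a small illustration of this vector notation, the following sketch (over a hypothetical three-message space) builds both a general delivered-message distribution and the deterministic unit-vector case.

```python
import numpy as np

# Hypothetical message space of size 3.
MESSAGES = ["m1", "m2", "m3"]

# General case: a distribution over the messages the consumer may receive.
m_dist = np.array([0.1, 0.6, 0.3])
assert np.isclose(m_dist.sum(), 1.0)   # a valid probability vector

def unit_vector(msg, messages=MESSAGES):
    """Deterministic case: the consumer receives `msg` with certainty,
    so the message distribution collapses to a unit vector."""
    v = np.zeros(len(messages))
    v[messages.index(msg)] = 1.0
    return v

assert list(unit_vector("m2")) == [0.0, 1.0, 0.0]
```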
4 The Decision Process
We now turn our attention to the core of the decision process for assessing impact, which is based on the following definitions of inferred knowledge and of the impact that inferred knowledge has on the producer.
Definition 6
Given a FCA , let be the level of disclosure of the message that the producer eventually discloses to consumer . We describe the amount of knowledge that can infer from the message as a random variable which is either

a continuous random variable whose cumulative distribution and density function are and respectively; or

a discrete random variable whose distribution is
is called the space of inference.
As we have previously discussed, the provision of information enables a recipient to make inferences, which have an effect, or impact on the information producer. We capture this impact as a point within an impact space . Since the producer does not have full information regarding a consumer’s knowledge, we model impact probabilistically.
Definition 7
Given a FCA, let the inference be that which a consumer can make when the producer disseminates the message through the communication network. We define the impact of the consumer's inferences on the producer as a real random variable which is either

a continuous random variable whose cumulative distribution and density function are and respectively, or

a discrete random variable whose distribution is
The range is called the impact space.
We concentrate on two types of impact, namely the positive and negative effects of the inferences made by the consumer on the producer. We respectively refer to these as the benefits and risks to the producer. Unlike standard utility theory, we do not, in general, assume that benefits and risks are directly comparable, and these therefore serve as two dimensions of the impact space.

Benefit: Let the benefit be the producer's evaluation of the benefit of inferences a consumer can make following the receipt of a message. Following Definition 7, we model benefit via either a continuous random variable with corresponding cumulative distribution and density function, or a discrete random variable with a corresponding probability distribution.

Risk: Let the risk be the producer's evaluation of the harm of inferences a consumer can make following the receipt of a message. Following Definition 7, we model risk via either a continuous random variable with corresponding cumulative distribution and density function, or a discrete random variable with a corresponding probability distribution.
In Bisdikian2013 we showed that several interesting properties hold in the case of continuous r.v.'s. In this paper we extend that proposal to the domain of discrete r.v.'s; the interested reader can find a discussion of the continuous case in Appendix A.
The random variable implicitly captures an aspect of the producer’s trust in the consumer, as it reflects the former’s belief that the consumer will utilise the information in the manner it desires. Similarly, the random variable captures the notion of distrust in the consumer, describing the belief that the consumer will utilise the information in a harmful manner. Note that when considering repeated interactions, these random variables will evolve as the producer gathers experience with various consumers. In such a situation each of them could represent either a prior distribution or a steady state. In the current work, we assume that a steady state has been reached, allowing us to ignore the problem of updating the distribution. When the context is clear, we drop the subscript notation in our r.v.’s.
Figure 3 provides a graphical interpretation of inference (Definition 6) and impact (Definition 7), distinguishing between risk and benefit, when a producer shares a message with a degree with a consumer . Given , the consumer infers drawn from all possible inferences. This results in an impact drawn from the space of possible impacts, which is conditioned on the inference made by the consumer. This impact can be either a risk () or a benefit ().
By assuming that the impact is independent of the degree of disclosure of a message given the inferred information , we can represent the inference distribution and the impact distribution as conditional probability tables in matrix notation.
The inference distribution of agent corresponds to the following matrix:
Each entry represents the probability that the agent makes a given inference when receiving a given disclosed message. Each column of the matrix corresponds to the inference distribution arising from receiving the corresponding disclosed message. Note that every column will sum to 1, as required for a valid conditional probability. The size of the matrix is the number of possible inferences by the number of possible messages; since each column must sum to 1, each column contributes one fewer independent parameter than its number of entries.
Similarly, the impact conditional probability of agent can be represented by an impact matrix:
Each entry represents the probability that the agent causes a given impact to the producer when it makes a given inference. Each column of the matrix corresponds to the impact distribution that arises if the corresponding inference is reached. Again, every column sums to 1. The size of the matrix is the number of possible impacts by the number of possible inferences, with a correspondingly reduced number of independent parameters.
Corollary 1
Assume that the impact is independent of the degree of disclosure given the inferred information. Then, given the message distribution, the inference matrix and the impact matrix, the distribution of the impact that the agent can cause to the producer can be computed by:
where
whose entries give the probability with which the agent causes each possible impact to the producer, marginalising over all possible messages and inferences. This is called the impact distribution vector of the consumer.
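Under the stated independence assumption, the impact distribution vector of Corollary 1 is simply a chain of matrix-vector products. The sketch below illustrates this with hypothetical 2x2 conditional probability tables; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical sizes: 2 delivered messages, 2 inferences, 2 impact levels.
m = np.array([0.2, 0.8])      # distribution over delivered messages

G = np.array([[0.9, 0.3],     # G[k, j] = P(inference k | message j);
              [0.1, 0.7]])    # each column sums to 1

H = np.array([[0.8, 0.4],     # H[i, k] = P(impact i | inference k);
              [0.2, 0.6]])    # each column sums to 1

# Impact distribution vector (Corollary 1): marginalise over messages
# and inferences, assuming impact is independent of disclosure given
# the inference.
q = H @ (G @ m)

assert np.allclose(G.sum(axis=0), 1) and np.allclose(H.sum(axis=0), 1)
assert np.isclose(q.sum(), 1.0)   # q is itself a valid distribution
```

Because each matrix is column-stochastic, the resulting vector is automatically a valid probability distribution over impacts.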
Corresponding to the impact distribution vector, we lay out the impact evaluations into a vector, defined as follows.
where each entry is the valuation of the corresponding impact that the agent can have on the producer. This is called the impact vector for the agent.
Definition 8
Given a FCA , (the impact probability vector) and (the impact vector) regarding agent . The expected impact regarding agent is
Since the impact can be either a benefit or a risk, the impact conditional probability matrix can be specialised into a benefit probability matrix or a risk probability matrix. Correspondingly, the distribution of impact can be either a benefit distribution vector or a risk distribution vector; the impact vector can be either a benefit vector or a risk vector; and the expected impact can be either the expected benefit or the expected risk. For notational clarity, we explicitly list the benefit vector and risk vector as follows.
where . Entry in is the th benefit the producer can obtain regarding agent .
where . Entry is the th risk the producer is concerned with regarding agent .
We can now define the net benefit of sharing information as follows.
Definition 9
Given a FCA , the expected benefit and the expected risk regarding an agent , the net benefit for the producer to share information with is described by (assuming that the values for risk and benefit can be compared and are scaled appropriately for comparison):
Taking the average, the expected net benefit is defined as:
Corollary 2
Assume that the impact is independent of the degree of disclosure given the inferred information. For a given agent, given the message disclosure distribution, the inference conditional distribution, the benefit and risk conditional distributions, and the benefit and risk evaluations, the expected benefit, expected risk, and expected net benefit that the agent can provide to the producer can be computed as follows:
(1)  
(2)  
(3) 
Assume that there is a bijection between the spaces of benefit impact and cost impact, and that benefit and risk have the same distribution after the mapping represented by the corresponding matrix. Then the expected net benefit can be simplified:
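The quantities of Corollary 2 can be sketched numerically as follows, with wholly hypothetical probabilities and valuations; the computation follows Definitions 8 and 9, with risk expressed as a cost to be subtracted.

```python
import numpy as np

# Illustrative inputs (all hypothetical).
m  = np.array([0.0, 1.0])              # delivered-message distribution
G  = np.array([[0.9, 0.2],
               [0.1, 0.8]])            # P(inference | message)
HB = np.array([[0.7, 0.5],
               [0.3, 0.5]])            # P(benefit level | inference)
HR = np.array([[0.95, 0.6],
               [0.05, 0.4]])           # P(risk level | inference)
vB = np.array([1000.0, 5000.0])        # benefit valuations
vR = np.array([10000.0, 100000.0])     # risk valuations (as costs)

qB = HB @ (G @ m)                      # benefit distribution vector
qR = HR @ (G @ m)                      # risk distribution vector
expected_benefit = vB @ qB             # Definition 8, benefit case
expected_risk    = vR @ qR             # Definition 8, risk case
net = expected_benefit - expected_risk # Definition 9: net benefit

# With these illustrative numbers the risk dominates, so a selfish
# producer would decline to share at this disclosure level.
assert net < 0
```

Varying the message distribution (i.e. the disclosure degree) and recomputing the net benefit is exactly the decision loop the producer performs.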
5 An Example
To illustrate our proposal, let us suppose that British Intelligence has two spies, James and Alec, in place in France. James is a clever agent, very loyal to Britain, while Alec is not as smart, and his trustworthiness is highly questionable. At some point, British Intelligence informs the spies that in three weeks France will be invaded by a European country: it hopes that James and Alec can recruit new agents in France thanks to this information. The agency does not specify how this invasion will take place, although it already knows the invasion is very likely to come from the East. However, both James and Alec are aware of additional pieces of information, namely that Spain, Belgium and Italy have no interest in invading France, while Germany does. British Intelligence does not want to share the information that the invasion will be started by Germany, because it is the only party aware of these plans, and a leak would result in a loss of credibility for the UK government. Therefore, British Intelligence has to assess the risk in order to determine whether or not it is acceptable to inform its spies that France will be invaded by a European country.
Formally, we can represent the above example as an FCA, where:

;

;

with:

: France will be invaded by Germany;

: France will be invaded by a European country;


;

;

are the consumers.
Suppose that the producer uses the same disclosure degree with both James and Alec.
In addition, , where

: France will be invaded by Germany;

: France will be invaded by a European country;
We focus on the case where the producer is sharing the message with Alec and James.
To simplify the formalisation of this scenario, we will consider only discrete random variables, allowing probability mass functions to be used in place of densities. As illustrated in Figure 4, inferences can be or , with believing that an information consumer will make such an inference with probability and respectively if receiving message , and with probability and respectively if receiving message . Clearly, our intention is to keep the original message () confidential, while sharing . The inference is “France will be invaded by a European country”, while is “France will be invaded by Germany”.
For each of the possible inferences there are two levels of risk. The risks, like the inferences, are independent of the agent. For simplicity, we associate a utility cost with these two outcomes, of 10000 and 100000 respectively, as shown in the figure. The lower cost represents the risk of James or Alec being captured while trying to recruit new agents, while the higher cost represents the loss of credibility for the UK government due to sharing information with the enemy. For the sake of the example, we will consider a fixed utility for benefit.
With the probabilities shown in the figure, each agent is characterised by a tuple of probabilities (to be read as "the probability that the agent will make a given inference upon receiving a given message", and so on). Therefore, when we are concerned only with the outcome once the message is eventually delivered to the agents, James' behaviour can be characterised by one such tuple, while Alec is characterised by another, which shows that if Alec infers that the invader will be Germany, it is more likely that he will defect, leading to the worst possible impact for the information provider.
According to the formalism presented in this paper, we have the following space of messages:
Assuming this is the final message received by the agents, we have the following vectors of eventual disclosure degree:
Then, according to Figure 4, the space of inference is composed of two possible inferences:
Clearly, we have to distinguish between James' and Alec's ability to make inferences:
while
The risks, independent of the agent with whom information is shared, are as follows:
Taking the inferred messages into account, we obtain the following.
while
Moreover, since we considered a fixed benefit
these values depend on the inferred messages with the following distribution:
When agent shares message at disclosure level with a particular consumer agent , the average risk anticipated by is given by
(4) 
As expected, we obtain an expected risk for James of (), while for Alec, we obtain ().
Similarly, the expected net benefit for the spies is as follows.
(5) 
Since is desired, we must necessarily have:
(6) 
The right-hand side of the expression above represents the probability of experiencing impacts at the corresponding level (see Fig. 4). In other words, the minimum valuation of the information should be at least as large as the minimum impact expected to occur, cf. Definition 9.
From Equation 6, which assesses the risk given the trust model (the tuples in this discrete case), we can see that the producer can share the information that France is going to be invaded with James, but not with Alec.
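The concrete probabilities of Figure 4 do not survive in this text, so the following sketch uses hypothetical stand-in values (the inference and defection probabilities, and a benefit of 20000) chosen to reproduce the qualitative conclusion: the expected net benefit is positive for James and negative for Alec.

```python
import numpy as np

# Risk levels from the example: capture (10000) and loss of
# credibility (100000). The benefit value and all probabilities
# below are hypothetical stand-ins for those in Figure 4.
v_risk = np.array([10_000.0, 100_000.0])
benefit = 20_000.0                      # assumed fixed benefit of sharing

def expected_risk(p_infer_germany, p_defect_given_germany):
    """Expected risk when sharing the derived message: inferring the
    Germany plan may lead to the worst impact if the consumer defects;
    otherwise only the capture risk applies."""
    p_worst = p_infer_germany * p_defect_given_germany
    return (1 - p_worst) * v_risk[0] + p_worst * v_risk[1]

james_risk = expected_risk(p_infer_germany=0.3, p_defect_given_germany=0.01)
alec_risk  = expected_risk(p_infer_germany=0.3, p_defect_given_germany=0.5)

assert benefit - james_risk > 0   # sharing with James is acceptable
assert benefit - alec_risk < 0    # sharing with Alec is not
```

The decision rule is exactly Definition 9 applied per consumer: share only with those agents for whom the expected net benefit is positive.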
6 Discussion and Future Work
The work described in this paper makes use of an unspecified trust model as a core input to the decision making process. Our probabilistic underpinnings are intended to be sufficiently general to enable it to be instantiated with arbitrary models, such as josang02beta; teacy06travos. Unlike these models, our work is not intended to compute a specific trust value based on some set of interactions, but rather to decide how to use the trust value output by the models.
The use of trust within decision making systems is now a prominent research topic; see Castelfranchi2010; Urbano2013 for an overview. However, most work in this area assumes that agents will interact with the most trusted party, as determined by the trust model. This assumption reflects the grounding of trust models in action and task delegation rather than information sharing. burnett11trust is an exception to this trend; while still considering tasks, Burnett explicitly takes into account the fact that dealing with a trusted party may be more expensive, and thus leads to a lower utility when a task has relatively low potential harmful effects. Burnett's model therefore considers both risk and reward when selecting agents for interaction. However, Burnett situated his work within utility theory, while the present work allows for a more complex impact space to be used.
Another body of work relevant to this paper revolves around information leakage. Work such as mardziel11dynamic considers what information should be revealed to an agent given that this agent should not be able to make specific inferences. Unlike our work, mardziel11dynamic does not consider the potential benefits associated with revealing information.
Finally, there is a broad field of research devoted to assessing risk in different contexts. As summarised in Wang2011, which compares seven definitions of trust (although not considered in Wang2011, the definition provided in Castelfranchi2010 follows the others), the notion of risk is the result of some combination of uncertainty about some outcome and a (negative) payoff for an intelligent agent and its goals. While this definition is widely accepted (with minor distinctions), different authors have different points of view when it comes to formally defining what is meant by uncertainty. In Kaplan1981, instead of providing a formal definition of risk, the authors introduce a scenario-based risk analysis method, considering (i) the scenario, (ii) its likelihood, and (iii) the consequences of that scenario. They also introduce the notion of uncertainty into the definitions of likelihood and of consequences. Doing so allows them to address the core problem of such models, viz. that complete information about all possible scenarios is required. The connection between risk and trust has been the subject of several studies: e.g. Tan2002 presents a formal model based on epistemic logic for dealing with trust in electronic commerce, where risk evaluation is one of the components that contribute to the overall trust evaluation; Das2004 proposes a conceptual framework showing the strict correspondence between risk and some definitions of trust; and Castelfranchi2010 discusses the connection between risk and trust in delegation. However, to our knowledge, our work is the first attempt to consider risk assessment in trust-based decision making about information sharing.
There are several potential avenues for future work. First, we have assumed that trust acts as an input to our decision process, and have therefore not considered the interplay between risk and trust. We therefore seek to investigate how both these quantities evolve over time. To this end, we will also investigate the connections between the approach presented here and those based on game theory, such as Goffman1970, as suggested by (van der Torre, personal communication, 1st Aug 2013) during the presentation of Bisdikian2013. Another aspect of work we intend to examine is how the trust process affects disclosure decisions by intermediate agents with regards to the information they receive. We note that agents might not propagate information from an untrusted source onwards, as they might not believe it. Such work, together with a more fine-grained representation of the agents’ internal beliefs, could lead to interesting behaviours such as agents lying to each other caminada09truth. Other scenarios of interest can easily be envisaged, and they will be investigated in future work. For instance, a slightly modified version of the framework proposed in this paper can be used to determine the degree of disclosure needed to be reasonably sure that a desired part of the message will actually reach a specific agent with which we do not know how to communicate. This is the situation when an organisation tries to reach an undercover agent by sharing some information with the enemy, hoping that the relevant pieces of information will somehow eventually reach the agent. Our long-term goal is to utilise our approach to identify which message to introduce so as to maximise agent utility, given a knowledge-rich (but potentially incomplete or uncertain) representation of a multiagent system.

7 Conclusions
In this paper we described a framework enabling an agent to determine how much information it should disclose to others in order to maximise its utility. This framework assumes that any disclosure could be propagated onwards by the receiving agents, and that certain agents should not be allowed to infer some information, while it is desirable that others do make inferences from the propagated information. We showed that our framework respects certain intuitions with regards to the level of disclosure used by an agent, and also identified how much an information provider should disclose in order to achieve some form of equilibrium with regards to its utility. Potential applications can be envisaged in strategic contexts, where pieces of information are shared across several partners, which can result in the achievement of a hidden agenda.
To our knowledge, this work is the first to take trust and risk into account when reasoning about information sharing, and we are pursuing several exciting avenues of future work in order to make the framework more applicable to a larger class of situations.
References
Appendix A The Case of Continuous Random Variables
By utilising Definitions 6 and 7 we can describe the impact of disclosing a message to the consumers on the producer.
Proposition 1
Given an FCA, a consumer, and the message received by that consumer, let the information inferred by the consumer be described by the inference r.v. (with the corresponding probability). Then, assuming that the impact is independent of the degree of disclosure given the inferred information, the producer expects an impact described by the r.v. with density:
Proof
The density function is easily derived from the distribution since . ∎
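Proposition 1 says the producer's impact distribution is a mixture: each piece of information the consumer might infer contributes its conditional impact density, weighted by the probability of that inference. The following sketch illustrates this with a discrete analogue (the appendix treats continuous r.v.s); all names and the toy numbers are hypothetical, not the paper's notation.

```python
def impact_distribution(inference_probs, impact_given_info):
    """Mix conditional impact distributions by inference probability.

    inference_probs:   {info: P(consumer infers info)}
    impact_given_info: {info: {impact_value: P(impact | info)}}
    Returns {impact_value: probability} for the unconditional impact.
    """
    mixture = {}
    for info, p_info in inference_probs.items():
        for impact, p_impact in impact_given_info[info].items():
            mixture[impact] = mixture.get(impact, 0.0) + p_info * p_impact
    return mixture

# Toy example: the consumer infers the secret w.p. 0.3, nothing w.p. 0.7.
probs = {"secret": 0.3, "nothing": 0.7}
impacts = {
    "secret":  {-10.0: 0.8, 0.0: 0.2},   # inferring the secret is costly
    "nothing": {0.0: 1.0},               # no inference, no impact
}
dist = impact_distribution(probs, impacts)
print({k: round(v, 3) for k, v in dist.items()})  # {-10.0: 0.24, 0.0: 0.76}
```

The resulting dictionary is the discrete counterpart of the mixture density in Proposition 1; marginalising the inference variable in this way is valid precisely because of the conditional-independence assumption stated above.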
Moreover, any time we need a single-value characterisation of a distribution, we can exploit the same idea as that of descriptors of a random variable, introducing descriptors for trust and risk.
Definition 10
Let be a function defined on , and be a level of inference. We define
(7) 
to be the trust descriptor induced by .
We can do the same to obtain an impact descriptor:
Definition 11
Let be a function defined on , and be a level of disclosure. We define
(8) 
to be the impact descriptor induced by .
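Definitions 10 and 11 both collapse a distribution to a single number by applying a functional such as the expectation. A minimal discrete sketch of this idea, with hypothetical names and toy distributions:

```python
def expectation_descriptor(distribution):
    """Expected value of a discrete distribution {value: probability}."""
    return sum(value * prob for value, prob in distribution.items())

# Hypothetical trust distribution over inference levels, and an
# impact distribution over impact values.
trust_dist = {0.0: 0.1, 0.5: 0.6, 1.0: 0.3}
impact_dist = {-10.0: 0.24, 0.0: 0.76}

print(expectation_descriptor(trust_dist))   # 0.6
print(expectation_descriptor(impact_dist))  # -2.4
```

Swapping `expectation_descriptor` for another functional (a higher moment, or the entropy of the density) yields the other descriptors mentioned below; the interface stays the same.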
Typical descriptor functions include the moment generating functions, etc., and the entropy of the density of some r.v. In the following we use the expectation as the risk descriptor, leaving consideration of other possible functions for future work.

Finally, let us illustrate two notable properties of our model. The first one concerns the case where a consumer can derive the full original message, which, unsurprisingly, leads to the worst-case impact.
Proposition 2
When a consumer is capable of gaining maximum knowledge, then , where is the Dirac delta function, and , i.e., the risk coincides with the 1-trust (Definition 7).
Proof
By the definition of the inference r.v., when the consumer is believed to gain maximum knowledge, the density carries all its weight at a single point for all degrees of disclosure. Hence, it follows from the definition of the Dirac delta function (see also Prop. 1) that
(9) 
∎
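Proposition 2 can be checked numerically in the discrete analogue: when the inference distribution is degenerate (all mass on full recovery of the message, the discrete counterpart of the Dirac delta), the mixture of Proposition 1 collapses to the impact conditional on full knowledge. Names and numbers below are hypothetical.

```python
def impact_distribution(inference_probs, impact_given_info):
    """Mixture of conditional impact distributions (as in Prop. 1)."""
    mixture = {}
    for info, p_info in inference_probs.items():
        for impact, p_impact in impact_given_info[info].items():
            mixture[impact] = mixture.get(impact, 0.0) + p_info * p_impact
    return mixture

impacts = {
    "full":    {-10.0: 0.8, 0.0: 0.2},   # worst case: secret recovered
    "partial": {0.0: 1.0},
}

# Degenerate inference: the consumer recovers the full message w.p. 1.
degenerate = {"full": 1.0, "partial": 0.0}
dist = impact_distribution(degenerate, impacts)
print(dist == impacts["full"])  # True: risk coincides with the 1-trust
```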
The second property pertains to the case where agent shares information with more than one consumer. Such situations are typically non-homogeneous, as the trust and impact levels with regards to each consumer are different. Clearly, it is beneficial to identify conditions where these impacts balance (and, hence, indicate crossover thresholds) across the multiple agents.
For two agents having corresponding inference and behavioural trust distributions, for the shared information to have similar impact, the respective degrees of disclosure should be selected such that the following holds.
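Such a crossover threshold can be found numerically once the expected impact of each consumer is expressed as a function of the degree of disclosure. The sketch below assumes two hypothetical impact curves (one linear, one quadratic in the disclosure degree d) and locates the non-trivial balance point by bisection; the model and all names are illustrative, not derived from the paper.

```python
def impact_a(d):
    """Consumer A: expected impact grows linearly with disclosure."""
    return -4.0 * d

def impact_b(d):
    """Consumer B: expected impact grows quadratically with disclosure."""
    return -10.0 * d * d

def crossover(f, g, lo=1e-6, hi=1.0, tol=1e-9):
    """Bisection on f - g, assuming a sign change on [lo, hi]."""
    assert (f(lo) - g(lo)) * (f(hi) - g(hi)) <= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(lo) - g(lo)) * (f(mid) - g(mid)) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# -4d = -10d^2 has the non-trivial solution d = 0.4.
d_star = crossover(impact_a, impact_b)
print(round(d_star, 6))  # 0.4
```

At disclosure degrees above this threshold, consumer B's quadratic impact dominates; below it, consumer A's does, which is exactly the crossover behaviour discussed above.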