Investigating the Effect of Attributes on User Trust in Social Media

by Dr. Jamal Al Qundus et al.
Freie Universität Berlin

One main challenge in social media is identifying trustworthy information. If we cannot recognize information as trustworthy, that information may become useless or be lost. Conversely, we could consume wrong or fake information, with major consequences. How does a user assess the information provided before consuming it? Are the comments on a post, the author, or the votes essential for such a decision? Are these attributes considered together, and which attribute is most important? To answer these questions, we developed a trust model to support knowledge sharing of user content in social media. This trust model is based on the dimensions of stability, credibility, and quality. Each dimension contains metrics (user role, user IQ, votes, etc.) that are important to the user based on data analysis. In this paper, we present an evaluation of the proposed trust model using conjoint analysis (CA) as an evaluation method. The results, obtained from 348 responses, validate the trust model. A trust degree translator interprets content as very trusted, trusted, untrusted, or very untrusted based on the calculated trust value. Furthermore, the results show a different importance for each dimension: stability 24%, credibility 35%, and quality 41%.




I Introduction

Users consume information when they trust it. One main challenge in social media is how to identify trustworthy information. For instance, relevant information such as a storm warning or a medical instruction might not be considered by users if it is not recognized as trustworthy. Usually, users look at properties (e.g. author, reviews, etc.) to decide whether to trust the information. However, many questions arise as to which properties are relevant and how important they are for the user when consuming this information.

Social media (SM) have many users, which makes them well suited for examining user activities on the information provided. Therefore, we considered the SM platform Genius as a case study to measure users' willingness to trust the information provided on this platform. Interactively, users on Genius create annotations that serve as placeholders for interpretations of texts, especially lyrics and literature. Annotations provide editing functions such as voting, sharing, adding comments, etc. Participation in this platform is described by certain activities. These activities are linked to certain user authorizations (e.g. roles: whitehat, artist, editor, etc.). Based on these authorizations, a user can perform certain activities and earn Intelligence Quotient (IQ, a counter of points awarded for activities on Genius), which indicates the experience required for authorization and acceptance of content [1].

The focus of this article is to determine the properties that are important for user trust in the user-generated content environment (i.e. any content generated by the user on Genius that is an annotation, a comment, or a modification).

Existing research [2, 3, 4, 5] tackles this problem of trust by verifying the history of the generated content, reputation, and algorithms for detecting vandalism. In contrast, we estimate user preferences on content properties that are relevant to the decision to consume (trust) the information. This gives us an idea of how to present information in social media using a template that helps identify trusted information.

To obtain such a template, the user’s willingness to trust should be measured by (1) simulating a number of templates that include different properties and (2) estimating the user’s choices. Measuring user’s willingness leads to the construction of a trust model that quantifies the value of trust and at the same time embodies in its structure the construction of the required template. This can be evaluated with the use of the conjoint analysis (CA) [6] method, which simulates the decision-making process of consumers when choosing products in real life.

To build our model, we first analyzed the data collected from Genius based on user activities (e.g. annotation, voting, comments). Then, we selected the metrics used in Genius that correspond to user activities (for instance, "annotation" corresponds to "annotation IQ", "vote" to "edit IQ"). Finally, we looked for a correlation between the metrics defined in the dimensions of the existing trust models in the literature and those defined in Genius. Based on this literature review, we classified the metrics into three dimensions, namely stability, credibility and quality. These dimensions are then integrated into our trust model. This trust model can also be used for other social media with the same content properties.

Our main emphasis in this paper is to provide a reliable assessment of the selected dimensions and their acceptance by web users in general. This can be conducted by estimating the user choices using a Discrete Choice Conjoint analysis (DCC), which is a form of CA evaluation method.

The paper is structured as follows: Section II provides a brief overview of relevant works. Section III describes the dimensions of the trust model and an illustrative example to compute trust. Section IV presents the preparation of the survey. Section V reports and discusses the findings. Section VI contains the summary and concludes with proposals for further investigation.

II Related Work

Dondio et al. propose a Wikipedia Trust Calculator (WTC) consisting of a data retrieval module that contains the required data of an article. A factor-calculator module calculates the confidence factors. A trust evaluator module transfers the numerical confidence value into a natural-language declaration using constraints provided by a logic conditions module [7]. This approach refers exclusively to Wikipedia and cannot be transferred to other domains such as social media. In addition, its aim is to detect vandalism, not trust in our definition. The trust model of Abdul-Rahman and Hailes is based on sociological characteristics. These are trust beliefs between agents based on experience (of trust) and reputation (coming from a recommending agent), combined to build a trusted opinion for deciding whether to interact with the information provided [8]. This approach combines reputation and agents to build trust. These metrics are not available in our domain.

These two works are closely linked to ours. Our approach is comparable to that of Dondio et al., and in particular their work inspired the calculation of the stability dimension. Those authors build stability based on the change (defined as an edit) in the text length of an article, while the stability dimension in our work is built on any type of edit (vote, suggestion, creation, etc.) to an annotation. In addition, we have adapted the trust degree translator from Abdul-Rahman and Hailes [8]. Based on database analysis, we manually defined its constraints, which are used for the interpretation of the numeric trust value into human-readable language.

Cho et al. investigated trust in different contexts and discussed trust from various aspects. The authors employed a survey to investigate social trust from interactions/networks, and captured quality of service (QoS) and quality of information (QoI) depending on a relation between two entities (trustor and trustee) [9]. However, this is an examination of trust and reputation systems in online services [10]. These works and others must be placed in a restricted domain to find a relationship between the communicating entities, which is not always possible in an unlimited domain like social media. Different from the prior works, our survey brings together the aspects of trust from a specific but open domain and lets entities evaluate them from outside this domain. Additionally, we suggest an advanced equation for measuring trust.

III Trust Model Construction

We conducted an investigation on the social media platform Genius as a case study to build our trust model, as follows: First, we analyzed the data collected from Genius based on user activities (e.g. annotation, voting, comments). Then, we selected the metrics used in Genius that correspond to user activities (for instance, "annotation" corresponds to "annotation IQ", "vote" to "edit IQ"). Furthermore, other metrics are considered, such as "author role" (e.g. editor, or whitehat), which is not an activity but an important metric in Genius (for more detail see the Genius technical report [11]). Finally, we looked for a correlation between the metrics defined in the dimensions of the existing trust models in the literature [7, 12, 13, 14, 15] and those defined in Genius. For instance, the dimension credibility comprises the metrics set: user role, user IQ, attribution and annotation IQ. Based on this literature review, we classified the metrics into three dimensions, namely stability, credibility and quality. These dimensions are then integrated into our trust model.

The trust model classifies annotations into four classes [8] called trust degrees, illustrated in Table I. These classes were obtained from the database analysis based on the Empirical Cumulative Distribution Function (ECDF) [16]. Next, we present the formulas used to compute trust.

Trust Degree   Percentage   Edits number   Edits IQ   User IQ
vt             25%          > 5            > 35       > 1000
t              31.25%       2 to 5         5 to 35    0 to 1000
u              6.25%        0 to 2         0 to 5     -100 to 0
vu             37.5%        0              0          < -100

  • Tab. I illustrates the trust degree translator for the interpretation of the individual statements' trust classes: vt = very trusted, t = trusted, u = untrusted, vu = very untrusted. The percentage results from the ECDF applied to the data analysis observed on Genius, and illustrates the distribution of the statements' (annotations') trust classes in the data set.

TABLE I: Trust Degree Translator
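As a sketch, the translator in Table I can be read as a threshold classifier whose class boundaries were derived from ECDF quantiles of the Genius data. The function names and the boundary handling (inclusive vs. exclusive) below are assumptions for illustration, not the paper's exact implementation:

```python
def ecdf(values):
    """Empirical cumulative distribution function: for each sorted value,
    the fraction of the sample that is less than or equal to it."""
    xs = sorted(values)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

def trust_degree(edits, edits_iq, user_iq):
    """Map an annotation's metrics to a trust class using the Table I
    thresholds (boundary handling is an assumption)."""
    if edits > 5 and edits_iq > 35 and user_iq > 1000:
        return "very trusted"
    if edits >= 2 and edits_iq >= 5 and user_iq >= 0:
        return "trusted"
    if edits >= 0 and edits_iq >= 0 and user_iq >= -100:
        return "untrusted"
    return "very untrusted"
```

For example, an annotation with 6 edits, an edits IQ of 40 and a user IQ of 1200 would fall into the very trusted class.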


Trust (T) is calculated based on the dimensions stability, credibility and quality. The trust value obtained is then interpreted by the trust degree translator to identify the annotation class: very trusted, trusted, untrusted and very untrusted.

    T = i_S * S + i_C * C + i_Q * Q    (1)

where i_S, i_C and i_Q are the importance factors of each dimension.

  • Stability (S) is represented by the annotation edits' distance (see Equation 3), which captures the number of content modifications in a time interval. For example, the interval could be between the initial time stamp of the annotation and the current time. E(t) (see Equation 2) specifies the number of edits at the time stamp t, where t ranges over the set of all integers.

  • Credibility (C) refers to correctness, authorship and the depth in meaning of information. We consider the type of user activity on an annotation as editsType. There are complex activities (e.g. annotation creation) that require agility from the user during execution. Note that the ranks of these complex activities in Genius are higher than those of so-called simple activities (e.g. annotation voting). In addition, we applied a User Credibility Correction Factor (UCCF) for modifying the credibility status, calculated based on the user's role (Genius members have roles that differ in the permissions assigned to them; the IQ numbers earned for an activity also depend on the role type), user IQ (the overall earned IQ count of a user) and attribution (the percentage of edits made by a user). Here, UCCF = the sum over each author a of a.attribution × a.rolePower, and a.rolePower = a.role's roleFactor × a.IQ.

  • Quality (Q) is calculated exactly like C, except for the restriction to the n top active users (n is a number that the observer can freely select), who are ordered based on their attribution.
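The definitions above can be sketched in code. This is a minimal sketch under stated assumptions: UCCF is taken as a sum of attribution × rolePower over the authors, rolePower as roleFactor × IQ, and trust as a weighted sum of the three dimensions with the importance weights reported later in the results; all names are ours:

```python
def rolepower(role_factor, iq):
    # Assumption: a.rolePower = roleFactor(a.role) * a.IQ
    return role_factor * iq

def uccf(authors):
    # User Credibility Correction Factor: sum over authors of
    # attribution * rolePower (reconstructed from the textual definition).
    return sum(a["attribution"] * rolepower(a["role_factor"], a["iq"])
               for a in authors)

def trust(stability, credibility, quality,
          w_s=0.24, w_c=0.35, w_q=0.41):
    # Assumed weighted combination of the three dimensions; the default
    # weights are the importances reported in the results section.
    return w_s * stability + w_c * credibility + w_q * quality
```

With a single author holding 70% attribution, a role factor of 2 and an IQ of 10, uccf would yield 0.7 * 2 * 10 = 14.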


Before presenting an example that illustrates how trust is calculated, we introduce the terms utility and importance. The utility or part-worth is a measure of how important an attribute level is for a trade-off decision by the user, whereas the relative importance of an attribute is its delta percentage compared to all utilities. Each time a respondent makes a choice, an accumulator compiles the numbers, which indicate how often a level has been selected. The algorithm used for calculating the utilities is a logit model [17] combined with a Nelder-Mead simplex algorithm [18].
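As an illustration of this estimation approach, a multinomial logit assigns each concept in a task a choice probability proportional to the exponential of its summed part-worths, and a Nelder-Mead search minimizes the resulting negative log-likelihood over the part-worths. A sketch (the function names are ours):

```python
import math

def choice_probabilities(utilities):
    # Multinomial logit: softmax over the concepts' total utilities.
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def negative_log_likelihood(tasks, choices):
    # Objective a Nelder-Mead search would minimize over the part-worths:
    # tasks is a list of per-concept utility vectors, choices holds the
    # index of the concept the respondent picked in each task.
    return -sum(math.log(choice_probabilities(u)[c])
                for u, c in zip(tasks, choices))
```

For two concepts with equal utilities, each is chosen with probability 0.5, and a single observed choice contributes log 2 to the objective.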

The utility U(l) of an attribute level l is calculated as the distance from its selected value s_l to the minimum selected value s_min of the same attribute a, as shown in Equation 6:

    U(l) = s_l - s_min    (6)

The importance I(a) of an attribute a is calculated as the difference between the maximum selected number max_a and the minimum selected number min_a of this attribute's levels, divided by the sum of such differences over all attributes (see Equation 7):

    I(a) = (max_a - min_a) / Σ over all attributes a' of (max_a' - min_a')    (7)
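Equations 6 and 7 can be sketched as follows. The reconstruction can be checked against Table II: the selection counts 2/33/44/61, 5/24/39/72 and 1/7/52/80 yield relative importances of roughly 29%, 33% and 39%, matching the table:

```python
def level_utilities(level_values):
    # Eq. 6: a level's utility is the distance from its value to the
    # minimum level value of the same attribute.
    m = min(level_values)
    return [v - m for v in level_values]

def attribute_importance(selected_counts):
    # Eq. 7: an attribute's importance is the range (max - min) of its
    # levels' selection counts divided by the sum of these ranges over
    # all attributes.
    ranges = [max(c) - min(c) for c in selected_counts]
    total = sum(ranges)
    return [r / total for r in ranges]

# Selection counts from Table II (Comment, Reader Rating, Author Rating):
importances = attribute_importance([[2, 33, 44, 61],
                                    [5, 24, 39, 72],
                                    [1, 7, 52, 80]])
```

Likewise, level_utilities applied to the Author Rating levels (-100, 0, 1000, 2000) reproduces the Utility column 0, 100, 1100, 2100 of Table II.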


III-A Illustrative Example

In this section, we present an example that illustrates how we apply the results of CA in evaluating the dimensions and as a consequence of that, computing trust of an annotation.

A discrete choice conjoint analysis (DCC) provides a number of tasks (the number depends on the conjoint analysis design). A task consists of a set of concepts, and each concept represents a certain number of attribute levels. Users select the concept that they would trust in reality. Figure 1 illustrates a task used in our conjoint design. Each task concept contains the attributes Comments, Reader Rating and Author Rating and their randomly generated level values. The attributes act in place of the trust dimensions.
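A task like the one in Figure 1 can be simulated by drawing one level per attribute for each concept. The level sets are those used in our design; the helper names are illustrative:

```python
import random

ATTRIBUTE_LEVELS = {
    "Comments": [0, 2, 5, 10],
    "Reader Rating": [0, 10, 30, 70],
    "Author Rating": [-100, 0, 1000, 2000],
}

def make_task(n_concepts=4, rng=random):
    # One DCC task: n_concepts alternative concepts, each combining a
    # randomly drawn level of every attribute.
    return [{attr: rng.choice(levels)
             for attr, levels in ATTRIBUTE_LEVELS.items()}
            for _ in range(n_concepts)]
```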

Table II provides an example of the collected numbers of user trade-offs. The columns Level, Selected and Offered are predefined. We only explain the calculations for the attribute Comments, since the calculations for the attributes Reader Rating and Author Rating are analogous.


Equation 6 is applied as follows: a = Comment, l = a level of a, s_l = the level's selected value, s_min = the minimum level selected value of a, and U(l_min) = 0.

The relative importance (see Equation 7) is applied as follows:

    I(Comment) = (61 - 2) / ((61 - 2) + (72 - 5) + (80 - 1)) = 59 / 205 ≈ 0.29

and likewise I(Reader Rating) ≈ 0.33 and I(Author Rating) ≈ 0.39.


Trust (T) can now be calculated based on the dimensions' Equations 3, 4, 5 and 1 as follows:

Let the number of edits of an annotation be 50 from the creation time to the current time. Of these edits, 10 are complex edits (CE) (e.g. content modification) and 40 are simple edits (SE) (e.g. voting). The edits IQ equals 30. The users' roles are editor (25), whitehat (3) and staff (38); the sum total of the users' (authors') IQ equals 160 (10, 30 and 120, respectively); and their attributions are 70% with 2 CE and 7 SE, 28% with 7 CE and 30 SE, and 2% with 1 CE and 3 SE, respectively. We set n = 2 for the n top active users. Based on this input, the stability, credibility and quality can be calculated as follows:

Stability =

Credibility =

where UCCF = the sum over each author a of a.attribution × a.rolePower, and a.rolePower = a.role's roleFactor × a.IQ


EditsTypes = edit IQ =


where UCCF' =

EditsTypes’ = editIQ =

Trust =

The trust degree translator then interprets the trust value as very trusted, trusted, untrusted or very untrusted.

Fig. 1: presents a task that illustrates one step in the conjoint analysis profile. This task is displayed to the respondents so they can make a trade-off between the provided concepts. A concept consists of the attributes (Comments, Reader Rating and Author Rating) and randomly generated level values combined into alternatives.
Attribute       Level   Selected   Utility   Offered   rel. Importance
Comment             0          2         0        5%               29%
                    2         33         2       24%
                    5         44         5       31%
                   10         61        10       40%
Reader Rating       0          5         0        4%               33%
                   10         24        10       17%
                   30         39        30       28%
                   70         72        70       51%
Author Rating    -100          1         0        5%               39%
                    0          7       100        7%
                 1000         52      1100       37%
                 2000         80      2100       51%
  • Tab. II gives an example of calculating the attributes' relative importance. Attribute = the property of a statement; Level = one possible value an attribute can take; Selected = the selection frequency by respondents; Utility = the level's importance to the respondents' choice decision; Offered = display frequency to the respondents; relative Importance = a measure of how preferred an attribute is in the respondents' choice decision.

TABLE II: Attributes and Levels Calculation Example

IV Methodology

Our approach applies DCC as follows: Using e-mail, we announced a link to the online survey in Arabic, English and German. In the DCC we described the attributes (see Table III): (1) Comments as "a number that indicates improvement edits created by other readers", (2) Reader Rating as "a number of other readers' approval" and (3) Author Rating as "a number of votes that the author earned for his activities in the social network". We also stated: "The greater the number, the greater the satisfaction. Each number represents the sum of negative and positive assertions. There are comments that were rated negatively and others that were rated positively by readers. Negatives were marked with minus and positives with plus numbers. Subsequently, both numbers were summed up. This applies to all properties."

Due to the amount of information in a full-profile design (4^3 = 64 alternatives), a complete questionnaire would become too extensive. Therefore, we decided to use a fractional factorial design with a factor of 1/2. The conducted DCC consists of 32 concepts, and the respondents were given four alternatives to choose from in each task.
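The reduction from the full profile set to 32 concepts can be sketched as follows. A real fractional factorial would use a structured (e.g. orthogonal) selection; the plain random sample here is purely illustrative:

```python
from itertools import product
import random

levels = ([0, 2, 5, 10],          # Comments
          [0, 10, 30, 70],        # Reader Rating
          [-100, 0, 1000, 2000])  # Author Rating

# Full-profile design: every combination of the three 4-level attributes.
full_profiles = list(product(*levels))   # 4 * 4 * 4 = 64 alternatives

# Halved (fractional) design with 32 concepts, drawn at random here
# for illustration only.
random.seed(42)
fraction = random.sample(full_profiles, 32)
```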

V Results and Discussion

Johnson and Orme recommend a rule of thumb for the minimum sample size for CBC modeling [19]: n · t · a / c ≥ 500, where n is the number of respondents, t the number of tasks, a the number of alternatives per task and c the highest number of levels over all attributes. Accordingly, our questionnaire has a satisfactory number of participants: 348 responses with a completion rate of 40.65%. The responses were distributed over 12 countries, which makes the results more significant, and we experienced responses from a widely distributed audience.
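The rule of thumb can be checked directly. In the sketch below, a = 4 alternatives per task and c = 4 levels per attribute come from our design, while t = 8 tasks is an assumption (32 concepts shown four at a time):

```python
def cbc_sample_sufficient(n, t, a, c, threshold=500):
    # Johnson & Orme rule of thumb: n*t*a/c should reach roughly 500.
    return n * t * a / c >= threshold

# 348 respondents, 8 tasks (assumed), 4 alternatives per task, 4 levels:
ok = cbc_sample_sufficient(348, 8, 4, 4)   # 348*8*4/4 = 2784 >= 500
```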

The areas of emphasis of the individual statements met all our expectations and are in line with the proposed theoretical trust model. The selected properties included in the dimensions are important to users when deciding to trust the information provided. Table III presents the best profile (levels: +10, +70 and +2000) and the worst profile (levels: 0, 0, -100). These two profiles are interpreted as very trusted and very untrusted, respectively, by the trust degree translator. As a reminder, the values of the levels in the questionnaire and the numbers applied in the trust degree translator were obtained from the analysis of the database collected from Genius. The table provides information regarding the importance of the attributes and the utilities of the levels. The analysis of the respondents' choice decisions was conducted by the authors of this paper. It provides a summary from which we conclude the following:

  1. None of the attributes was excluded from the choice decision. All defined dimensions have significance in the proposed trust model; the importance of stability, credibility and quality is 24%, 35% and 41%, respectively (see Table III). If a dimension had turned out to be less significant, this would mean that it has no relevance for the model and should not be considered. This result is an indicator that the model is accepted and confirmed by the evaluators.

  2. None of the attributes has an extremely high value. No dimension alone makes up the model, which would mean that we could neglect all other dimensions and focus exclusively on one. It would also indicate that other components or dimensions essential for trust in this context were not considered by our model.

  3. The importance of the attributes is roughly equally dispersed. This confirms our preliminary consideration, in which we weighted the dimensions nearly equally (see Equation 1). Nevertheless, a more precise weighting for the trust calculation could not be determined until this evaluation, in which the respondents pointed out their importance weight distribution for each dimension. Applying this weighting to the equation, we can improve the calculation with coefficients derived from the percentages of the individual dimension rankings.

  4. There is a distinct subdivision of the utilities of an attribute into four parts. This subdivision is in line with the classes of the trust degrees. The levels of an attribute exhibit clear differences in how often they have been selected (see Table III). We can map the distribution of the level utilities of an individual attribute onto the classes. This applies to all attributes.

  5. The distinct subdivisions agree across all attributes. This corresponds with the prior statement and extends it: the subdivision of the level utilities within one attribute continues across all attributes' levels, and in the same order (see Table III). If we number the trust classes and the levels of each attribute consecutively, we see that the first level of each attribute can be assigned to the first trust class (very untrusted), the second level to the second class (untrusted), and so forth. For instance, the utility (-0.90) of the first level (-100) of the attribute Author Rating, the utility (-0.82) of the first level (0) of the attribute Reader Rating and the utility (-0.67) of the first level (0) of the attribute Comments together form the concept that was rated lowest by the respondents. This concept is classified as very untrusted by our trust model. The same applies to the second, third and fourth levels of each attribute, respectively.

Author Rating (quality)   Reader Rating (credibility)   Comments (stability)
        40.85%                      34.8%                     24.35%
 Level     Utility           Level     Utility          Level     Utility
  -100      -0.90                0       -0.82              0       -0.67
     0      -0.39               +5       -0.18             +2       -0.04
 +1000      +0.44              +30       +0.29             +5       +0.24
 +2000      +0.86              +70       +0.71            +10       +0.47
  • Table III gives the attributes' importance, the levels and their utilities as the measure of the respondents' preferences. This refines the weights of the attributes acting in place of the trust model dimensions.

TABLE III: Attributes Importance and Levels Utilities Results

VI Conclusion and Future Work

This work carried out an evaluation of the trust model using conjoint analysis to determine the respondents' choices. The information gained from the results confirms our trust model. Its structure includes the required template that helps identify trusted information according to the users' estimated preferences. The model consists of three dimensions, stability, credibility and quality, which were adopted from the literature and the Genius platform. It is intended to support the development of successful applications.

The logit model used provides effective analysis and convincing results. Nevertheless, an analysis using hierarchical Bayes would allow us to take a fresh look at the selections of individuals. With hierarchical Bayes we would be able to trace the history of respondents' decisions and possibly expose more details about their behavior. Now that the respondents' conclusions about the developed model have been drawn, it is necessary to investigate the database from which the model was created. A clustering procedure should be used to identify which text content belongs to which trust category and why. The content-related parameters are to be investigated.


  • [1] J. A. Qundus, “Generating trust in collaborative annotation environments,” in Proceedings of the 12th International Symposium on Open Collaboration Companion.   ACM, 2016, p. 3.
  • [2] H. Li, J. Jiang, and M. Wu, “The effects of trust assurances on consumers’ initial online trust: A two-stage decision-making process perspective,” International Journal of Information Management, vol. 34, no. 3, pp. 395–405, 2014.
  • [3] C. L. Miltgen and H. J. Smith, “Exploring information privacy regulation, risks, trust, and behavior,” Information & Management, vol. 52, no. 6, pp. 741–759, 2015.
  • [4] H. Liu, E.-P. Lim, H. W. Lauw, M.-T. Le, A. Sun, J. Srivastava, and Y. Kim, “Predicting trusts among users of online communities: an epinions case study,” in Proceedings of the 9th ACM Conference on Electronic Commerce.   ACM, 2008, pp. 310–319.
  • [5] B. T. Adler, L. De Alfaro, S. M. Mola-Velasco, P. Rosso, and A. G. West, “Wikipedia vandalism detection: Combining natural language, metadata, and reputation features,” in International Conference on Intelligent Text Processing and Computational Linguistics.   Springer, 2011, pp. 277–288.
  • [6] R. D. Luce and J. W. Tukey, “Simultaneous conjoint measurement: A new type of fundamental measurement,” Journal of mathematical psychology, vol. 1, no. 1, pp. 1–27, 1964.
  • [7] P. Dondio, S. Barrett, S. Weber, and J. M. Seigneur, “Extracting trust from domain analysis: A case study on the wikipedia project,” in International Conference on Autonomic and Trusted Computing.   Springer, 2006, pp. 362–373.
  • [8] A. Abdul-Rahman and S. Hailes, “Supporting trust in virtual communities,” in System Sciences, 2000. Proceedings of the 33rd Annual Hawaii International Conference on.   IEEE, 2000, pp. 9–pp.
  • [9] J.-H. Cho, K. Chan, and S. Adali, “A survey on trust modeling,” ACM Computing Surveys (CSUR), vol. 48, no. 2, p. 28, 2015.
  • [10] A. Jøsang, R. Ismail, and C. Boyd, “A survey of trust and reputation systems for online service provision,” Decision support systems, vol. 43, no. 2, pp. 618–644, 2007.
  • [11] J. Al Qundus, “Technical analysis of the social media platform genius,” Freie Universität Berlin, Tech. Rep., 2018.
  • [12] M. J. Metzger, A. J. Flanagin, K. Eyal, D. R. Lemus, and R. M. McCann, “Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment,” Annals of the International Communication Association, vol. 27, no. 1, pp. 293–335, 2003.
  • [13] I. Pranata and W. Susilo, “Are the most popular users always trustworthy? the case of yelp,” Electronic Commerce Research and Applications, vol. 20, pp. 30–41, 2016.
  • [14] M. Warncke-Wang, D. Cosley, and J. Riedl, “Tell me more: an actionable quality model for wikipedia,” in Proceedings of the 9th International Symposium on Open Collaboration.   ACM, 2013, p. 8.
  • [15] B. Fogg, J. Marshall, O. Laraki, A. Osipovich, C. Varma, N. Fang, J. Paul, A. Rangnekar, J. Shon, P. Swani et al., “What makes web sites credible?: a report on a large quantitative study,” in Proceedings of the SIGCHI conference on Human factors in computing systems.   ACM, 2001, pp. 61–68.
  • [16] R. Castro, “The empirical distribution function and the histogram,” Lecture Notes, 2WS17-Advanced Statistics. Department of Mathematics, Eindhoven University of Technology, 2015.
  • [17] J. J. Louviere and G. Woodworth, “Design and analysis of simulated consumer choice or allocation experiments: an approach based on aggregate data,” Journal of marketing research, pp. 350–367, 1983.
  • [18] J. A. Nelder and R. Mead, “A simplex method for function minimization,” The computer journal, vol. 7, no. 4, pp. 308–313, 1965.
  • [19] R. M. Johnson and B. K. Orme, “How many questions should you ask in choice-based conjoint studies,” in Art Forum, Beaver Creek, 1996.