A personal model of trumpery: Deception detection in a real-world high-stakes setting

11/05/2018
by Sophie van der Zee, et al.

Language use reveals information about who we are and how we feel [1-3]. One of the pioneers in text analysis, Walter Weintraub, manually counted which types of words people used in medical interviews and showed that the frequency of first-person singular pronouns (i.e., I, me, my) was a reliable indicator of depression, with depressed people using "I" more often than people who are not depressed [4]. Several studies have demonstrated that language use also differs between truthful and deceptive statements [5-7], but not all differences are consistent across people and contexts, making prediction difficult [8]. Here we show how well linguistic deception detection performs at the individual level by developing a model tailored to a single individual: the current US president. Using tweets fact-checked by an independent third party (The Washington Post), we found substantial linguistic differences between factually correct and incorrect tweets and developed a quantitative model based on these differences. Next, we predicted whether out-of-sample tweets were either factually correct or incorrect and achieved a 73% accuracy. These results demonstrate the power of linguistic analysis in real-world deception research when applied at the individual level and provide evidence that factually incorrect tweets are not random mistakes of the sender.
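To make the described pipeline concrete, here is a minimal sketch, assuming tweets are available as (text, label) pairs where the label records the fact-check verdict. The word categories, feature definition, and logistic-regression classifier below are illustrative assumptions, not the authors' actual feature set or model.

```python
# Hypothetical sketch: classify tweets as factually correct vs. incorrect
# from simple word-category frequencies. Categories and classifier are
# illustrative assumptions, not the paper's exact method.
import re
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Example word categories (in the spirit of Weintraub/LIWC-style counts);
# a real model would use many more categories.
CATEGORIES = {
    "i_pronouns": {"i", "me", "my", "mine", "myself"},
    "negations": {"no", "not", "never", "cannot"},
    "exclusive": {"but", "except", "without"},
}

def features(text: str) -> list[float]:
    """Relative frequency of each word category in one tweet."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return [sum(w in cat for w in words) / n for cat in CATEGORIES.values()]

def train_and_evaluate(tweets):
    """tweets: list of (text, label) pairs, label 1 = fact-checked incorrect.
    Fits a classifier on held-in tweets and reports out-of-sample accuracy."""
    X = [features(text) for text, _ in tweets]
    y = [label for _, label in tweets]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    return model, model.score(X_te, y_te)
```

The held-out split mirrors the paper's out-of-sample prediction step: the model is fit on one set of fact-checked tweets and scored on tweets it has never seen.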
