UofA-Truth at Factify 2022 : Transformer And Transfer Learning Based Multi-Modal Fact-Checking

01/28/2022
by   Abhishek Dhankar, et al.
Identifying fake news is a very difficult task, especially when considering the multiple modes of conveying information through text, image, video and/or audio. We attempted to tackle the problem of automated misinformation/disinformation detection in multi-modal news sources (including text and images) through our simple, yet effective, approach in the FACTIFY shared task at De-Factify@AAAI2022. Our model produced an F1-weighted score of 74.807. This paper explains our approach to the shared task.
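The task is evaluated with the weighted F1 score, which averages per-class F1 values weighted by each class's support (its frequency among the gold labels). A minimal sketch of that computation, with illustrative labels rather than the official Factify categories:

```python
from collections import Counter


def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1 averaged, weighted by gold-label counts."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[c] / total) * f1
    return score
```

This matches scikit-learn's `f1_score(..., average="weighted")`, so classes that are rare in the gold labels contribute proportionally less to the final score.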

