Logically at Factify 2: A Multi-Modal Fact Checking System Based on Evidence Retrieval techniques and Transformer Encoder Architecture

01/09/2023
by Pim Jordi Verschuuren, et al.

In this paper, we present the Logically submissions to the De-Factify 2 challenge (DE-FACTIFY 2023), Task 1: Multi-Modal Fact Checking. We describe our submissions to this challenge, including the explored evidence retrieval and selection techniques, pre-trained cross-modal and unimodal models, and a cross-modal veracity model based on the well-established Transformer Encoder (TE) architecture, which relies heavily on the concept of self-attention. We also conduct exploratory analysis on the Factify 2 data set, uncovering salient multi-modal patterns and hypotheses that motivate the architecture proposed in this work. A series of preliminary experiments investigates and benchmarks different pre-trained embedding models, evidence retrieval settings, and thresholds. The final system, a standard two-stage evidence-based veracity detection system, yields a weighted average of 0.79 on both the validation set and the final blind test set for Task 1, achieving 3rd place among 9 participants, with a small margin to the top-performing system on the leaderboard.
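The cross-modal veracity model described above builds on self-attention, the core operation of the Transformer Encoder. As a rough illustration only (not the authors' implementation; the fusion of text and image embeddings here is a simplifying assumption), a minimal NumPy sketch of scaled dot-product self-attention over a concatenated sequence of claim-text and claim-image embeddings:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of embeddings.

    X: (seq_len, d) array. For simplicity Q = K = V = X, i.e. no learned
    projection matrices (a full TE layer would apply them).
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)        # (seq_len, seq_len) pairwise similarities
    weights = softmax(scores, axis=-1)   # each row is an attention distribution
    return weights @ X                   # context-mixed embeddings

# Toy cross-modal input: stack text-token and image-region embeddings
rng = np.random.default_rng(0)
claim_text = rng.normal(size=(4, 8))    # 4 text tokens, dim 8
claim_image = rng.normal(size=(2, 8))   # 2 image-region embeddings, dim 8
fused = self_attention(np.vstack([claim_text, claim_image]))
print(fused.shape)  # (6, 8)
```

Because every position attends to every other, text tokens can mix information from image regions and vice versa, which is why a single TE stack is a natural choice for cross-modal fusion.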

