Papers

Filtering external evidence for realistic training and evaluation of Automated Fact-Checking

In this post, we explain the basics behind our paper "Credible, Unreliable or Leaked?: Evidence verification for enhanced automated fact-checking" by Zacharias Chrysidis, Stefanos-Iordanis Papadopoulos, Symeon Papadopoulos and Panagiotis C. Petrantonakis, which was presented at the 3rd ACM International Workshop on Multimedia AI against Disinformation at ICMR.

Automated Fact-Checking (AFC)

The Information Age, especially after the explosion of online platforms and social media, has led to a surge in new forms of mis- and disinformation, making it increasingly difficult for people to trust what they see and read online. To combat this, many fact-checking organizations have emerged, including Snopes, PolitiFact, Reuters Fact Check and AFP Fact Check, dedicated to verifying claims in news articles and social media posts. Nevertheless, manual fact-checking is time-consuming and cannot always keep pace with the rapid spread of mis- and disinformation.

This is where the field of Automated Fact-Checking (AFC) comes in. In recent years, researchers have been leveraging advances in deep learning, large language models, computer vision, and multimodal learning to develop tools that assist the work of professional fact-checkers. AFC systems aim to automate key parts of the fact-checking process, such as detecting check-worthy claims, retrieving relevant evidence from the web, and cross-examining that evidence against the claim (Guo et al., 2022). Since fact-checking rarely relies solely on examining internal contradictions in a claim, AFC systems often need to retrieve external information from the web or knowledge databases, or to perform reverse image searches, in order to support or refute a claim.
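To make the claim-evidence cross-examination step more concrete, here is a minimal sketch in Python. It is not the method from the paper: evidence retrieval is stubbed out with a hard-coded list, an off-the-shelf natural language inference model (roberta-large-mnli from Hugging Face) stands in for a trained verifier, and the claim, evidence strings, and label mapping are illustrative assumptions.

```python
# Minimal, illustrative sketch of the claim-evidence cross-examination step.
# NOT the paper's method: retrieval is stubbed with a hard-coded list, and a
# generic NLI model stands in for a trained fact-checking verifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # off-the-shelf NLI model, chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def cross_examine(claim: str, evidence: str) -> str:
    """Map an NLI prediction over (evidence, claim) to a fact-checking verdict."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[logits.argmax(dim=-1).item()]
    # MNLI labels are CONTRADICTION / NEUTRAL / ENTAILMENT
    return {"ENTAILMENT": "SUPPORTED",
            "CONTRADICTION": "REFUTED"}.get(label, "NOT ENOUGH INFO")

# Retrieval is stubbed here; a real AFC system would query the web,
# a knowledge database, or a reverse image search to gather this evidence.
claim = "The Eiffel Tower is located in Berlin."
retrieved_evidence = ["The Eiffel Tower is a wrought-iron lattice tower in Paris, France."]

for evidence in retrieved_evidence:
    print(evidence, "->", cross_examine(claim, evidence))
```

In practice the verdict would aggregate over many retrieved documents of varying quality, which is exactly why filtering unreliable or leaked evidence, the subject of the paper, matters.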

Overcoming Unimodal Bias in Multimodal Misinformation Detection

In this post, we explain the basics behind our paper "VERITE: a robust benchmark for multimodal misinformation detection accounting for unimodal bias" by Stefanos-Iordanis Papadopoulos, Christos Koutlis, Symeon Papadopoulos and Panagiotis C. Petrantonakis, which has been published in the International Journal of Multimedia Information Retrieval (IJMIR).

Given the rampant spread of misinformation, the role of fact-checkers becomes increasingly important, and their task grows more challenging given the sheer volume of content generated and shared daily across social media platforms. Studies reveal that multimedia content, with its visual appeal, captures attention more effectively and is disseminated more broadly than plain text (Li & Xie, 2019), and that the inclusion of images can notably amplify the persuasiveness of false statements (Newman et al., 2012). For these reasons, Multimodal Misinformation (MM) is particularly concerning.

MM involves false or misleading information disseminated through diverse "modalities" of communication, including text, images, audio, and video. A common scenario is an image taken out of its authentic context, or an accompanying caption that distorts critical elements such as the event, date, location, or the identity of the depicted person. For instance, one example from the paper shows an image in which the grounds of a music festival are covered in garbage, accompanied by the claim that it was taken in June 2022 "after Greta Thunberg's environmentalist speech". The image, however, was taken out of its authentic context: it was actually captured in 2015, not 2022.
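As a rough illustration of how a detector consumes an image-caption pair, the sketch below scores their semantic consistency with an off-the-shelf CLIP model. This is not the VERITE benchmark or the detector evaluated in the paper; the model name, image path, and caption are assumptions for demonstration. It also hints at why the task is hard: a plain similarity score cannot catch an out-of-context image whose caption only falsifies the date or location.

```python
# Illustrative sketch: scoring image-caption consistency with off-the-shelf CLIP.
# NOT the VERITE benchmark or the paper's detector; the model name, image path,
# and caption below are assumptions chosen for demonstration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_text_similarity(image_path: str, caption: str) -> float:
    """Cosine similarity between CLIP image and text embeddings, in [-1, 1]."""
    image = Image.open(image_path)
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

# Hypothetical inputs for illustration.
score = image_text_similarity(
    "festival.jpg",
    "Festival grounds covered in garbage after Greta Thunberg's speech, June 2022",
)
# A low score hints at a mismatched pair, but a high score does NOT rule out
# misinformation: an out-of-context image can match its caption visually while
# the stated date, location, or event is false -- the core difficulty of MM.
print(f"CLIP image-text similarity: {score:.3f}")
```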