We are developing technologies and services for understanding, searching and verifying media content.
We develop methods and tools for bringing Trustworthy AI and advanced media analytics into new application settings and contexts.
We provide tools for image forensics, Exif metadata analysis, synthetic image detection, visual location estimation and video deepfake detection.
We have solutions for detecting Not Safe For Work (NSFW) and disturbing images and videos.
We provide methods and services for reverse video search using audio-visual similarity on large collections of videos.
We have integrated a number of advanced computer vision and media retrieval methods into a complete web application that can serve diverse media asset management needs.
We offer methods and expertise on measuring and addressing bias and discriminatory behaviour in computer vision models.
We offer tools and expertise on analysis and visualization of online social media connections, conversations and communities.
We offer support for integrating cutting-edge AI models into web services and end-user applications.
We have a long successful track record of research and innovation project coordination, and can provide consulting and research project management services.
The Media Verification team has extensive experience and expertise in the area of online disinformation with an emphasis on multimedia-mediated disinformation.
ViSiL
This repository contains the TensorFlow implementation of the paper ViSiL: Fine-grained Spatio-Temporal Video Similarity Learning.
Created by
Giorgos Kordopatis-Zilos
Image Forensics
This is an integrated framework for image forensic analysis.
Created by
Markos Zampoglou
Computational Verification
A framework for “learning” how to classify social media content as truthful/reliable or not. Features are extracted from the tweet text (tweet-based features, TB) and from the user who published it (user-based features, UB), and a two-level classification model is trained on them; a toy sketch of the setup follows the entry below.
Created by
Olga Papadopoulou
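As a minimal illustration of such a two-level setup: one first-level classifier per feature group, whose posteriors feed a second-level fusion classifier. The classifier choices, binary labels, and fusion scheme here are assumptions for the sketch, not the exact configuration of the published framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def train_two_level(X_tb, X_ub, y):
    # First level: one classifier per feature group
    # (tweet-based features X_tb, user-based features X_ub; binary labels y).
    tb_clf = RandomForestClassifier(n_estimators=100).fit(X_tb, y)
    ub_clf = RandomForestClassifier(n_estimators=100).fit(X_ub, y)
    # Second level: fuse the per-group posteriors with a simple meta-classifier.
    meta_features = np.column_stack([
        tb_clf.predict_proba(X_tb)[:, 1],
        ub_clf.predict_proba(X_ub)[:, 1],
    ])
    fusion = LogisticRegression().fit(meta_features, y)
    return tb_clf, ub_clf, fusion
```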
Multimedia Geotagging
This repository contains the implementation of algorithms that estimate the geographic location of multimedia items based on their textual content. The approach is described in the paper Geotagging Text Content With Language Models and Feature Mining.
Created by
Giorgos Kordopatis-Zilos
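As a rough illustration of the language-model idea: assign each training item's tags to the grid cell of its true location, then score a query item's candidate cells by summed tag-to-cell probabilities. The grid construction, smoothing, and feature-mining steps of the actual paper are omitted, and all names are illustrative.

```python
from collections import Counter, defaultdict

def train(items):
    # items: iterable of (tags, cell_id) pairs, where cell_id identifies the
    # grid cell containing the item's true geographic location.
    tag_cell = defaultdict(Counter)
    for tags, cell in items:
        for tag in set(tags):
            tag_cell[tag][cell] += 1
    return tag_cell

def estimate_cell(tags, tag_cell):
    # Score each cell by the summed, per-tag-normalized cell probabilities
    # and return the most likely cell for the query item.
    scores = Counter()
    for tag in set(tags):
        cells = tag_cell.get(tag)
        if not cells:
            continue
        total = sum(cells.values())
        for cell, count in cells.items():
            scores[cell] += count / total
    return scores.most_common(1)[0][0] if scores else None
```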
Intermediate CNN Features
This repository contains the implementation of the feature extraction process described in Near-Duplicate Video Retrieval by Aggregating Intermediate CNN Layers. Given an input video, one frame per second is sampled and its visual descriptor is extracted from the activations of the intermediate convolutional layers of a pre-trained Convolutional Neural Network. The Maximum Activation of Convolutions (MAC) function is then applied to the activations of each layer to generate a compact layer vector. Finally, the layer vectors are concatenated into a single frame descriptor; a short sketch of this aggregation follows the entry below.
Created by
Giorgos Kordopatis-Zilos
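A minimal NumPy sketch of the aggregation step described above, assuming the intermediate activations have already been extracted from the pre-trained CNN; the function names are illustrative.

```python
import numpy as np

def mac(activations):
    # activations: (H, W, C) feature map from one intermediate conv layer.
    # Maximum Activation of Convolutions: spatial max per channel,
    # yielding a compact C-dimensional layer vector.
    return activations.max(axis=(0, 1))

def frame_descriptor(layer_maps):
    # layer_maps: list of per-layer activation maps for one sampled frame.
    # Concatenate the layer vectors into a single frame descriptor.
    return np.concatenate([mac(a) for a in layer_maps])
```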
Near-Duplicate Video Retrieval with Deep Metric Learning
This repository contains the TensorFlow implementation of the paper Near-Duplicate Video Retrieval with Deep Metric Learning. It provides code for training and evaluation of a Deep Metric Learning (DML) network on the problem of Near-Duplicate Video Retrieval (NDVR). During training, the DML network is fed with video triplets produced by a triplet generator, and is trained with the triplet loss function; a toy version of the loss follows the entry below.
Created by
Giorgos Kordopatis-Zilos
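For illustration, a minimal NumPy version of a standard triplet loss on embedding vectors; the squared-Euclidean distance and the margin value are assumptions of this sketch, not necessarily the choices made in the repository.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pull the anchor towards the positive (a near-duplicate video embedding)
    # and push it away from the negative by at least the margin.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```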
MedDMO addresses the need of the European Digital Media Observatory (EDMO) to expand its regional coverage in EU countries and create a multinational, multilingual, and cross-sectoral hub focused on fact-checking, research, and education to counter disinformation in Malta, Greece and Cyprus. Several multimedia analysis tools are made available to assist fact-checkers and researchers in their work against disinformation. Media literacy activities and the organisation of awareness campaigns will be central to MedDMO, aiming to build resilience and adaptability against disinformation among citizens and media in the Mediterranean region.
Dec 2022 – May 2025
MAMMOth aims to develop an innovative, fairness-aware, AI data-driven foundation that provides the necessary tools and techniques for the discovery and mitigation of multi-discrimination, and that ensures the accountability of AI systems with respect to multiple protected attributes, both for traditional tabular data and for more complex network and visual data. The outcomes of the research in MAMMOth will be made available both as standalone open-source components and integrated into an open-source toolkit, the “MAMMOth toolkit”. The project also comprises active interaction with multiple communities of vulnerable and/or underrepresented groups in AI research, implementing a co-creation strategy to ensure that genuine user needs and pains are at the center of the research agenda.
Nov 2022 – Oct 2025
vera.ai seeks to build trustworthy AI solutions against advanced disinformation techniques, co-created with and for media professionals, and to set the foundation for future research in the area of AI against disinformation. Key novel characteristics of the AI models will be fairness, transparency (including explainability), robustness to new data, and continuous adaptation to new disinformation techniques.
Sep 2022 – Aug 2025
AI4Media aims to address the challenges, risks, and opportunities that the wide use of AI brings to media, society, and politics. The project aspires to become a centre of excellence and a wide network of researchers across Europe and beyond, with a focus on delivering the next generation of core AI advances to serve the key sector of Media.
Sep 2020 – Aug 2024
In this post we present the essential parts of our method RINE, described in the paper Leveraging Representations from Intermediate Encoder-Blocks for Synthetic Image Detection, which has been accepted at the European Conference on Computer Vision (ECCV 2024).
Motivation
Recent research on Synthetic Image Detection (SID) has produced strong evidence of the advantages of representations extracted by foundation models, which exhibit exceptional generalization on data generated by GANs and Diffusion models (Ojha et al. 2023). Motivated by this success, we hypothesize that further performance gains are possible by leveraging representations from intermediate layers, which carry low-level visual information, in addition to representations from the final layer, which primarily carry high-level semantic information.
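As a rough, hypothetical sketch of this idea (not the paper's actual code): collect the class-token representation after every intermediate block of a frozen image encoder, and let a small trainable head weigh the blocks and classify. All names and the softmax-weighting scheme here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def collect_intermediate_cls(blocks, tokens):
    # blocks: Transformer blocks of a frozen, pre-trained image encoder.
    # tokens: (batch, seq_len, dim) CLS+patch token sequence entering block 0.
    cls_per_block = []
    with torch.no_grad():  # the encoder stays frozen
        for block in blocks:
            tokens = block(tokens)
            cls_per_block.append(tokens[:, 0])  # CLS token after this block
    return torch.stack(cls_per_block, dim=1)    # (batch, n_blocks, dim)

class IntermediateHead(nn.Module):
    # Trainable head: learnable per-block weights plus a linear classifier.
    def __init__(self, n_blocks, dim):
        super().__init__()
        self.block_weights = nn.Parameter(torch.zeros(n_blocks))
        self.classifier = nn.Linear(dim, 1)

    def forward(self, cls_per_block):
        w = torch.softmax(self.block_weights, dim=0)             # (n_blocks,)
        pooled = (w[None, :, None] * cls_per_block).sum(dim=1)   # (batch, dim)
        return self.classifier(pooled)  # real-vs-synthetic logit
```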
In this post, we explain the basics behind our paper Credible, Unreliable or Leaked?: Evidence verification for enhanced automated fact-checking by Zacharias Chrysidis, Stefanos-Iordanis Papadopoulos, Symeon Papadopoulos and Panagiotis C. Petrantonakis, which was presented at ICMR’s 3rd ACM International Workshop on Multimedia AI against Disinformation.
Automated Fact-Checking (AFC)
The Information Age, especially after the explosion of online platforms and social media, has led to a surge in new forms of mis- and disinformation, making it increasingly difficult for people to trust what they see and read online. To combat this, many fact-checking organizations, including Snopes, PolitiFact, Reuters and AFP fact-checks, have emerged, dedicated to verifying claims in news articles and social media posts. Nevertheless, manual fact-checking is time-consuming and cannot always keep pace with the rapid spread of mis- and disinformation. This is where the field of Automated Fact-Checking (AFC) comes in. In recent years, researchers have been leveraging advances in deep learning, large language models, computer vision, and multimodal learning to develop tools that assist the work of professional fact-checkers. AFC systems aim to automate key parts of the fact-checking process, such as detecting check-worthy claims, retrieving relevant evidence from the web, and cross-examining it against the claim (Guo et al., 2022). Since fact-checking rarely relies solely on examining internal contradictions in claims, AFC systems often need to retrieve external information from the web or knowledge databases, or to perform reverse image searches, to support or refute a claim, as shown below.
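To make the three stages named above concrete (claim detection, evidence retrieval, verdict prediction), here is a deliberately simplified Python skeleton; every function is a stub standing in for a learned component, not an API from the paper.

```python
def detect_checkworthy_claims(document):
    # Stub: a real system uses a trained claim-detection model.
    return [s.strip() for s in document.split(".") if s.strip()]

def retrieve_evidence(claim):
    # Stub: a real system queries the web, knowledge bases, or
    # reverse image search for visual claims.
    return []

def predict_verdict(claim, evidence):
    # Stub: a real system cross-examines claim and evidence, predicting
    # e.g. supported / refuted / not enough information.
    return "not enough information" if not evidence else "needs review"

def fact_check(document):
    # The three AFC stages from the text, chained into one pipeline.
    results = []
    for claim in detect_checkworthy_claims(document):
        evidence = retrieve_evidence(claim)
        results.append((claim, predict_verdict(claim, evidence)))
    return results
```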
The problem
Generative AI is making it ever easier to create strikingly realistic synthetic images. Detecting these fake images is challenging, and several methods have been developed to tackle the issue; however, there is often a large gap between how well these methods perform in controlled experiments and how they perform in the real world.