AI Fairness Definition Guide

Information transfer between different actors in our proposed workflow for defining fairness. Figure by Emmanouil Krasanakis

In the context of the Horizon Europe MAMMOth project, we developed the “AI Fairness Definition Guide” to help those creating AI (such as researchers, developers, and product owners) define fairness within the social context of the systems they build, by working with stakeholders and experts from other disciplines. The guide presents a workflow for gathering the fairness concerns of affected stakeholders and translating them into corresponding formalisms and practices under a combined computer science, social science, and legal perspective.

Overview of interactions between different actors for defining what constitutes fair AI in an examined context. Figure by Emmanouil Krasanakis

Creative Commons License

The content of this post is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

Krasanakis Emmanouil
Post-doctoral researcher

My main research interests lie in graph theory and graph neural networks, machine learning with a focus on algorithmic fairness and discrimination, and software engineering.

Rizou Stavroula
Postdoctoral Researcher-Project Manager-Data Protection Specialist

Her research interests focus mainly on the protection of personal data, cross-border data protection, the General Data Protection Regulation (GDPR), and the interaction of personal data protection with innovative technologies in the field of Information Technology (e.g. artificial intelligence, IoT networks, smart homes).