MAI-BIAS toolkit for fairness analysis

Overview of the MAI-BIAS toolkit.

MAI-BIAS is a toolkit for AI fairness analysis, originally created within the MAMMOth project. It aims to cover the needs of both computer scientists and auditors by bringing together existing and novel software solutions for AI bias detection and mitigation under a uniform interface.

The toolkit can be installed either as a local runner, for fast bootstrapping on one machine, or as a tool for remote access through your organization's infrastructure. This post introduces the local runner, which our team maintains and which also includes 40+ contributed modules from across the project's organizations. These modules span various kinds of fairness analysis, trustworthiness analysis, recommendations for bias mitigation techniques, and dataset and model loading. New module contributions are always welcome.

About

MAI-BIAS offers a simple pipeline for fairness analysis: the user selects a dataset loader, a model loader, and an analysis methodology. Configuration parameters customize the loading processes and the analysis, for example to designate paths to datasets or saved model parameters; see the steps in the figure below. Along the way, several messages capture considerations about fairness and bias in AI from the non-technical but equally important viewpoints of social sciences and vulnerable communities.

[Figure: MAI-BIAS demo, showing the pipeline steps. Figure by Emmanouil Krasanakis.]
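
To make the pipeline concrete, here is a minimal Python sketch of the three-step pattern. Every name in it (load_tabular_dataset, load_model, demographic_parity_report, the file paths, and the column names) is a hypothetical placeholder for illustration, not the actual mai-bias API; the real loaders and analyses are listed in the module catalogue.

# Illustrative sketch of the MAI-BIAS three-step pipeline.
# All names below are hypothetical placeholders, NOT the actual mai-bias API.
import csv
import pickle

def load_tabular_dataset(path: str, label: str, sensitive: list[str]) -> dict:
    """Hypothetical dataset loader: reads a CSV and marks sensitive attributes."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return {"rows": rows, "label": label, "sensitive": sensitive}

def load_model(path: str):
    """Hypothetical model loader: restores saved model parameters."""
    with open(path, "rb") as f:
        return pickle.load(f)

def demographic_parity_report(dataset: dict, model) -> dict:
    """Hypothetical analysis: rate of positive predictions per sensitive group."""
    rates: dict[str, list[int]] = {}
    for row in dataset["rows"]:
        group = row[dataset["sensitive"][0]]
        # Assumes numeric feature columns besides the label and sensitive ones.
        features = [
            float(v) for k, v in row.items()
            if k not in (dataset["label"], *dataset["sensitive"])
        ]
        rates.setdefault(group, []).append(int(model.predict([features])[0]))
    return {g: sum(p) / len(p) for g, p in rates.items()}

# Step 1: load a dataset; step 2: load a model; step 3: run an analysis.
# Configuration parameters (here, file paths and column names) customize each step.
dataset = load_tabular_dataset("data/credit.csv", label="approved", sensitive=["gender"])
model = load_model("models/classifier.pkl")
print(demographic_parity_report(dataset, model))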

New runs can be duplicated to create variations of datasets and models. For example, you can test a different model on the same dataset, run the same model on a different dataset, or try new kinds of analysis. A catalogue of all available modules is available online.
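
Continuing the hypothetical sketch above, duplicating a run amounts to swapping one component while keeping the others fixed:

# Variations of the hypothetical run above: same dataset, different model...
other_model = load_model("models/alternative_classifier.pkl")
print(demographic_parity_report(dataset, other_model))

# ...and same model, different dataset.
other_dataset = load_tabular_dataset("data/loans.csv", label="approved", sensitive=["gender"])
print(demographic_parity_report(other_dataset, model))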

Each type of analysis is unique in that it checks for different considerations and provides different insights for experts. Not all analysis methods, and not even all results within each one, are relevant in every application context: there will always be some bias, and the question is which biases are considered discriminatory or harmful. In general, deciding which biases are unfair should be part of an interdisciplinary, multi-stakeholder negotiation process, for which the toolkit equips you with various kinds of analysis to run.

Quickstart

Make sure you are on Python 3.11. Most modules also work on Python 3.13, with the exception of text debiasing. Then install the mai-bias package and launch its desktop app, as in the example below.

# may need to replace python with python3
python --version        # should report Python 3.11 (most modules also run on 3.13)
pip install mai-bias    # install the toolkit
python -m mai_bias.app  # launch the desktop app

Alternatively, try this bootstrapping command to install the local runner in an empty working directory:

curl -fsSL https://raw.githubusercontent.com/mammoth-eu/mammoth-commons/dev/mai_bias.sh -o mai_bias.sh && chmod +x mai_bias.sh && ./mai_bias.sh

After installation, you can run the mai_bias.sh script again, or manually activate your virtual environment and then run python -m mai_bias.app. An interactive command line interface is also available by running python -m mai_bias.cli in a terminal. The toolkit creates several helper files that store run outcomes and cache data to minimize internet usage.

Creative Commons License

The content of this post is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

Krasanakis Emmanouil
Post-doctoral researcher

My main research interests lie in graph theory and graph neural networks, machine learning with a focus on algorithmic fairness and discrimination, and software engineering.