As AI systems become more ingrained in everyday life, it is important to address fairness concerns; for example, machine learning models are known to pick up and exacerbate biases found in their training data. So far, various attempts have been made to quantify and mitigate unfair AI biases, for instance with algorithmic frameworks like AIF360. However, these attempts tend to work on a case-by-case basis, and the measures of bias or fairness they tackle are restricted to simplistic settings, such as binary sensitive attributes (e.g., men vs. women, white vs. black people). For this reason, in the context of the Horizon Europe MAMMOth project, we developed a Python library called FairBench that helps system creators assess bias and fairness in complex scenarios.