We would like to announce the release of FairBench, a comprehensive Python library for exploring and understanding AI biases and fairness.
Why an(other) AI fairness library?
As AI systems become ingrained in everyday life, it is important to ensure their fairness across different demographic groups and their intersections.
But it can be challenging to navigate the many algorithmic bias and fairness definitions and metrics in a standardized way.
Introducing FairBench: a partner in fair AI development
FairBench provides a robust and flexible platform for in-depth AI fairness exploration, for example, as part of your AI fairness compliance plan.
It composes fairness definitions from a growing list of simpler building blocks, and lets you combine and run those, automatically or manually, in a couple of lines of code.
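To make the building-block idea concrete, here is a minimal plain-Python sketch (not FairBench's actual API; all function names are hypothetical): a base metric evaluated per sensitive group, plus a reduction across groups, compose into one fairness measure.

```python
# Illustrative sketch of composing a fairness measure from simpler blocks.
# Block 1: a base performance metric.
def accuracy(predictions, labels):
    """Fraction of correct predictions."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Block 2: evaluate any base metric separately for each sensitive group.
def per_group(metric, predictions, labels, groups):
    values = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        values[g] = metric([predictions[i] for i in idx],
                           [labels[i] for i in idx])
    return values

# Block 3: reduce per-group values to one number (1.0 means parity).
def min_ratio(values):
    vals = list(values.values())
    return min(vals) / max(vals) if max(vals) else 0.0

preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 0, 1, 0, 1]
sex    = ["m", "m", "m", "f", "f", "f"]

group_acc = per_group(accuracy, preds, labels, sex)
print(group_acc)             # accuracy per group
print(min_ratio(group_acc))  # composed fairness measure
```

Swapping the base metric or the reduction yields a different fairness definition from the same scheme, which is the standardization that the library's building blocks aim for.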
Take advantage of features designed to help grasp a broad picture of systems:
🧱 Measures are built from simpler blocks through a standard scheme that makes it easier to understand what each one represents.
📈 Generate fairness reports and stamps for classification, recommendation, ranking, or scoring tasks. Reports contain descriptions of their values, can be saved, and can be compared to track progress as datasets evolve or between different algorithm versions.
⚖️ Perform analysis across multiple multi-value sensitive attributes and their intersections. Visualize the results in the console or in the browser, or export them in various formats (HTML, JSON, etc.) for integration into your pipelines.
🧪 Filter reports (simplify, transform, or extract ad-hoc summaries) to see where to start your investigation, and backtrack to the intermediate computations behind worrisome values to understand the algorithmic issues at play.
🖥️ ML compatible: handles lists, arrays, dataframes, and tensors from popular frameworks. These could come from any modality you are working with (tabular data, images, graphs, text, etc.).
📦 Comes together with exploratory datasets and algorithms, currently from the tabular and vision data modalities, for out-of-the-box experimentation.
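The intersectional analysis mentioned above can be sketched in plain Python (again, hypothetical helper names, not FairBench's actual API): every combination of multiple multi-value sensitive attributes defines a group, and a statistic is then checked per intersection.

```python
# Illustrative sketch of analysis over intersections of sensitive attributes.
from itertools import product

def intersections(**attributes):
    """Map each intersectional group, e.g. ('f', 'young'), to its sample indices."""
    names = list(attributes)
    n = len(next(iter(attributes.values())))
    groups = {}
    for combo in product(*(sorted(set(v)) for v in attributes.values())):
        idx = [i for i in range(n)
               if all(attributes[name][i] == value
                      for name, value in zip(names, combo))]
        if idx:  # keep only non-empty intersections
            groups[combo] = idx
    return groups

sex = ["m", "m", "f", "f", "f", "m"]
age = ["young", "old", "young", "old", "young", "young"]
positives = [1, 0, 1, 1, 0, 1]

for group, idx in intersections(sex=sex, age=age).items():
    rate = sum(positives[i] for i in idx) / len(idx)
    print(group, rate)  # positive rate per intersection
```

Note that the number of intersections grows multiplicatively with each added attribute, which is why report filtering and summaries matter in practice.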
We invite you to explore FairBench
Documentation, with the option to try it in your browser: https://fairbench.readthedocs.io/
Our repository: https://github.com/mever-team/FairBench
If you are interested in more direct discussion about AI fairness, join us in our Discord server: https://discord.gg/WwQWFSjSWZ
We are eager to hear your feedback and receive feature requests and bug reports via Discord, GitHub issues, or email. We welcome pull requests, and encourage you to join our community in advancing the field of AI fairness.
Ready to assess AI fairness?
Sincerely,
Emmanouil Krasanakis




April 30th, 2025
Daniela Lopez de Luise