The Coded Fairness Toolkit

Machine learning systems support human decision-making in a rapidly growing number of areas, some of which have a major impact on people's lives. It is therefore essential that these decisions are made fairly and without harmful biases.

How it works

This toolkit contains an ordered collection of methods intended to enable bias-sensitive development of machine learning systems. It is primarily aimed at development teams whose machine learning product is still in the conception phase. The toolkit has a modular structure, which makes it easier to plan when and how the included methods are applied.

Module 1

Bias & Impact Awareness

In this module, you will find methods that encourage you, in a playful way, to examine your own biases. These methods are complemented by a risk-benefit analysis. In addition, shared ethical goals are discussed and formulated within the team.

Module 2

Diversity & Inclusion

This module introduces methods for building a (loose) team extension, a "team 2.0". The aim is to bring in perspectives that your existing team cannot adequately cover.

Module 3

Scenario Building & Testing

The final module's methods help you map out future scenarios that can be used to probe your product for biases. In addition, optional methods are provided for checking technical biases, as illustrated in the sketch below.
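To give an idea of what such a technical bias check might look like in practice, here is a minimal sketch that measures the demographic parity difference of a model's decisions. The column names, toy data, and warning threshold are illustrative assumptions and are not part of the toolkit itself.

    # Minimal sketch of a technical bias check: demographic parity difference.
    # Column names ("group", "prediction") and the 0.1 threshold are
    # illustrative assumptions, not prescribed by the Coded Fairness Toolkit.
    import pandas as pd

    def demographic_parity_difference(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
        """Largest gap in positive-prediction rates between any two groups."""
        rates = df.groupby(group_col)[pred_col].mean()
        return float(rates.max() - rates.min())

    # Toy data: two demographic groups with binary model decisions.
    toy = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "prediction": [1,    1,   0,   1,   0,   0],
    })

    gap = demographic_parity_difference(toy, "group", "prediction")
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.1:  # threshold chosen for illustration only
        print("Warning: positive-prediction rates differ noticeably between groups.")

Checks like this are only one piece of the picture: a small parity gap on historical data does not guarantee fairness in deployment, which is why the module pairs such measurements with scenario-based examination.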

Download the toolkit

The Coded Fairness Toolkit can be used in analogue or digital form. For the analogue version, a booklet and posters are available for download. For the digital version, a Miro board can be downloaded.