Counterfactual Fairness - GSoC 2021

The aim of this project is to add a framework for counterfactual fairness in machine learning models to the Julia ecosystem. The GitHub repository where the work will be updated is linked here, and the project description is linked here.

Why do we need this project?

Machine learning models are used to assess loan and job applications, to inform bail, sentencing, and parole decisions, and in a growing number of other impactful settings. Unfortunately, owing to bias in both the training data and the training methods, machine learning models can unfairly discriminate against individuals or groups. While many statistical methods exist to alleviate unfairness, there is a growing awareness that any account of fairness must take causality into account.

Counterfactual Fairness

Counterfactual fairness is a causality-based approach to mitigating bias in machine learning models. Counterfactuals constitute the third rung of Pearl's Ladder of Causation, the first two rungs of which are association (seeing) and intervention (doing).

Association simply describes the observational data through joint and conditional probability distributions. Intervention fixes the values of particular variables and describes how doing so influences the rest of the causal model. Counterfactuals describe the data in retrospect: they answer questions of the form "What would have happened had I chosen B, given that I actually chose A?"
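The three rungs can be made concrete with a toy structural causal model. The sketch below is purely illustrative (the model, names, and numbers are assumptions, not part of any package): the outcome is Y = f(A) + U with noise U, and the counterfactual is computed by the standard abduction-action-prediction recipe.

# A toy structural causal model (hypothetical, for illustration only):
# a binary cause A and an outcome Y = f(A) + U with noise U
using Random
Random.seed!(0)

f(a) = 2.0 * a + 1.0              # structural equation for Y

# Rung 1, association: simulate and describe observational data
A = rand([0, 1], 1000)
U = randn(1000)
Y = f.(A) .+ U

# Rung 2, intervention do(A = 1): fix A irrespective of its usual causes
Y_do1 = f.(fill(1, 1000)) .+ randn(1000)

# Rung 3, counterfactual for one observed individual (a, y):
# abduct the noise, then replay the model with A set to the other value
a, y = 0, 1.3
u = y - f(a)                      # abduction: recover U from the observation
y_cf = f(1) + u                   # action + prediction: Y had A been 1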

Counterfactual fairness is a definition of fairness under which a model is fair if it predicts the same outcome for a particular individual or group in the real world and in the counterfactual world where they belong to a different demographic.
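Formally, following Kusner et al. (2017), a predictor \hat{Y} is counterfactually fair if, for every context X = x and A = a, every outcome y, and every value a' attainable by the protected attribute A,

P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a),

where U denotes the background (noise) variables of the causal model and \hat{Y}_{A \leftarrow a} is the prediction in the counterfactual world where A is set to a.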

Currently, the Google What-If Tool is the only open-source fairness toolkit that implements counterfactual fairness, but it is a web-based application to which the dataset must be uploaded.

Julia Implementation

Fairness.jl is a bias-mitigation toolkit in Julia that currently includes a variety of algorithms to evaluate the fairness of a model and to improve it.

An example of how to use an algorithm currently implemented in Fairness.jl is shown below:

using Pkg
Pkg.add("Fairness")
Pkg.add("MLJ")
Pkg.add("PrettyPrinting")

using MLJ
using Fairness
using PrettyPrinting

# Load the Adult (census income) dataset that ships with Fairness.jl
X, y = @load_adult

# Wrap a baseline classifier in a reweighing-based fairness algorithm,
# treating :sex as the protected attribute
model = ConstantClassifier()
wrapped_model = ReweighingSamplingWrapper(classifier = model, grp = :sex)

# Evaluate group-wise fairness metrics for the wrapped model
evaluate(wrapped_model, X, y,
         measures = MetricWrappers([true_positive, true_positive_rate], grp = :sex)) |> pprint

CausalInference.jl is a Julia package that allows the user to estimate a causal graph from observational data, and Omega.jl allows for causal reasoning (computing interventions and counterfactuals) given a causal model.
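As a flavor of what CausalInference.jl provides today, the sketch below estimates a causal graph from synthetic observational data with the PC algorithm. It is adapted from the package's documented interface; the data-generating model and variable names are illustrative.

using CausalInference, Random
Random.seed!(1)

# Synthetic observational data from a known causal structure:
# x -> v, x -> w, v -> z, w -> z, z -> s
N = 1000
x = randn(N)
v = x .+ 0.25 .* randn(N)
w = x .+ 0.25 .* randn(N)
z = v .+ w .+ 0.25 .* randn(N)
s = z .+ 0.25 .* randn(N)

# Estimate the causal graph with the PC algorithm, using a Gaussian
# conditional-independence test at significance level 0.01
df = (x = x, v = v, w = w, z = z, s = s)
est_g = pcalg(df, 0.01, gausscitest)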

Tasks to do during JSoC

  • Implement interventions and counterfactuals on models in CausalInference.jl

  • Implement the definition for counterfactual fairness in CausalInference.jl and Omega.jl

  • Implement methods to fix a model to be fair using causal reasoning

  • Integrate methods for causal inference and counterfactual fairness into Fairness.jl and MLJ.jl, following a framework similar to the one shown above (a rough sketch of the intended interface is given below).
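To make the last item concrete, here is a minimal sketch of what a per-individual counterfactual fairness check could look like. Everything in it is hypothetical: counterfactual is an assumed helper (the abduction-action-prediction routine sketched earlier), and none of these names exist in Fairness.jl or Omega.jl today.

# Hypothetical interface sketch; none of these names exist yet.
# `scm` is a fitted structural causal model, `predict` a trained predictor,
# and `counterfactual(scm, x, attr => a)` an assumed helper returning the
# counterfactual version of individual `x` with the protected attribute set to `a`
function is_counterfactually_fair(predict, scm, x;
                                  protected = :sex,
                                  levels = ("Female", "Male"),
                                  atol = 1e-3)
    # Predict the outcome in each counterfactual world where the
    # protected attribute takes a different level
    preds = [predict(counterfactual(scm, x, protected => a)) for a in levels]
    # Fair for this individual if the prediction is invariant across worlds
    all(p -> isapprox(p, first(preds); atol = atol), preds)
end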
