Welcome to CARLA’s documentation!

CARLA is a Python library for benchmarking counterfactual explanation and recourse models. It comes out of the box with commonly used datasets and various machine learning models, and it is designed with extensibility in mind: you can easily include your own counterfactual methods, machine learning models, or datasets.

Available Datasets

Implemented Counterfactual Methods

  • Actionable Recourse (AR)

  • CCHVAE

  • Contrastive Explanations Method (CEM)

  • Counterfactual Latent Uncertainty Explanations (CLUE)

  • CRUDS

  • Diverse Counterfactual Explanations (DiCE)

  • Feasible and Actionable Counterfactual Explanations (FACE)

  • Growing Spheres (GS)

  • Revise

  • Wachter
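To illustrate the gradient-based search that several of these methods build on, here is a minimal, self-contained sketch of the Wachter et al. objective applied to a toy logistic model: minimize lam * (f(x') - y_target)**2 + ||x' - x||_1 by gradient descent. This is an illustrative re-implementation, not CARLA's code; the function name and hyperparameters are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wachter_counterfactual(x, w, b, y_target=1.0, lam=10.0, lr=0.1, steps=500):
    """Gradient-descent counterfactual search for a logistic model f(x) = sigmoid(w @ x + b).

    Minimizes  lam * (f(x') - y_target)**2 + ||x' - x||_1  (Wachter-style objective).
    """
    x_cf = x.copy()
    for _ in range(steps):
        f = sigmoid(w @ x_cf + b)
        # gradient of the squared prediction-loss term through the sigmoid
        grad_pred = 2.0 * lam * (f - y_target) * f * (1.0 - f) * w
        # subgradient of the L1 distance term, keeping x' close to the factual x
        grad_dist = np.sign(x_cf - x)
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

# toy model: predicts class 1 when x1 + x2 > 1
w = np.array([1.0, 1.0])
b = -1.0
x = np.array([0.0, 0.0])              # factual instance, predicted class 0
x_cf = wachter_counterfactual(x, w, b)
print(sigmoid(w @ x_cf + b) > 0.5)    # the counterfactual crosses the decision boundary
```

The trade-off parameter lam controls how strongly the search pushes the prediction toward the target class relative to staying close to the original instance.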

Provided Machine Learning Models

  • ANN: Artificial Neural Network with 2 hidden layers and ReLU activation function

  • LR: Linear Model with no hidden layer and no activation function
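For reference, the forward passes of these two model families look like the following numpy sketch. The layer sizes and weights here are illustrative placeholders; CARLA's actual models are trained Tensorflow/Pytorch networks.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ann_forward(x, params):
    """ANN: two ReLU hidden layers followed by a sigmoid output unit."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(W1 @ x + b1)
    h2 = relu(W2 @ h1 + b2)
    return sigmoid(W3 @ h2 + b3)

def lr_forward(x, w, b):
    """LR: a linear model, no hidden layer and no activation (raw score)."""
    return w @ x + b

rng = np.random.default_rng(0)
d, h = 4, 8                          # illustrative input and hidden sizes
params = [(rng.normal(size=(h, d)), np.zeros(h)),
          (rng.normal(size=(h, h)), np.zeros(h)),
          (rng.normal(size=(1, h)), np.zeros(1))]
x = rng.normal(size=d)
print(ann_forward(x, params))        # sigmoid output: a value in (0, 1)
```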

Which Recourse Methods work with which ML framework?

The ML framework a counterfactual method currently works with depends on its underlying implementation. Making all recourse methods available for all ML frameworks is planned. The current state is shown in the table below:

Recourse Method    Tensorflow   Pytorch
AR                 X            X
CCHVAE                          X
CEM                X
CLUE                            X
CRUDS                           X
DiCE               X            X
FACE               X            X
Growing Spheres    X            X
Revise                          X
Wachter                         X

Citation

This project was accepted to the NeurIPS 2021 Datasets and Benchmarks Track.

If you use this codebase, please cite:

@misc{pawelczyk2021carla,
      title={CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms},
      author={Martin Pawelczyk and Sascha Bielawski and Johannes van den Heuvel and Tobias Richter and Gjergji Kasneci},
      year={2021},
      eprint={2108.00783},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
