
# User Guide

This guide covers the main workflows for using the Counterfactuals library.

## Overview

The library provides a complete pipeline for generating and evaluating counterfactual explanations:

```mermaid
flowchart LR
    A[Load Dataset] --> B[Train Models]
    B --> C[Generate CFs]
    C --> D[Evaluate]
    D --> E[Analyze Results]
```
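To make the pipeline concrete, here is a minimal, self-contained sketch of the core idea behind the "Generate CFs" step: a counterfactual is a minimally perturbed input that flips a classifier's prediction. This toy example uses a hand-specified linear classifier and plain NumPy; it illustrates the concept only and does not use this library's API.

```python
import numpy as np

def predict(x, w, b):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    return int(np.dot(w, x) + b > 0)

def counterfactual(x, w, b, step=0.01, max_iter=10_000):
    """Move x along the decision-boundary normal until the label flips."""
    y0 = predict(x, w, b)
    direction = -w if y0 == 1 else w            # head toward the boundary
    direction = direction / np.linalg.norm(direction)
    cf = x.copy()
    for _ in range(max_iter):
        cf = cf + step * direction
        if predict(cf, w, b) != y0:             # prediction flipped: done
            return cf
    raise RuntimeError("no counterfactual found within max_iter steps")

w, b = np.array([1.0, 1.0]), -1.0               # boundary: x1 + x2 = 1
x = np.array([0.2, 0.2])                        # classified as 0
cf = counterfactual(x, w, b)                    # small push across the boundary
```

Real methods such as PPCEF replace this brute-force walk with optimization objectives that also keep counterfactuals plausible under a generative model, but the goal is the same: a nearby input with a different predicted label.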

## Sections

| Section | Description |
|---------|-------------|
| Working with Datasets | Load and configure datasets for counterfactual generation |
| Training Models | Train discriminative and generative models |
| Generating Counterfactuals | Use various methods to generate explanations |
| Evaluating Results | Assess counterfactual quality with metrics |
| Running Pipelines | Execute end-to-end experiments with Hydra |

## Quick Reference

### Basic Workflow

```python
from counterfactuals.datasets import FileDataset
from counterfactuals.models.classifiers import MLPClassifier
from counterfactuals.models.generators import MaskedAutoregressiveFlow
from counterfactuals.cf_methods.local_methods import PPCEF
from counterfactuals.metrics import MetricsOrchestrator

# 1. Load dataset
dataset = FileDataset(config_path="config/datasets/adult.yaml")

# 2. Train models
classifier = MLPClassifier(...)
gen_model = MaskedAutoregressiveFlow(...)

# 3. Generate counterfactuals
method = PPCEF(gen_model, classifier, ...)
result = method.explain(X, y_origin, y_target, ...)

# 4. Evaluate
metrics = MetricsOrchestrator(...)
scores = metrics.compute(result)
```
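The evaluation step typically reports standard counterfactual quality metrics such as validity, proximity, and sparsity. The sketch below gives generic, textbook-style definitions of these three metrics in plain NumPy; the names and formulas are the commonly used ones in the literature, not necessarily how `MetricsOrchestrator` computes them.

```python
import numpy as np

def validity(y_pred_cf, y_target):
    """Fraction of counterfactuals classified as the target class."""
    return float(np.mean(np.asarray(y_pred_cf) == np.asarray(y_target)))

def proximity(X, X_cf):
    """Mean L1 distance between originals and their counterfactuals."""
    return float(np.mean(np.abs(np.asarray(X) - np.asarray(X_cf)).sum(axis=1)))

def sparsity(X, X_cf, tol=1e-8):
    """Mean number of features changed per counterfactual."""
    changed = np.abs(np.asarray(X) - np.asarray(X_cf)) > tol
    return float(np.mean(changed.sum(axis=1)))
```

Lower proximity and sparsity mean smaller, more actionable changes; validity should be close to 1.0 for a method that reliably reaches the target class.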