
Fairness of predictive models: an application to insurance markets

Annual Research Report


This project, led by Professor Arthur Charpentier of the University of Quebec in Montreal, runs from 2023 to 2026. It addresses biases in the automated artificial-intelligence algorithms used to set prices for individual insurance policies. The goal is to mitigate or eliminate biases that could lead to inequitable or discriminatory coverage from insurers or reinsurers based on factors such as gender, race, religion, or origin. More broadly, the project tackles the challenges raised by personalized insurance pricing with opaque models, considering the implications for insurance companies, the global insurance market, and questions of welfare and fairness.

In this annual report, the researchers present their findings from the first year, organized into five key areas:

  • Mitigating Discrimination: Developed algorithms with theoretical guarantees that address discrimination from a distributional perspective (group fairness), using Wasserstein distances and barycenters in a post-processing approach. The algorithms also cover settings with multiple sensitive attributes and are accompanied by a Python package (a minimal quantile-repair sketch appears after this list).

  • Counterfactual Fairness: Created algorithms to assess counterfactual fairness in probabilistic graphical models using optimal transport and a sequential approach. These tools address individual fairness by answering questions such as, "Would this individual have received a different prediction if they were not Black?" (see the sequential-transport sketch after this list).

  • Model Calibration: Provided statistical tools to explore the calibration of predictive models, combining computational and philosophical perspectives on the interpretation of predictive scores, including the use of ensemble methods (a reliability-check sketch follows the list).

  • Algorithmic Collusion: Analyzed theoretical competitive markets to understand how reinforcement-learning pricing algorithms can collude, with a particular focus on the problem of "collusion without discussions" (a toy Q-learning duopoly is sketched after this list).

  • Imbalanced Regression: Investigated the challenges of imbalanced regression and examined generative processes for rebalancing rare observations, accounting for the impact of possible noise (see the oversampling sketch after this list).
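
As a rough illustration of the post-processing idea behind the first area, the sketch below maps each group's score distribution onto the one-dimensional Wasserstein barycenter of the per-group distributions by averaging group quantiles. It is a minimal NumPy sketch of a standard quantile-repair recipe, not the project's Python package; the function name `wasserstein_repair` is hypothetical.

```python
import numpy as np

def wasserstein_repair(scores, groups):
    """Map each group's scores onto the 1D Wasserstein barycenter of the
    per-group score distributions, via quantile averaging."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()
    repaired = np.empty_like(scores)
    for a in labels:
        mask = groups == a
        s = scores[mask]
        # Empirical rank of each score within its own group.
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        # Barycenter quantile function: weighted average of group quantiles.
        bary = np.zeros_like(s)
        for g, w in zip(labels, weights):
            bary += w * np.quantile(scores[groups == g], ranks)
        repaired[mask] = bary
    return repaired
```

After repair, two individuals with the same within-group rank receive the same score regardless of group, which is the distributional (demographic-parity) notion of fairness the report refers to.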
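
The counterfactual question in the second area can be made concrete with a transport map between groups. The sketch below transports an individual's features one at a time along a fixed causal ordering using univariate quantile mapping; this is a deliberately simplified stand-in for the project's sequential approach, which conditions each step on the graphical model, and the names `quantile_map` and `counterfactual` are hypothetical.

```python
import numpy as np

def quantile_map(x, source_pool, target_pool):
    """1D optimal transport between empirical distributions: send x to
    the target-group quantile matching its source-group rank."""
    rank = np.searchsorted(np.sort(source_pool), x, side="right") / len(source_pool)
    return np.quantile(target_pool, min(rank, 1.0))

def counterfactual(x, X_source, X_target, causal_order):
    """Counterfactual version of individual x (from the source group) as
    if they belonged to the target group.  A faithful version would
    condition each step on the already-transported parents; marginal
    maps are used here for brevity."""
    x_cf = np.array(x, dtype=float)
    for j in causal_order:
        x_cf[j] = quantile_map(x[j], X_source[:, j], X_target[:, j])
    return x_cf
```

Comparing a model's prediction for `x` with its prediction for `x_cf` then answers, for that individual, the "would the prediction have changed?" question quoted above.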
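
Calibration, the third area, is commonly checked by comparing average predicted probabilities to observed frequencies within score bins. The sketch below relies on scikit-learn's `calibration_curve` and the Brier score; it illustrates a generic reliability check, not the project's specific statistical tools, and the name `calibration_report` is hypothetical.

```python
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

def calibration_report(y_true, y_prob, n_bins=10):
    """Per-bin comparison of mean predicted probability with the observed
    frequency of the event; a well-calibrated score tracks the diagonal."""
    frac_pos, mean_pred = calibration_curve(
        y_true, y_prob, n_bins=n_bins, strategy="quantile")
    return {"brier_score": brier_score_loss(y_true, y_prob),
            "bins": list(zip(mean_pred, frac_pos))}
```

Averaging the scores of several models fitted on bootstrap resamples, one ensemble idea in the spirit of those the report mentions, can then be assessed with the same diagnostic.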
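
The phenomenon studied in the fourth area can be reproduced in a toy repeated pricing game: two independent Q-learning agents, each observing only its rival's last price, may settle on supra-competitive prices without any communication. The simulation below is a minimal sketch under an assumed logit demand and unit cost, not the project's market model; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = np.linspace(1.0, 2.0, 5)              # discrete price grid
n = len(prices)

def profits(i, j):
    """Logit-demand duopoly: per-firm profit when firm 0 charges
    prices[i] and firm 1 charges prices[j] (unit cost = 1.0)."""
    demand = np.exp(-2.0 * prices[[i, j]])
    share = demand / demand.sum()
    return (prices[[i, j]] - 1.0) * share

# One Q-table per firm; a firm's state is its rival's last price index.
Q = np.zeros((2, n, n))
state, eps, alpha, gamma = [0, 0], 0.1, 0.1, 0.95
for t in range(200_000):
    acts = [rng.integers(n) if rng.random() < eps
            else int(np.argmax(Q[k][state[k]])) for k in range(2)]
    pi = profits(*acts)
    new_state = [acts[1], acts[0]]
    for k in range(2):                          # standard Q-learning update
        target = pi[k] + gamma * Q[k][new_state[k]].max()
        Q[k][state[k], acts[k]] += alpha * (target - Q[k][state[k], acts[k]])
    state = new_state
```

Tracking the greedy prices over training shows whether the agents converge above the competitive (lowest-margin) price, the signature of tacit algorithmic collusion.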
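
For the fifth area, one generative rebalancing step can be sketched as SMOTE-style interpolation between rare observations, with Gaussian jitter standing in for the "possible noise" mentioned above. This is a generic recipe under the assumption that "rare" means the upper tail of the target, not the project's generative process; the name `oversample_rare` and all defaults are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def oversample_rare(X, y, q=0.9, k=5, n_new=200, noise=0.05, seed=0):
    """Synthesize regression samples between rare (upper-tail) observations
    and their nearest rare neighbours, with feature-space jitter.
    Assumes at least two observations fall in the rare region."""
    rng = np.random.default_rng(seed)
    rare = y >= np.quantile(y, q)
    Xr, yr = X[rare], y[rare]
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(Xr))).fit(Xr)
    _, idx = nn.kneighbors(Xr)                  # idx[i][0] is i itself
    X_new, y_new = [], []
    for _ in range(n_new):
        i = rng.integers(len(Xr))
        j = rng.choice(idx[i][1:])              # a neighbour other than i
        lam = rng.random()
        X_new.append(Xr[i] + lam * (Xr[j] - Xr[i])
                     + noise * rng.standard_normal(X.shape[1]))
        y_new.append(yr[i] + lam * (yr[j] - yr[i]))
    return np.vstack([X, X_new]), np.concatenate([y, y_new])
```

Whether such interpolation helps or merely amplifies noise in the tail is precisely the kind of question the report investigates.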


Reference:
Charpentier, A. (2024). Insurance, Biases, Discrimination, and Fairness. Springer. ISBN 978-3-031-49782-7.
