Fairness of predictive models: an application to insurance markets

Recent events: Presentation at ACP, KU Leuven and workshop at IME in Chicago - July 2024

Fairness of predictive models

Led by Professor Arthur Charpentier from the University of Quebec in Montreal, this project focuses on addressing biases in automated artificial intelligence algorithms used to determine optimal pricing in individual insurance policies. The goal is to mitigate or eliminate these biases, which could result in inequities or discriminatory practices based on factors like gender, race, religion, or origin in the coverage offered by insurers or reinsurers to policyholders.

In June and July 2024, Professor Charpentier spoke at two events linked to the project, giving a presentation in Belgium and leading a workshop in the USA.

It is often claimed that actuaries build 'predictive models', but most of the time what they do is better described as 'contemplative modeling': they use past information and hope that the future will look more or less the same (corresponding to the idea of generalization in machine learning). In the context of climate change (but also when modeling insurance market competition), this assumption no longer holds: the data used to train models do not follow the same distribution as the data that will be observed in the future.
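The point above can be illustrated with a minimal numerical sketch (not taken from the project; the linear model, drift parameters, and noise levels are illustrative assumptions): a model fitted on historical data performs well in-sample but degrades once the future distribution drifts.

```python
# Minimal sketch of distribution shift: "contemplative" modeling assumes
# the future looks like the past; here the future risk factor drifts and
# the loss severity steepens, so out-of-sample error grows.
import numpy as np

rng = np.random.default_rng(0)

# Historical data: losses depend linearly on a risk factor x ~ N(0, 1).
x_train = rng.normal(0.0, 1.0, 5000)
y_train = 2.0 * x_train + rng.normal(0.0, 1.0, 5000)

# Fit a simple least-squares line (the "predictive" model).
a, b = np.polyfit(x_train, y_train, 1)

def mse(x, y):
    """Mean squared error of the fitted line on data (x, y)."""
    return np.mean((y - (a * x + b)) ** 2)

# Future data under a climate-change-like drift: the risk factor's
# distribution shifts upward and losses become more severe.
x_future = rng.normal(1.5, 1.0, 5000)
y_future = 3.0 * x_future + rng.normal(0.0, 1.0, 5000)

print(mse(x_train, y_train))   # in-sample error, close to the noise level
print(mse(x_future, y_future)) # error under drift, substantially larger
```

Under these assumptions the error on the drifted data is several times the in-sample error, even though the fitted model "generalizes" perfectly well to new draws from the historical distribution.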

In this course, the researchers revisit the mathematical properties of risk sharing on networks with reciprocal contracts. They discuss conditions based on stochastic dominance, proving that policyholders may have an interest in sharing risks with "friends". They then address fairness issues for such risk-sharing mechanisms. While fairness has recently been studied intensively, through either group or individual fairness notions, there is as yet little literature on fairness on networks. Addressing these issues matters because perceived discrimination is usually associated with networks. We will see why the topology of the network is important, both for designing peer-to-peer risk-sharing schemes and for assessing whether perceived discrimination is associated with global disparate treatment.
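A hedged sketch of one simple peer-to-peer scheme (a standard textbook-style simplification, not necessarily the project's exact mechanism; the network, loss distribution, and ceded fraction `gamma` are illustrative assumptions): under a reciprocal contract, each policyholder cedes a fraction of their own loss to every friend and absorbs the same fraction of each friend's loss. The expected retained loss is unchanged, but its variance shrinks, which is one way sharing risks with friends can be preferable in the sense of second-order stochastic dominance.

```python
# Peer-to-peer risk sharing with reciprocal contracts on a small network.
import numpy as np

rng = np.random.default_rng(1)

# A ring network: each of n policyholders has exactly two friends,
# so the topology is symmetric and every contract is reciprocal.
n = 20
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
gamma = 0.25  # fraction of the loss ceded under each reciprocal contract

# Simulate i.i.d. exponential losses over many periods.
losses = rng.exponential(scale=1.0, size=(100_000, n))

retained = losses.copy()
for i, friends in neighbors.items():
    for j in friends:
        retained[:, i] -= gamma * losses[:, i]  # cede part of own loss to j
        retained[:, i] += gamma * losses[:, j]  # absorb part of j's loss

# Means are preserved, but the variance of the retained risk is reduced.
print(losses[:, 0].mean(), retained[:, 0].mean())
print(losses[:, 0].var(), retained[:, 0].var())
```

On this ring with `gamma = 0.25`, policyholder 0 retains half of their own loss plus a quarter of each friend's, so the retained variance is roughly 0.375 of the stand-alone variance while the mean is unchanged; the choice of `gamma` must respect the degrees of the network (here every degree is 2), which is one concrete way the topology matters.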

To read more about the project
