SCOR Foundation Workshop | Confidence and Fairness: Scientific Foundations in AI and Risk
In connection with the funded project “Fairness of predictive models: an application to insurance markets,” the SCOR Foundation hosted a workshop on May 15, 2025, titled “Confidence and Fairness: Scientific Foundations in AI and Risk.”
As artificial intelligence becomes increasingly integrated into decision-making systems, particularly in high-stakes domains like insurance, finance, and healthcare, questions surrounding fairness, accountability, and transparency have become critical.
This workshop brought together some of the most prominent and forward-thinking researchers and practitioners in the field to address the technical, ethical, and societal dimensions of algorithmic decision-making, with a strong focus on detecting, understanding, and mitigating discrimination embedded within predictive models.
After a thoughtful opening by Philippe Trainar (Director of the SCOR Foundation for Science), Arthur Charpentier (Université du Québec à Montréal) delivered a foundational and conceptually rich presentation showcasing the Montreal team’s impressive 18-month research project on fairness in insurance modeling. His talk laid the groundwork for the day, introducing key concepts such as counterfactual fairness, model calibration, proxy discrimination, and Wasserstein barycenters, all while bridging deep theoretical insights with pressing actuarial applications.
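One of the concepts the talk introduced, the Wasserstein barycenter, has a particularly simple form for one-dimensional score distributions: the barycenter's quantile function is the average of the groups' quantile functions. The sketch below (an illustrative simplification, not the Montreal team's actual method; the function name and interface are invented for this example) maps each group's predicted scores onto that common barycenter, so the repaired scores have the same distribution across groups while preserving within-group rank order.

```python
import numpy as np

def barycenter_repair(scores, groups):
    """Illustrative sketch: map each group's score distribution onto the
    1-D Wasserstein-2 barycenter of all group distributions, which for
    one-dimensional distributions is the average of the groups' empirical
    quantile functions. Rank order within each group is preserved."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    repaired = np.empty_like(scores)
    for g in labels:
        mask = groups == g
        s = scores[mask]
        # within-group ranks, mapped into (0, 1)
        ranks = (np.argsort(np.argsort(s)) + 0.5) / s.size
        # barycenter quantile = mean of every group's quantile at those ranks
        repaired[mask] = np.mean(
            [np.quantile(scores[groups == h], ranks) for h in labels],
            axis=0,
        )
    return repaired
```

After this repair, the two groups' score distributions coincide, which is the distributional notion of fairness the barycenter construction targets.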
Toon Calders (Universiteit Antwerpen, Belgium) followed with a compelling and thought-provoking analysis of the subtleties involved in identifying and correcting algorithmic bias. His presentation "Unfair, You Say? Explain Yourself!" stood out for its clarity and critical depth, highlighting the risks of superficial fairness metrics and powerfully advocating for interpretability and counterfactual explanations as key tools in developing genuinely fair AI systems.
Isabel Valera (Universität des Saarlandes, Germany) delivered an intellectually vibrant and forward-looking keynote advocating for a society-centered approach to AI. Her presentation "Society-centered AI: An Integrative Perspective on Algorithmic Fairness" offered a nuanced critique of current fairness paradigms and called for inclusive design practices that align AI development with collective social values.
Jean Michel Loubes (Institut de Mathématiques de Toulouse, Toulouse School of Economics, INRIA, France) offered a rigorously analytical and original exploration into the roots of algorithmic bias. His talk "Beyond fairness measures, discovering the bias in the algorithm" was particularly notable for its methodological precision and relevance, providing a novel framework to understand how learning processes can exacerbate discrimination and lead to “fairwashing” — a critical step toward developing bias-resilient models.
Evgeny Chzhen (CNRS, France) presented one of the most technically sophisticated interventions of the day, "An optimization approach to post-processing for classification with system constraints," introducing a powerful post-processing algorithm that satisfies fairness constraints across diverse classification settings. His approach, grounded in advanced optimization techniques and requiring only unlabeled data, stood out for its elegance, versatility, and strong practical applicability.
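To give a flavor of what post-processing with only unlabeled data can mean, the toy sketch below (a deliberately simplified illustration, not the algorithm from the talk; the function name and interface are invented here) picks a per-group decision threshold on a fixed classifier's scores so that every group is accepted at the same target rate, i.e., it enforces demographic parity using nothing but scores and group membership.

```python
import numpy as np

def parity_thresholds(scores, groups, target_rate=0.5):
    """Toy post-processing sketch: choose a per-group threshold so each
    group's positive-prediction rate equals target_rate (demographic
    parity). Requires only model scores and group labels, no outcomes."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # accepting scores above the (1 - target_rate) quantile yields
        # approximately a target_rate acceptance rate within the group
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds
```

The full approach presented at the workshop handles much more general constraints via optimization, but the threshold example shows why labeled outcomes are not needed: the constraint only involves the distribution of predictions.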
Michele Loi (AlgorithmWatch and Università degli Studi di Milano, Italy) delivered an insightful philosophical reflection on how diagnostic information can inform fairer algorithmic decision-making. His talk "From Facts to Fairness: Diagnostic Models in Algorithmic Decision-Making" was a standout for its interdisciplinary depth, demonstrating how causal reasoning can help ethically integrate protected characteristics into AI models without reinforcing discrimination.
Aurélie Lemmens (Erasmus University, Netherlands) introduced a framework — Fair Active Learning (FAL) — to proactively address bias during the data acquisition process. Her talk "Fair Active Learning for Personalized Policies" demonstrated through simulations and experiments how FAL and the complementary BEAT method together could optimize fairness across multiple dimensions without sacrificing performance.
Finally, François Hu (Milliman R&D, France) and Antoine Ly (SCOR, France) concluded the day with an experience-rich discussion from the practitioner's perspective. Their presentation "Fairness and Confidence in Insurance Markets, a Practitioners Perspective" effectively bridged the gap between theory and application, highlighting the real-world constraints of fairness implementation and emphasizing the importance of sustained dialogue between scientists, regulators, and industry stakeholders.
Throughout the day, just under 100 participants had the opportunity to engage directly with the speakers and one another in a spirit of open, critical dialogue. The richness of the debates shed new light on the intricacies of algorithmic fairness, offering valuable perspectives to a diverse audience — students, insurers, reinsurers, regulators, academics, statisticians, and actuaries alike — on a topic where even the best intentions can lead to unforeseen consequences.