Track A: Responsible AI and Machine Ethics

Deep Ethics – Keynote
Maximilian Kiener
Speaker bio
AI Act for the working AI engineers – Keynote
Holger Hermanns
Speaker bio
Automating Moral Reasoning
Marija Slavkovik
Speaker bio
Searching for light in the trust jungle? Understanding Human-AI trust through the lens of the Trustworthiness Assessment Model (TrAM)
Nadine Schlicker
Speaker bio
AI Ethics: Main concepts, achieved results and open challenges
Serena Villata
Speaker bio
Introduction to Explainable AI—From Faithful to Human-Friendly Explanations
Elisa Fromont
Speaker bio
From 2008 until 2017, I was an associate professor at Université Jean Monnet in Saint-Etienne, France. I worked at the Hubert Curien research institute in the Data Intelligence team. I received my Research Habilitation (HDR) in 2015 from the University of Saint-Etienne.
From 2006 until 2008 I was a postdoctoral researcher in the Machine Learning group at KU Leuven, Belgium.
I received my PhD in 2005 from Université de Rennes 1.
My primary research focus lies in developing machine learning algorithms tailored for temporal data or scenarios where time plays a crucial role in the machine learning process. To achieve this, I strive to create models that are not only effective but also trustworthy by being explainable, ensuring user privacy, promoting fairness, minimizing computational resources, and guaranteeing robustness.

Watermarking of LLMs
Eva Giboulot
Speaker bio
She currently works on the security of AI systems, with an emphasis on detecting generated content using watermarking and forensics methods. More generally, she deals with settings involving the design and detection of weak signals: adversarial examples, backdoor injection and detection, model security, and so on. She is a member of the IEEE Information Forensics and Security Technical Committee as an expert on generalization problems in steganography and steganalysis.

Introduction to Fairness in Classification and its Interactions with Differential Privacy
Michaël Perrot
Speaker bio
A mathematical framework for the analysis of bias in machine learning algorithms
Jean-Michel Loubes
Speaker bio
Jean-Michel Loubes holds the ‘Trust in Artificial Intelligence’ Chair at the AI research centre Artificial and Natural Intelligence Toulouse Institute (ANITI), where he conducts research on the auditing of AI systems, bias, and robustness in AI.
He obtained a PhD in applied mathematics from the University of Toulouse III in 2000. He then held research posts at the CNRS, Université Paris-Sud and Université Montpellier II, before being appointed professor in Toulouse in 2007.
Alongside his academic activities, Jean-Michel Loubes has been involved in bringing together research and the socio-economic world. He was regional manager for the Occitanie region of the CNRS’s Agence de Valorisation des Mathématiques (AMIES) from 2010 to 2016. He was a member of the Conseil National des Universités in mathematics, the Conseil Scientifique of the Institut des Mathématiques of the CNRS and the jury of the Agence Nationale de la Recherche in AI.
He is also co-inventor of several patents relating to applications of machine learning to biology or to the detection of anomalies and biases.

Dark patterns on the web and in AI systems
Sanju Ahuja
Speaker bio
A (hands-on) introduction to mechanistic interpretability
Simon Ostermann
Delivered with Tanja Bäumel
Speaker bio
A (hands-on) introduction to mechanistic interpretability
Tanja Bäumel
Delivered with Simon Ostermann
Speaker bio