Further details about this track are yet to be announced.
Deep Ethics
Maximilian Kiener
Speaker bio
Maximilian Kiener is Head of the Institute for Ethics in Technology at Hamburg University of Technology and a Research Associate at the Uehiro Institute, University of Oxford. He specialises in moral and legal philosophy, with a particular focus on consent, responsibility, and artificial intelligence. His work explores the integration of ethical principles at every stage of AI development and use. Before joining Hamburg University of Technology, Maximilian completed both the BPhil (2017, Master’s degree) and DPhil (2019, PhD) in Philosophy at the University of Oxford. He then served as an Extraordinary Junior Research Fellow (2019–2021) and a Leverhulme Early Career Fellow (2021–2022) at Oxford.
Marija Slavkovik
Speaker bio
Marija Slavkovik is a full professor of artificial intelligence at the University of Bergen. Her areas of expertise are collective reasoning and decision making, as well as ethics. She is vice president of EurAI and a co-editor of the AI and Society track of the Journal of Artificial Intelligence Research.
Nadine Schlicker
Speaker bio
Nadine Schlicker is a psychologist and PhD candidate at the Institute for AI in Medicine at Marburg University. She specializes in human-centered design, bridging experience from both industry and academia. She has worked in industry roles focusing on usability and user experience research, while her academic work centers on fair and trustworthy AI, particularly how users assess the trustworthiness of AI in medical contexts. Her interdisciplinary research spans Human Factors, Human-Computer Interaction (HCI), Medicine, and Organizational Psychology, employing both quantitative and qualitative experimental designs. Committed to interdisciplinary collaboration, she has published alongside researchers from computer science, mathematics, philosophy, psychology, medicine, and the social sciences. A key focus of her work is understanding trust and its related constructs, with particular dedication to the development and application of the Trustworthiness Assessment Model (TrAM) and its implications for the concept of calibrated trust.
Serena Villata
Speaker bio
Serena Villata is a senior researcher (Directrice de recherche) in computer science at the CNRS, pursuing her research at the I3S laboratory, where she heads the joint Inria-CNRS-UCA MARIANNE team. Her research area is Artificial Intelligence (AI); her current work focuses on computational argumentation, covering both reasoning over computational models of argument and mining natural arguments from text (argument mining). She is particularly interested in multidisciplinary applications of argumentation to the legal domain, medical texts, political debates, and social networks. In November 2021, she was awarded the Prix Inria – Académie des sciences jeunes chercheurs et jeunes chercheuses. Since July 2019, she has held a Chair in Artificial Intelligence at the Interdisciplinary Institute for Artificial Intelligence 3IA Côte d’Azur on “Artificial Argumentation for Humans”. Since January 2021, she has been the Deputy Scientific Director of the 3IA Côte d’Azur Institute, and since December 2019 she has been a member of the National Pilot Committee for Digital Ethics (CNPEN) and of the Scientific and Ethical Committee of the Cancer Data Platform of the Institut National du Cancer.
Elisa Fromont
Speaker bio
I have been a full professor at Université de Rennes, France, since 2017 and a Junior member of the Institut Universitaire de France (IUF) (2019–2024). I work at the IRISA research institute, where I lead the Inria MALT (“Machine Learning with Temporal Constraints”) team.
From 2008 until 2017, I was an associate professor at Université Jean Monnet in Saint-Etienne, France, working at the Hubert Curien research institute in the Data Intelligence team. I received my Research Habilitation (HDR) from the University of Saint-Etienne in 2015.
From 2006 until 2008, I was a postdoctoral researcher in the Machine Learning group at KU Leuven, Belgium.
I received my PhD in 2005 from Université de Rennes 1.
My primary research focus lies in developing machine learning algorithms tailored for temporal data or scenarios where time plays a crucial role in the machine learning process. To achieve this, I strive to create models that are not only effective but also trustworthy by being explainable, ensuring user privacy, promoting fairness, minimizing computational resources, and guaranteeing robustness.
Eva Giboulot
Speaker bio
Eva Giboulot is an Inria researcher at the IRISA laboratory, in the Artishau team.
She currently works on the security of AI systems, with an emphasis on the problem of detecting generated content using watermarking and forensics methods. More generally, she works on settings involving the design and detection of weak signals: adversarial examples, backdoor injection and detection, model security, and more. She is a member of the IEEE Information Forensics and Security Technical Committee as an expert on generalization problems in steganography and steganalysis.
Michaël Perrot
Speaker bio
Michaël Perrot has been a researcher at the Inria Centre at the University of Lille, France, since 2020. He obtained his PhD in 2016 from the University of Saint-Etienne, France, and then held two post-doc positions: just over two years in the Statistical Learning Theory independent research group at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, and just under a year in the Data Intelligence team at the Laboratoire Hubert Curien in Saint-Etienne, France. His research focuses on trustworthy machine learning with an emphasis on fairness. He is particularly interested in the interplay between fairness and other concepts such as federated learning and privacy preservation.
Jean-Michel Loubes
Speaker bio
Jean-Michel Loubes is a French mathematician and Director of Research at INRIA in statistics and machine learning. He is a member of the Institut de Mathématiques de Toulouse (IMT) and the Toulouse School of Economics (TSE). His research interests include mathematical statistics, machine learning, complex systems and optimal transport, as well as the robustness and fairness of artificial intelligence systems. He is part of INRIA’s Regalia AI regulation project.
He also holds the ‘Trust in Artificial Intelligence’ Chair at the AI research centre Artificial and Natural Intelligence Toulouse Institute (ANITI), where he conducts research on the auditing of AI systems and on bias and robustness in AI.
He obtained a PhD in applied mathematics from the University of Toulouse III in 2000. He then held research posts at the CNRS, Université Paris-Sud and Université Montpellier II, before being appointed professor in Toulouse in 2007.
Alongside his academic activities, Jean-Michel Loubes has worked to bring research and the socio-economic world closer together. He was the Occitanie regional manager of the CNRS’s Agence de Valorisation des Mathématiques (AMIES) from 2010 to 2016. He has been a member of the Conseil National des Universités in mathematics, of the Conseil Scientifique of the Institut des Mathématiques of the CNRS, and of the Agence Nationale de la Recherche jury in AI.
He is also a co-inventor of several patents relating to applications of machine learning to biology and to the detection of anomalies and biases.
Sanju Ahuja
Speaker bio
I am a postdoctoral researcher in the PRIVATICS team at the Inria Centre at Université Côte d’Azur. I have a background in human-computer interaction (HCI), and my current research focuses on privacy interfaces on the web. My main interests lie in identifying dark patterns in privacy contexts (and beyond) and evaluating how they impact user autonomy. Through interdisciplinary collaborations with scholars from HCI, computer science, and law, one of my goals is to advance methodologies and frameworks that can support the regulation of dark patterns. I also conduct research on the usability of privacy interfaces, evaluating existing interface designs and proposing design recommendations for ‘easy-to-use’ interfaces that help users exercise their legal rights over their personal data.
Simon Ostermann
Speaker bio
Simon Ostermann is deputy head, senior researcher, and team lead at the Multilinguality and Language Technology lab at the German Research Center for Artificial Intelligence (DFKI). He holds a PhD in natural language processing from Saarland University (2020), with a dissertation on the role of general knowledge, in the form of scripts, in question answering. His research interests lie in transparent and robust natural language processing, with the goals of (1) making the parameters and behaviour of language models more explainable and understandable and (2) improving language models in terms of data consumption and size. He coordinates and participates in diverse projects at the European and national level and co-leads the competence center on generative AI established at DFKI in 2023.
Tanja Bäumel
Speaker bio
Tanja Bäumel is a researcher in the Multilinguality and Language Technology Lab at the German Research Center for Artificial Intelligence. She has a background in computational linguistics, computer science, and cognitive science, and is currently pursuing her PhD. Her research is in the field of explainable artificial intelligence (XAI), where she works on understanding and interpreting the inner workings of large-scale pre-trained language models, as well as their limitations. A specific focus of her work is understanding the reasoning capabilities of pre-trained models.
Héber H. Arcolezi
Speaker bio
Héber H. Arcolezi is a tenured research scientist at Inria Grenoble, France. He earned his Ph.D. in Computer Science from the Université Bourgogne Franche-Comté in 2022, followed by a postdoctoral fellowship at Inria Saclay and École Polytechnique. An active member of the privacy research community, Héber serves on the program committees of several top-tier conferences, including ACM CCS, PETS, USENIX Security, and ICLR. His current research interests include differential privacy, responsible AI, and the intersection of privacy and fairness in machine learning.