Track A: Trusted AI

The development of Deep Learning has transformed AI from a niche science into a socially relevant “mega-technology”. At the same time, it raises a range of problems, such as the lack of internal representation of meaning (interpretability), sensitivity to changes in the input (robustness), lack of transferability to unseen use cases (generalizability), potential discrimination and biases (fairness) and, finally, the sheer hunger for big data (data efficiency). Recently, a new overall approach to solving these problems has been pushed forward under the term “Trusted AI” or “Trustworthy AI”. The Trusted AI track will cover the latest advances in this area.

Confirmed speakers

Michael Luck (King's College London)
Michael Luck is Professor of Computer Science in the Department of Informatics at King’s College London, where he also works in the Distributed Artificial Intelligence group, undertaking research into agent technologies and artificial intelligence. He is currently Director of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence, Director of the King’s Institute for Artificial Intelligence, and scientific advisor for Aerogility, ContactEngine and CtheSigns. He is co-Editor-in-Chief of the journal Autonomous Agents and Multi-Agent Systems, and a Fellow of the European Association for Artificial Intelligence and of the British Computer Society.

Artificial Intelligence: Towards safety and trust

Artificial Intelligence is increasingly prevalent or proposed, with the potential to fundamentally change many aspects of our lives, yet it has been around for more than 50 years. So, what’s new, and why do we need to pay particular attention to AI now more than ever? In this talk I will give a brief review of how we understand AI, now and in the past, and give a little historical perspective, before raising some questions and issues that merit careful consideration today. I will suggest a particular focus on the need to address issues of safety and trust if there is to be wider deployment and adoption in many areas, and review some recent developments.
Oana Goga (CNRS - LIG)
Oana Goga is a Chargée de Recherches (equivalent to a tenured faculty position) at CNRS in the SLIDE team at the Laboratoire d’Informatique de Grenoble, since October 2017. Prior to this she was a postdoc at the Max Planck Institute for Software Systems, working with Krishna Gummadi. Oana Goga obtained her PhD in 2014 from the Pierre et Marie Curie University in Paris under the supervision of Renata Teixeira.

Security and privacy issues with social computing and online advertising

TBA
Caterina Urban (Inria)
Caterina is a research scientist in the Inria research team ANTIQUE (ANalyse staTIQUE), working on static analysis methods and tools to enhance the reliability and our understanding of data science and machine learning software. She is Italian and obtained her Bachelor’s (2009) and Master’s (2011) degrees in Computer Science at the University of Udine. She then moved to France and completed her Ph.D. (2015) in Computer Science, working under the joint supervision of Radhia Cousot and Antoine Miné at École Normale Supérieure. Before joining Inria (2019), she was a postdoctoral researcher at ETH Zurich in Switzerland.

Formal methods for machine learning

Formal methods can provide rigorous correctness guarantees on hardware and software systems. Thanks to the availability of mature tools, their use is well established in industry, in particular for checking safety-critical applications, which undergo a stringent certification process. As machine learning is becoming more and more popular, machine-learned components are now considered for inclusion in critical systems. This raises the question of their safety and their verification. Yet, established formal methods are limited to classic, i.e., non-machine-learned, software. Applying formal methods to verify systems that include machine learning has only been considered recently and poses novel challenges in soundness, precision, and scalability. In this lecture, we will provide an overview of the formal methods developed so far for machine learning, highlighting their strengths and limitations. The large majority of them verify trained feed-forward ReLU-activated neural networks and employ either SMT, optimization, or abstract interpretation techniques. We will present several approaches through the lens of different robustness, safety, and fairness properties, with a focus on abstract interpretation-based techniques. We will also discuss formal methods for support vector machines and decision tree ensembles, as well as methods targeting the training process of machine learning models. We will then conclude by offering perspectives for future research directions in this context.
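To make the abstract interpretation flavour of these techniques concrete, here is a minimal sketch (not a tool from the lecture) that propagates interval bounds through a small feed-forward ReLU network to check a local robustness property: every input within an L-infinity ball of radius eps around a given point gets the same class. The toy network and its weights are invented for illustration.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an interval box through the affine map x -> W @ x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def forward(layers, x):
    """Concrete forward pass of a feed-forward ReLU network."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def certify(layers, x, eps):
    """Soundly (but incompletely) check that all inputs within L-inf distance
    eps of x are assigned the same class as x."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:                   # ReLU is monotone
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    c = int(np.argmax(forward(layers, x)))        # class predicted for x itself
    return bool(lo[c] > np.delete(hi, c).max())   # its lower bound beats all others

# Toy 2-3-2 network with made-up weights, purely for illustration.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 2)), rng.normal(size=3)),
          (rng.normal(size=(2, 3)), rng.normal(size=2))]
print(certify(layers, np.array([0.5, -0.2]), eps=0.05))
```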
Sophie Quinton (Inria)
Sophie Quinton is a research scientist in computer science at INRIA in Grenoble, France. Her research background is on formal methods for the design and verification of embedded systems, with an emphasis on real-time aspects. She is now studying the environmental impact of ICT, in particular claims about the benefits of using digital technologies for GHG emissions mitigation.

ICT and sustainability

The urgent need to mitigate climate change is one reason why the environmental impact of Information and Communication Technologies (ICT) is receiving more and more attention. Beyond climate change, this raises the broader question of what part ICT could play in a sustainable society, or in helping us build one. In this talk I will strive to provide an overview of the state of the art on the environmental impacts of ICT and existing methods to assess them. I will first introduce Life Cycle Analysis, a method that assesses multiple categories of impacts of the product under study across all the stages of its life cycle. In the second part of the talk, we will focus on the rebound effect (the fact that making a process more efficient tends to increase its use) and on the structural impacts of ICT (i.e., the environmental impacts resulting from how digital technologies reshape society). I will conclude by discussing the limitations of quantitative approaches for assessing such indirect impacts, and the ethical issues that this raises for researchers in computer science.
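As a back-of-the-envelope illustration of the rebound effect mentioned above (all numbers are invented, not taken from the talk): an efficiency gain cuts the footprint per unit of service, but a cheaper service also tends to be used more, so the net reduction is smaller than the efficiency gain alone suggests.

```python
# Hypothetical rebound-effect arithmetic; every figure here is illustrative.
baseline_use = 100           # units of service per year
footprint_per_unit = 2.0     # kgCO2e per unit before the efficiency gain

efficiency_gain = 0.40       # footprint per unit drops by 40%
rebound = 0.50               # usage grows by half of the relative efficiency gain

new_footprint_per_unit = footprint_per_unit * (1 - efficiency_gain)
new_use = baseline_use * (1 + rebound * efficiency_gain)

before = baseline_use * footprint_per_unit
after = new_use * new_footprint_per_unit
print(f"before: {before:.0f} kgCO2e/year, after: {after:.0f} kgCO2e/year "
      f"({100 * (1 - after / before):.0f}% net reduction instead of 40%)")
```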
Martin Georg Fränzle (University of Oldenburg)
Martin Fränzle has been the Professor for Hybrid Systems within the Department of Computing Science at the University of Oldenburg since 2004 and Professor for Foundations and Applications of Systems of Cyber-Physical Systems since 2022. He holds a diploma and a doctoral degree in Computer Science from Kiel University and was Associate Professor (2002-2004) and Velux Visiting Professor (2006-2008) at the Technical University of Denmark (DTU), Dean of the School of Computing Science, Business Administration, Economics, and Law at Oldenburg, and recently the Vice President for Research, Transfer, and Digitalization at the University of Oldenburg. His research ranges from fundamental research, in particular on dynamic semantics and decidability issues of formal models of cyber-physical systems, through technology development, addressing tools for the modelling, automated verification, and synthesis of cyber-physical and human-cyber-physical system designs, to applied research and technology transfer with the automotive and railway industries as well as design-tool vendors, the latter based on numerous industrial cooperation projects.

AI components for high integrity, safety-critical cyber-physical systems: chances and risks

Smart cities, automated transportation systems, smart grids, smart health, and Industry 4.0: the key technologies setting out to shape our future rest on cyber-enabled physical environments. Elements of intelligence, cooperation, and adaptivity are added to all kinds of physical entities by means of information technology, with artificial intelligence increasingly being an integral part. But many of these cyber-physical systems (CPS) operate in highly safety-critical domains or are themselves safety-critical, inducing safety concerns about the embedded software and AI, as their malfunctions or their misconceptions and misinterpretations of environmental state and intent gain direct physical impact due to the cyber-physical nature of the system. Embedding AI components, especially those based on machine learning, into such systems consequently bears both chances and risks, and only if we are able to rigorously control the latter will we be able to exploit the former in a justifiable and societally acceptable manner. The lecture will first demonstrate the chances and risks induced by embedding AI components based on machine learning into CPS. To this end, we will analyse different variants of integrating training and learning into the development process and of embedding AI into heterogeneous CPS architectures. Exploiting such architectural patterns, we will discuss key types of expected industrial applications, identify their particular safety requirements at system level, derive the requirements induced at component level, and quantify the pertinent safety targets. This provides a rigorous basis for relating the state of the art in semiformal and formal analysis techniques for such systems to the societal demand. We will explain and characterize existing verification and validation approaches accordingly.
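As one small, hedged illustration of what deriving component-level requirements and quantifying safety targets can look like (the figures, the component names and the additive-contribution assumption are mine, not from the lecture): a tolerable system-level hazard rate is split into budgets that each component, including the machine-learned one, must individually meet.

```python
# Illustrative allocation of a system-level hazard-rate target to components.
# The target, the shares and the additivity assumption are all hypothetical.
system_target = 1e-7        # tolerable hazardous failures per hour, system level

# Budget shares for a series architecture in which hazardous component
# failures are assumed to contribute (approximately) additively.
allocation = {"ml_perception": 0.5, "planner": 0.3, "actuation": 0.2}

budgets = {name: share * system_target for name, share in allocation.items()}
for name, rate in budgets.items():
    print(f"{name}: <= {rate:.1e} hazardous failures/hour")

# Sanity check: the component budgets together do not exceed the system target.
assert abs(sum(budgets.values()) - system_target) < 1e-15
```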
André Meyer-Vitali (DFKI)
Dr. André Meyer-Vitali is a computer scientist who received his Ph.D. in software engineering and distributed AI from the University of Zürich. He worked on many applied research projects on multi-agent systems at Philips Research and TNO (The Netherlands) and participated in AgentLink. He also worked at the European Patent Office. Currently, he is a senior researcher at DFKI (Germany) focused on engineering and promoting Trusted AI, and is active in the AI networks TAILOR and CLAIRE. His research interests include Software and Knowledge Engineering, Design Patterns, Neuro-Symbolic AI, Causality, and Agent-based Social Simulation (ABSS), with the aim of creating Trust by Design.

Trustworthy hybrid team decision-support

The aim of empowering human users of artificially intelligent systems becomes paramount when considering coordination and collaboration in hybrid teams of humans and autonomous agents. We consider not only one-to-one interactions, but also many-to-many situations (multiple humans and multiple agents), where we strive to make use of their complementary capabilities. Mutual awareness of each other’s strengths and weaknesses is therefore crucial for beneficial coordination. Each person and agent has individual knowledge, facilities, roles, capabilities, expectations and intentions. It should be clear to each of them what to expect from the others, in order to avoid misleading anthropomorphism, and which tasks to delegate to whom. To address these goals, and in accordance with a hybrid theory of mind, we propose the use of trustworthy interaction patterns and epistemic orchestration with intentions and causal models.
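As a toy illustration of the delegation question raised above, here is a hypothetical sketch (the team, the competence scores and the threshold are invented, not taken from the talk): tasks are routed to the most competent team member, but fall back to a human when no agent is competent enough, so that expectations about agents stay realistic.

```python
# Hypothetical capability-aware task delegation in a hybrid human/agent team.
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    kind: str            # "human" or "agent"
    capabilities: dict   # task type -> assumed competence in [0, 1]

team = [
    Member("Alice", "human", {"triage": 0.9, "labelling": 0.6}),
    Member("agent-1", "agent", {"triage": 0.7, "labelling": 0.95}),
]

def delegate(task_type, team, agent_threshold=0.8):
    """Pick the most competent member; route to a human if the best candidate
    is an agent whose competence falls below the threshold."""
    best = max(team, key=lambda m: m.capabilities.get(task_type, 0.0))
    if best.kind == "agent" and best.capabilities[task_type] < agent_threshold:
        best = max((m for m in team if m.kind == "human"),
                   key=lambda m: m.capabilities.get(task_type, 0.0))
    return best

for task in ("triage", "labelling"):
    print(task, "->", delegate(task, team).name)
```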
Freddy Lecue (Thales & Inria)
Freddy Lecue is the Chief AI Scientist at CortAIx (Centre of Research & Technology in AI eXpertise) at Thales in Montreal, Canada. He is also a research associate at Inria, in WIMMICS, Sophia Antipolis, France. Before joining the new R&T lab of Thales dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at The University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008. His research area is at the frontier of intelligent/learning and reasoning systems. He has a strong interest in Explainable AI, i.e., AI systems, models and results that can be explained to human and business experts.

Explainable AI: a focus on machine learning and knowledge graph-based approaches

The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity and understanding. XAI (eXplainable AI) aims at addressing such challenges by combining the best of symbolic AI and traditional Machine Learning. This topic has been studied for years by the different communities of AI, with different definitions, evaluation metrics, motivations and results. This session is a snapshot of the work on XAI to date and surveys what the AI community has achieved, with a focus on machine learning and symbolic AI related approaches. We will motivate the need for XAI in real-world and large-scale applications, while presenting state-of-the-art techniques and best XAI coding practices. In the first part of the tutorial, we give an introduction to the different aspects of explanations in AI. We then focus the tutorial on two specific approaches: (i) XAI using machine learning, and (ii) XAI using a combination of graph-based knowledge representation and machine learning. We will get into the specifics of each approach, the state of the art and the research challenges for the next steps. This will include visiting the related problem of interpretability, one of the goals of trustworthy AI. The final part of the tutorial gives an overview of real-world applications of XAI as well as best XAI coding practices.
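As a concrete taste of the machine learning side of XAI, here is a minimal, self-contained sketch of a post-hoc, model-agnostic local explanation in the spirit of surrogate methods such as LIME (my illustration, not code from the tutorial; the black-box model is a stand-in): the behaviour of an opaque model around one instance is approximated by a proximity-weighted linear model whose coefficients serve as feature attributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for an opaque model: returns a probability-like score."""
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 2])))

def local_explanation(x, n_samples=5000, sigma=0.5):
    """Fit a weighted linear surrogate around x; return per-feature weights."""
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))     # perturbations
    y = black_box(Z)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))  # proximity weights
    A = np.hstack([Z, np.ones((n_samples, 1))])                   # add intercept column
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)    # weighted least squares
    return coef[:-1]                                              # drop the intercept

x = np.array([0.3, -0.7, 1.2])
print("local feature weights:", np.round(local_explanation(x), 3))
```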
Titouan Vayer (ENS Lyon)
Titouan Vayer is currently a postdoctoral researcher at ENS Lyon, where he works on compressive learning problems. During his PhD, defended in 2020 at IRISA in Vannes, he worked on optimal transport methods for machine learning, in particular in the context of graphs and heterogeneous data. Titouan Vayer is particularly interested in the theory of learning in complex settings where the data are large, structured and do not admit the same representation.

Less is more? How compressive learning and sketching for large-scale machine learning works

Large-scale machine learning nowadays faces a number of computational challenges, due to the high dimensionality of the data and, often, very large training collections. In addition, there are data privacy issues and potentially data organization issues (e.g., data distributed over several servers without centralization).
In this course Titouan Vayer will present a potential remedy to these problems, namely the compressive learning framework. The central idea is to summarize a database in a single vector, called a sketch, obtained by computing carefully chosen nonlinear random features (e.g., random Fourier features) and averaging them over the whole dataset. The parameters of a machine learning model are then learned from the sketch, without access to the original dataset. This course surveys the current state of the art in compressive learning, including the main concepts and algorithms, their connections with established signal-processing methods, existing theoretical guarantees on both information preservation and privacy preservation, and important open problems.
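A minimal numpy sketch of the central idea described above (the frequency distribution, sketch size and data are illustrative, and a real compressive-learning pipeline would then fit model parameters to the sketch with dedicated algorithms): each data point is mapped to complex random Fourier features, and their average over the dataset is the sketch, a single fixed-size vector that can be merged across servers without centralizing the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 256                        # data dimension, sketch size
Omega = rng.normal(size=(m, d))       # random frequencies (illustrative scale)

def sketch(X, Omega):
    """Average complex random Fourier features over X: one m-dim summary vector."""
    return np.exp(1j * X @ Omega.T).mean(axis=0)

# Two halves of a dataset, possibly held on different servers.
X1 = rng.normal(size=(5000, d)) + 2.0
X2 = rng.normal(size=(5000, d)) - 2.0

s1, s2 = sketch(X1, Omega), sketch(X2, Omega)
# Sketches merge by a weighted average: no need to pool the raw data.
s = (len(X1) * s1 + len(X2) * s2) / (len(X1) + len(X2))
print(s.shape)                        # (256,) regardless of the number of samples
# Model parameters (e.g. mixture centroids) would then be fitted to s alone.
```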
Rafaël Pinot (EPFL)
Rafaël is currently a postdoctoral researcher at EPFL, working with Prof. Rachid Guerraoui and Prof. Anne-Marie Kermarrec within the Ecocloud Research Center. He holds a PhD in Computer Science from Université Paris Dauphine-PSL. His main line of research is in statistical machine learning and optimization, with a focus on the security and privacy of machine learning applications. He is also interested in the statistical analysis of complex data structures.

Are neural networks getting smarter? State of knowledge on adversarial examples in machine learning

Machine learning models are part of our everyday life and their security and reliability weaknesses can be used to harm us either directly or indirectly. It is thus crucial to be able to account for, and deal with, any new vulnerabilities. Besides, the legal framework in Europe is evolving, forcing practitioners, from both the private and the public sectors, to adapt quickly to these new concerns. In this lecture, we will review the current state of knowledge on how to build safer machine learning models. Specifically, we will focus on an important security concern, namely adversarial example attacks. The vulnerability of state-of-the-art models to these attacks has genuine security implications especially when models are used in AI-driven technologies, e.g., for self-driving cars or fraud detection. Besides security issues, these attacks show how little we know about the models used every day in the industry, and how little control we have over them. We provide some insights explaining how these attacks work, and how to mitigate them by using some notions of learning theory and probability theory.
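To give a feel for how simple such attacks can be, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM) applied to a toy logistic-regression classifier; the weights and the input are made up for illustration, and real attacks target deep networks via automatic differentiation.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, b = np.array([1.5, -2.0, 0.5]), 0.1      # hypothetical trained model
x, y = np.array([0.2, -0.4, 1.0]), 1        # an input currently classified as y

def fgsm(x, y, eps):
    """Shift each coordinate by eps in the direction that increases the loss,
    the classic recipe behind many adversarial examples."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w                    # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

for eps in (0.0, 0.2, 0.5):
    x_adv = fgsm(x, y, eps)
    print(f"eps={eps:.1f}  score for class 1: {sigmoid(w @ x_adv + b):.3f}")
```

With these made-up numbers the score for the true class drops as eps grows, until a small perturbation flips the decision, which is exactly the phenomenon the lecture analyses and aims to mitigate.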
