Track A: Trusted AI

The development of Deep Learning has transformed AI from a niche science into a socially relevant “mega-technology”. At the same time, it raises a range of problems, such as the lack of an internal representation of meaning (interpretability), sensitivity to changes in the input (robustness), lack of transferability to unseen use cases (generalizability), potential discrimination and biases (fairness) and, finally, the hunger for big data itself (data efficiency). Recently, a new overall approach to solving these problems has been pushed forward under the terms “Trusted AI” or “Trustworthy AI”. The Trusted AI track will cover the latest advances in this area.

Timetable – Speaker

Mon, August 29, 2022

Opening Speech – 13:15-14:15

Course 1 – 14:30-17:15

Catuscia Palamidessi and Sayan Biswas (Inria)
Catuscia Palamidessi is Director of Research at Inria Saclay. She has previously been a Full Professor at the University of Genova and at Penn State University. Palamidessi’s research interests include Machine Learning, Privacy, Fairness, Secure Information Flow, and Concurrency. In 2019 she obtained an Advanced Grant from the European Research Council for the project “Hypatia”, which focuses on methods for local differential privacy that offer an optimal trade-off with quality of service and statistical utility. She is on the editorial boards of several journals, including the IEEE Transactions on Dependable and Secure Computing, the Journal of Computer Security, Mathematical Structures in Computer Science, and Acta Informatica. She is a member of the advisory committee of the French National Information Systems Security Agency (ANSSI).

Sayan Biswas is a second-year doctoral candidate at Inria Saclay and École Polytechnique in France, supervised by Catuscia Palamidessi. Born and raised in Kolkata, India, Sayan obtained his undergraduate and master’s degrees with first-class honours from the University of Bath in the UK, where he specialized in probability theory and statistics. He has participated in mathematics and competitive-programming contests and olympiads from a very young age. His present research interests include differential privacy, privacy-utility optimization, privacy-preserving machine learning, and federated learning.

Privacy and fairness

In this course, I will speak about two main ethical issues in machine learning, namely privacy and fairness. The issue of privacy arises in particular concerning the sensitive data in the training set: by inspecting the machine learning model (white-box attack) or simply by querying it (black-box attack), it is possible to retrieve this sensitive information with high probability. The issue of fairness arises mainly because the decisions proposed by the model are based on correlations, which can be a source of bias. In my course, I will present differential privacy, which is one of the most popular frameworks for privacy, discuss how it can be applied in machine learning, and show its relation with generalization and accuracy. Then I will present various notions of fairness, discuss how they can be implemented for machine learning, and show some known results about the relation between fairness, accuracy, and privacy.
At the end of the course, there will be a hands-on session, coordinated by Sayan Biswas, with exercises on the application of differential privacy to machine learning.
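
The hands-on material itself is not reproduced here, but the following minimal sketch illustrates the kind of building block the exercises rest on: the Laplace mechanism, which releases a counting query under epsilon-differential privacy by adding noise calibrated to the query's sensitivity. The data and privacy budget are purely illustrative.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a counting query under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative query: how many individuals in a (fictional) list of ages
# are at least 40, released with privacy budget epsilon = 0.5.
ages = [23, 45, 31, 62, 54, 29, 41]
print(laplace_count(ages, lambda age: age >= 40, epsilon=0.5))
```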

Tue, August 30, 2022

Keynote 1 – 09:00-10:00

Michael Luck (King's College London)
Michael Luck is Professor of Computer Science in the Department of Informatics at King’s College London, where he also works in the Distributed Artificial Intelligence group, undertaking research into agent technologies and artificial intelligence. He is currently Director of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence, Director of the King’s Institute for Artificial Intelligence, and scientific advisor for Aerogility, ContactEngine and CtheSigns. He is co-Editor-in-Chief of the journal Autonomous Agents and Multi-Agent Systems, and is a Fellow of the European Association for Artificial Intelligence and of the British Computer Society.

Artificial Intelligence: Towards safety and trust

Artificial Intelligence is increasingly prevalent, or at least increasingly proposed, with the potential to fundamentally change many aspects of our lives; yet it has been around for more than 50 years. So, what’s new, and why do we need to pay particular attention to AI now more than ever? In this talk I will give a brief review of how we understand AI, now and in the past, with a little historical perspective, before raising some questions and issues that merit particular consideration today. I will suggest a particular focus on the need to address issues of safety and trust if there is to be wider deployment and adoption in many areas, and review some recent developments.
Course 2 – 10:30-13:15

 

Vera Sosnovik and Salim Chouaki (CNRS)

Vera Sosnovik is a third-year PhD student at University Grenoble Alpes working with Dr Oana Goga. She graduated from University Grenoble Alpes in 2019 with a Master’s degree in Data Science. Her work focuses on detecting and studying problematic ads on social media and assessing the impact they have on users.

Salim Chouaki is a second-year PhD student at University Grenoble Alpes. He graduated in 2020 with an engineering degree in computer systems and software. He works on analysing the risks associated with incidental and targeted exposure to information on social media, using the CheckMyNews Chrome extension, which he developed to collect data from Facebook.

Security and privacy issues with social computing and online advertising

The enormous financial success of online advertising platforms is partially due to the precise targeting features they offer. Ad platforms collect a large amount of data on users and use powerful AI-driven algorithms to infer users’ fine-grained interests and demographics, which they make available to advertisers to target users.
While the marketing benefits are clear, these targeting technologies have also brought new risks for individuals and society. For example, advertisers such as Cambridge Analytica have maliciously used these targeting features to manipulate users in the context of elections.
In this lecture, I will provide an overview of how targeted advertising works, and I will describe four types of studies that are useful to tackle security, privacy, and algorithmic risks: audit studies, attack studies, measurement & behaviour studies, and algorithmic & system design studies.

In the lab, we will work on code to collect data from public online advertising APIs and build algorithms to analyse this data. 
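
As a taste of that kind of data collection, the snippet below sketches how one might query a public ad-transparency API, using the Meta Ad Library API as an example. The endpoint version, field names, and access-token handling are assumptions to be checked against the current documentation, and this is not necessarily the API used in the lab.

```python
import requests

# Hedged sketch of querying the Meta Ad Library API. The API version,
# field names, and token handling below are assumptions: verify them
# against the current documentation before use.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # obtained via Meta's developer portal
URL = "https://graph.facebook.com/v18.0/ads_archive"

params = {
    "search_terms": "climate",            # free-text query
    "ad_reached_countries": "FR",         # country filter
    "fields": "page_name,ad_delivery_start_time,ad_creative_bodies",
    "access_token": ACCESS_TOKEN,
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()
for ad in response.json().get("data", []):
    print(ad.get("page_name"), ad.get("ad_delivery_start_time"))
```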

Course 3 – 15:45-18:30

 

Caterina Urban (Inria)
Caterina Urban is a research scientist in the Inria research team ANTIQUE (ANalyse StaTIQUE), working on static analysis methods and tools to enhance the reliability and our understanding of data science and machine learning software. She is Italian and obtained her Bachelor’s (2009) and Master’s (2011) degrees in Computer Science from the University of Udine. She then moved to France and completed her Ph.D. (2015) in Computer Science under the joint supervision of Radhia Cousot and Antoine Miné at École Normale Supérieure. Before joining Inria (2019), she was a postdoctoral researcher at ETH Zurich in Switzerland.

Formal methods for machine learning

Formal methods can provide rigorous correctness guarantees on hardware and software systems. Thanks to the availability of mature tools, their use is well established in industry, in particular for checking safety-critical applications, which undergo a stringent certification process. As machine learning becomes more and more popular, machine-learned components are now considered for inclusion in critical systems. This raises the question of their safety and their verification. Yet, established formal methods are limited to classic, i.e. non-machine-learned, software. Applying formal methods to verify systems that include machine learning has only been considered recently and poses novel challenges in soundness, precision, and scalability. In this lecture, we will provide an overview of the formal methods developed so far for machine learning, highlighting their strengths and limitations. The large majority of them verify trained feed-forward ReLU-activated neural networks and employ either SMT, optimization, or abstract interpretation techniques. We will present several approaches through the lens of different robustness, safety, and fairness properties, with a focus on abstract interpretation-based techniques. We will also discuss formal methods for support vector machines and decision tree ensembles, as well as methods targeting the training process of machine learning models. We will then conclude by offering perspectives for future research directions in this context.
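
To make the abstract-interpretation view concrete, here is a minimal sketch of interval bound propagation for a feed-forward ReLU network: the input region is an L-infinity ball, each layer's bounds are propagated soundly, and the resulting output box can certify a robustness property. The toy network and the final check are illustrative and not taken from the lecture.

```python
import numpy as np

def interval_bounds(layers, x, eps):
    """Propagate the L-infinity ball [x - eps, x + eps] through a
    feed-forward ReLU network and return sound lower/upper output bounds.

    `layers` is a list of (W, b) pairs; a ReLU is applied after every
    layer except the last. This is abstract interpretation with the
    interval domain: the true reachable outputs are contained in the
    returned box, so the bounds can certify (but never falsify) a
    robustness property.
    """
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy 2-2-2 network: class 0 is certified for the whole input ball if the
# lower bound of logit 0 exceeds the upper bound of logit 1.
layers = [(np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])),
          (np.array([[0.7, 0.2], [-0.4, 0.9]]), np.array([0.0, 0.0]))]
lo, hi = interval_bounds(layers, np.array([0.5, 0.5]), eps=0.05)
print("certified robust:", lo[0] > hi[1])
```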

Wed, August 31, 2022

Keynote 2 – 09:00-10:00

Sophie Quinton (Inria)
Sophie Quinton is a research scientist in computer science at Inria in Grenoble, France. Her research background is in formal methods for the design and verification of embedded systems, with an emphasis on real-time aspects. She is now studying the environmental impact of ICT, in particular claims about the benefits of using digital technologies for mitigating GHG emissions.

ICT and sustainability

The urgent need to mitigate climate change is one reason why the environmental impact of Information and Communication Technologies (ICT) is receiving more and more attention. Beyond climate change, this raises the broader question of what part ICT could play in a sustainable society, or in helping us build one. In this talk I will strive to provide an overview of the state of the art on the environmental impacts of ICT and existing methods to assess them. I will first introduce Life Cycle Analysis, a method that assesses multiple categories of impacts of the product under study across all the stages of its life cycle. In the second part of the talk, we will focus on the rebound effect (the fact that making a process more efficient tends to increase its use) and the structural impacts of ICT (i.e., the environmental impacts resulting from how digital technologies reshape society). I will conclude by discussing the limitations of quantitative approaches for assessing such indirect impacts, and the ethical issues that this raises for researchers in computer science.
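
For orientation only (this worked example is not part of the talk), one common way to quantify the rebound effect is as the share of expected savings that is offset by increased use:

```latex
% Common quantification of the (direct) rebound effect:
% the share of expected savings offset by increased use.
R \;=\; 1 - \frac{\text{actual savings}}{\text{expected savings}},
\qquad
\text{e.g. } R \;=\; 1 - \frac{4\,\text{kWh}}{10\,\text{kWh}} \;=\; 0.6 .
```

So an efficiency measure expected to save 10 kWh that in practice saves only 4 kWh exhibits a 60% rebound; values above 1 are commonly called backfire.
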
Course 4 – 10:30-13:15

 

Martin Georg Fränzle (University of Oldenburg)
Martin Fränzle has been Professor for Hybrid Systems in the Department of Computing Science at the University of Oldenburg since 2004, and Professor for Foundations and Applications of Systems of Cyber-Physical Systems since 2022. He holds a diploma and a doctoral degree in Computer Science from Kiel University. He was Associate Professor (2002-2004) and Velux Visiting Professor (2006-2008) at the Technical University of Denmark (DTU), Dean of the School of Computing Science, Business Administration, Economics, and Law at Oldenburg, and most recently Vice President for Research, Transfer, and Digitalization at the University of Oldenburg. His research ranges from fundamental research, in particular on dynamic semantics and decidability issues of formal models of cyber-physical systems, through technology development on tools for the modelling, automated verification, and synthesis of cyber-physical and human-cyber-physical system designs, to applied research and technology transfer with the automotive and railway industries as well as design-tool vendors, the latter based on numerous industrial cooperation projects.

AI components for high integrity, safety-critical cyber-physical systems: chances and risks

Smart cities, automated transportation systems, smart grids, smart health, and Industry 4.0: the key technologies setting out to shape our future rest on cyber-enabled physical environments. Elements of intelligence, cooperation, and adaptivity are added to all kinds of physical entities by means of information technology, with artificial intelligence increasingly being an integral part. But many of these cyber-physical systems (CPS) operate in highly safety-critical domains or are themselves safety-critical, inducing safety concerns about the embedded software and AI, as their malfunctions or their misconceptions and misinterpretations of environmental state and intent have a direct physical impact due to the cyber-physical nature of the system. Embedding AI components, especially those based on machine learning, into such systems consequently bears both chances and risks, and only if we are able to rigorously control the latter will we be able to exploit the former in a justifiable and societally acceptable manner. The lecture will first demonstrate the chances and risks induced by embedding AI components based on machine learning into CPS. To this end, we will analyse different variants of integrating training and learning into the development process and of embedding AI into heterogeneous CPS architectures. Exploiting such architectural patterns, we will discuss key types of expected industrial applications, identify their particular safety requirements at system level, derive the ones induced at component level, and quantify the pertinent safety targets. This provides a rigorous basis for relating the state of the art in semiformal and formal analysis techniques for such systems to the societal demand. We will explain and characterize existing verification and validation approaches accordingly.
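
One architectural pattern often discussed for embedding learned components into safety-critical CPS is a run-time safety monitor that accepts the ML component's proposal only while a formally specified safety envelope holds, and otherwise falls back to a simple, verifiable controller. The sketch below is a minimal, hypothetical illustration of that pattern; the state variables, controllers, and thresholds are invented for this example and are not taken from the lecture.

```python
from dataclasses import dataclass

@dataclass
class State:
    distance_to_obstacle: float  # metres
    speed: float                 # metres per second

def ml_controller(state: State) -> float:
    """Stand-in for a learned policy; it may propose any acceleration."""
    return 1.0

def baseline_controller(state: State) -> float:
    """Simple, verifiable fallback: brake."""
    return -3.0

def within_safety_envelope(state: State, acceleration: float, horizon: float = 1.0) -> bool:
    """Illustrative envelope: the distance travelled before the vehicle can
    stop (assuming 3 m/s^2 braking afterwards) must stay below the current
    obstacle distance."""
    predicted_speed = max(state.speed + acceleration * horizon, 0.0)
    stopping_distance = predicted_speed * horizon + predicted_speed ** 2 / (2 * 3.0)
    return stopping_distance < state.distance_to_obstacle

def supervised_control(state: State) -> float:
    """Accept the ML proposal only while the safety envelope holds."""
    proposal = ml_controller(state)
    return proposal if within_safety_envelope(state, proposal) else baseline_controller(state)

# The learned proposal would violate the envelope here, so the monitor brakes.
print(supervised_control(State(distance_to_obstacle=4.0, speed=2.0)))
```
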
Course 5 – 15:45-18:30

 

André Meyer-Vitali (DFKI)
Dr. André Meyer-Vitali is a computer scientist who received his Ph.D. in software engineering and distributed AI from the University of Zürich. He worked on many applied research projects on multi-agent systems at Philips Research and TNO (The Netherlands) and participated in AgentLink. He also worked at the European Patent Office. Currently, he is a senior researcher at DFKI (Germany) focused on engineering and promoting Trusted AI, and is active in the AI networks TAILOR and CLAIRE. His research interests include Software and Knowledge Engineering, Design Patterns, Neuro-Symbolic AI, Causality, and Agent-based Social Simulation (ABSS), with the aim of creating Trust by Design.

Trustworthy hybrid team decision-support

The aim of empowering human users of artificially intelligent systems becomes paramount when considering coordination and collaboration in hybrid teams of humans and autonomous agents. We consider not only one-to-one interactions, but also many-to-many situations (multiple humans and multiple agents), where we strive to make use of their complementary capabilities. Therefore, mutual awareness of each other’s strengths and weaknesses is crucial for beneficial coordination. Each person and agent has individual knowledge, facilities, roles, capabilities, expectations and intentions. It should be clear to each of them what to expect from the others, in order to avoid misleading anthropomorphism, and how to delegate which tasks to whom. To address these goals, and in accordance with a hybrid theory of mind, we propose the use of trustworthy interaction patterns and epistemic orchestration with intentions and causal models.
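
To give a flavour of what a capability-aware interaction pattern could look like in code, the sketch below delegates a task to the team member (human or agent) whose declared capabilities cover the task's requirements. The data model and matching rule are hypothetical illustrations, not the patterns proposed in the course.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Member:
    name: str
    kind: str                      # "human" or "agent"
    capabilities: Set[str] = field(default_factory=set)

@dataclass
class Task:
    description: str
    required: Set[str]

def delegate(task: Task, team: List[Member]) -> Optional[Member]:
    """Delegate the task to the first team member whose declared
    capabilities cover the task's requirements; None if nobody qualifies."""
    for member in team:
        if task.required <= member.capabilities:
            return member
    return None

team = [Member("Ada", "human", {"judgement", "domain-knowledge"}),
        Member("Bot-1", "agent", {"monitoring", "data-analysis"})]

assignee = delegate(Task("triage incoming alerts", {"monitoring"}), team)
print(assignee.name if assignee else "no suitable team member")
```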

Thu, September 1, 2022

Keynote 3 – 09:00-10:00

Marcus Voß (Birds on Mars)
Marcus Voß is an AI Expert and Intelligence Architect at Birds on Mars, where he works on AI applications for sustainable use cases. He is an external lecturer on AI and data science at TU Berlin and CODE University of Applied Sciences. Previously, he was a Research Associate at TU Berlin, where he led the research group “Smart Energy Systems” at the DAI Lab. He is active as Community Lead for the building and transportation sector at Climate Change AI, an international initiative that brings together stakeholders around AI and climate change.

Applying Artificial Intelligence for climate action

Artificial Intelligence and Machine Learning provide powerful tools to tackle climate change in a variety of applications. They can support climate change mitigation, for instance by helping reduce greenhouse gas emissions; they can help us adapt to a changing climate; and they can even advance climate science itself. However, AI and ML are not silver bullets and can only ever be one part of the solution. This talk provides an overview of the strengths and weaknesses of ML, some example applications, and recurring themes. It presents applications in the energy and transportation sectors, and the joint project QTrees.ai of the Technologiestiftung Berlin, Straßen- und Grünflächenamt Berlin and Birds on Mars, which aims to implement solutions supporting the effective watering and care of city trees.
Course 6 – 10:30-13:15

 

Freddy Lecue (Thales & Inria)
Freddy Lecue is the Chief AI Scientist at CortAIx (Centre of Research & Technology in AI eXpertise) at Thales in Montreal, Canada. He is also a research associate at Inria, in the WIMMICS team in Sophia Antipolis, France. Before joining the new R&T lab of Thales dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at The University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008. His research area is at the frontier of intelligent/learning and reasoning systems. He has a strong interest in Explainable AI, i.e., AI systems, models and results that can be explained to human and business experts.

Explainable AI: a focus on machine learning and knowledge graph-based approaches

The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity and understanding. XAI (eXplainable AI) aims at addressing such challenges by combining the best of symbolic AI and traditional Machine Learning. This topic has been studied for years by the different communities of AI, with different definitions, evaluation metrics, motivations and results. This session is a snapshot of the work on XAI to date, and surveys the work achieved by the AI community with a focus on machine learning and symbolic AI related approaches. We will motivate the need for XAI in real-world and large-scale applications, while presenting state-of-the-art techniques and best XAI coding practices. In the first part of the tutorial, we give an introduction to the different aspects of explanations in AI. We then focus the tutorial on two specific approaches: (i) XAI using machine learning, and (ii) XAI using a combination of graph-based knowledge representation and machine learning. We will get into the specifics of each approach, the state of the art, and the research challenges for the next steps. This will include visiting the related problem of interpretability, one of the goals of trustworthy AI. The final part of the tutorial gives an overview of real-world applications of XAI as well as best XAI coding practices.
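
As a small, hedged illustration of one classic post-hoc technique from the machine-learning side of XAI (not code from the tutorial), the sketch below fits an interpretable global surrogate, a shallow decision tree, to the predictions of a black-box model. The dataset and model choices are arbitrary, and scikit-learn is assumed to be available.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Black-box model on an arbitrary tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a shallow decision tree trained on the black box's
# *predictions* (not on the true labels), so its rules approximate how
# the black box behaves rather than the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=list(X.columns)))
```
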
Course 7 – 15:45-18:30

 

Titouan Vayer (ENS Lyon)
Titouan Vayer is currently a postdoctoral researcher at ENS Lyon and works on compressive learning problems. During his thesis, defended in 2020 at IRISA in Vannes, he worked on optimal transport methods for machine learning, in particular in the context of graphs and heterogeneous data. Titouan Vayer is particularly interested in the theory of learning in complex settings where the data are large, structured, and do not share the same representation.

Less is more? How compressive learning and sketching work for large-scale machine learning

Large-scale machine learning nowadays faces a number of computational challenges, due to the high dimensionality of data and, often, very large training collections. In addition, there are data privacy issues and potentially data organization issues (e.g. data distributed over several servers without centralization).
In this course, Titouan Vayer will present a potential remedy to these problems, namely the compressive learning framework. The central idea is to summarize a database in a single vector, called a sketch, obtained by computing carefully chosen nonlinear random features (e.g., random Fourier features) and averaging them over the whole dataset. The parameters of a machine learning model are then learned from the sketch, without access to the original dataset. This course surveys the current state of the art in compressive learning, including the main concepts and algorithms, their connections with established signal-processing methods, existing theoretical guarantees on both information preservation and privacy preservation, and important open problems.
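
A minimal sketch of the central idea, assuming Gaussian random frequencies and complex exponential (random Fourier) features: the whole dataset is compressed into one m-dimensional vector whose size does not depend on the number of samples, and datasets drawn from the same distribution yield nearby sketches. The dimensions and distributions below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, Omega):
    """Compressive-learning sketch: the average over the whole dataset of
    complex exponential random features exp(i * Omega^T x). The result is a
    single vector whose size m is independent of the number of samples."""
    return np.exp(1j * X @ Omega).mean(axis=0)

d, m = 2, 64                                   # data dimension, sketch size
Omega = rng.normal(scale=1.0, size=(d, m))     # shared random frequencies

X1 = rng.normal(loc=0.0, size=(10_000, d))     # two samples from the same law
X2 = rng.normal(loc=0.0, size=(10_000, d))
X3 = rng.normal(loc=3.0, size=(10_000, d))     # a sample from a shifted law

z1, z2, z3 = sketch(X1, Omega), sketch(X2, Omega), sketch(X3, Omega)
print("same law:     ", np.linalg.norm(z1 - z2))   # small
print("different law:", np.linalg.norm(z1 - z3))   # large
```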

Fri, September 2, 2022

Course 8 – 09:00-12:00

Rafaël Pinot (EPFL)
Rafaël Pinot is currently a postdoctoral researcher at EPFL, working with Prof. Rachid Guerraoui and Prof. Anne-Marie Kermarrec within the Ecocloud Research Center. He holds a PhD in Computer Science from Université Paris Dauphine-PSL. His main line of research is in statistical machine learning and optimization, with a focus on the security and privacy of machine learning applications. He is also interested in the statistical analysis of complex data structures.

Are neural networks getting smarter? State of knowledge on adversarial examples in machine learning

Machine learning models are part of our everyday life, and their security and reliability weaknesses can be used to harm us either directly or indirectly. It is thus crucial to be able to account for, and deal with, any new vulnerabilities. Besides, the legal framework in Europe is evolving, forcing practitioners, from both the private and the public sectors, to adapt quickly to these new concerns. In this lecture, we will review the current state of knowledge on how to build safer machine learning models. Specifically, we will focus on an important security concern, namely adversarial example attacks. The vulnerability of state-of-the-art models to these attacks has genuine security implications, especially when models are used in AI-driven technologies, e.g., for self-driving cars or fraud detection. Besides security issues, these attacks show how little we know about the models used every day in industry, and how little control we have over them. We will provide some insights into how these attacks work, and how to mitigate them using notions from learning theory and probability theory.
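
For a concrete feel of how such attacks work (a toy illustration, not material from the lecture), the sketch below applies the fast gradient sign method to a tiny hand-written logistic-regression model: the input is perturbed within an L-infinity budget in the direction of the sign of the loss gradient, which is enough to push the prediction across the decision boundary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "model" with hand-picked weights, and the fast
# gradient sign method (FGSM): perturb the input within an L-infinity
# budget eps in the direction of the sign of the loss gradient.
w, b = np.array([2.0, -1.0, 0.5]), 0.1
x, y = np.array([0.2, -0.4, 0.3]), 1.0        # input with true label 1

def predict(x):
    return sigmoid(w @ x + b)                  # probability of class 1

# For the logistic loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (predict(x) - y) * w
eps = 0.4
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", predict(x))      # about 0.74: class 1
print("adversarial prediction:", predict(x_adv))  # below 0.5: flipped
```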

Collaborative Wrap-Up – 12:15-13:15
