Track B: Sustainable AI

Digital Europe and the Green Deal – thinking the two megatrends of digitalization and sustainability together is key. AI offers tremendous opportunities to help our society interact with nature in a sustainable way. At the same time, the environmental impact of AI itself cannot be ignored. The Sustainable AI track focuses on the measurement of that environmental impact and on the development of resource-efficient AI technologies. It covers all levels of AI technology development, from AI algorithms and programming frameworks down to hardware and compilation.

Timetable – Speakers

Mon, August 29, 2022

Opening Speech – 13:15-14:15

Course 1 – 14:30-17:15

Silviu-Ioan Filip (Inria)
Silviu Filip is an Inria researcher working in Rennes, France. He received his PhD in Computer Science from ENS Lyon in 2016, working on efficient and scalable algorithms for digital filter design, and was a postdoctoral researcher at the Mathematical Institute in Oxford during 2017, working on numerical algorithms for rational approximation of functions. His research interests are centred around number format optimization problems stemming from various application fields such as scientific computing, digital signal processing and, more recently, deep learning.

Tools for DNN quantization

Deep learning methods offer state-of-the-art results in many application areas but can in many cases prove to be resource-heavy. One way to improve the situation is to work at the arithmetic level by modifying the number formats used to store network parameters and perform computations. By default, most DNNs are designed using 32-bit IEEE-754 floating-point arithmetic, but in many cases similar accuracy can be achieved with a much smaller memory footprint, with binary neural networks (i.e., with parameters stored using just two values) being an extreme example. In this lecture/lab we will explore the various ways of quantizing DNNs to such low-precision formats with minimal accuracy degradation and present some of the tools available to achieve this in well-known deep learning frameworks such as PyTorch and TensorFlow.
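
As a concrete taste of the lab, here is a minimal sketch of post-training dynamic quantization in PyTorch; the API call is real, but the toy model and the rough memory-saving figure are illustrative assumptions rather than course material.

    # Minimal sketch: post-training dynamic quantization with PyTorch.
    import torch
    import torch.nn as nn

    # A small example network with 32-bit floating-point parameters.
    model_fp32 = nn.Sequential(
        nn.Linear(784, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )

    # Replace the Linear layers with versions whose weights are stored as
    # 8-bit integers; activations are quantized dynamically at inference time.
    model_int8 = torch.quantization.quantize_dynamic(
        model_fp32, {nn.Linear}, dtype=torch.qint8
    )

    # Inference works as before, with a roughly 4x smaller weight footprint.
    x = torch.randn(1, 784)
    print(model_int8(x).shape)  # torch.Size([1, 10])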

Tue, August 30, 2022

Keynote 1 – 09:00-10:00

Michael Luck (King's College London)
Michael Luck is Professor of Computer Science in the Department of Informatics at King’s College London, where he also works in the Distributed Artificial Intelligence group, undertaking research into agent technologies and artificial intelligence. He is currently Director of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence, Director of the King’s Institute for Artificial Intelligence, and scientific advisor for Aerogility, ContactEngine and CtheSigns. He is co-Editor-in-Chief of the journal Autonomous Agents and Multi-Agent Systems, and is a Fellow of the European Association for Artificial Intelligence and of the British Computer Society.

Artificial Intelligence: Towards safety and trust

Artificial Intelligence is increasingly prevalent or proposed, with the potential to fundamentally change many aspects of our lives, yet it has been around for more than 50 years. So what is new, and why do we need to pay particular attention to AI now more than ever? In this talk I will give a brief review of how we understand AI, now and in the past, and offer a little historical perspective, before raising some questions and issues that merit particular consideration today. I will suggest a focus on the need to address issues of safety and trust if AI is to see wider deployment and adoption in many areas, and I will review some recent developments.

Course 2 – 10:30-13:15

Olivier Sentieys (University of Rennes & Inria)
Olivier Sentieys is a Professor at the University of Rennes, holding the Inria Research Chair on Energy-Efficient Computing Systems. He leads the Taran team, joint between Inria and the IRISA laboratory. His research interests are in the area of computer architectures, computer arithmetic, embedded systems and signal processing, with a focus on system-level design, energy-efficient hardware accelerators, approximate computing, fault tolerance, and energy-harvesting sensor networks.

Hardware accelerators for DNNs

Hardware accelerators are now mainstream for executing Deep Neural Network (DNN) models with a higher energy efficiency than general-purpose computing platforms. In addition to devices such as TPUs or GPUs, FPGAs further allow the architecture and arithmetic to be customized so that calculations can always be performed with just enough precision. In this lecture, we will first explore the design of customized fixed-point and floating-point arithmetic operators as basic building blocks for constructing efficient, reduced-precision accelerators. The lecture will then review the architectures of the main hardware accelerators for DNN inference and training, as well as the design flows to generate these accelerators from high-level languages such as C++ or from state-of-the-art DNN frameworks.
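
As a rough illustration of the reduced-precision arithmetic such operators implement, the sketch below simulates a fixed-point ("Qm.n") quantization and an integer multiply-accumulate in Python; the chosen word length and number of fractional bits are assumptions for the example, not part of the lecture.

    # Simulate fixed-point quantization and an integer dot product,
    # the kind of operation a reduced-precision FPGA accelerator performs.
    import numpy as np

    def to_fixed(x, n_frac=8, n_bits=16):
        """Quantize floats to signed n_bits integers with n_frac fractional bits."""
        scale = 1 << n_frac
        lo, hi = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
        return np.clip(np.round(x * scale), lo, hi).astype(np.int64)

    def fixed_dot(a_fx, b_fx, n_frac=8):
        """Integer multiply-accumulate; the product carries 2*n_frac fractional bits."""
        acc = np.sum(a_fx * b_fx)
        return acc / (1 << (2 * n_frac))

    a = np.random.uniform(-1, 1, 64)
    b = np.random.uniform(-1, 1, 64)
    print(float(np.dot(a, b)), fixed_dot(to_fixed(a), to_fixed(b)))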
 
Course 3 – 15:45-18:30

Christoph Lüth (DFKI Bremen)
Christoph Lüth is deputy director of the Cyber-Physical Systems research department at the German Research Center for Artificial Intelligence (DFKI) in Bremen, and professor of computer science at the University of Bremen. His research covers the whole area of formal methods, from theoretical foundations to tool development and applications in practical areas such as robotics. He has authored or co-authored over eighty peer-reviewed papers and was the principal investigator in several successful research projects in this area.

An Introduction to the RISC-V ISA

RISC-V is an open-source Instruction Set Architecture (ISA). It specifies a set of instructions a processor must implement, and their intended semantics. It is designed to be scalable, modular and free from patents and royalties. The course will give an introduction to the RISC-V ISA, look at models of the ISA (virtual prototypes, existing hardware), and discuss how to verify that a model satisfies the ISA specification.
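
To give a flavour of what the specification pins down, the sketch below (an illustration only, not course material) encodes and decodes one RISC-V R-type instruction, showing how the base integer ISA packs opcode, register and function fields into a 32-bit word.

    # Encode/decode a RISC-V R-type instruction (RV32I "add").
    def encode_rtype(funct7, rs2, rs1, funct3, rd, opcode):
        return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | opcode

    def decode_rtype(word):
        return {
            "opcode": word & 0x7F,
            "rd":     (word >> 7) & 0x1F,
            "funct3": (word >> 12) & 0x7,
            "rs1":    (word >> 15) & 0x1F,
            "rs2":    (word >> 20) & 0x1F,
            "funct7": (word >> 25) & 0x7F,
        }

    # add x3, x1, x2: opcode OP (0b0110011), funct3 = 0b000, funct7 = 0b0000000
    word = encode_rtype(0b0000000, rs2=2, rs1=1, funct3=0b000, rd=3, opcode=0b0110011)
    print(hex(word))             # 0x2081b3
    print(decode_rtype(word))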

Wed, August 31, 2022

Keynote 2 – 09:00-10:00

Sophie Quinton (Inria)
Sophie Quinton is a research scientist in computer science at Inria in Grenoble, France. Her research background is in formal methods for the design and verification of embedded systems, with an emphasis on real-time aspects. She is now studying the environmental impact of ICT, in particular claims about the benefits of using digital technologies for GHG emissions mitigation.

ICT and sustainability

The urgent need to mitigate climate change is one possible reason why the environmental impact of Information and Communication Technologies (ICT) is receiving more and more attention. Beyond climate change, this raises the broader question of what part ICT could play in a sustainable society, or in helping us build one. In this talk I will strive to provide an overview of the state of the art on the environmental impacts of ICT and of existing methods to assess them. I will first introduce Life Cycle Analysis, a method that assesses multiple categories of impacts of the product under study across all the stages of its life cycle. In the second part of the talk, we will focus on the rebound effect (the fact that making a process more efficient tends to increase its use) and on the structural impacts of ICT (i.e., the environmental impacts resulting from how digital technologies reshape society). I will conclude by discussing the limitations of quantitative approaches for assessing such indirect impacts, and the ethical issues this raises for researchers in computer science.

Course 4 – 10:30-13:15

Richard Membarth (DFKI Saarbrücken & Technische Hochschule Ingolstadt)
Richard Membarth is a professor of System-on-Chip and AI for Edge Computing at the Technische Hochschule Ingolstadt (THI). He is also an affiliated professor at DFKI in Saarbrücken. Richard received the diploma degree in Computer Science from the Friedrich-Alexander University Erlangen-Nürnberg (FAU) and the postgraduate diploma in Computer and Information Sciences from the Auckland University of Technology (AUT). In 2013, he received the PhD (Dr.-Ing.) degree from FAU for his work on automatic code generation for GPU accelerators from a domain-specific language for medical imaging. After the PhD, he joined the Graphics Chair and the Intel Visual Computing Institute (IVCI) at Saarland University as a postdoctoral researcher. At the German Research Center for Artificial Intelligence (DFKI), he was a senior researcher and team leader for compiler technologies and high-performance computing. His research interests include parallel computer architectures and programming models, with a focus on automatic code generation for a variety of architectures ranging from embedded systems to HPC installations, for applications from image processing, computer graphics, scientific computing, and deep learning.

Code optimization via specialization

This course will present an approach to code optimization via specialization using the AnyDSL compiler framework. AnyDSL provides an imperative and functional language in which domain abstractions are defined using functions, together with a built-in partial evaluation engine that specializes programs at compile time. This makes it possible to cleanly separate a textbook-like algorithm description from its hardware-specific implementation on a particular platform such as CPUs, GPUs, and FPGAs. So far, AnyDSL has been successfully applied to a wide range of applications from image processing, rendering, bioinformatics, and molecular dynamics, beating state-of-the-art, hand-tuned implementations. During the course, we will look at practical examples of how to create high-level abstractions relevant for deep learning.
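
The toy sketch below illustrates the specialization idea in plain Python rather than in AnyDSL's own language: a generic 1-D convolution is specialized for a stencil known ahead of time, with the constants baked in and the inner loop unrolled, which is roughly what AnyDSL's partial evaluator does automatically at compile time. It is a conceptual analogue only, not AnyDSL code.

    # Specialize a 1-D convolution for a fixed stencil by generating code
    # with the weights baked in (a stand-in for compile-time partial evaluation).
    def make_conv1d(weights):
        k = len(weights)
        body = " + ".join(f"{w}*x[i+{j}]" for j, w in enumerate(weights))
        code = f"def conv(x):\n    return [{body} for i in range(len(x)-{k-1})]"
        namespace = {}
        exec(code, namespace)
        return namespace["conv"]

    blur = make_conv1d([0.25, 0.5, 0.25])   # specialized, fully unrolled kernel
    print(blur([1.0, 2.0, 3.0, 4.0]))       # [2.0, 3.0]
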
Course 5 – 15:45-18:30

Anne-Laure Ligozat (ENSIIE & LISN)
Anne-Laure Ligozat is an associate professor in computer science at ENSIIE and LISN in Paris-Saclay, France. Her research interests are the environmental impacts of Information and Communication Technologies and in particular of Artificial Intelligence.

Carbon footprint of AI

In this course, I will explain why we should take into account the environmental impacts of AI, and its carbon footprint in particular. I will also present the different kinds of impacts it has, coming both from direct effects due to the life cycle of the equipment and from indirect effects. I will finally give an overview of how to measure the energy consumption of an AI program.
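
As one concrete example of such a measurement (an illustration; the tools covered in the course may differ), the CodeCarbon library can wrap a piece of code and convert the measured or estimated energy into kilograms of CO2-equivalent. The workload below is a hypothetical placeholder for an actual training loop.

    # Estimate the carbon footprint of a workload with CodeCarbon.
    import time
    from codecarbon import EmissionsTracker

    def training_workload():
        time.sleep(5)             # placeholder for a real training loop

    tracker = EmissionsTracker()  # reads RAPL/NVML where available, else estimates
    tracker.start()
    try:
        training_workload()
    finally:
        emissions_kg = tracker.stop()
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")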

Thu, Sep 01, 2022

Keynote 3 – 09:00-10:00

Marcus Voß (Birds on Mars)
Marcus Voß is an AI Expert and Intelligence Architect at Birds on Mars, where he works on AI applications for sustainable use cases. He is an external lecturer on AI and data science at TU Berlin and CODE University of Applied Sciences. Previously, he was a Research Associate at TU Berlin, where he led the research group “Smart Energy Systems” at the DAI Lab. He is active as Community Lead for the building and transportation sector at Climate Change AI, an international initiative that brings together stakeholders around AI and climate change.

Applying Artificial Intelligence for climate action

Artificial Intelligence and Machine Learning provide powerful tools to tackle climate change in various applications. They can support climate change mitigation, for instance by helping reduce greenhouse gas emissions; they can help us adapt to a changing climate; and they can even advance climate science itself. However, AI and ML are not silver bullets and can only ever be one part of the solution. This talk provides an overview of the strengths and weaknesses of ML, some example applications, and recurring themes. It presents applications in the energy and transportation sectors and the joint project QTrees.ai of the Technologiestiftung Berlin, Straßen- und Grünflächenamt Berlin and Birds on Mars, which aims to implement solutions supporting the effective watering and care of city trees.

Course 6 – 10:30-13:15

Danilo Carastan dos Santos (Inria)
Danilo Carastan-Santos is a Post-doctoral researcher at the Laboratoire d’Informatique de Grenoble, France. Danilo received his PhD in 2019 in a double-degree programme between the University Grenoble-Alpes, France, and the Federal University of ABC, Brazil. His thesis focused on learning heuristics for resource management of High-Performance Computing (HPC) platforms. Danilo was previously a Post-doctoral researcher at the Federal University of Rio Grande do Sul, Brazil, working on performance analysis and optimization of geophysics HPC applications. His research interests are in HPC resource management, parallel and distributed computing, sustainable computing, and Artificial Intelligence.

Measuring the energy consumption of AI

In this course I will explain how we can measure the energy consumption of AI code. I will first explain the different kinds of measurement methods (hardware- and software-based), then show how to instrument AI code with popular energy measurement software, and finally compare the measurements obtained with hardware and software tools.
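
As a sketch of one software-based method (assuming a Linux machine that exposes Intel RAPL counters through the powercap interface and suitable read permissions; the course may use different tools), the CPU package energy counter can simply be read before and after the workload:

    # Read an Intel RAPL energy counter around a workload (Linux only).
    import time

    RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # CPU package 0

    def read_energy_uj():
        with open(RAPL_FILE) as f:
            return int(f.read())   # note: the counter wraps around periodically

    before = read_energy_uj()
    t0 = time.time()
    sum(i * i for i in range(10_000_000))    # stand-in for the AI workload
    elapsed = time.time() - t0
    consumed_j = (read_energy_uj() - before) / 1e6   # microjoules -> joules
    print(f"{consumed_j:.2f} J over {elapsed:.2f} s "
          f"(average power {consumed_j / elapsed:.1f} W)")
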
Course 7 – 15:45-18:30

Titouan Vayer (ENS Lyon)
Titouan Vayer is currently a postdoctoral researcher at ENS Lyon and works on compressive learning problems. During his PhD thesis, defended in 2020 at IRISA in Vannes, he worked on optimal transport methods for machine learning, in particular in the context of graphs and heterogeneous data. Titouan Vayer is particularly interested in the theory of learning in complex settings where the data are large, structured and do not admit the same representation.

Less is more? How compressive learning and sketching work for large-scale machine learning

Large-scale machine learning nowadays faces a number of computational challenges, due to the high dimensionality of data and, often, very large training collections. In addition, there are data privacy issues and potentially data organization issues (e.g., data distributed across several servers without centralization).
In this course Titouan Vayer will present a potential remedy to these problems, namely the compressive learning framework. The central idea is to summarize a database in a single vector, called a sketch, obtained by computing carefully chosen nonlinear random features (e.g., random Fourier features) and averaging them over the whole dataset. The parameters of a machine learning model are then learned from the sketch, without access to the original dataset. This course surveys the current state of the art in compressive learning, including the main concepts and algorithms, their connections with established signal-processing methods, existing theoretical guarantees (on both information preservation and privacy preservation), and important open problems.
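
To give a feel for the sketching step described above, the toy example below averages random Fourier features over a dataset to obtain a single fixed-size summary vector; the dimensions and the frequency distribution are illustrative assumptions.

    # Compute a sketch: average complex random Fourier features over the dataset.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 10_000, 5, 200                  # samples, data dimension, sketch size

    X = rng.normal(size=(n, d))               # the (large) dataset
    W = rng.normal(size=(d, m))               # random frequencies

    sketch = np.exp(1j * X @ W).mean(axis=0)  # shape (m,), independent of n

    # Learning then operates on this single length-m vector instead of the n samples.
    print(sketch.shape)                       # (200,)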

Fri, Sep 02, 2022

Course 8 – 09:00-12:00

Taner Topal (Managing Director, Adap GmbH)
Taner is the co-founder and COO of Adap. Before that, he worked as Director of Engineering at XAIN AG and as the CTO of mojo reads. Since 2013 he has hired and built highly successful international engineering teams at three companies prior to Adap. Taner has a background in Electrical Engineering and is currently a Visiting Researcher at the University of Cambridge, where he collaborates on machine learning research.

An introduction to federated learning with Flower

Federated Learning (FL) has emerged as a promising technique to collaboratively learn a shared model over several isolated data silos, all while keeping their training data private, thereby decoupling the ability to do machine learning from the need to store the data in the cloud. However, FL used to be difficult to simulate at scale and also difficult to deploy in production environments. Flower bridges this gap by enabling researchers and engineers to run large-scale simulations before moving their workloads into production, all using the same framework. In this lab, we are going to introduce the general concepts behind FL and their implementation in Flower. The lab does not require any prior knowledge of FL: we will start with an existing “non-federated” ML project and then take all the necessary steps to “federate” it, customize the federated setup, and finally scale it to 1000 clients.
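
As a taste of what "federating" a project looks like, here is a minimal client sketch using the Flower (flwr) package. Exact signatures vary across Flower versions, and the NumPy "model" below is a stand-in for a real training pipeline, so treat this as an outline rather than lab material.

    # Minimal Flower client sketch (toy NumPy "model" instead of a real one).
    import numpy as np
    import flwr as fl

    class ToyClient(fl.client.NumPyClient):
        def __init__(self):
            self.weights = [np.zeros(10)]        # toy model parameters

        def get_parameters(self, config):
            return self.weights

        def fit(self, parameters, config):
            # Local "training" step; a real client would train on its own data silo.
            self.weights = [p + 0.1 for p in parameters]
            return self.weights, 100, {}         # parameters, num_examples, metrics

        def evaluate(self, parameters, config):
            loss = float(np.abs(parameters[0]).sum())
            return loss, 100, {}                 # loss, num_examples, metrics

    # With a Flower server running separately, the client would connect with e.g.:
    # fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=ToyClient())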

Collaborative Wrap-Up – 12:15-13:15
