Track B: Robotics and AI

The fields of robotics and industrial AI are advancing rapidly, reshaping existing industries and driving innovation. Track B, “Robotics and Industrial AI,” delves into the technologies and latest breakthroughs that are transforming modern production. Participants will explore fundamental topics such as Human-Robot Collaboration, Digital Twins and Asset Administration Shells, Large Action Models, and even rescue robotics for disaster relief. Attendees will have the opportunity to engage in hands-on workshops, live laboratory demo tours, and lectures, applying theoretical knowledge to practical scenarios. Join us for a deep dive into the world of Robotics and Industrial AI and discover how these technologies are revolutionizing industries.


Monday, Sept 9, 2024

Opening Speech, 12:45-13:30

Pierre Alliez (Scientific coordinator of the Inria-DFKI partnership)

Philipp Slusallek (Executive Director of DFKI Saarbrücken)

Keynote 1, 13:30-14:30

Prof. Dr. rer. nat. Dr. h.c. mult. Wolfgang Wahlster (DFKI)

Professor Wolfgang Wahlster is a pioneer of AI in Germany and Europe and a founding director of the DFKI. He has served as an elected President of three international AI organizations: IJCAII, EurAI, and ACL. He is an elected Fellow of AAAI, EurAI, and GI. He laid some of the foundations for multimodal dialog systems, user modelling, speech-to-speech translation, and cyber-physical production systems for the fourth industrial revolution (Industrie 4.0), a concept that he coined in 2010. Wahlster is a member of the Royal Swedish Academy of Sciences in Stockholm, the German National Academy Leopoldina, and three other prestigious academies. For his research, he has been awarded the German Future Prize and the Grand Cross of Merit by the Federal President of Germany. (For more information, see https://www.wolfgang-wahlster.de/)

Industrial AI for Smart Manufacturing

In the next decade of Industry 4.0 a new generation of AI technologies will take smart factories to a new level. Large Language Models (LLMs) will be complemented by Large Process Models (LPMs) and Large Action Models (LAMs), so that generative AI models not only predict what to say or visualize next, but also what to do next with explanations of why these actions make sense.
Although deep learning is the most powerful machine learning method developed to date, it has already reached its inherent limits in many industrial application domains. It must be combined with various symbolic approaches in new system architectures. This leads to hybrid LxM (x = L, P, or A) technologies that use holonic multiagent architectures to combine neural approaches with symbolic reasoning technologies such as constraint solving, physics-based simulation, and terminological reasoning in knowledge graphs.


Course 1, 15:00-17:30

Daniel Porta (DFKI)

Dr.-Ing. Daniel Porta received a diploma in Computer Science from Saarland University in 2007 and his doctoral degree in 2017. He joined DFKI’s Cognitive Assistants research department as a student in 2004 and is now a Senior Researcher leading a research group on industrial AI.

Digital twins for AI-based industrial applications

Digital twins collect information on an asset over its entire life cycle and provide it in a standardised way for a wide range of applications. The course will introduce digital twin architectures based on Asset Administration Shells and further sound abstraction layers for future-proof Industrie 4.0 infrastructures. It then discusses several industrial use cases involving AI-based applications at different life-cycle phases.
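As a rough illustration of the idea, an Asset Administration Shell can be thought of as a container of standardised submodels that any application can query uniformly. The sketch below is a heavily simplified Python analogy with made-up names, not the actual Industrie 4.0 metamodel (the real structure and serialization are defined by the IDTA specifications):

```python
from dataclasses import dataclass, field

@dataclass
class Property:
    """A key-value entry inside a submodel (greatly simplified)."""
    id_short: str
    value: object

@dataclass
class Submodel:
    """One standardised aspect of an asset, e.g. technical data or maintenance."""
    id_short: str
    properties: dict = field(default_factory=dict)

    def set(self, prop: Property):
        self.properties[prop.id_short] = prop

@dataclass
class AssetAdministrationShell:
    """Digital-twin entry point: one shell per asset, holding its submodels."""
    asset_id: str
    submodels: dict = field(default_factory=dict)

    def add(self, sm: Submodel):
        self.submodels[sm.id_short] = sm

# A drill asset whose technical data any application can look up the same way
shell = AssetAdministrationShell(asset_id="urn:example:drill-42")
tech = Submodel(id_short="TechnicalData")
tech.set(Property(id_short="MaxRotationSpeed", value=2000))
shell.add(tech)
```

The point of the abstraction is the uniform access path: an AI application does not need to know the asset's vendor-specific data format, only the standardised submodel structure.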


Tuesday, Sept 10, 2024

Keynote 2, 9:00-10:00

Kevin Baum (DFKI)

Kevin Baum, a computer scientist (M.Sc.) with a doctorate in philosophy, is the head of the Center for European Research in Trusted Artificial Intelligence (CERTAIN) and deputy head of the Neuro-Mechanistic Modeling (NMM) research department at the German Research Center for Artificial Intelligence (DFKI). In various interdisciplinary research projects, his research has focused primarily on the sense and nonsense of transparency and explainability requirements for AI systems with regard to societal desiderata such as recognizing unfairness, ensuring the effectiveness of human oversight, and enabling moral responsibility. He developed the award-winning lecture Ethics for Nerds, is and has been a member of various ethics committees, and is part of Algoright e.V., an interdisciplinary non-profit think tank for good digitalization and science communication.

A provisional keynote on current ethical challenges of AI

“Progress in the field of AI is breathtaking. Large models, foundation models, multimodality: all this not only opens up a wide range of new possibilities, be it in code generation with LLMs or in robotics via Large Action Models, but also raises new societal and ethical challenges. In his provisional keynote, Kevin Baum provides an overview of current normative challenges, sorts out loose threads, and outlines some resulting research opportunities.”

Course 2, 10:30-13:00

Pia Bideau (Inria)

Pia Bideau is a researcher in the THOTH team at Inria Grenoble and holds a junior research chair for “Perception and Interaction” at MIAI Grenoble Alpes. Before joining Inria in October 2023, Pia Bideau was a postdoctoral researcher at the robotics lab at Technical University Berlin and the Excellence Cluster “Science of Intelligence” (SCIoI). She received her PhD from the University of Massachusetts, Amherst, advised by Prof. Erik Learned-Miller. Her thesis proposed novel approaches towards segmenting independently moving objects from noisy optical flow fields.

Pia Bideau co-organized the workshop “What is motion for?” at ECCV 2022 and has engaged in numerous projects focused on research-oriented teaching. She received a best paper award at the ECCV 2018 workshop “What is optical flow for?” for her paper “MoA-Net: Self-Supervised Motion Segmentation”, as well as scholarships from DAAD for academic education abroad and from BMBF for excellent academic achievements during her Master studies.

Navigating Together: Integrating Analytical and Learning-Based Approaches for Distance Estimation

Distance estimation is an essential part of scene recognition and orientation, allowing agents to move in a natural environment. In particular, when humans or animals move in teams, they seem to be capable of doing this – moving together as a whole without colliding or bumping into each other. Different sensor systems but also different strategies of movement enable agents to localize themselves relative to their neighbors or neighboring objects.

This course provides an introduction to analytical and learning-based approaches for distance estimation. While there are several cues for extracting information about distance, the focus of this course lies on object appearance and its relative size: objects at greater distances appear smaller than nearby objects, one of the fundamental principles of perspective projection. A classical object detector (YOLOv5 small/nano) will be extended with the ability to estimate distance. When does a system benefit from learning? When should estimates be computed from known physical principles instead of being learned from data? This part of the assignment focuses on implementing both solutions to distance estimation, the analytical computation and a multilayer perceptron (MLP), analyzing the advantages and disadvantages of each approach, and ultimately deciding which algorithm to deploy on a real robot. If time permits, we will delve into ongoing research addressing the challenges of distance estimation for behavior analysis, specifically reconstructing the speed and 3D trajectories of animals and humans.

Prerequisites: basic Python programming skills.
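The analytical half of that comparison rests on a single formula: under a pinhole camera model, an object of known real height H appears with pixel height h = f·H/d, so its distance can be recovered as d = f·H/h. A minimal sketch with illustrative values (not the course's actual code):

```python
def distance_from_size(focal_px: float, real_height_m: float,
                       pixel_height: float) -> float:
    """Pinhole-camera distance estimate: h_px = f * H / d  =>  d = f * H / h_px."""
    return focal_px * real_height_m / pixel_height

# A 1.8 m tall person imaged at 180 px by a camera with a 900 px focal length
# stands about 9 m away; halving the pixel height doubles the distance.
d_near = distance_from_size(900.0, 1.8, 180.0)
d_far = distance_from_size(900.0, 1.8, 90.0)
```

The learned MLP alternative becomes attractive precisely where these assumptions break, e.g. when the true object height is unknown or the camera is uncalibrated.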

Course 3, 15:30-18:00

Gianluca Rizzello (Saarland University)

Gianluca Rizzello was born in Taranto, Italy, in 1987. He received the master’s (Hons.) degree in control engineering from Polytechnic University of Bari, Bari, Italy, in 2012. He received his Ph.D. in Information and Communication Technologies from Scuola Interpolitecnica di Dottorato, a joint program between the Polytechnic Universities of Torino, Bari, and Milano, Italy, in 2016. After his doctoral studies, he joined Saarland University, Saarbrücken, Germany, first as a postdoctoral researcher and group leader for Smart Material Modeling and Control (2016-2019), and subsequently as Assistant Professor for Adaptive Polymer Systems (2020-present). His research interests include the development, modeling, and control of soft robotic and mechatronic systems based on unconventional drive technologies, such as smart materials.

An introduction to Soft Robotics

While traditional robots are essential in many industrial tasks, they show limitations when performing certain tasks involving safe interaction with humans or the exploration of complex unstructured environments. Taking inspiration from animals and other biological systems, the field of soft robotics offers a possible means to develop intelligent machines that can interact with their environment in ways rigid robots cannot. Soft robots benefit from elastic and soft elements that enhance their adaptability and versatility with respect to their environment, thus helping to close the gap between traditional rigid robots and biological systems. Integrating soft features into robotic systems, however, involves several challenges in terms of system design, component selection, modeling, and control. This lecture aims at providing a general introduction to the field of soft robotics. It will cover both hardware and software aspects of soft robots, ranging from soft design principles, soft actuators, and soft sensors to the challenges posed by modeling and control of soft robots. For each of these areas, the main results from the state of the art, the major challenges, and research opportunities will be illustrated. The presentation of the topics will be accompanied by several examples from the soft robotics literature.


Wednesday, Sept 11, 2024

Keynote 3, 9:00-10:00

Kai Warsönke (VW)

Kai Warsönke is a graduate engineer (Diplom FH) in Production Engineering and a fourth-year PhD student at Volkswagen, focusing on data-driven product influence in vehicle projects. His research includes stochastic and statistical tolerance simulation models and preparing quality assurance data for the usage of AI methods. He creates and simulates measurement data-coupled tolerance models to propose targeted action plans for improving vehicle quality. His innovative approach integrates advanced simulation techniques to optimize product development and ensure high-quality outcomes in the automotive industry.

Henrik Waschke (VW)

Henrik Waschke holds both a Bachelor and Master’s degree in automotive engineering. Currently, he is a first-year PhD student at Volkswagen. His research focuses on enhancing quality in the automotive sector using 3D-AI technology. Henrik deals with AI systems to optimize customer-relevant quality features and streamline quality planning processes. His work aims to improve the customer-relevant quality features and accelerate quality planning processes.

Increasing product quality in the automotive industry through Virtual Measurement Data Analysis (VMDA)

Virtual Measurement Data Analysis (VMDA) has been developed to assess how component deviations in the production process affect the corresponding closure dimension across the entire tolerance chain. VMDA uses the latest measurement data to show and analyze, in real time, how production-related deviations affect the whole process. So far, VMDA has fed real-time measurement data back into a tolerance analysis model that represents the whole vehicle. Currently, VMDA runs on stationary computers. User feedback has indicated a high level of complexity and the need for extensive technical knowledge of how individual assemblies and quality-relevant areas interact. The next step involves simplifying, refining, and transferring VMDA functionality to a portable device, which will provide the operator with specific instructions for correcting quality deviations. Subsequently, there will be a discussion of potential applications of artificial intelligence subfields in optimizing planning processes.
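The core idea behind stochastic tolerance simulation can be compressed into a few lines: treat each component deviation in the chain as a random variable and estimate the distribution of the closure dimension by sampling. The sketch below is a generic Monte Carlo stack-up with made-up values, not VW's VMDA model:

```python
import random
import statistics

def simulate_closure(sigmas, n=20000, seed=42):
    """Monte Carlo tolerance stack-up: the closure gap is modeled as the sum
    of independent, zero-mean Gaussian component deviations in the chain."""
    rng = random.Random(seed)
    gaps = [sum(rng.gauss(0.0, s) for s in sigmas) for _ in range(n)]
    return statistics.mean(gaps), statistics.stdev(gaps)

# Three components with 0.1/0.2/0.2 mm standard deviations: the closure
# scatter follows the root-sum-square rule, sqrt(0.1^2 + 0.2^2 + 0.2^2) = 0.3 mm.
mean_gap, sigma_gap = simulate_closure([0.1, 0.2, 0.2])
```

Coupling such a model to live measurement data, as VMDA does, amounts to replacing the assumed component distributions with empirical ones from the line.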

Course 4, 10:30-13:00

Marie-Odile Berger (Inria)

Marie-Odile Berger is an Inria Research Director at the “Centre Inria de l’Université de Lorraine” and currently heads the TANGRAM Computer Vision Group. Her research interests include computer vision and artificial intelligence, with an application focus on augmented reality tasks requiring high-precision localization, both in classical environments and in medical imaging. Her research has led to several theoretical and practical results in the areas of matching and 3D tracking, reconstruction, and visual perception. She has published more than 140 papers in conferences and journals. http://members.loria.fr/moberger

AI for computer vision: using high-level features for localization

Like many other fields, image-based localization has benefited greatly from the emergence of convolutional neural networks (CNNs). In this course, after describing the main principles of AI-based localization methods, we will focus on methods based on high-level features derived from CNNs. In particular, we will look at methods that use objects detected in images as landmarks for localization. Such methods, based on a generic object detector, have many advantages: they avoid systematic re-training of algorithms for new scenes, do not require a precise model of the scene, and achieve very good accuracy.


Joint Lunch and Start-Up Workshop

Can I really be an entrepreneur? Answer: We don’t know, you have to try!
Join the Inria Startup Studio and the RETRAS project for an experience-sharing session on diving into entrepreneurship.

Entrepreneurship for Scientists – Opportunities through knowledge transfer – Dr. Mara Schuler-Bermann, Start-up Coach, Technology Transfer Triathlon


Course 5, 15:30-18:00

Melya Boukheddimi (DFKI)

Since March 2021, I have been working as a post-doc in Robotics at the DFKI – RIC Bremen. I am involved in and co-lead the Mechanics & Control research group. My research focuses on agile robots, mainly humanoid robots, and how to push their limits to generate highly dynamic, anthropomorphic, and precise motions. Before joining DFKI, I completed my PhD in Robotics with the Gepetto team at the LAAS-CNRS laboratory in Toulouse in 2020. Prior to that, I obtained my master’s degree in Robotics and Mobility Assistance from Paris-Saclay University in 2016.

Malte Wirkus (DFKI)

Malte Wirkus received his Diploma in Computer Science at the University of Bremen in 2010. He joined the Robotics Innovation Center (RIC) of the German Research Center for Artificial Intelligence (DFKI GmbH) in 2010. In different research and industry projects, he gained experiences in the fields of robotic mobile manipulation, multi-agent architectures and human-robot collaboration. With his current scientific research interest in control architectures and frameworks for robotic applications, he works as researcher, project and team leader at DFKI-RIC.

Smart and Dynamic Robots – How do robots become smart and agile?

In this session, we will first give a general introduction to the question of how robots become intelligent. The many different aspects and technologies that are necessary to make robots intelligent will be discussed. We will then delve into the field of agile robots and how to push their limits, evaluating and improving their design and control strategies to generate highly dynamic, anthropomorphic, and precise movements.


Thursday, Sept 12, 2024

Keynote 4, 9:00-10:00

Xavier Hinaut (INRIA)

Xavier Hinaut has been a Research Scientist in Bio-inspired Machine Learning and Computational Neuroscience at Inria, Bordeaux, France since 2016. He received an MSc and Engineering degree from Compiègne Technology University (UTC), FR in 2008, his PhD from Lyon University, FR in 2013, and an MSc in Cognitive Science & AI from EPHE, FR in 2019. He is a member (Vice Chair) of the IEEE CIS Task Force on Reservoir Computing. His work is at the frontier of neuroscience, machine learning, robotics, and linguistics: from the modeling of human sentence processing to the analysis of birdsongs and their neural correlates. He both uses reservoirs for machine learning (e.g. birdsong classification) and as models (e.g. sensorimotor models of how birds learn to sing). He manages the “DeepPool” ANR project on human sentence modeling with Deep Reservoir architectures and the Inria Exploratory Action “BrainGPT” on Reservoir Transformers. He leads the development of ReservoirPy, the most up-to-date Python library for Reservoir Computing: https://github.com/reservoirpy/reservoirpy He is also involved in public outreach, notably by organising hackathons from which fun projects with reservoirs have emerged (e.g. the ReMi project on reservoirs generating MIDI and sounds).

Tailoring Transformers into Cognitive Language Models

Language involves several levels of abstraction, from small sound units like phonemes to contextual sentence-level understanding. Large Language Models (LLMs) have shown an impressive ability to predict human brain recordings. For instance, while a subject is listening to a book chapter from Harry Potter, LLMs can predict parts of brain imaging activity (recorded by functional Magnetic Resonance Imaging or Electroencephalography) at the phoneme or word level. These striking results are likely due to their hierarchical architectures and massive training data. Despite these feats, they differ significantly from how our brains work and provide little insight into the brain’s language processing. We will see how simple Recurrent Neural Networks like Reservoir Computing can model language acquisition from limited and ambiguous contextual data better than LSTMs. From these results, in the BrainGPT project, we explore various architectures inspired by both reservoirs and LLMs, combining random projections and attention mechanisms to build models that can be trained faster with less data and greater biological insight.
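The reservoir idea behind this line of work is compact enough to sketch: a fixed random recurrent network turns an input stream into rich nonlinear states, and only a linear readout is trained. The following is a bare-bones echo state network on a toy next-step prediction task, not ReservoirPy's API (see that library for a proper implementation):

```python
import numpy as np

def reservoir_states(inputs, n_res=50, spectral_radius=0.9, seed=0):
    """Run a fixed random reservoir over a 1-D input sequence.
    The recurrent weights are never trained, only rescaled once."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, n_res)
    w = rng.uniform(-0.5, 0.5, (n_res, n_res))
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))  # echo-state scaling
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(w_in * u + w @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-8):
    """Ridge-regression readout: the only trained parameters in the model."""
    a = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(a, states.T @ targets)

# Toy task: predict the next sample of a sine wave from the reservoir states.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)
s = reservoir_states(u[:-1])
w_out = train_readout(s, u[1:])
pred = s @ w_out
```

Because training touches only the linear readout, such models fit orders of magnitude faster than backpropagated recurrent networks, which is part of their appeal for the hybrid architectures explored in BrainGPT.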

RICAIP – Day at ZEMA Saarbrücken

Hands-On Robotics Course

Xiaomei Xu (ZeMA)

Xiaomei Xu joined ZeMA – Zentrum für Mechatronik und Automatisierungstechnik gemeinnützige GmbH as a research assistant in the Robotics group in spring 2020 after graduating from RWTH Aachen University. She focuses on developing mathematical algorithms for 3D cameras and robotics applications. Xiaomei’s work has been presented at conferences such as the CIRP Web Conference 2020, IEEE CASE 2021&2024, IEEE ICSC 2022, CIRP CATS 2024, and CIRP ICME 2024.

Multi-Robot Simulation using Siemens Tecnomatix

In the 90-minute hands-on robotics course, we divide the group activity into two segments. The first segment, lasting 30 minutes, provides a quick guide to Siemens Tecnomatix (the software user interface and basic functions). This guide explains how to build a virtual environment for robotic simulation applications in welding, deburring, and painting. The second segment, lasting 60 minutes, involves robot path planning based on the manufacturing geometry for welding, polishing, and painting processes.

Robotics-Lab Tour

Tim Schwartz (DFKI)

Dr.-Ing. Tim Schwartz studied computer science and computational linguistics at Saarland University. He received his PhD with the thesis “The Always Best Positioned Paradigm for Mobile Indoor Applications” in 2012. Since 2016, he has been the head of the human-robot communication group and leads the German-Czech Innovation Lab for Human-Robot Collaboration MRK 4.0.

Practical Tour through the German-Czech Innovation Lab for Human-Robot Collaboration in Industrie 4.0 (MRK 4.0 Lab)

In this tour, we will show you around our German-Czech Innovation Lab for Human-Robot Collaboration in Industrie 4.0, or MRK 4.0 Lab for short. As the name implies, we focus on Human-Robot Collaboration. By extension, we also deal with human-robot communication, the orchestration of hybrid teams (i.e., teams consisting of humans, robots, and software agents), and practical applications of Industrial AI, Asset Administration Shells, Digital Twins, and general Industrie 4.0 topics. Human-robot communication is not necessarily limited to spoken or written language but includes all sorts of modalities: from traditional control units and manual teach-in to Augmented and Virtual Reality. We will show you practical examples from different projects we are currently working on or have worked on in the past, encouraging questions and discussions throughout the whole experience.

Course 7

Martin Suda (CIIRC)

Martin Suda is a senior researcher at the CIIRC institute of the Czech Technical University in Prague and the head of the Automated Reasoning Group there. He is also a part-time research scientist at Filuta.ai. His primary research interest is automated theorem proving and how it can be boosted through the techniques of machine learning. He is one of the main developers of the award-winning automatic theorem prover Vampire.

Powering Logic-Based Reasoning with Machine Learning and Vice Versa?

We will provide an overview of the state-of-the-art technology in logic-based reasoning, ranging from propositional satisfiability and satisfiability modulo theories to automatic and interactive theorem proving. The corresponding tools, often referred to as “solvers”, find many applications in areas such as hardware verification (chip design), software verification (program correctness), and the automation of mathematics. We will also discuss how machine learning and, in particular, neural networks enter the picture in the development of such solvers, helping to automatically discover new guidance heuristics, so necessary for fighting the inherent combinatorial explosion. Finally, we will also contemplate the opposite direction of field synergy: couldn’t logic-based tools help us eliminate errors from neural networks’ outputs, most notably guard us against LLM hallucinations?
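To make the combinatorial explosion concrete: a propositional SAT instance over n variables has 2^n candidate assignments, and a naive solver simply enumerates them. The sketch below uses DIMACS-style clause lists (positive integer i means variable i, negative means its negation); real solvers avoid this enumeration with techniques such as conflict-driven clause learning, which is exactly where learned heuristics enter.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Try all 2^n assignments; return a satisfying one or None.
    Clauses are DIMACS-style: [[1, -2], [2]] means (x1 or not x2) and x2."""
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals matches the assignment.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

sat = brute_force_sat([[1, 2], [-1]], 2)   # satisfiable: x1=False, x2=True
unsat = brute_force_sat([[1], [-1]], 1)    # contradiction: no assignment exists
```

At 50 variables this loop would already need around 10^15 iterations, which is why heuristic guidance is not a luxury but a necessity.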


Friday, Sept 13, 2024

Course 8, 9:00-11:30

Marc Tabie (DFKI)

Marc Tabie joined the Robotics Innovation Center of the DFKI in Bremen almost 17 years ago as an undergraduate student. He did his B.Sc. and M.Sc. in Systems Engineering at the University of Bremen, specializing in the field of robotics. His research in the field of biosignal processing focuses mainly on EMG and EEG processing for human-robot interaction, especially for exoskeletons used in stroke rehabilitation.

Maurice Rekrut (DFKI)

Maurice Rekrut is a Senior Researcher at the German Research Center for Artificial Intelligence (DFKI). In 2023 he received his PhD in Computer Science under the supervision of Prof. Dr. Antonio Krüger for the thesis “Leveraging EEG-based Speech Imagery Brain-Computer Interfaces”. Since 2020 he has been the head of the Cognitive Assistants BCI-Lab, which focuses on the application of Brain-Computer Interfaces (BCIs) in real-world scenarios. He is involved in several national and international research projects on this topic, such as EXPECT, BISON, NEARBY, and HAIKU.

Variabilities in Brain-Computer Interfaces – Towards applying BCIs in real-world applications

While Brain-Computer Interfaces (BCIs) are promising for many applications, e.g. human-robot interaction, they are not yet reliable. Their reliability degrades even further across users or contexts (e.g., across days or for changing user states) due to various sources of variability. Unfortunately, such variabilities are 1) often ignored in the literature, as most BCIs are assessed in a single context, on a single day, and with user-specific designs, and 2) poorly understood. Thus, for BCIs to fulfil their promise and be used in practice outside laboratories, we need to make them robust to such variabilities. This workshop aims at presenting applications of BCIs in human-robot interaction across different research projects, teaching the basics of variabilities for different BCI types, presenting potential methods to address them, and discussing challenges regarding the application of BCIs in laboratory settings and their transfer to real-world scenarios.


Scientific chair

Track B

Tim Schwartz
