Posters and demos

Tue, July 20, 2021 – 17:30-19:00 CET

1. Riccardo Finotello (CEA Paris-Saclay)

Computer vision and algebraic geometry: AI for theoretical physics

We rephrase a central problem in algebraic geometry and theoretical physics as a computer vision task. Leveraging knowledge of the physical data and introducing architectures inspired by Google’s Inception network, we reach near-perfect predictive accuracy. We thus demonstrate the versatility of the AI models and their reliability in making accurate physical predictions.
2. Evi Sijben (CWI)

Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models

Shapley values underlie one of the most popular model-agnostic methods within explainable artificial intelligence. These values are designed to attribute the difference between a model’s prediction and an average baseline to the different features used as input to the model. Shapley values are well calibrated to a user’s intuition when features are independent, but may lead to undesirable, counterintuitive explanations when the independence assumption is violated. By introducing causal Shapley values we aim to circumvent the independence assumption.
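As a toy illustration of the attribution that standard (non-causal) Shapley values compute, the sketch below evaluates the exact Shapley formula for a small model, replacing out-of-coalition features with baseline values. This replacement is precisely the independence assumption that the causal variant is designed to relax. The model and numbers are hypothetical, not from the poster.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact (marginal) Shapley values for a small number of features.

    Features absent from a coalition are replaced by baseline values,
    which implicitly assumes feature independence.
    """
    n = len(x)
    phi = [0.0] * n

    def value(coalition):
        # Evaluate the model with non-coalition features set to baseline.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: each attribution should recover its term exactly.
predict = lambda z: 2 * z[0] + 3 * z[1] + z[2]
print(shapley_values(predict, x=[1, 1, 1], baseline=[0, 0, 0]))  # ≈ [2.0, 3.0, 1.0]
```

For a linear model the attributions sum exactly to the difference between the prediction and the baseline, which is the calibration property the abstract refers to.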
3. Duy Nguyễn Hồ Minh (DFKI)

An attention mechanism with multiple knowledge sources for COVID-19 detection from CT images

Until now, Coronavirus SARS-CoV-2 has caused more than 850,000 deaths and infected more than 27 million individuals in over 120 countries. Besides principal polymerase chain reaction (PCR) tests, automatically identifying positive samples based on computed tomography (CT) scans can present a promising option in the early diagnosis of COVID-19. Recently, there have been increasing efforts to utilize deep networks for COVID-19 diagnosis based on CT scans. While these approaches mostly focus on introducing novel architectures, transfer learning techniques, or constructing large-scale datasets, we propose a novel strategy to improve the performance of several baselines by leveraging multiple useful information sources relevant to doctors’ judgments. Specifically, infected regions and heat maps extracted from learned networks are integrated with the global image via an attention mechanism during the learning process. This procedure not only makes our system more robust to noise but also guides the network to focus on local lesion areas. Extensive experiments illustrate the superior performance of our approach compared to recent baselines. Furthermore, our learned network guidance presents an explainable feature to doctors, as we can understand the connection between input and output in a grey-box model.
4. Christina Cociancig (DFKI & Universität Bremen)

Modeling for explainability: Ethical decision-making in automated resource allocation

We provide a recommendation for transparent and explainable algorithmic decision-making by formal modeling with decision trees, including associated entropy and information gain values, as well as bisimulation. In the interest of examining a contemporary decision process, as a case study, we compare approaches to the decision-processes of triage in two countries: Germany and Austria.
5. Tiago Gonçalves (University of Porto & INESC TEC)

Can we show that post-model methods generate misleading explanations?

Following recent approaches that showed that it is possible to fool post-model explanations using adversarial attacks, we performed preliminary experiments with deep neural networks to assess if these explanations are misleading when we train the model using a biased dataset.
6. Victor Guyomard (Inria & Orange Labs)

Post-hoc counterfactuals generation with supervised autoencoder

Generating insightful counterfactual explanations is of particular interest for supporting decision-makers, but generating realistic and useful counterfactuals remains a challenge. We propose a method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes. These class prototypes are obtained using a supervised autoencoder. We evaluate the local interpretability at the instance level with various interpretability metrics on several data sets and compare to state-of-the-art algorithms.
7. Rūta Binkytė-Sadauskienė (Inria & Ecole Polytechnique)

Unfair world fair decisions: Correcting inequality with integrative preprocessing method for fair AI

When decisions based on the available training data cannot be trusted due to real-world bias, incorporating knowledge of relevant social processes can help to achieve decisions that are both accurate and fair. As an illustrative example, we consider a scenario in which tests are applied to measure true merit or intelligence, but the resulting scores are influenced by a wealth factor, such as the ability to hire tutors or retake the tests. In our research, we use a probabilistic model and an iterative algorithm to remove the influence of the wealth factor and reconstruct the probability distribution of unobservable true merit.
8. Alexandre Heuillet (Université Paris-Saclay)

Collective eXplainable AI: Explaining cooperative strategies and agent contribution in multiagent reinforcement learning with Shapley values

As Reinforcement Learning becomes ubiquitous and used in critical and general public applications, it is essential to develop methods to make it more interpretable. In this study, we propose a novel approach to explain cooperative strategies in multiagent RL using Shapley values, a game theory concept that has successfully explained some Machine Learning algorithms. We argue that Shapley values are a pertinent way to evaluate the contribution of players in a cooperative multi-agent RL context. To mitigate the high overhead of this method, we approximate Shapley values using Monte Carlo sampling and evaluate this method on two cooperation-centered and socially challenging multi-agent environments (Multiagent Particle and Sequential Social Dilemmas). We show that Shapley values succeed at estimating the contribution of each agent.
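A minimal sketch of the kind of Monte Carlo Shapley approximation described above: random permutations of agents are sampled, and each agent is credited with its average marginal contribution to the team reward. The `coalition_value` function, agents, and payoffs here are hypothetical stand-ins for the expensive environment rollouts used in the study.

```python
import random

def monte_carlo_shapley(agents, coalition_value, n_samples=2000, seed=0):
    """Approximate each agent's Shapley value by sampling random
    permutations and averaging marginal contributions.

    `coalition_value` maps a frozenset of agents to the team reward
    obtained when only those agents act -- the costly quantity that makes
    exact Shapley computation infeasible for many agents.
    """
    rng = random.Random(seed)
    phi = {a: 0.0 for a in agents}
    for _ in range(n_samples):
        order = list(agents)
        rng.shuffle(order)
        coalition = frozenset()
        prev = coalition_value(coalition)
        for a in order:
            coalition = coalition | {a}
            cur = coalition_value(coalition)
            phi[a] += cur - prev  # marginal contribution of agent a
            prev = cur
    return {a: v / n_samples for a, v in phi.items()}

# Toy cooperative game: agent "a" alone earns 3, "b" alone earns 1,
# "a" and "b" together earn a synergy bonus of 2 (split equally by
# symmetry), and "c" contributes nothing.
def value(coalition):
    v = 3 * ("a" in coalition) + 1 * ("b" in coalition)
    if {"a", "b"} <= coalition:
        v += 2
    return v

print(monte_carlo_shapley(["a", "b", "c"], value))  # ≈ {"a": 4, "b": 2, "c": 0}
```

The estimates converge to the exact Shapley values at a rate of O(1/√n_samples), which is what makes the approach tractable compared to enumerating all coalitions.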
9. Hali Lindsay (DFKI)

Dissociating semantic and phonemic search strategies in the phonemic verbal fluency task in early dementia

Effective management of dementia hinges on timely detection and precise diagnosis of the underlying cause of the syndrome at an early mild cognitive impairment (MCI) stage. Verbal fluency tasks are among the most often applied tests for early dementia detection due to their efficiency and ease of use. We show that by applying state-of-the-art semantic and phonemic distance metrics in the automatic analysis of phonemic verbal fluency productions, in-depth conclusions about underlying cognition and improved machine learning performance are possible. (Also presented at CLPsych at NAACL 2021)
10. Roshan Rane (Charité Universitätsmedizin)

Using machine learning to predict problematic alcohol use in adolescents from structural MRI

Problematic alcohol use during adolescence has been associated with developing alcohol use disorder (alcohol addiction) later in life. In this study, multivariate decoding analysis is performed to identify structural patterns in adolescent brains that are predictive of alcohol misuse during this period. We systematically evaluate 4 machine learning models on different measures of alcohol misuse and derive insights about which regions of the brain are affected by alcohol abuse.
11. Gaetan Vignoud (Inria)

Movement disorders analysis using a deep learning approach: Hand bradykinesia in Parkinson's disease

Using deep learning tools for hand keypoint detection, we aim to compute objective scores of movement disorders in Parkinsonian patients. Thanks to a large database of videos from Prof. Bertrand Degos at Avicenne University Hospital, we are able to compare our metrics with clinical ratings and observe a good correlation.
12. Fabien Girka (Université Paris-Saclay)

Tensor generalized canonical correlation analysis

Tensor Generalized Canonical Correlation Analysis (TGCCA) is an extension of various multivariate analysis methods, including the well-known Canonical Correlation Analysis. It aims to reveal the linear relationships between different sets of measurements that may intrinsically have a tensor structure.
13. Mahbub Ul Alam (Stockholm University)

Federated semi-supervised learning and transfer learning-based multi-task approach to detect Covid-19 and lung-segmentation from chest radiography

In this work, we implemented a classification system using federated semi-supervised learning and transfer learning approaches to detect Covid-19 and perform lung segmentation from chest radiography. We used Raspberry Pi devices to provide an Internet-of-Medical-Things-based implementation, along with simulated results using a server. The results showed that this approach can be helpful when there is a shortage of labeled data.
14. Maxx Richard Rahman (DFKI)

AI-based approach for the detection of Erythropoietin (EPO) in blood

Sports officials worldwide are facing incredible challenges due to unfair practices performed by athletes to improve their performance in the game. These include the intake of hormone-based drugs like erythropoietin (EPO) to increase their strength. The current laboratory-based method of detecting such cases is limited by cost, the availability of medical experts, and various other factors. We have developed an AI-based framework to detect EPO in blood samples using multivariate analysis and deep learning algorithms.
15. Victoria Bourgeais (Université Paris-Saclay)

Self-explainable knowledge-based deep learning models for phenotype prediction from gene expression data

Existing deep learning models are usually considered as black-boxes that provide accurate predictions but are not interpretable. However, accuracy and interpretation are both essential for precision medicine. In addition, most models do not integrate the knowledge of the domain. Hence, making deep learning models interpretable for medical applications using prior biological knowledge will be the main focus of this talk.
16. Ivan Zakazov (Skoltech & Philips Research)

Anatomy of domain shift in MRI segmentation

Domain Adaptation (DA) methods are used to tackle the problem of differently distributed train (source) and test (target) data (e.g., produced by different MRI scanners). We consider the supervised DA task and propose SpotTUnet – a CNN architecture for task-adaptive supervised DA in medical image segmentation (based on SpotTune). We also introduce a regularization for cases of extreme target-data scarcity and draw insights from the learned fine-tuning policy.

Thu, July 22, 2021 – 17:30-19:00 CET

1. Amr Gomaa (DFKI)

Personalized multimodal fusion approach for referencing objects from moving vehicles

With the recently and exponentially increasing capabilities of modern vehicles, novel approaches for interaction have emerged that go beyond traditional touch-based and voice-command approaches. In this work, we investigate a personalized multimodal fusion approach combining pointing and gaze for referencing surrounding objects while driving a long route in a simulated environment.
2. Oliver Mey (Fraunhofer IIS)

Self-masking adversarial networks

Self-masking adversarial networks are a neural network architecture and technique for local interpretability that can be used to extract the classification-relevant parts of an input dataset. The working principle of the method is explained and results of the method are demonstrated on different sample datasets.
3. Paweł Guzewicz (Inria & Ecole Polytechnique)

Expressive and efficient analytics for RDF graphs

Nowadays, public open data are becoming steadily more available, frequently in heterogeneous formats such as RDF, with numerous RDF graph datasets. Such data often contain interesting insights in the form of RDF graph aggregates, which journalists can use as newspaper article ideas or leads in a process called computational lead finding. In this work, we show how to efficiently and automatically find the top-k most interesting aggregates in an RDF graph.
4. Marine Collery (Inria & IBM France)

Learning globally understandable models with rules

Global understanding of ML decision models is crucial in many environments. Rule systems are fully understandable, executable, transparent, and business-user friendly. However, today’s rule learning algorithms produce rules with low expressivity…
5. Isabel Rio-Torto de Oliveira (INESC TEC)

Explainable classification through natural language

Being able to justify the decisions of deep learning models is rapidly becoming a mandatory requirement for deploying these systems in the real world, especially in medical and other high-stakes decision areas. Furthermore, the state of the art focuses mainly on post-hoc visual interpretability. However, we argue that it is advantageous to leverage the complementarity of different explanation modalities by exploring the generation of natural language explanations in an in-model fashion, integrating the generated text directly into the classification path.
6. Claudio Lazo (TNO)

VISION: Working towards a European roadmap for AI excellence & trust

To make Europe the AI powerhouse it aspires to be, we first need synergy and cooperation across the EU’s rich landscape of AI research & innovation communities. A great roadmap and common vision can make a huge difference, and VISION is the H2020-funded project that will achieve this in the coming years. Learn about our approach, find out who is involved, and contribute to the European AI strategy!
7. Rui Zhao (University of Edinburgh)

Dr.Aid: a formal framework to support data-use policy compliance for decentralized collaboration

Data sharing and data processing are common practice across various domains, but the handling of data governance rules (a.k.a. data-use policies) remains manual. Existing research on related topics falls short in supporting general obligations and/or multi-institutional multi-input-multi-output (MIMO) data processing graphs. We present a formal language and the Dr.Aid framework we constructed to address these issues.
8. Pierre-Yves Lagrave (Thales Research and Technology)

Trusted AI with Lie-group based equivariant neural networks

Neural networks are generically sensitive to geometrical transforms of their inputs, motivating the need to increase their robustness with respect to the action of the corresponding Lie groups. Building on the success of CNNs, Group-Convolutional Neural Networks (G-CNNs) have been introduced as an alternative to data augmentation by leveraging group-based equivariant convolution operators, and they achieve state-of-the-art accuracies for a wide range of applications. G-CNNs relying on compact groups are well covered in the literature, and we focus here on a challenging non-compact case by building SU(1,1)-equivariant neural networks operating on the hyperbolic space, with an application to robust radar-Doppler signal classification.
9. Murali Manohar Kondragunta (Gramener)

Controlling hate speech by tweaking neural networks

Identifying and suppressing the different subnetworks that activate hate speech has many use cases. Our work probes different pre-trained models for such subnetworks. We also check whether deactivating such subnetworks leads to any performance trade-offs.
10. Frederic Jonske (IKIM)

Automated classification of external DICOM studies

External patients’ imaging studies often adhere to different naming and structural standards than the local one, making automated filing into the local database an error-prone process. The MOMO algorithm attempts to alleviate such difficulties using a prediction algorithm based on metadata and a Convolutional Neural Network.
11. Mathieu de Langlard (Inria)

Characterization of the 3D human liver micro-architecture using image analysis

The human liver is divided into functional units called lobules, which are mainly composed of cells and networks of blood and bile vessels. The aim of this presentation is to provide a complete image acquisition and analysis methodology to reconstruct the 3D micro-architecture of a human lobule and estimate its 3D morphological properties.
12. Noémie Moreau (Université de Nantes & Keosys)

Comparison between threshold-based and deep-learning-based bone segmentation on whole body CT images

Bone segmentation can be used to evaluate metastatic tumor burden in breast cancer. This work compares the results of three bone segmentation methods: one threshold-based and two deep-learning-based, trained with a Cross Entropy/Dice loss and a Hausdorff Distance/Dice loss, respectively. The methods achieved Dice scores of 0.96, 0.99, and 0.98, respectively, with visually better results for the Hausdorff/Dice method.
13. Hassan Saber (Inria)

Routine bandits: Minimizing regret on recurring problems

We study a variant of the multi-armed bandit problem in which a learner faces one of many bandit instances each day, and call it a routine bandit. More specifically, at each period h in [1,H], the same bandit is considered during T > 1 consecutive time steps, but its identity is unknown to the learner.
14. Tommaso Di Noto (University of Lausanne)

Anatomically-informed detection of cerebral aneurysms in TOF-MRA

The task of aneurysm detection is spatially constrained by the vascular anatomy of the brain. We leverage this information to build an anatomically-informed deep learning network. Specifically, we focus the attention of the model on the areas where aneurysm occurrence is most frequent.
15. Asma Bensalah (Universitat Autònoma de Barcelona)

Towards stroke patients’ upper-limb automatic motor assessment using smartwatches

The rehabilitation stage is crucial for recovery from post-stroke motor disabilities. Hence, there is a need for continuous monitoring via non-invasive technology. In this work, smartwatches are used to monitor patients, and a shallow deep learning model and SVMs are used as classification baselines.
16. Matthis Maillard (Télécom Paris – Institut Polytechnique de Paris)

Knowledge distillation from multi-modal to mono-modal segmentation networks

The fusion of information from different modalities has been shown to improve segmentation accuracy, with respect to mono-modal segmentations, in several applications. However, acquiring multiple modalities is usually not possible in a clinical setting due to the limited number of physicians and scanners, and to limits on costs and scan time. Most of the time, only one modality is acquired. In this poster, we present a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student).
17. Xenia Klinge (DFKI)

Interactive Machine Learning pipeline powered by streamlit/Python (demo)

Getting started with Machine Learning today is easier than ever, thanks to an increasing number of accessible tools. However, for non-experts, such as those often met in projects related to health and medicine, it remains a challenge to get to know the concepts, let alone run their own experiments. Our ML pipeline, served in a browser through the Python library streamlit, tries to make data exploration and prediction both manageable and understandable for our medical partners.
