Oct 2021 → Oct 2023 Project

RuralUS: Ultrasound adapted to resource limited settings

Partner: ICRC, CHUV
Partner contact: Mary-Anne Hartley
EPFL laboratory: Machine Learning and Optimization Laboratory (MLO), intelligent Global Health Research group
EPFL contact: Mary-Anne Hartley

Point-of-Care Ultrasound (PoCUS) is a powerfully versatile and virtually consumable-free clinical tool for the diagnosis and management of a range of diseases. While the promise of this tool in resource-limited settings may seem obvious, its implementation is limited by inter-user bias, requiring specific training and standardisation. This makes PoCUS a good candidate for computer-aided interpretation support. Our study proposes the development of a PoCUS training program adapted to resource-limited settings and the particular needs of the ICRC.

Topics: Machine Learning, Health

Jul 2021 → Nov 2021 Project

Causal Inference Using Observational Data: A Review of Modern Methods

Partner: armasuisse
Partner contact: Albert Blarer
EPFL laboratory: Chair of Biostatistics
EPFL contact: Prof. Mats J. Stensrud

In this report, we consider several real-life scenarios that may provoke causal research questions. As we introduce concepts in causal inference, we reference these case studies and other examples to clarify the ideas and to illustrate how researchers approach such topics using clear causal thinking.
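A central idea reviewed in work of this kind is that a naive comparison of treated and untreated groups in observational data can be badly biased by confounding, and that adjusting for the confounder can recover the causal effect. The following sketch is purely illustrative (the simulated scenario, variable names, and effect sizes are invented, not taken from the report): a "severity" confounder Z makes the treatment look harmful in a naive comparison, while backdoor adjustment over Z recovers the true protective effect.

```python
import random

random.seed(0)

# Toy confounding scenario (invented for illustration): severity Z raises both
# the chance of being treated (A) and the chance of a bad outcome (Y), so the
# naive treated-vs-untreated contrast makes treatment look harmful.
def simulate(n=100_000):
    data = []
    for _ in range(n):
        z = 1 if random.random() < 0.5 else 0           # confounder: severity
        a = 1 if random.random() < (0.8 if z else 0.2) else 0  # sicker -> treated
        p_death = 0.3 + 0.5 * z - (0.1 if a else 0.0)   # true effect: -0.10
        y = 1 if random.random() < p_death else 0
        data.append((z, a, y))
    return data

def mean_y(data, a, z=None):
    rows = [y for (zz, aa, y) in data if aa == a and (z is None or zz == z)]
    return sum(rows) / len(rows)

data = simulate()

# Naive (confounded) contrast: E[Y|A=1] - E[Y|A=0]
naive = mean_y(data, 1) - mean_y(data, 0)

# Backdoor adjustment: standardise the stratum-specific contrasts over P(Z)
p_z1 = sum(z for z, _, _ in data) / len(data)
adjusted = sum(
    (mean_y(data, 1, z) - mean_y(data, 0, z)) * (p_z1 if z else 1 - p_z1)
    for z in (0, 1)
)

print(f"naive contrast:    {naive:+.2f}")    # biased: treatment looks harmful
print(f"adjusted contrast: {adjusted:+.2f}") # close to the true effect, -0.10
```

The naive contrast comes out positive (harmful-looking) even though the treatment lowers risk in every stratum; stratifying on Z and averaging with its marginal distribution recovers the true effect.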

Topics: Machine Learning

May 2021 → May 2023 Project

Harmful Information Against Humanitarian Organizations

Partner: ICRC, funded by HAC
Partner contact: Fabrice Lauper
EPFL laboratory: Distributed Information Systems Laboratory (LSIR)
EPFL contact: Prof. Karl Aberer, Rebekah Overdorf

In this project, we are working with the ICRC to develop technical methods to combat social media-based attacks against humanitarian organizations. We are uncovering how the phenomenon of weaponizing information impacts humanitarian organizations and developing methods to detect and prevent such attacks, primarily via natural language processing and machine learning methods.

Topics: Machine Learning, Government & Humanitarian

Apr 2021 → Mar 2022 Project

Adversarial Attacks in Natural Language Processing Systems

Partner: Cyber-Defence Campus (armasuisse)
Partner contact: Ljiljana Dolamic
EPFL laboratory: Signal Processing Laboratory (LTS4)
EPFL contact: Prof. Pascal Frossard, Sahar Sadrizadeh

Deep neural networks have recently been applied across many domains thanks to their strong performance. However, these models have been shown to be highly vulnerable to adversarial examples: inputs that differ only slightly from the original but mislead the target model into producing wrong outputs. Various methods have been proposed to craft such examples for image data, but they are not readily applicable to Natural Language Processing (NLP). In this project, we aim to propose methods for generating adversarial examples against NLP models, such as neural machine translation models, in different languages. Through these attacks, we also aim to analyse the vulnerability and interpretability of these models.
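To make the idea concrete, here is a minimal sketch of why image-style perturbations do not transfer to text and what a text attack looks like instead: since words are discrete, attacks typically substitute words rather than add small numeric noise. Everything below is invented for illustration (the keyword classifier, synonym table, and sentences are not from the project): a greedy word-substitution attack flips the prediction of a toy sentiment classifier while keeping the sentence readable.

```python
# Toy keyword-based "classifier": positive total score => positive prediction.
# All weights and words are made up for this sketch.
WEIGHTS = {"great": 2, "good": 1, "excellent": 3, "love": 2,
           "bad": -1, "awful": -3, "terrible": -2, "hate": -2}

def score(sentence):
    """Sum the weights of known words; unknown words score 0."""
    return sum(WEIGHTS.get(w, 0) for w in sentence.split())

# Hypothetical near-synonyms an attacker may swap in: a human still reads the
# sentence as positive, but the classifier does not know these spellings.
SYNONYMS = {"great": ["gr8", "grand"], "love": ["adore"], "excellent": ["stellar"]}

def attack(sentence):
    """Greedily substitute one word at a time until the prediction flips."""
    words = sentence.split()
    original_positive = score(sentence) > 0
    for i, w in enumerate(list(words)):
        for sub in SYNONYMS.get(w, []):
            candidate = words.copy()
            candidate[i] = sub
            if (score(" ".join(candidate)) > 0) != original_positive:
                return " ".join(candidate)   # prediction flipped: done
            words = candidate                # keep the substitution, continue
    return " ".join(words)

clean = "i love this great product"
adv = attack(clean)
print(score(clean), "->", clean)  # positive prediction
print(score(adv), "->", adv)      # prediction flipped by small word swaps
```

Real attacks on translation or classification models follow the same search pattern but pick substitutions using embeddings or gradients of the target network rather than a hand-made synonym table.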

Topics: Device & System Security, Machine Learning, Government & Humanitarian

Mar 2021 → Feb 2022 Project


Partner: armasuisse
Partner contact: Gérôme Bovet
EPFL laboratory: Signal Processing Laboratory (LTS4)
EPFL contact: Prof. Pascal Frossard

State-of-the-art modulation-recognition architectures use deep learning models. These models are vulnerable to adversarial perturbations: imperceptible additive noise crafted to induce misclassification, which raises serious questions about safety, security, and performance guarantees at large. One of the most effective defences is adversarial training, in which the model is fine-tuned on these adversarial perturbations. However, this method has several drawbacks: it is computationally costly, suffers from convergence instabilities, and does not protect against multiple types of corruption at the same time. The objective of this project is to develop improved and effective adversarial training solutions that tackle these drawbacks.
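The adversarial training loop described above can be sketched in a few lines. This is a didactic stand-in, not the project's actual models or radio signals: a NumPy logistic-regression "classifier" is trained on FGSM-style perturbed inputs (each input nudged by a small step in the sign of the input gradient of the loss), which is the basic recipe that the project's improved solutions build on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a modulation classifier: logistic regression on 2-D points.
n, d, eps, lr = 2000, 2, 0.3, 0.1
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for _ in range(300):
    # FGSM step: perturb each input by eps in the sign of dLoss/dX,
    # then take a gradient step on the perturbed batch (adversarial training).
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)          # input gradient of the logistic loss
    X_adv = X + eps * np.sign(grad_x)
    p_adv = sigmoid(X_adv @ w)
    w -= lr * (X_adv.T @ (p_adv - y) / n)

def accuracy(Xe):
    return float(((sigmoid(Xe @ w) > 0.5) == y).mean())

# Robustness check: clean accuracy vs accuracy under the same FGSM attack.
p = sigmoid(X @ w)
X_attacked = X + eps * np.sign(np.outer(p - y, w))
print("clean accuracy:      ", accuracy(X))
print("adversarial accuracy:", accuracy(X_attacked))
```

Even in this linear toy, the cost of the defence is visible: every training step requires an extra gradient computation to build the perturbed batch, and robustness to this one attack says nothing about other corruption types, which are exactly the drawbacks the project targets.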

Topics: Device & System Security, Machine Learning

Jan 2021 → Dec 2022 Project

What If....? Pandemic Policy Decision Support System

Partner: Swiss Re
Partner contact: Mary-Anne Hartley
EPFL laboratory: Machine Learning and Optimization Laboratory (MLO), intelligent Global Health Research group
EPFL contact: Mary-Anne Hartley, Prof. Martin Jaggi, Prakhar Gupta, Giorgio Mannarini, Francesco Posa

After 18 months of responding to the COVID-19 pandemic, there is still no agreement on the optimal combination of mitigation strategies. The efficacy and collateral damage of pandemic policies are dependent on constantly evolving viral epidemiology as well as the volatile distribution of socioeconomic and cultural factors. This study proposes a data-driven approach to quantify the efficacy of the type, duration, and stringency of COVID-19 mitigation policies in terms of transmission control and economic loss, personalised to individual countries.

Topics: Machine Learning, Health, Government & Humanitarian

Dec 2020 → Jun 2021 Project

Distributed Privacy-Preserving Insurance Insight-Sharing Platform

Partner: Swiss Re
Partner contact: Sebastian Eckhardt
EPFL laboratory: Laboratory for Data Security (LDS)
EPFL contact: Prof. Jean-Pierre Hubaux, Juan Troncoso, Romain Bouyé

The collection and analysis of risk data are essential to the insurance business model. The models for evaluating risk and predicting events that trigger insurance policies are based on knowledge derived from risk data. The purpose of this project is to assess the scalability and flexibility of software-based secure computing techniques in an insurance benchmarking scenario and to demonstrate the range of analytics capabilities they provide. These techniques offer provable technological guarantees that only authorized users can access the global models (fraud and loss models) built from the data of a network of collaborating organizations. The system relies on a fully distributed architecture without a centralized database, and implements advanced privacy-protection techniques based on multiparty homomorphic encryption, which makes it possible to efficiently compute machine-learning models on encrypted distributed data.
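To give intuition for computing on distributed data without a central database, here is a deliberately simplified analogue of the idea: additive secret sharing, in which each party splits its private value into random shares so that only the aggregate is ever revealed. This is a didactic sketch, not the project's actual protocol (the real system uses multiparty homomorphic encryption, which supports far richer computations than this sum); the party count and values are invented.

```python
import random

random.seed(42)

Q = 2**61 - 1  # public modulus; all arithmetic is done mod Q

def share(value, n_parties):
    """Split an integer into n random shares that sum to value mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

# Three insurers each hold a private loss total; none reveals its own value.
private_losses = [120, 450, 230]
n = len(private_losses)

# Each insurer splits its value and sends one share to every party.
all_shares = [share(v, n) for v in private_losses]

# Party j locally sums the shares it received (column j of the share matrix)...
partial_sums = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]

# ...and only these partial sums are combined: the global aggregate is
# revealed, while each individual input stays hidden behind random shares.
total = sum(partial_sums) % Q
print("aggregate loss:", total)  # equals 120 + 450 + 230 = 800
```

Each share in isolation is a uniformly random number, so no single party learns another's input; only the final aggregate, the analogue of the "global model" in the project, becomes visible.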

Topics: Privacy Protection & Cryptography, Machine Learning, Finance