Projects

Apr 2022 → Mar 2023 Project
Ongoing

Adversarial Attacks in Neural Machine Translation Systems

Partner: Cyber-Defence Campus (armasuisse)
Partner contact: Ljiljana Dolamic
EPFL laboratory: Signal Processing Laboratory (LTS4)
EPFL contact: Prof. Pascal Frossard

Deep neural networks have recently been applied in many different domains thanks to their strong performance. However, these models have been shown to be highly vulnerable to adversarial examples: inputs that differ only slightly from the original but mislead the target model into producing wrong outputs. Various methods have been proposed to craft such examples for image data, but they are not readily applicable to Natural Language Processing (NLP). In this project, we aim to propose methods for generating adversarial examples against NLP models, such as neural machine translation models, in different languages. Moreover, through these adversarial attacks, we aim to analyse the vulnerability and interpretability of such models.
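Because text is discrete, attacks of this kind are often framed as a search over word substitutions rather than pixel perturbations. The sketch below is only an illustration of that idea, not the project's method: `score_fn` (the target model's confidence in the correct output) and the `candidates` substitution table are hypothetical stand-ins.

```python
def greedy_substitution_attack(tokens, score_fn, candidates, max_swaps=2):
    """Greedily swap tokens for candidate substitutes, keeping at each
    step the swap that most reduces score_fn (a hypothetical measure of
    the model's confidence in the correct output)."""
    tokens = list(tokens)
    for _ in range(max_swaps):
        base = score_fn(tokens)
        best = None                          # (score drop, position, substitute)
        for i, tok in enumerate(tokens):
            for sub in candidates.get(tok, []):
                trial = tokens[:i] + [sub] + tokens[i + 1:]
                drop = base - score_fn(trial)
                if drop > 0 and (best is None or drop > best[0]):
                    best = (drop, i, sub)
        if best is None:                     # no swap lowers the score further
            break
        _, i, sub = best
        tokens[i] = sub
    return tokens
```

A greedy search like this only approximates the strongest attack, but it keeps the number of model queries linear in sentence length per swap.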

Topics: Device & System Security, Machine Learning

Mar 2022 → Feb 2023 Project
Ongoing

ARNO

Partner: armasuisse
Partner contact: Gérôme Bovet
EPFL laboratory: Signal Processing Laboratory (LTS4)
EPFL contact: Prof. Pascal Frossard

State-of-the-art architectures for modulation recognition are typically based on deep learning models. Recently, however, these models have been shown to be quite vulnerable to very small, carefully crafted perturbations, which raises serious questions about safety, security, and performance guarantees at large. While adversarial training can improve the robustness of a network, a large gap remains between the model's performance on clean and on perturbed samples. Recent experiments suggest that the data used during training can be an important factor in a model's susceptibility. The objective of this project is therefore to study how proper selection, cleaning, and preprocessing of the training samples affect robustness.
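Adversarial training, mentioned above, augments each training batch with perturbed copies of its own samples. A minimal sketch for logistic regression with FGSM perturbations follows; this is an assumed toy setup for illustration, not the modulation-recognition pipeline used in the project.

```python
import numpy as np

def adversarial_training_step(w, X, y, eps, lr):
    """One gradient step of adversarial training for logistic regression:
    the clean batch (X, y) is paired with an FGSM-perturbed copy and the
    model is updated on the average of both gradients (toy sketch)."""
    def grad_w(Xb):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))   # sigmoid predictions
        return Xb.T @ (p - y) / len(y)        # log-loss gradient w.r.t. w
    # FGSM: move each input along the sign of the loss gradient w.r.t. x
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad_x = (p - y)[:, None] * w[None, :]    # per-sample d(loss)/dx
    X_adv = X + eps * np.sign(grad_x)
    # update on the average of clean and adversarial gradients
    return w - lr * 0.5 * (grad_w(X) + grad_w(X_adv))
```

Training on the eps-ball worst case directly shrinks the gap between clean and perturbed accuracy that the project description highlights.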

Topics: Device & System Security, Machine Learning

Mar 2022 → Nov 2022 Project
Ongoing

Technology Monitoring and Management (TMM)

Partner: armasuisse
Partner contact: Alain Mermoud
EPFL laboratory: Distributed Information Systems Laboratory (LSIR)
EPFL contact: Prof. Karl Aberer, Angelika Romanou

The objective of the TMM project is to identify, at an early stage, the risks associated with new technologies and to develop solutions to ward off such threats. It also aims to assess existing products and applications to pinpoint vulnerabilities. Artificial intelligence and machine learning will play an important part in that process. The main goal of this project is to automatically identify the technology offerings of Swiss companies, especially in the cybersecurity domain, including the key stakeholders in these companies as well as related patents and published scientific papers.

Topics: Machine Learning

Jan 2022 → Dec 2023 Project
Ongoing

Invariant Federated Learning: Decentralized Training of Robust Privacy-Preserving Models

Partner: Microsoft
Partner contact: Dimitrios Dimitriadis, Emre Kıcıman, Robert Sim, Shruti Tople
EPFL laboratory: Data Science Lab (dlab)
EPFL contact: Prof. Robert West, Valentin Hartmann, Maxime Peyrard

As machine learning (ML) models are becoming more complex, there has been a growing interest in making use of decentrally generated data (e.g., from smartphones) and in pooling data from many actors. At the same time, however, privacy concerns about organizations collecting data have risen. As an additional challenge, decentrally generated data is often highly heterogeneous, thus breaking assumptions needed by standard ML models. Here, we propose to “kill two birds with one stone” by developing Invariant Federated Learning, a framework for training ML models without directly collecting data, while not only being robust to, but even benefiting from, heterogeneous data.
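The federated half of this idea can be illustrated with the classic FedAvg scheme: clients train locally on their own data and only model weights, never raw samples, reach the server. The toy least-squares sketch below is illustrative only; the function and setup are assumptions, not Microsoft's or the lab's code.

```python
import numpy as np

def federated_averaging(client_data, rounds=5, lr=0.1):
    """Minimal FedAvg sketch: each client runs a few local SGD steps on
    its own (X, y) least-squares problem; the server only averages the
    resulting weight vectors."""
    dim = client_data[0][0].shape[1]
    w = np.zeros(dim)
    for _ in range(rounds):
        local = []
        for X, y in client_data:              # raw data never leaves the client
            wc = w.copy()
            for _ in range(10):               # local SGD on squared error
                grad = X.T @ (X @ wc - y) / len(y)
                wc -= lr * grad
            local.append(wc)
        w = np.mean(local, axis=0)            # server aggregates weights only
    return w
```

Plain averaging like this is exactly what heterogeneous (non-i.i.d.) client data breaks, which is the gap the invariance component of the project targets.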

Topics: Machine Learning

Oct 2021 → Oct 2023 Project
Ongoing

RuralUS: Ultrasound adapted to resource limited settings

Partner: ICRC, CHUV
Partner contact: Mary-Anne Hartley
EPFL laboratory: Machine Learning and Optimization Laboratory (MLO), intelligent Global Health Research group
EPFL contact: Prof. Martin Jaggi, Mary-Anne Hartley

Point-of-Care Ultrasound (PoCUS) is a powerfully versatile and virtually consumable-free clinical tool for the diagnosis and management of a range of diseases. While the promise of this tool in resource-limited settings may seem obvious, its implementation is limited by inter-user bias and requires specific training and standardisation. This makes PoCUS a good candidate for computer-aided interpretation support. Our study proposes the development of a PoCUS training program adapted to resource-limited settings and the particular needs of the ICRC.

Topics: Machine Learning, Health

Jul 2021 → Nov 2021 Project

Causal Inference Using Observational Data: A Review of Modern Methods

Partner: armasuisse
Partner contact: Albert Blarer
EPFL laboratory: Chair of Biostatistics
EPFL contact: Prof. Mats J. Stensrud

In this report we consider several real-life scenarios that may provoke causal research questions. As we introduce concepts in causal inference, we reference these case studies and other examples to clarify ideas and provide examples of how researchers are approaching topics using clear causal thinking.
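A central idea any such review covers is adjusting for a measured confounder by standardisation (the g-formula over a single covariate L). The sketch below uses a hypothetical dataset of (L, A, Y) records, where A is the treatment and Y the outcome; it is a worked illustration, not the report's analysis.

```python
from collections import defaultdict

def standardized_mean(data, treatment):
    """E[Y^a] via standardisation over one confounder L:
    sum over l of E[Y | A=a, L=l] * P(L=l).
    `data` is an iterable of (L, A, Y) records (toy example)."""
    n = len(data)
    p_l = defaultdict(float)            # marginal P(L = l)
    y_by = defaultdict(list)            # Y values within each (A, L) stratum
    for l, a, y in data:
        p_l[l] += 1 / n
        y_by[(a, l)].append(y)
    return sum(p_l[l] * (sum(ys) / len(ys))
               for (a, l), ys in y_by.items() if a == treatment)
```

On a confounded dataset the naive contrast E[Y | A=1] - E[Y | A=0] can differ sharply from the standardised contrast; adjustment can reveal, for instance, a null effect that the raw comparison hides.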

Topics: Machine Learning

May 2021 → May 2023 Project
Ongoing

Harmful Information Against Humanitarian Organizations

Partner: ICRC, funded by HAC
Partner contact: Fabrice Lauper
EPFL laboratory: Distributed Information Systems Laboratory (LSIR)
EPFL contact: Prof. Karl Aberer, Rebekah Overdorf

In this project, we are working with the ICRC to develop technical methods to combat social media-based attacks against humanitarian organizations. We are uncovering how the phenomenon of weaponizing information impacts humanitarian organizations and developing methods to detect and prevent such attacks, primarily via natural language processing and machine learning methods.

Topics: Machine Learning, Government & Humanitarian
