Projects

Jan 2022 → Dec 2023 · Project · Ongoing

PAIDIT: Private Anonymous Identity for Digital Transfers

Partner: ICRC, funded by HAC
Partner contact: TBD
EPFL laboratory: Decentralized Distributed Systems Laboratory (DEDIS)
EPFL contact: Prof. Bryan Ford

Direct cash assistance is gaining acceptance as a way to serve the roughly 80 million forcibly displaced people around the globe. ICRC's beneficiaries often do not have, or do not want, the ATM cards or mobile wallets normally used to spend or withdraw cash digitally, because issuers would subject them to privacy-invasive identity verification and potential screening against sanctions and counterterrorism watchlists. Existing solutions also increase the risk of data leaks and surveillance, because many third parties have access to the data generated in each transaction. The proposed research focuses on the identity, account, and wallet management challenges in the design of a humanitarian cryptocurrency or token intended to address these problems.
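For intuition only, here is a minimal sketch in Python (not PAIDIT's actual design; all names and the message format are illustrative) of the underlying idea that a wallet can be an on-device keypair rather than a verified identity, so the system knows the beneficiary only as a pseudonymous public key:

    # Illustrative sketch only, not the PAIDIT design: the "account" is a
    # locally generated keypair, so no personal data is needed to open it.
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Beneficiary: generate a wallet on the device; nothing identifying leaves it.
    wallet_key = ed25519.Ed25519PrivateKey.generate()
    wallet_id = wallet_key.public_key()    # the only thing the issuer records

    # Spending: the beneficiary signs a transfer instruction...
    transfer = b"pay merchant:42 amount:10 nonce:7f3a"   # hypothetical format
    signature = wallet_key.sign(transfer)

    # ...and any verifier checks it against the pseudonymous wallet ID alone.
    wallet_id.verify(signature, transfer)  # raises InvalidSignature if forged

A deployable design would also need anonymous credentials, double-spend protection, and revocation on top of this, which is exactly where the identity, account, and wallet management challenges mentioned above arise.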

Topics: Privacy Protection & Cryptography, Blockchains & Smart Contracts, Device & System Security, Finance, Government & Humanitarian

Apr 2021 → Mar 2022 · Project · Ongoing

Adversarial Attacks in Natural Language Processing Systems

Partner: Cyber-Defence Campus (armasuisse)
Partner contact: Ljiljana Dolamic
EPFL laboratory: Signal Processing Laboratory (LTS4)
EPFL contact: Prof. Pascal Frossard, Sahar Sadrizadeh

Deep neural networks have recently been applied in many different domains owing to their strong performance. However, these models have been shown to be highly vulnerable to adversarial examples: inputs that differ only slightly from the original but mislead the target model into producing wrong outputs. Various methods have been proposed to craft such examples for image data, but they are not readily applicable to Natural Language Processing (NLP), where the input is discrete text. In this project, we aim to propose methods for generating adversarial examples against NLP models, such as neural machine translation models, in different languages. Moreover, we use these adversarial attacks to analyze the vulnerability and interpretability of these models.
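As a toy illustration (an assumed setup, not the project's method) of why image-domain attacks do not carry over directly: gradient-based perturbations live in the model's continuous embedding space, while the actual input is discrete tokens, so a text attack must map the perturbation back onto real tokens, e.g. by a nearest-neighbour search:

    # Toy sketch: FGSM-style step in embedding space, then a nearest-token
    # search, because no real token matches the perturbed embedding exactly.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    vocab_size, embed_dim, num_classes = 100, 16, 2
    embedding = torch.nn.Embedding(vocab_size, embed_dim)
    classifier = torch.nn.Linear(embed_dim, num_classes)  # stand-in NLP model

    tokens = torch.tensor([5, 17, 42])       # a toy "sentence" of token ids
    label = torch.tensor([1])

    embedded = embedding(tokens).mean(dim=0, keepdim=True)
    embedded.retain_grad()                   # keep the gradient on the embeddings
    F.cross_entropy(classifier(embedded), label).backward()

    # The gradient step is straightforward in the continuous space...
    adv_embedded = (embedded + 0.5 * embedded.grad.sign()).detach()

    # ...but it must be projected back onto the discrete vocabulary, which is
    # the core difficulty that image-domain attacks never face.
    nearest = torch.cdist(adv_embedded, embedding.weight).argmin().item()
    print("closest real token to the perturbed embedding:", nearest)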

Topics: Device & System Security, Machine Learning, Government & Humanitarian

Mar 2021 → Feb 2022 · Project · Ongoing

ADAN

Partner: armasuisse
Partner contact: Gérôme Bovet
EPFL laboratory: Signal Processing Laboratory (LTS4)
EPFL contact: Prof. Pascal Frossard

State-of-the-art modulation recognition architectures use deep learning models. These models are vulnerable to adversarial perturbations: imperceptible additive noise crafted to induce misclassification, which raises serious questions about safety, security, and performance guarantees at large. One of the most effective ways to make a model robust is adversarial training, in which the model is fine-tuned on these adversarial perturbations. However, this method has several drawbacks: it is computationally costly, it suffers from convergence instabilities, and it does not protect against multiple types of corruption at the same time. The objective of this project is to develop improved and effective adversarial training solutions that tackle these drawbacks.
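For concreteness, here is a minimal sketch of the standard adversarial training loop described above, with a stand-in classifier and placeholder data (the network shape, the FGSM attack, and all constants are illustrative assumptions, not the project's solution); the extra forward/backward pass needed to craft each perturbed batch is what makes the method computationally costly:

    # Sketch of adversarial training: fine-tune on FGSM-perturbed batches.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    model = torch.nn.Sequential(                   # stand-in for a modulation
        torch.nn.Linear(64, 32), torch.nn.ReLU(),  # recognition network
        torch.nn.Linear(32, 4))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    epsilon = 0.1                                  # perturbation budget (assumed)

    for step in range(100):
        x = torch.randn(8, 64)                     # placeholder signal features
        y = torch.randint(0, 4, (8,))

        # 1) Craft the perturbation: one extra forward/backward pass per batch.
        x_adv = x.clone().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

        # 2) Fine-tune on the perturbed batch (often mixed with clean data).
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()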

Topics: Device & System Security, Machine Learning
